---
abstract: 'We are interested in the problem of robust parametric estimation of a density from i.i.d observations. By using a practice-oriented procedure based on robust tests, we build an estimator for which we establish non-asymptotic risk bounds with respect to the Hellinger distance under mild assumptions on the parametric model. We prove that the estimator is robust even for models for which the maximum likelihood method is bound to fail. We also evaluate the performance of the estimator by carrying out numerical simulations for which we observe that the estimator is very close to the maximum likelihood one when the model is regular enough and contains the true underlying density.'
address: 'Univ. Nice Sophia Antipolis, CNRS, LJAD, UMR 7351, 06100 Nice, France.'
author:
- Mathieu Sart
bibliography:
- 'biblio.bib'
date: 'September, 2013'
title: Robust estimation on a parametric model with tests
---
Introduction
============
Consider $n$ independent and identically distributed random variables $X_1,\dots,X_n$ defined on an abstract probability space $(\Omega, \mathcal{E},{\mathbb{P}\,})$ with values in the measured space $({\mathbb{X}},\mathcal{F},\mu)$. We suppose that the distribution of $X_i$ admits a density $s$ with respect to $\mu$ and aim at estimating $s$ by using a parametric approach.
When the unknown density $s$ is assumed to belong to a parametric model ${\mathscr{F}}= \{f_{\theta}, \, \theta \in \Theta \}$ of densities, a traditional method to estimate $s = f_{\theta_0}$ is the maximum likelihood one. It is indeed well known that the maximum likelihood estimator (m.l.e for short) possesses nice statistical properties such as consistency and asymptotic efficiency when the model ${\mathscr{F}}$ is regular enough. However, it is also well known that this estimator breaks down for many models ${\mathscr{F}}$ of interest; counterexamples may be found in [@Pitman1979; @Ferguson1982; @Lecam1990mle; @BirgeTEstimateurs] among other references.
Another drawback of the m.l.e lies in the fact that it is not robust. This means that if $s$ lies in a small neighbourhood of the model ${\mathscr{F}}$ but not in it, the m.l.e may perform poorly. Several kinds of robust estimators have been suggested in the literature to overcome this issue. We can cite the well known $L$ and $M$ estimators (which include the class of minimum divergence estimators of [@Basu1998divergence]) and the class of estimators built from a preliminary non-parametric estimator (such as the minimum Hellinger distance estimators introduced in [@Beran1977] and the related estimators of [@Lindsay1994; @Basu1994]).
In this paper, we focus on estimators built from robust tests. This approach, which began in the 1970s with the works of Lucien Le Cam and Lucien Birgé ([@LeCam1973; @LeCam1975; @Birge1983; @Birge1984; @Birge1984a]), has the nice theoretical property of yielding robust estimators under weak assumptions on the model ${\mathscr{F}}$. A key modern reference on this topic is [@BirgeTEstimateurs]. The recent papers [@BirgeGaussien2002; @BirgePoisson; @Birge2012; @BirgeDens; @BaraudBirgeHistogramme; @BaraudMesure; @Baraud2012; @SartMarkov; @Sart2012] show that increasing attention is being paid to this kind of estimator. Their main interest is to provide general theoretical results in various statistical settings (such as general model selection theorems) which are usually unattainable by the traditional procedures (such as those based on the minimization of a penalized contrast).
For our statistical issue, the procedures using tests are based on the pairwise comparison of the elements of a thin discretisation ${\mathscr{F}_{\text{dis}}}$ of ${\mathscr{F}}$, that is, a finite or countable subset ${\mathscr{F}_{\text{dis}}}$ of ${\mathscr{F}}$ such that for every function $f \in {\mathscr{F}}$, the distance between $f$ and ${\mathscr{F}_{\text{dis}}}$ is small (in a suitable sense). As a result, their complexity is of the order of the square of the cardinality of ${\mathscr{F}_{\text{dis}}}$. Unfortunately, this cardinality is often very large, making the construction of the estimators difficult in practice. The aim of this paper is to develop a faster way of using tests to build an estimator when the cardinality of ${\mathscr{F}_{\text{dis}}}$ is large.
From a theoretical point of view, the estimator we propose possesses statistical properties similar to those proved in [@BirgeTEstimateurs; @BaraudMesure]. Under mild assumptions on ${\mathscr{F}}$, we build an estimator $\hat{s} = f_{\hat{\theta}} $ of $s$ such that $$\begin{aligned}
\label{RelIntro}
{\mathbb{P}\,}\left[C h^2 (s, f_{\hat{\theta}} ) \geq \inf_{\theta \in \Theta} h^2 (s, f_{\theta}) + \frac{d}{n} + \xi \right] \leq e^{-n \xi} \quad \text{for all $\xi > 0$,}\end{aligned}$$ where $C$ is a positive number depending on ${\mathscr{F}}$, $h$ the Hellinger distance and $d$ such that $\Theta \subset {\mathbb{R}}^d$. We recall that the Hellinger distance is defined on the cone ${\mathbb{L}}^1_+ ({\mathbb{X}}, \mu)$ of non-negative integrable functions on ${\mathbb{X}}$ with respect to $\mu$ by $$h^2(f,g) = \frac{1}{2} \int_{{\mathbb{X}}} \left(\sqrt{f (x)} - \sqrt{g(x)} \right)^2 {\, \mathrm{d}}\mu (x) \quad \text{for all $f,g \in {\mathbb{L}}^1_+ ({\mathbb{X}}, \mu)$.}$$ Let us make some comments on (\[RelIntro\]). When $s$ does belong to the model ${\mathscr{F}}$, the estimator achieves a quadratic risk of order $n^{-1}$ with respect to the Hellinger distance. Besides, there exists $\theta_0 \in \Theta$ such that $s = f_{\theta_0}$ and we may then derive from [(\[RelIntro\])]{} the rate of convergence of $\hat{\theta}$ to $\theta_0$. In general, we do not suppose that the unknown density belongs to the model but rather use ${\mathscr{F}}$ as an approximate class (sieve) for $s$. Inequality (\[RelIntro\]) shows then that the estimator $\hat{s} = f_{\hat{\theta}}$ cannot be strongly influenced by small departures from the model. As a matter of fact, if $\inf_{\theta \in \Theta} h^2 (s, f_{\theta}) \leq n^{-1}$, which means that the model is slightly misspecified, the quadratic risk of the estimator $\hat{s} = f_{\hat{\theta}}$ remains of order $n^{-1}$. This can be interpreted as a robustness property.
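For readers who wish to experiment numerically, here is a minimal sketch (not part of the paper) of how the squared Hellinger distance between two densities on the real line can be approximated; the densities and the integration routine are placeholders.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def hellinger_sq(f, g, lower=-np.inf, upper=np.inf):
    """Approximate h^2(f, g) = (1/2) * integral of (sqrt(f) - sqrt(g))^2
    for two densities f, g with respect to the Lebesgue measure on the line."""
    integrand = lambda x: (np.sqrt(f(x)) - np.sqrt(g(x))) ** 2
    value, _ = quad(integrand, lower, upper)
    return 0.5 * value

# Sanity check on two unit-variance Gaussian densities with means 0 and 1:
# the exact value is 1 - exp(-1/8), approximately 0.1175.
print(hellinger_sq(norm(0, 1).pdf, norm(1, 1).pdf))
```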
The preceding inequality (\[RelIntro\]) is interesting because it proves that our estimator is robust and converges at the right rate when the model is correct. However, the constant $C$ depends on several parameters of the model such as the size of $\Theta$. It is thus far from obvious that such an estimator can be competitive against more traditional estimators (such as the m.l.e).
In this paper, we try to give a partial answer for our estimator by carrying out numerical simulations. When a very thin discretisation ${\mathscr{F}_{\text{dis}}}$ is used, the simulations show that our estimator is very close to the m.l.e when the model is regular enough and contains $s$. More precisely, the larger the number of observations $n$, the closer they are, suggesting that our estimator inherits the efficiency of the m.l.e. Of course, this does not in itself constitute a proof, but it indicates what kind of results can be expected. A theoretical connection between estimators built from tests (with the procedure described in [@BaraudMesure]) and the m.l.e will be found in a future paper of Yannick Baraud and Lucien Birgé.
In the present paper, we consider the problem of estimation on a single model. Nevertheless, when the statistician has several candidate models for $s$ at hand, a natural issue is model selection. In order to address it, one may associate to each of these models the estimator resulting from our procedure and then select among those estimators by means of the procedure of [@BaraudMesure]. By combining Theorem 2 of that paper with our risk bounds on each individual estimator, we obtain that the selected estimator satisfies an oracle-type inequality.
We organize this paper as follows. We begin with a glimpse of the results in Section 2. We then present a procedure and its associated theoretical results to deal with models parametrized by a unidimensional parameter in Section 3. We evaluate its performance in practice by carrying out numerical simulations in Section 4. We work with models parametrized by a multidimensional parameter in Sections 5 and 6. The proofs are postponed to Section 6. Some technical results about the practical implementation of our procedure devoted to the multidimensional models are deferred to Section 7.
Let us introduce some notations that will be used throughout the paper. The number $x \vee y$ (respectively $x \wedge y$) stands for $\max(x,y)$ (respectively $\min(x,y)$) and $x_+$ stands for $x \vee 0$. We set ${\mathbb{N}}^{\star} = {\mathbb{N}}\setminus \{0\}$. The vector $(\theta_1,\dots,\theta_d)$ of ${\mathbb{R}}^d$ is denoted by the bold letter ${\boldsymbol{\theta}}$. Given a set of densities ${\mathscr{F}}= \{f_{{\boldsymbol{\theta}}}, \, {\boldsymbol{\theta}}\in \Theta\}$, for all $A \subset \Theta$, the notation ${\text{diam}}A$ stands for $\sup_{{\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in A} h^2 (f_{{\boldsymbol{\theta}}},f_{{\boldsymbol{\theta}}'})$. The cardinality of a finite set $A$ is denoted by $|A|$. For $(E,d)$ a metric space, $x \in E$ and $A \subset E$, the distance between $x$ and $A$ is denoted by $d(x,A)= \inf_{a \in A} d(x,a)$. The indicator function of a subset $A$ is denoted by ${\mathbbm{1}}_A$. The letters $C$, $C'$, $C''$… denote constants whose values may change from line to line.
An overview of the paper
========================
Assumption.
-----------
In this paper, we shall deal with sets of densities ${\mathscr{F}}= \left\{f_{{\boldsymbol{\theta}}}, \; {\boldsymbol{\theta}}\in \Theta \right\}$ indexed by a rectangle $$\Theta = \prod_{j=1}^d [m_j, M_j]$$ of ${\mathbb{R}}^d$. Such a set will be called a model. From now on, we consider models satisfying the following assumption.
\[HypSurLeModeleQuelquonqueDebutDimD\] There exist positive numbers $\alpha_1,\dots,\alpha_d$, $\underline{R}_1,\dots,\underline{R}_d$, $\overline{R}_1,\dots,\overline{R}_d$ such that for all ${\boldsymbol{\theta}}= (\theta_1,\dots,\theta_d)$, ${\boldsymbol{\theta}}' = (\theta_1',\dots,\theta_d') \in \Theta = \prod_{j=1}^d [m_j, M_j]$ $$\begin{aligned}
\sup_{j \in \{1,\dots,d\}} \underline{R}_j |\theta_j - \theta'_j|^{\alpha_j} \leq h^2 \left(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'} \right) \leq \sup_{j \in \{1,\dots,d\}} \overline{R}_j |\theta_j - \theta'_j|^{\alpha_j}.
\end{aligned}$$
This assumption connects a (quasi) distance between the parameters to the Hellinger distance between the corresponding densities. A similar assumption may be found in Theorem 5.8 of Chapter 1 of [@Ibragimov1981], where it is used to prove results on the maximum likelihood estimator. That result, however, also requires that the map $\theta \mapsto f_{\theta} (x)$ be continuous for $\mu$-almost all $x$ in order to ensure the existence and consistency of the m.l.e. Without this additional assumption, the m.l.e may not exist, as shown by the translation model $$\begin{aligned}
{\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [-1, 1]\right\} \quad \text{where} \quad f_{\theta} (x) =
\begin{cases}
\frac{1}{ 4 \sqrt{|x-\theta|}} {\mathbbm{1}}_{[-1,1]} (x-\theta) & \text{for all $x \in {\mathbb{R}}\setminus \{\theta\}$ } \\
0 & \text{for $x = \theta$}
\end{cases} \end{aligned}$$ for which Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] holds with $\alpha_1 = 1/2$.
Under suitable regularity conditions on the model, Theorem 7.6 of Chapter 1 of [@Ibragimov1981] shows that this assumption is fulfilled with $\alpha_1 = \cdots = \alpha_d = 2$. Other kinds of sufficient conditions implying Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] may be found in this book (see the beginning of Chapter 5 and Theorem 1.1 of Chapter 6). Other examples and counter-examples are given in Chapter 7 of [@DacunhaCastelle]. Several models of interest satisfying this assumption will appear later in the paper.
Risk bound.
-----------
In this paper, the risk bound we obtain for our estimator $\hat{s}$ is similar to the one we would obtain with the procedures of [@BirgeTEstimateurs; @BaraudMesure]. More precisely:
\[ThmPrincipalDimQuelquonqueDansOverview\] Suppose that Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] holds. We can build an estimator $\hat{s}$ of the form $\hat{s} = f_{{\boldsymbol{\hat{\theta}}}}$ such that for all $\xi > 0$, $$\begin{aligned}
\label{eqRiskBoundThmGen}
{\mathbb{P}\,}\left[ C h^2(s,f_{{\boldsymbol{\hat{\theta}}}}) \geq h^2(s, {\mathscr{F}}) + \frac{d}{n} + \xi \right] \leq e^{- n \xi}\end{aligned}$$ where $C > 0$ depends on $\sup_{1 \leq j \leq d} \overline{R}_j/\underline{R}_j$ and $\min_{1 \leq j \leq d} \alpha_j$.
We deduce from this risk bound that if $s = f_{{\boldsymbol{\theta}}_0}$ belongs to the model ${\mathscr{F}}$, the estimator ${{\boldsymbol{\hat{\theta}}}}$ converges to ${\boldsymbol{\theta}}_0$ and the random variable $h^2(s,f_{{{\boldsymbol{\hat{\theta}}}}})$ is of order $n^{-1}$. Besides, we may then derive from Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] that there exist positive numbers $a$, $b_j$ such that $${\mathbb{P}\,}\left[n^{1/\alpha_j} \big|\hat{\theta}_j - \theta_{0,j} \big| \geq \xi \right] \leq a e^{-b_j \xi^{\alpha_j}} \quad \text{for all $j \in \{1,\dots, d\}$ and $\xi > 0$.}$$ Precisely, $a = e^{d}$ and $b_j = C \underline{R}_j$. We emphasize here that this exponential inequality on $\hat{\theta}_j$ is non-asymptotic but that the constants $a$, $b_j$ are unfortunately far from optimal.
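To make the origin of these constants explicit (a short verification under the assumption that $s = f_{{\boldsymbol{\theta}}_0}$ belongs to ${\mathscr{F}}$, so that $h^2(s,{\mathscr{F}}) = 0$): Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] gives $h^2(s,f_{{\boldsymbol{\hat{\theta}}}}) \geq \underline{R}_j \big|\hat{\theta}_j - \theta_{0,j}\big|^{\alpha_j}$ for every $j$, so that for all $\xi > 0$, $$\begin{aligned}
{\mathbb{P}\,}\left[n^{1/\alpha_j} \big|\hat{\theta}_j - \theta_{0,j} \big| \geq \xi \right] \leq {\mathbb{P}\,}\left[C h^2(s,f_{{\boldsymbol{\hat{\theta}}}}) \geq \frac{C \underline{R}_j \xi^{\alpha_j}}{n} \right] \leq e^{d}\, e^{- C \underline{R}_j \xi^{\alpha_j}},\end{aligned}$$ the last inequality following from [(\[eqRiskBoundThmGen\])]{} applied with $\xi$ replaced by $(C \underline{R}_j \xi^{\alpha_j} - d)/n$ when this quantity is positive, the bound being trivial otherwise.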
As explained in the introduction, there is no assumption on the true underlying density $s$, which means that the model ${\mathscr{F}}$ may be misspecified. In particular, when the squared Hellinger distance between the unknown density and the model ${\mathscr{F}}$ is of order $n^{-1}$, the random variable $h^2(s,f_{{{\boldsymbol{\hat{\theta}}}}})$ remains of order $n^{-1}$. This shows that the estimator $\hat{s}$ possesses robustness properties.
Numerical complexity.
---------------------
The main interest of our procedures with respect to those of [@BirgeTEstimateurs; @BaraudMesure] lies in their numerical complexity. More precisely, we shall prove the proposition below.
\[PropCalculComplexiteDimenQuelquonque\] Under Assumption \[HypSurLeModeleQuelquonqueDebutDimD\], we can build an estimator $\hat{s}$ satisfying [(\[eqRiskBoundThmGen\])]{} in less than $$4 n C^{d/\boldsymbol{\bar{\alpha}}} \left[\prod_{j=1}^d \left(1 + \left( {\overline{R}_j}/{\underline{R}_j}\right)^{1/\alpha_j} \right) \right] \left[\sum_{j=1}^d \max \left\{1, \log \left( (n \overline{R}_j/d)^{1/\alpha_j} (M_j-m_j) \right) \right\} \right]$$ operations. In the above inequality, $C$ is a constant larger than $1$ (independent of $n$ and the model ${\mathscr{F}}$) and $\boldsymbol{\bar{\alpha}}$ stands for the harmonic mean of $\boldsymbol{\alpha}$, that is $$\frac{1}{\boldsymbol{\bar{\alpha}}} = \frac{1}{d} \sum_{j=1}^d \frac{1}{\alpha_j}.$$
If we are interested in the complexity when $n$ is large, we may deduce that this upper-bound is asymptotically equivalent to $C' n \log n$ where $$C' = 4 ({d}/{ \boldsymbol{\bar{\alpha}}}) C^{d/ \boldsymbol{\bar{\alpha}}} \prod_{j=1}^d \left(1 + \left({\overline{R}_j}/{\underline{R}_j}\right)^{1/\alpha_j} \right).$$ This constant is of reasonable size when $d$, $1/{ \boldsymbol{\bar{\alpha}}}$ and $(\overline{R}_j/\underline{R}_j)^{1/\alpha_j}$ are not too large.
#### Remark.
The constant $C'$ does not depend only on the model but also on its parametrisation. As a matter of fact, in the uniform model $${\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [m_1, M_1]\right\} \quad \text{where} \quad f_{\theta} = \theta^{-1} {\mathbbm{1}}_{[0,\theta]}$$ we can compute explicitly the Hellinger distance $$h^2(f_{\theta}, f_{\theta'}) = \frac{|\theta' - \theta|}{(\sqrt{\theta} + \sqrt{\theta'} ) \sqrt{\max \left(\theta,\theta' \right) }}$$ and bound it from above and from below by $$\frac{1}{2 M_1} |\theta' - \theta| \leq h^2(f_{\theta}, f_{\theta'}) \leq \frac{1}{2 m_1} |\theta'- \theta|.$$ Now, if we parametrise ${\mathscr{F}}$ as $${\mathscr{F}}= \left\{f_{e^{t}}, \, t \in [\log m_1, \log M_1]\right\},$$ then the Hellinger distance becomes $h^2(f_{e^t}, f_{e^{t'}}) = 1 - e^{- |t'-t|/2}$, and we can bound it from above and from below by $$\frac{1 - \sqrt{m_1/M_1}}{\log (M_1/m_1)} |t' - t| \leq h^2(f_{e^t}, f_{e^{t'}}) \leq \frac{1}{2} |t' - t|.$$ Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] is satisfied in both cases but with different values of $\underline{R}_1$ and $\overline{R}_1$. When $M_1/m_1$ is large, the second parametrisation is much more interesting since it leads to a smaller constant $C'$.
Models parametrized by a unidimensional parameter {#SectionEstimationDim1}
==================================================
We now describe our procedure when the parametric model ${\mathscr{F}}$ is indexed by an interval $\Theta = [m_1,M_1]$ of ${\mathbb{R}}$. Throughout this section, Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] is supposed to be fulfilled. For the sake of simplicity, the subscripts of $m_1, M_1$ and $\alpha_1$ are omitted.
Basic ideas. {#SectionHeuristique}
------------
We begin by detailing the heuristics on which our procedure is based. We assume in this section that $s$ belongs to the model ${\mathscr{F}}$, that is, there exists $\theta_0 \in \Theta = [m, M]$ such that $s = f_{\theta_0}$. The starting point is the existence for all $\theta, \theta' \in \Theta$ of a measurable function $T(\theta, \theta')$ of the observations $X_1,\dots,X_n$ such that
1. For all $\theta, \theta' \in\Theta $, $T(\theta, \theta') = - T (\theta' , \theta) $.
2. There exists $\kappa > 0$ such that if ${\mathbb{E}}\left[T (\theta,\theta') \right]$ is non-negative, then $$\begin{aligned}
h^2 (s, f_{\theta}) > {\kappa} h^2 (f_{\theta},f_{\theta'}).\end{aligned}$$
3. For all $\theta,\theta' \in\Theta $, $T(\theta , \theta') $ and ${\mathbb{E}}\left[T (\theta, \theta') \right]$ are close (in a suitable sense).
For all $\theta \in \Theta$, $r > 0$, let ${\mathcal{B}}(\theta, r)$ be the Hellinger ball centered at $\theta$ with radius $r$, that is $$\begin{aligned}
\label{eqDefinitionBouleHel}
{\mathcal{B}}(\theta, r) = \left\{\theta' \in \Theta, \, h (f_{\theta}, f_{\theta'}) \leq r \right\}.\end{aligned}$$ For all $\theta, \theta' \in \Theta$, we deduce from the first point that either $T ({\theta}, {\theta'})$ is non-negative, or $T ({\theta'}, {\theta})$ is non-negative. Points 2 and 3 then suggest that, with high probability, in the first case $$\theta_0 \in \Theta \setminus {\mathcal{B}}\big(\theta, \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big)$$ while in the second case $$\theta_0 \in \Theta \setminus {\mathcal{B}}\big(\theta', \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big).$$ These sets may be interpreted as confidence sets for $\theta_0$.
The main idea is to build a decreasing sequence (in the sense of inclusion) of intervals $(\Theta_i)_i$. Set $\theta^{(1)} = m$, $\theta'^{(1)} = M$, and $\Theta_1 = [\theta^{(1)}, \theta'^{(1)}]$ (which is merely $\Theta$). If $T ({\theta^{(1)}}, {\theta'^{(1)}} )$ is non-negative, we consider a set $\Theta_2$ such that $$\Theta_1 \setminus {\mathcal{B}}\left(\theta^{(1)}, \kappa^{1/2} h (f_{\theta^{(1)}}, f_{\theta'^{(1)}}) \right) \subset \Theta_2 \subset \Theta_1$$ while if $T ({\theta^{(1)}}, {\theta'^{(1)}} )$ is non-positive, we consider a set $\Theta_2$ such that $$\Theta_1 \setminus {\mathcal{B}}\left(\theta'^{(1)}, \kappa^{1/2} h (f_{\theta^{(1)}}, f_{\theta'^{(1)}}) \right) \subset \Theta_2 \subset \Theta_1.$$ The set $\Theta_2$ may thus also be interpreted as a confidence set for $\theta_0$. Thanks to Assumption \[HypSurLeModeleQuelquonqueDebutDimD\], we can define $\Theta_2$ as an interval $\Theta_2 = [\theta^{(2)}, \theta'^{(2)}]$.
We then repeat the idea to build an interval $\Theta_3 = [\theta^{(3)}, \theta'^{(3)}]$ included in $\Theta_2$ and such that either $$\Theta_3 \supset \Theta_2 \setminus {\mathcal{B}}\left(\theta^{(2)}, \kappa^{1/2} h (f_{\theta^{(2)}}, f_{\theta'^{(2)}}) \right) \quad \text{or} \quad \Theta_3 \supset \Theta_2 \setminus {\mathcal{B}}\left(\theta'^{(2)}, \kappa^{1/2} h (f_{\theta^{(2)}}, f_{\theta'^{(2)}}) \right)$$ according to the sign of $T ({\theta^{(2)}}, {\theta'^{(2)}} )$.
By induction, we build a decreasing sequence of such intervals $(\Theta_i)_i$. We now consider an integer $N$ large enough so that the length of $\Theta_N$ is small enough. We then define the estimator $\hat{\theta}$ as the center of the set $\Theta_N$ and estimate $s$ by $f_{{\hat{\theta}}}$.
Definition of the test. {#SectionDefTest}
-----------------------
The test $T(\theta,\theta')$ we use in our estimation strategy is the one of [@BaraudMesure] applied to two suitable densities of the model. More precisely, let $\overline{T}$ be the functional defined for all $g,g' \in {\mathbb{L}}^1_+ ({\mathbb{X}},\mu)$ by $$\begin{aligned}
\label{eqFonctionnalBaraud}
\quad \overline{T}(g , g') = \frac{1}{n} \sum_{i=1}^n \frac{\sqrt{g' (X_i)} - \sqrt{g (X_i)}}{\sqrt{g (X_i) + g'(X_i)}} + \frac{1}{2 } \int_{{\mathbb{X}}} \sqrt{g(x) + g'(x)} \left(\sqrt{g' (x)} - \sqrt{g (x)} \right) {\, \mathrm{d}}\mu (x)\end{aligned}$$ where the convention $0 / 0 = 0$ is in use.
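For concreteness, here is a minimal numerical sketch (not the author's implementation) of the functional $\overline{T}$; the densities, the sample and the integration bounds are placeholders, and the dominating measure is taken to be the Lebesgue measure on the line.

```python
import numpy as np
from scipy.integrate import quad

def T_bar(g, g_prime, sample, lower=-np.inf, upper=np.inf):
    """Sketch of the functional of (eqFonctionnalBaraud) for densities on the real line."""
    x = np.asarray(sample, dtype=float)
    gx, gpx = g(x), g_prime(x)
    denom = np.sqrt(gx + gpx)
    # Empirical part, with the convention 0/0 = 0.
    num = np.sqrt(gpx) - np.sqrt(gx)
    empirical = np.where(denom > 0, num / np.where(denom > 0, denom, 1.0), 0.0).mean()
    # Deterministic part: (1/2) * integral of sqrt(g + g') * (sqrt(g') - sqrt(g)).
    integrand = lambda u: np.sqrt(g(u) + g_prime(u)) * (np.sqrt(g_prime(u)) - np.sqrt(g(u)))
    integral, _ = quad(integrand, lower, upper)
    return empirical + 0.5 * integral
```

In the procedure below, this functional is applied to the two discretised densities $f_{\pi(\theta)}$ and $f_{\pi(\theta')}$.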
We consider $t \in (0,1]$ and $\epsilon = t (\overline{R} n)^{-1/\alpha}$. We then define the finite sets $$\begin{aligned}
\Theta_{\text{dis}} = \left\{m+ k \epsilon , \; k \in {\mathbb{N}},\; k \leq (M-m) \epsilon^{-1} \right\}, \quad {\mathscr{F}_{\text{dis}}}= \{f_{\theta}, \, \theta \in \Theta_{\text{dis}} \}\end{aligned}$$ and the map $\pi$ on $[m, M]$ by $$\pi({x}) = m + \lfloor (x - m) / \epsilon \rfloor \epsilon \quad \text{for all $x \in[m, M]$}$$ where $\lfloor \cdot \rfloor$ denotes the integer part. We then define $T(\theta,\theta')$ by $${T} ({\theta},{\theta'}) = \overline{T}(f_{\pi(\theta)},f_{\pi(\theta')}) \quad \text{for all $\theta,\theta' \in[m, M]$}.$$ The aim of the parameter $t$ is to tune the thinness of the net ${\mathscr{F}_{\text{dis}}}$.
Procedure.
----------
We shall build a decreasing sequence $(\Theta_i)_{i \geq 1}$ of intervals of $\Theta = [m,M]$ as explained in Section \[SectionHeuristique\]. Let $\kappa > 0$, and for all $\theta, \theta' \in[m,M]$ such that $\theta < \theta'$, let $\bar{r} (\theta,\theta')$, $\underline{r} (\theta,\theta')$ be two positive numbers such that $$\begin{aligned}
\, [m,M] \bigcap \left[ \theta, \theta+\bar{r} (\theta,\theta') \right] &\subset& {\mathcal{B}}\big(\theta, \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big) \label{EquationSurRi1}\\
\, [m,M] \bigcap \left[ \theta'-\underline{r} (\theta,\theta') , \theta'\right] &\subset& {\mathcal{B}}\big(\theta', \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big) \label{EquationSurRi2}\end{aligned}$$ where we recall that $ {\mathcal{B}}(\theta, {\kappa}^{1/2} h (f_{\theta}, f_{\theta'}) )$ and $ {\mathcal{B}}(\theta', {\kappa}^{1/2} h (f_{\theta}, f_{\theta'}) )$ are the Hellinger balls defined by [(\[eqDefinitionBouleHel\])]{}.
We set $\theta^{(1)} = m$, $\theta'^{(1)} = M$ and $\Theta_1 = [\theta^{(1)},\theta'^{(1)}]$. We define the sequence $(\Theta_i)_{i \geq 1}$ by induction. When $\Theta_i = [\theta^{(i)}, \theta'^{(i)}]$, we set $$\begin{aligned}
\theta^{(i+1)} &=&
\begin{cases}
\theta^{(i)} + \min \left\{\bar{r} (\theta^{(i)},\theta'^{(i)}), \frac{\theta'^{(i)} - \theta^{(i)}}{2} \right\} & \text{if $ T ({\theta^{(i)}},{\theta'^{(i)}}) \geq 0$} \\
\theta^{(i)} & \text{otherwise}
\end{cases} \\
\theta'^{(i+1)} &=&
\begin{cases}
\theta'^{(i)} - \min \left\{\underline{r} (\theta^{(i)},\theta'^{(i)}), \frac{\theta'^{(i)} - \theta^{(i)}}{2} \right\} & \text{if $ T ({\theta^{(i)}},{\theta'^{(i)}}) \leq 0$} \\
\theta'^{(i)} & \text{otherwise.}
\end{cases}\end{aligned}$$ We then define $\Theta_{i+1} = [\theta^{(i+1)}, \theta'^{(i+1)}]$.
The role of conditions (\[EquationSurRi1\]) and (\[EquationSurRi2\]) is to ensure that $\Theta_{i+1}$ is big enough to contain one of the two confidence sets $$\Theta_i \setminus {\mathcal{B}}\left(\theta^{(i)}, {\kappa}^{1/2} h (f_{\theta^{(i)}}, f_{\theta'^{(i)}}) \right) \quad \text{and} \quad \Theta_i \setminus {\mathcal{B}}\left(\theta'^{(i)}, {\kappa}^{1/2} h (f_{\theta^{(i)}}, f_{\theta'^{(i)}}) \right).$$ The parameter $\kappa$ tunes the level of these confidence sets. The minimum in the definitions of $\theta^{(i+1)}$ and $\theta'^{(i+1)}$ guarantees the inclusion of $\Theta_{i+1}$ in $\Theta_i$.
We now consider a positive number $\eta$ and build these intervals until their lengths become smaller than $\eta$. The estimator we consider is then the center of the last interval built. The parameter $\eta$ measures the accuracy of the estimation and must be small enough to yield a suitable risk bound for our estimator.
The algorithm is the following.
\[1\]
1. Initialize $\theta \leftarrow m$, $\theta' \leftarrow M$.
2. While $\theta' - \theta \geq \eta$:
    - Compute $r = \min \left\{\bar{r} (\theta,\theta') , (\theta' - \theta)/2\right\}$ and $r' = \min \left\{\underline{r} (\theta,\theta'), (\theta' - \theta)/2 \right\}$.
    - Compute $\text{Test} = T(\theta,\theta')$.
    - If $\text{Test} \geq 0$, set $\theta \leftarrow \theta + r$.
    - If $\text{Test} \leq 0$, set $\theta' \leftarrow \theta' - r'$.
3. Return $\hat{\theta} = (\theta + \theta')/2$.
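A minimal Python sketch of this algorithm is given below (an illustration, not the reference implementation); the functions `test`, `r_bar` and `r_under` are placeholders to be supplied by the user, for instance through the choices of Section \[SectionDefinitionRminBarreDim1\].

```python
def estimate_theta(m, M, eta, test, r_bar, r_under):
    """Sketch of Algorithm 1: shrink [theta, theta_prime] using the sign of the test.

    test(theta, theta_prime)    -> value of T(theta, theta_prime)
    r_bar(theta, theta_prime)   -> step satisfying (EquationSurRi1)
    r_under(theta, theta_prime) -> step satisfying (EquationSurRi2)
    """
    theta, theta_prime = m, M
    while theta_prime - theta >= eta:
        r = min(r_bar(theta, theta_prime), (theta_prime - theta) / 2)
        r_prime = min(r_under(theta, theta_prime), (theta_prime - theta) / 2)
        t_value = test(theta, theta_prime)
        if t_value >= 0:      # theta_0 unlikely to lie near theta: move the left end up
            theta = theta + r
        if t_value <= 0:      # theta_0 unlikely to lie near theta_prime: move the right end down
            theta_prime = theta_prime - r_prime
    return (theta + theta_prime) / 2
```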
Risk bound. {#SectionPropEstimateurDim1}
-----------
The following theorem specifies the values of the parameters $t$, $\kappa$, $\eta$ that allow us to control the risk of the estimator $\hat{s} = f_{\hat{\theta}}$.
\[ThmPrincipalDim1\] Suppose that Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] holds. Set $$\begin{aligned}
\label{eqEsperanceTest}
\bar{\kappa} = \left(1 + \sqrt{\frac{2+\sqrt{2}}{2-\sqrt{2}}}\right)^{-2}.\end{aligned}$$ Assume that $t \in (0,1]$, $\kappa \in (0,\bar{\kappa})$, $\eta \in [\epsilon, (\overline{R} n)^{-1/\alpha} ]$ and that $\bar{r} (\theta,\theta')$, $\underline{r} (\theta,\theta')$ are such that (\[EquationSurRi1\]) and (\[EquationSurRi2\]) hold.
Then, for all $\xi > 0$, the estimator $\hat{\theta}$ built in Algorithm \[AlgorithmDim1\] satisfies $$\begin{aligned}
{\mathbb{P}\,}\left[ C h^2(s,f_{\hat{\theta}}) \geq h^2(s, {\mathscr{F}}) + \frac{1}{n} + \xi \right] \leq e^{- n \xi}
\end{aligned}$$ where $C > 0$ depends only on $\kappa, t, \alpha, \overline{R}/\underline{R}$.
A slightly sharper risk bound may be found in the proof of this theorem.
Choice of $\bar{r} (\theta,\theta')$ and $\underline{r} (\theta,\theta')$. {#SectionDefinitionRminBarreDim1}
--------------------------------------------------------------------------
These parameters are chosen by the statistician. They do not change the risk bound given by Theorem \[ThmPrincipalDim1\] (provided that (\[EquationSurRi1\]) and (\[EquationSurRi2\]) hold) but affect the speed of the procedure. The larger they are, the faster the procedure is. There are three different situations.
#### First case:
the Hellinger distance $ h (f_{\theta}, f_{\theta'}) $ can be made explicit. It is then in our interest to define them as the largest numbers for which (\[EquationSurRi1\]) and (\[EquationSurRi2\]) hold, that is $$\begin{aligned}
\, \bar{r} (\theta,\theta') &=& \sup \left\{r > 0, \; [m,M] \cap \left[ \theta, \theta+r\right] \subset {\mathcal{B}}\big(\theta, \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big) \right\} \label{EqDefinitionRDim1Optimal1}\\
\, \underline{r} (\theta,\theta') &=& \sup \left\{r > 0, \; [m,M] \cap \left[ \theta'-r, \theta'\right] \subset {\mathcal{B}}\big(\theta', \kappa^{1/2} h (f_{\theta}, f_{\theta'}) \big) \right\} \label{EqDefinitionRDim1Optimal2}. \end{aligned}$$
#### Second case:
the Hellinger distance $ h (f_{\theta}, f_{\theta'}) $ can be quickly evaluated numerically but the computation of (\[EqDefinitionRDim1Optimal1\]) and (\[EqDefinitionRDim1Optimal2\]) is difficult. We may then define them by $$\begin{aligned}
\label{EqDefintionRDim2}
\underline{r} (\theta,\theta') = \bar{r} (\theta,\theta') = \left( (\kappa / \overline{R}) h^2 (f_{\theta}, f_{\theta'}) \right)^{1/\alpha}.\end{aligned}$$ One can verify that (\[EquationSurRi1\]) and (\[EquationSurRi2\]) hold. When the model is regular enough and $\alpha = 2$, the value of $\overline{R}$ can be calculated by using Fisher information (see for instance Theorem 7.6 of Chapter 1 of [@Ibragimov1981]).
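As a rough guide (a classical local expansion, recalled here for convenience and not part of the statement above), the squared Hellinger distance behaves locally like the Fisher information $I(\theta)$, $$h^2 (f_{\theta}, f_{\theta + \delta}) = \frac{1}{8}\, I(\theta)\, \delta^2 + o(\delta^2),$$ which suggests taking $\overline{R}$ of the order of $\sup_{\theta \in \Theta} I(\theta)/8$ when $\alpha = 2$.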
#### Third case:
the computation of the Hellinger distance $ h (f_{\theta}, f_{\theta'}) $ involves the numerical computation of an integral and this computation is slow. An alternative definition is then $$\begin{aligned}
\label{EqDefintionRDim1}
\underline{r} (\theta,\theta') = \bar{r} (\theta,\theta') = (\kappa \underline{R}/ \overline{R})^{1/\alpha} \left(\theta' - \theta \right).\end{aligned}$$ As in the second point, one can check that (\[EquationSurRi1\]) and (\[EquationSurRi2\]) hold. Note however that the computation of the test also involves in most cases the numerical computation of an integral (see (\[eqFonctionnalBaraud\])). This third case is thus mainly devoted to models for which this numerical integration can be avoided, as for the translation models ${\mathscr{F}}= \left\{f (\cdot - \theta), \, \theta \in [m,M]\right\}$ with $f$ even, ${\mathbb{X}}= {\mathbb{R}}$ and $\mu$ the Lebesgue measure (the second term of (\[eqFonctionnalBaraud\]) is $0$ for these models).
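To see why the second term of (\[eqFonctionnalBaraud\]) vanishes for these translation models (a short verification): with $g = f(\cdot - \theta)$ and $g' = f(\cdot - \theta')$, the change of variables $x \mapsto \theta + \theta' - x$ together with the symmetry of $f$ exchanges $f(\cdot - \theta)$ and $f(\cdot - \theta')$, so that the integral is equal to its opposite and therefore $$\begin{aligned}
\int_{{\mathbb{R}}} \sqrt{f(x - \theta) + f(x - \theta')} \left(\sqrt{f(x - \theta')} - \sqrt{f(x - \theta)} \right) {\, \mathrm{d}}x = 0.\end{aligned}$$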
We can upper-bound the numerical complexity of the algorithm when $\bar{r} (\theta,\theta')$ and $\underline{r} (\theta,\theta')$ are large enough. Precisely, we prove the proposition below.
\[PropCalculComplexiteDimen1\] Suppose that the assumptions of Theorem \[ThmPrincipalDim1\] hold and that $\underline{r} (\theta,\theta')$, $\bar{r} (\theta,\theta')$ are larger than $$\begin{aligned}
\label{eqSurretR}
(\kappa \underline{R}/ \overline{R})^{1/\alpha} \left(\theta' - \theta\right).\end{aligned}$$
Then, the number of tests computed to build the estimator $\hat{\theta}$ is smaller than $$1 + \max \left\{ \left( {\overline{R}}/{ (\kappa \underline{R})} \right)^{1/\alpha}, {1}/{\log 2} \right\} \log \left( \frac{M-m}{\eta} \right).$$
It is worthwhile to notice that this upper-bound does not depend on $t$, that is, on the size of the net ${\mathscr{F}_{\text{dis}}}$, contrary to the preceding procedures based on tests. Obviously, the parameter $\eta$ is involved in this upper-bound, but the whole point is that it grows slowly with $1/\eta$, which allows us to use the procedure with $\eta$ very small.
Simulations for unidimensional models {#SectionSimuDim1}
=====================================
In what follows, we carry out a simulation study in order to evaluate more precisely the performance of our estimator. We simulate samples $(X_1,\dots,X_n)$ with density $s$ and use our procedure to estimate $s$.
Models.
-------
Our simulation study is based on the following models.
${\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [0.01, 100]\right\} $ where $$f_{\theta} (x) = \theta e^{- \theta x} {\mathbbm{1}}_{[0,+\infty)} (x) \quad \text{for all $x \in {\mathbb{R}}$.}$$
${\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [-100, 100]\right\} $ where $$f_{\theta} (x) = \frac{1}{\sqrt{2 \pi}} \exp \left(- \frac{(x-\theta)^2}{2} \right) \quad \text{ for all $x \in {\mathbb{R}}$.}$$
${\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [0.01, 100]\right\} $ where $$f_{\theta} (x) = \frac{x}{\theta^2} \exp \left( - \frac{x^2}{2 \theta^2} \right) {\mathbbm{1}}_{[0,+\infty)} (x) \quad \text{ for all $x \in {\mathbb{R}}$.}$$
${\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [-10, 10]\right\} $ where $$f_{\theta} (x) = \frac{1}{\pi \left( 1 + (x - \theta)^2 \right)} \quad \text{ for all $x \in {\mathbb{R}}$.}$$
${\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [0.01, 10]\right\} $ where $f_{\theta} = \theta^{-1} {\mathbbm{1}}_{[0,\theta]}$.
${\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [-10, 10]\right\} $ where $$f_{\theta} (x) = \frac{1}{(x-\theta +1)^2 }{\mathbbm{1}}_{[\theta,+\infty)} (x) \quad \text{for all $x \in {\mathbb{R}}$.}$$
${\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [-10, 10]\right\} $ where $f_{\theta} = {\mathbbm{1}}_{[\theta - 1/2, \theta + 1/2]}$.
${\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [-1, 1]\right\} $ where $$f_{\theta} (x) = \frac{1}{ 4 \sqrt{|x-\theta|}} {\mathbbm{1}}_{[-1,1]} (x-\theta) \quad \text{for all $x \in {\mathbb{R}}\setminus \{\theta\}$}$$ and $f_{\theta} (\theta) = 0$.
In these examples, we shall mainly compare our estimator to the maximum likelihood one. In examples 1,2,3,5 and 6, the m.l.e $\tilde{\theta}_{\text{mle}}$ can be made explicit and is thus easy to compute. Finding the m.l.e is more delicate for the problem of estimating the location parameter of a Cauchy distribution, since the likelihood function may be multimodal. We refer to [@Barnett1966] for a discussion of numerical methods devoted to the maximization of the likelihood. In our simulation study, we avoid the issues of the numerical algorithms by computing the likelihood at $10^6$ equally spaced points between $\max(-10, \hat{\theta}-1)$ and $\min(10, \hat{\theta}+1)$ (where $\hat{\theta}$ is our estimator) and at $10^{6}$ equally spaced points between $\max(-10, \tilde{\theta}_{\text{median}}-1)$ and $\min(10, \tilde{\theta}_{\text{median}}+1)$ where $\tilde{\theta}_{\text{median}}$ is the median. We then select among these points the one for which the likelihood is maximal. In Example 5, we shall also compare our estimator to the minimum variance unbiased estimator defined by $$\tilde{\theta}_{\text{mvub}} = \frac{n+1}{n} \max_{1 \leq i \leq n} X_i.$$ In Example 7, we shall compare our estimator to $$\tilde{\theta}' = \frac{1}{2} \left(\max_{1 \leq i \leq n} X_i + \min_{1 \leq i \leq n} X_i\right).$$ In the case of Example 8, the likelihood is infinite at each observation and the maximum likelihood method fails. We shall then compare our estimator to the median and the empirical mean but also to the maximum spacing product estimator $\tilde{\theta}_{\text{mspe}}$ (m.s.p.e for short). This estimator was introduced by [@Cheng1983; @Ranneby1984] to deal with statistical models for which the likelihood is unbounded. The m.s.p.e is known to possess nice theoretical properties such as consistency and asymptotic efficiency and precise results on the performance of this estimator may be found in [@Cheng1983; @Ranneby1984; @Ekstrom1998; @Shao1999; @Ghost2001; @Anatolyev2005] among other references. This last method involves the problem of finding a global maximum of the maximum product function on $\Theta = [-1,1]$ (which may be multimodal). We compute it by considering $2 \times 10^5$ equally spaced points between $-1$ and $1$ and by calculating for each of these points the function to maximize. We then select the point for which the function is maximal. Using more points to compute the m.s.p.e would give more accurate results, especially when $n$ is large, but we are limited by the computer.
Implementation of the procedure.
--------------------------------
Our procedure involves several parameters that must be chosen by the statistician.
#### **Choice of** $t$.
This parameter tunes the thinness of the net ${\mathscr{F}_{\text{dis}}}$. When the model is regular enough and contains $s$, a good choice of $t$ seems to be $t = 0$ (that is $\Theta_{\text{dis}} = \Theta$, ${\mathscr{F}_{\text{dis}}}= {\mathscr{F}}$ and $T ({\theta}, {\theta'}) = \overline{T}(f_{\theta}, f_{\theta'})$), since then the simulations suggest that our estimator is very close to the m.l.e when the model is true (with large probability). In the simulations, we take $t = 0$.
#### **Choice of** $\eta$.
We take $\eta$ small: $\eta = (M-m) / 10^{8}$.
#### **Choice of** $\kappa$.
This constant influences the level of the confidence sets and thus the time needed to build the estimator: the larger $\kappa$ is, the faster the procedure is. We arbitrarily take $\kappa = \bar{\kappa}/2$.
#### **Choice of** $\underline{r} (\theta,\theta')$ **and** $\bar{r} (\theta,\theta')$.
In examples 1,2,3,5, and 7, we define them by (\[EqDefinitionRDim1Optimal1\]) and (\[EqDefinitionRDim1Optimal2\]). In examples 4 and 6, we define them by (\[EqDefintionRDim2\]): for Example 4, $\alpha = 2$ and $\overline{R} = 1/16$, while for Example 6, $\alpha = 1$ and $\overline{R} = 1/2$. In the case of Example 8, we use (\[EqDefintionRDim1\]) with $\alpha = 1/2$, $\underline{R} = 0.17$ and $\overline{R} = 1/\sqrt{2}$.
Simulations when $s \in {\mathscr{F}}$. {#SectionDim1VraiS}
---------------------------------------
We begin by simulating $N$ samples $(X_1,\dots,X_n)$ when the true density $s$ belongs to the model ${\mathscr{F}}$. They are generated according to the density $s = f_1$ in examples $1,3,5$ and according to $s = f_0$ in examples $2,4,6,7,8$.
We evaluate the performance of an estimator $\tilde{\theta}$ by computing it on each of the $N$ samples. Let $\tilde{\theta}^{(i)}$ be the value of this estimator corresponding to the $i^{\text{\tiny th}}$ sample and let $$\begin{aligned}
\widehat{R}_N (\tilde{\theta}) = \frac{1}{N} \sum_{i=1}^N h^2 (s, f_{\tilde{\theta}^{(i)}}) \quad \text{and} \quad \widehat{\text{std}}_N (\tilde{\theta})= \sqrt{\frac{1}{N-1} \sum_{i=1}^N \left(h^2 (s, f_{\tilde{\theta}^{(i)}}) - \widehat{R}_N (\tilde{\theta}) \right)^2}.\end{aligned}$$ The risk ${\mathbb{E}}\left[ h^2 (s, f_{\tilde{\theta}}) \right]$ of the estimator $\tilde{\theta}$ is thus estimated by $\widehat{R}_N(\tilde{\theta})$. More precisely, if $Q_c$ denotes the $(1+c)/2$ quantile of a standard Gaussian distribution, $$\left[\widehat{R}_N(\tilde{\theta}) - Q_{c} \frac{\widehat{\text{std}}_N(\tilde{\theta})}{\sqrt{N}}, \widehat{R}_N(\tilde{\theta}) + Q_{c} \frac{\widehat{\text{std}}_N(\tilde{\theta})}{\sqrt{N}} \right]$$ is a confidence interval for ${\mathbb{E}}\left[ h^2 (s, f_{\tilde{\theta}}) \right]$ with asymptotic confidence level $c$. We also introduce $$\mathcal{\widehat{R}}_{N,\text{rel}} (\tilde{\theta}) = \frac{\widehat{R}_N (\hat{\theta})}{\widehat{R}_N (\tilde{\theta})}- 1$$ in order to make the comparison between our estimator $\hat{\theta}$ and the estimator $\tilde{\theta}$ easier. When $\mathcal{\widehat{R}}_{N,\text{rel}} (\tilde{\theta})$ is negative our estimator is better than $\tilde{\theta}$ whereas if $\mathcal{\widehat{R}}_{N,\text{rel}} (\tilde{\theta})$ is positive, our estimator is worse than $\tilde{\theta}$. More precisely, if $\mathcal{\widehat{R}}_{N,\text{rel}} (\tilde{\theta}) = \alpha$, the risk of our estimator corresponds to the one of $\tilde{\theta}$ reduced by $100 |\alpha| \%$ when $\alpha < 0$ and increased by $100 \alpha \%$ when $\alpha > 0$.
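For illustration, here is a minimal sketch (not taken from the paper) of this Monte Carlo evaluation; `estimator` and `hellinger_sq_to_truth` are placeholder functions that must be supplied by the user.

```python
import numpy as np
from scipy.stats import norm

def monte_carlo_risk(samples, estimator, hellinger_sq_to_truth, level=0.95):
    """Estimate E[h^2(s, f_theta_tilde)] over N simulated samples.

    samples               : list of N simulated samples (X_1, ..., X_n)
    estimator             : function mapping a sample to a parameter estimate
    hellinger_sq_to_truth : function theta -> h^2(s, f_theta) for the known truth s
    level                 : asymptotic confidence level c of the interval
    """
    risks = np.array([hellinger_sq_to_truth(estimator(x)) for x in samples])
    N = len(risks)
    r_hat = risks.mean()                 # \widehat{R}_N
    std_hat = risks.std(ddof=1)          # \widehat{std}_N
    q = norm.ppf((1 + level) / 2)        # (1+c)/2 quantile of the standard Gaussian
    half_width = q * std_hat / np.sqrt(N)
    return r_hat, std_hat, (r_hat - half_width, r_hat + half_width)
```

The relative risk $\mathcal{\widehat{R}}_{N,\text{rel}} (\tilde{\theta})$ is then obtained by dividing the estimated risk of $\hat{\theta}$ by that of $\tilde{\theta}$ and subtracting one.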
The results are gathered below.
$n = 10$ $n = 25$ $n = 50$ $n = 75$ $n = 100$
-------------- ------------------------------------------------------------------------------ -------------------- -------------------- --------------------- --------------------- ---------------------
Example 1 $\widehat{R}_{10^6}(\hat{\theta})$ 0.0130 0.0051 0.0025 0.0017 0.0013
\* $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ 0.0129 0.0051 0.0025 0.0017 0.0013
\* $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ $6 \cdot 10^{-4}$ $10^{-5}$ $7 \cdot 10^{-7}$ $-8 \cdot 10^{-9}$ $2 \cdot 10^{-9}$
\* $ \widehat{\text{std}}_{10^6} (\hat{\theta})$ 0.0192 0.0073 0.0036 0.0024 0.0018
\* $ \widehat{\text{std}}_{10^6} (\tilde{\theta}_{\text{mle}})$ 0.0192 0.0073 0.0036 0.0024 0.0018
\* Example 2 $\widehat{R}_{10^6}(\hat{\theta})$ 0.0123 0.0050 0.0025 0.0017 0.0012
\* $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ 0.0123 0.0050 0.0025 0.0017 0.0012
\* $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ $5 \cdot 10^{-10}$ $9 \cdot 10^{-10}$ $- 2 \cdot 10^{-9}$ $- 2 \cdot 10^{-9}$ $- 3 \cdot 10^{-9}$
\* $ \widehat{\text{std}}_{10^6} (\hat{\theta})$ 0.0170 0.0070 0.0035 0.0023 0.0018
\* $ \widehat{\text{std}}_{10^6} (\tilde{\theta}_{\text{mle}})$ 0.0170 0.0070 0.0035 0.0023 0.0018
\* Example 3 $\widehat{R}_{10^6}(\hat{\theta})$ 0.0130 0.0051 0.0025 0.0017 0.0013
\* $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ 0.0129 0.0051 0.0025 0.0017 0.0013
\* $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ $6 \cdot 10^{-4}$ $2 \cdot 10^{-5}$ $10^{-6}$ $-10^{-7}$ $-4 \cdot 10^{-9}$
\* $ \widehat{\text{std}}_{10^6} (\hat{\theta})$ 0.0192 0.0073 0.0036 0.0024 0.0018
\* $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mle}})$ 0.0192 0.0073 0.0036 0.0024 0.0018
\* Example 4 $\widehat{R}_{10^6}(\hat{\theta})$ 0.0152 0.0054 0.0026 0.0017 0.0013
\* $\widehat{R}_{10^4}(\tilde{\theta}_{\text{mle}})$ 0.0149 0.0054 0.0026 0.0017 0.0012
\* $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\tilde{\theta}_{\text{mle}})$ -0.001 $-2 \cdot 10^{-4}$ $- 10^{-8}$ $-3 \cdot 10^{-8}$ $9 \cdot 10^{-8}$
\* $ \widehat{\text{std}}_{10^6} (\hat{\theta})$ 0.0267 0.0083 0.0038 0.0025 0.0018
\* $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mle}})$ 0.0255 0.0083 0.0039 0.0025 0.0018
\* Example 5 $\widehat{R}_{10^6}(\hat{\theta})$ 0.0468 0.0192 0.0096 0.0064 0.0048
\* $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ 0.0476 0.0196 0.0099 0.0066 0.0050
\* $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mvub}})$ 0.0350 0.0144 0.0073 0.0049 0.0037
\* $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ -0.0160 -0.0202 -0.0287 -0.0271 -0.0336
\* $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mvub}})$ 0.3390 0.3329 0.3215 0.3243 0.3148
\* $ \widehat{\text{std}}_{10^6}(\hat{\theta})$ 0.0529 0.0223 0.0112 0.0075 0.0056
\* $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mle}})$ 0.0453 0.0192 0.0098 0.0066 0.0049
\* $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mvub}})$ 0.0316 0.0132 0.0067 0.0045 0.0034
Example 6 $\widehat{R}_{10^6}(\hat{\theta})$ 0.0504 0.0197 0.0098 0.0065 0.0049
\* $\widehat{R}_{10^6}(\tilde{\theta}_{\text{mle}})$ 0.0483 0.0197 0.0099 0.0066 0.0050
\* $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}_{\text{mle}})$ 0.0436 -0.0019 -0.0180 -0.0242 -0.0263
\* $ \widehat{\text{std}}_{10^6}(\hat{\theta})$ 0.0597 0.0233 0.0115 0.0076 0.0057
\* $ \widehat{\text{std}}_{10^6}(\tilde{\theta}_{\text{mle}})$ 0.0467 0.0195 0.0099 0.0066 0.0050
Example 7 $\widehat{R}_{10^6}(\hat{\theta})$ 0.0455 0.0193 0.0098 0.0066 0.0050
\* $\widehat{R}_{10^6}(\tilde{\theta}')$ 0.0454 0.0192 0.0098 0.0066 0.0050
\* $\mathcal{\widehat{R}}_{10^6,\text{rel}} (\tilde{\theta}')$ 0.0029 0.0029 0.0031 0.0028 0.0030
\* $ \widehat{\text{std}}_{10^6}(\hat{\theta})$ 0.0416 0.0186 0.0096 0.0065 0.0049
\* $ \widehat{\text{std}}_{10^6}(\tilde{\theta}')$ 0.0415 0.0185 0.0096 0.0065 0.0049
Example 8 $\widehat{R}_{10^4}(\hat{\theta})$ 0.050 0.022 0.012 0.008 0.006
\* $\widehat{R}_{10^4}(\tilde{\theta}_{\text{mean}})$ 0.084 0.061 0.049 0.043 0.039
\* $\widehat{R}_{10^4}(\tilde{\theta}_{\text{median}})$ 0.066 0.036 0.025 0.019 0.017
\* $\widehat{R}_{10^4}(\tilde{\theta}_{\text{mspe}})$ 0.050 0.022 0.012 0.008 0.006
\* $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\tilde{\theta}_{\text{mean}})$ -0.40 -0.64 -0.76 -0.82 -0.85
\* $\mathcal{\widehat{R}}_{10^4, \text{rel}} (\tilde{\theta}_{\text{median}})$ -0.25 -0.39 -0.54 -0.59 -0.65
\* $ \widehat{\text{std}}_{10^4}(\hat{\theta})$ 0.054 0.025 0.013 0.009 0.007
\* $ \widehat{\text{std}}_{10^4}(\tilde{\theta}_{\text{mean}})$ 0.045 0.032 0.025 0.022 0.020
$ \widehat{\text{std}}_{10^4}(\tilde{\theta}_{\text{median}})$ 0.052 0.032 0.020 0.016 0.014
$ \widehat{\text{std}}_{10^4}(\tilde{\theta}_{\text{mspe}})$ 0.051 0.025 0.014 0.009 0.007
In the first four examples, the risk of our estimator is very close to that of the maximum likelihood estimator, whatever the value of $n$. In Example 5, our estimator slightly improves on the maximum likelihood estimator but is worse than the minimum variance unbiased estimator. In Example 6, the risk of our estimator is larger than the one of the m.l.e when $n = 10$ but is slightly smaller as soon as $n$ becomes larger than $25$. In Example 7, the risk of our estimator is $0.3 \%$ larger than the one of $\tilde{\theta}'$. In Example 8, our estimator significantly improves on the empirical mean and the median. Its risk is comparable to the one of the m.s.p.e (we omit in this example the value of $\mathcal{\widehat{R}}_{10^4, \text{rel}} (\tilde{\theta}_{\text{mspe}})$ because it is influenced by the procedure we have used to build the m.s.p.e).
When the model is regular enough, these simulations show that our estimation strategy provides an estimator whose risk is almost equal to the one of the maximum likelihood estimator. Moreover, our estimator seems to work rather well in a model where the m.l.e does not exist (case of Example 8). Note that, contrary to the maximum likelihood method, our procedure does not involve the search for a global maximum.
We now bring to light the connection between our estimator and the m.l.e when the model is regular enough (that is, in the first four examples). For $c \in \{0.99, 0.999, 1\}$, let $q_{c}$ be the $c$-quantile of the random variable $ \big|\hat{\theta} - \tilde{\theta}_{\text{mle}} \big|$, and let $\hat{q}_c$ be its empirical version based on $N$ samples ($N = 10^{6}$ in examples 1,2,3 and $N = 10^{4}$ in Example 4). The results are the following.
$n = 10$ $n = 25$ $n = 50$ $n = 75$ $n = 100$
----------- -------------------- ------------------- ------------------- ------------------- ------------------- -------------------
Example 1 $\hat{q}_{0.99}$ $10^{-7}$ $10^{-7}$ $10^{-7}$ $10^{-7}$ $10^{-7}$
$\hat{q}_{0.999} $ $0.07$ $10^{-7}$ $10^{-7}$ $10^{-7}$ $10^{-7}$
$\hat{q}_{1} $ $1.9$ $0.3$ $0.06$ $0.005$ $10^{-7}$
Example 2 $\hat{q}_{0.99}$ $2 \cdot 10^{-7}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$
$\hat{q}_{0.999} $ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$
$\hat{q}_{1}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$ $3 \cdot 10^{-7}$
Example 3 $\hat{q}_{0.99}$ $10^{-7}$ $10^{-7}$ $10^{-7}$ $10^{-7}$ $10^{-7}$
$\hat{q}_{0.999} $ $0.03$ $10^{-7}$ $10^{-7} $ $10^{-7}$ $10^{-7}$
$\hat{q}_{1} $ $0.38$ $0.12$ $0.01$ 0.007 $10^{-7}$
Example 4 $\hat{q}_{0.99}$ $10^{-6}$ $10^{-6}$ $10^{-6}$ $10^{-6}$ $10^{-6}$
$\hat{q}_{0.999} $ $3\cdot10^{-6}$ $10^{-6}$ $10^{-6}$ $10^{-6}$ $10^{-6}$
$\hat{q}_{1}$ $1.5$ $0.1$ $10^{-6}$ $10^{-6}$ $10^{-6}$
This array shows that with large probability our estimator is very close to the m.l.e. This probability is already quite high for small values of $n$ and even higher for larger values of $n$. This explains why the risks of these two estimators are very close in the first four examples. Note that the value of $\eta$ prevents the empirical quantile from being lower than something of order $10^{-7}$, depending on the example (in Example 4, the value of $10^{-6}$ is due to the way we have built the m.l.e).
Speed of the procedure.
-----------------------
For the sake of completeness, we specify below the number of tests that have been calculated in the preceding examples.
$n = 10$ $n = 25$ $n = 50$ $n = 75$ $n = 100$
----------- ------------- ------------ -------------- ------------- --------------
Example 1 77 (1.4) 77 (0.9) 77 (0.7) 77 (0.6) 77 (0.5)
Example 2 293 (1) 294 (1) 294 (0.9) 295 (0.9) 295 (0.9)
Example 3 89 (0.75) 90 (0.5) 90 (0.5) 90 (0.5) 90 (0.5)
Example 4 100 (3.5) 100 (0.5) 100 (0.001) 100 (0) 100 (0)
Example 5 460 (3) 461 (1) 462 (0.6) 462 (0.4) 462 (0.3)
Example 6 687 (0) 687 (0) 687 (0) 687 (0) 687 (0)
Example 7 412 (8) 419 (8) 425 (8) 429 (8) 432 (8)
Example 8 173209 (10) 173212 (0) 173212 (0.9) 173206 (12) 173212 (0.3)
Simulations when $s \not \in {\mathscr{F}}$. {#SectionRobustessDim1}
--------------------------------------------
In Section \[SectionDim1VraiS\], we were in the favourable situation where the true density $s$ belonged to the model ${\mathscr{F}}$, which may not hold true in practice. We now work with random variables $X_1,\dots,X_n$ simulated according to a density $s \not \in {\mathscr{F}}$ to illustrate the robustness properties of our estimator.
We begin with an example proposed in [@BirgeTEstimateurs]. We generate $X_1,\dots,X_n$ according to the density $$s(x) = 10 \left[ (1-2n^{-1}) {\mathbbm{1}}_{[0,1/10]} (x) + 2n^{-1} {\mathbbm{1}}_{[9/10,1]}(x) \right] \quad \text{for all $x \in {\mathbb{R}}$}$$ and compare our estimator to the maximum likelihood estimator for the uniform model $$\begin{aligned}
\label{FModelUniformeSimu}
{\mathscr{F}}= \left\{f_{\theta}, \, \theta \in [0.01, 10]\right\} \quad \text{where} \quad f_{\theta} = \theta^{-1} {\mathbbm{1}}_{[0, \theta]}.\end{aligned}$$ It is worthwhile to notice that $h^2(s,{\mathscr{F}}) = \mathcal{O} (n^{-1})$, which means that $s$ is close to ${\mathscr{F}}$ when $n$ is large, and that our estimator still satisfies ${\mathbb{E}}[h^2 (s,f_{\hat{\theta}})] = \mathcal{O} (n^{-1})$. Contrary to our estimator, the m.l.e is destabilized by the outliers, as shown in the array below.
$n = 10$ $n = 25$ $n = 50$ $n = 75$ $n = 100$
------------------------------------------------ ---------- ---------- ---------- ---------- ----------- --
$\widehat{R}_N (\hat{\theta}) $ 0.20 0.06 0.03 0.02 0.015
$\widehat{R}_N (\tilde{\theta}_{\text{mle}}) $ 0.57 0.56 0.56 0.56 0.57
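To make the instability of the m.l.e more concrete, one may redo this small experiment directly. The sketch below is ours and purely illustrative (it is not the code used to produce the table): it draws samples from $s$, computes the m.l.e of the uniform model (\[FModelUniformeSimu\]), namely $\max_i X_i$ clipped to $[0.01,10]$, and evaluates $h^2(s,f_{\theta})$, which is available in closed form for this particular $s$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_s(n):
    # Draw n observations from s = 10[(1 - 2/n) 1_[0,1/10] + (2/n) 1_[9/10,1]].
    outlier = rng.random(n) < 2 / n
    return np.where(outlier, 0.9 + 0.1 * rng.random(n), 0.1 * rng.random(n))

def hellinger2(theta, n):
    # Closed-form h^2(s, f_theta) with f_theta = theta^{-1} 1_[0,theta] and s as above.
    bulk = min(theta, 0.1) * np.sqrt(10 * (1 - 2 / n) / theta)
    tail = max(0.0, min(theta, 1.0) - 0.9) * np.sqrt(20 / (n * theta))
    return 1.0 - (bulk + tail)

n, N = 100, 1000
mle_risk = np.mean([hellinger2(float(np.clip(sample_s(n).max(), 0.01, 10)), n)
                    for _ in range(N)])
print(mle_risk)            # comparable to the values around 0.56 reported above
print(hellinger2(0.1, n))  # about 1/n: the model itself is this close to s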
We now propose a second example based on the mixture of two uniform laws. We use the same statistical model ${\mathscr{F}}$ but we modify the distribution of the observations. We take $p \in (0,1)$ and define the true underlying density by $$s_{p} (x) = (1-p) f_1(x) + p f_2(x) \quad \text{for all $x \in {\mathbb{R}}$.}$$ Set $p_0 = 1-1/\sqrt{2}$. One can check that $$\begin{aligned}
H^2(s_p,{\mathscr{F}}) &=&
\begin{cases}
H^2(s_p,f_1) & \text{if $p \leq p_0$} \\
H^2(s_p,f_2) & \text{if $p > p_0$,}
\end{cases} \\
&=& \begin{cases}
1 - \sqrt{2-p}/\sqrt{2} & \text{if $p \leq p_0$} \\
1 - (\sqrt{2-p}+\sqrt{p})/{2} & \text{if $p > p_0$,}
\end{cases} \end{aligned}$$ which means that the element of ${\mathscr{F}}$ closest to $s_p$ is $f_1$ when $p < p_0$ and $f_2$ when $p > p_0$.
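Indeed, recalling from (\[FModelUniformeSimu\]) that $f_1 = {\mathbbm{1}}_{[0,1]}$ and $f_2 = \tfrac{1}{2}\, {\mathbbm{1}}_{[0,2]}$, the density $s_p$ equals $1-p/2$ on $[0,1]$ and $p/2$ on $(1,2]$, so that $$H^2(s_p,f_1) = 1 - \int_0^1 \sqrt{1-p/2}\, dx = 1 - \frac{\sqrt{2-p}}{\sqrt{2}} \quad \text{and} \quad H^2(s_p,f_2) = 1 - \int_0^1 \sqrt{\frac{1-p/2}{2}}\, dx - \int_1^2 \sqrt{\frac{p}{4}}\, dx = 1 - \frac{\sqrt{2-p}+\sqrt{p}}{2}.$$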
We now compare our estimator $\hat{\theta}$ to the m.l.e $\tilde{\theta}_{\text{mle}}$. For many values of $p$, we simulate $N$ samples of $n$ random variables with density $s_p$ and investigate the behaviour of the estimator $\tilde{\theta} \in \{\hat{\theta}, \tilde{\theta}_{\text{mle}}\}$ by computing the quantity $$\begin{aligned}
\widehat{R}_{p,n,N} (\tilde{\theta}) = \frac{1}{N} \sum_{i=1}^N h^2 (s_p, f_{\tilde{\theta}^{(p,i)}}) \end{aligned}$$ where $\tilde{\theta}^{(p,i)}$ is the value of the estimator $\tilde{\theta}$ corresponding to the $i^{\text{\tiny th}}$ sample whose density is $s_p$. We draw below the functions $p \mapsto \widehat{R}_{p,n,N} (\hat{\theta}) $, $p \mapsto \widehat{R}_{p,n,N} (\tilde{\theta}_{\text{mle}}) $ and $p \mapsto H^2(s_p, {\mathscr{F}})$ for $n = 100$ and then for $n = 10^4$.
![Red: $p \mapsto H^2(s_p, {\mathscr{F}})$. Blue: $p \mapsto \widehat{R}_{p,n,5000} (\hat{\theta}) $. Green: $p \mapsto \widehat{R}_{p,n,5000} (\tilde{\theta}_{\text{mle}}) $.](FonctionRobustessen100Arxiv.eps "fig:") ![Red: $p \mapsto H^2(s_p, {\mathscr{F}})$. Blue: $p \mapsto \widehat{R}_{p,n,5000} (\hat{\theta}) $. Green: $p \mapsto \widehat{R}_{p,n,5000} (\tilde{\theta}_{\text{mle}}) $.](FonctionRobustessen10000Arxiv.eps "fig:")
We observe that the m.l.e is rather good when $p \geq p_0$ and very poor when $p < p_0$. This can be explained by the fact that the m.l.e $\tilde{\theta}_{\text{mle}}$ is close to $2$ as soon as the number $n$ of observations is large enough. The shape of the function $p \mapsto \widehat{R}_{p,n,5000} (\hat{\theta}) $ is much more satisfactory since it is closer to that of $p \mapsto H^2(s_p, {\mathscr{F}})$. The second figure (corresponding to $n = 10^4$) suggests that $ \widehat{R}_{p,n,N} (\hat{\theta}) $ converges to $H^2(s_p, {\mathscr{F}})$ as $n$ and $N$ go to infinity, except on a small neighbourhood to the left of $p_0$.
Models parametrized by a multidimensional parameter {#SectionEstimationDimQuel}
===================================================
Assumption.
-----------
We now deal with models ${\mathscr{F}}= \left\{f_{{\boldsymbol{\theta}}}, \; {\boldsymbol{\theta}}\in \Theta \right\}$ indexed by a rectangle $$\Theta = \prod_{j=1}^d [m_j, M_j]$$ of ${\mathbb{R}}^d$ with $d \geq 2$. Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] is assumed to hold throughout this section.
Definition of the test. {#SectionDefTestDimd}
-----------------------
As previously, our estimation strategy is based on the existence for all ${\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in \Theta$ of a measurable function $T({\boldsymbol{\theta}}, {\boldsymbol{\theta}}')$ of the observations possessing suitable statistical properties. The definition of this functional is the natural extension of the one we have proposed in Section \[SectionDefTest\].
Let, for $j \in \{1,\dots,d\}$, $t_j \in (0,d^{1/\alpha_j}]$ and $\epsilon_j = t_j (\bar{R}_j n)^{-1/\alpha_j}$. We then define the finite sets $$\begin{aligned}
\Theta_{\text{dis}} &=& \left\{\left(m_1+ k_1 \epsilon_1, \dots, m_d+ k_d \epsilon_d\right), \; \forall j \in \{1,\dots, d\}, \; k_j \in {\mathbb{N}}, \; k_j \leq (M_j-m_j) \epsilon_j^{-1} \right\}\\
{\mathscr{F}_{\text{dis}}}&=& \{f_{{\boldsymbol{\theta}}}, \, {\boldsymbol{\theta}}\in \Theta_{\text{dis}} \}.\end{aligned}$$ Let $\pi$ be the map defined on $\prod_{j=1}^d [m_j, M_j]$ by $$\pi({\boldsymbol{x}}) = \big(m_1 + \lfloor (x_1 - m_1) / \epsilon_1\rfloor \epsilon_1, \dots, m_d + \lfloor (x_d - m_d) / \epsilon_d\rfloor \epsilon_d \big) \quad \text{for all $\boldsymbol{x} = (x_1,\dots,x_d) \in \prod_{j=1}^d [m_j, M_j]$}$$ where $\lfloor \cdot \rfloor$ is the integer part. We then define $T({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ for all ${\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in \Theta$ by $$\begin{aligned}
\label{eqDefinitionTDimD}
{T} ({{\boldsymbol{\theta}}},{{\boldsymbol{\theta}}'}) = \bar{T}(f_{\pi({\boldsymbol{\theta}})},f_{\pi({\boldsymbol{\theta}}')}) \quad \text{for all ${\boldsymbol{\theta}},{\boldsymbol{\theta}}' \in \Theta = \prod_{j=1}^d [m_j, M_j]$}\end{aligned}$$ where $\bar{T}$ is the functional given by (\[eqFonctionnalBaraud\]).
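For concreteness, the discretisation and the projection $\pi$ take only a few lines to implement. The sketch below is ours and purely illustrative: `T_bar` stands for the functional $\bar{T}$ of (\[eqFonctionnalBaraud\]) and `f` for the map ${\boldsymbol{\theta}}\mapsto f_{{\boldsymbol{\theta}}}$; both are assumed to be supplied by the user, and the numerical values of `t`, `R_bar` and `alpha` in the example are hypothetical.

```python
import numpy as np

def make_projection(m, eps):
    # pi(theta) = (m_j + floor((theta_j - m_j)/eps_j) * eps_j)_{1<=j<=d}
    m, eps = np.asarray(m, float), np.asarray(eps, float)
    def pi(theta):
        theta = np.asarray(theta, float)
        return m + np.floor((theta - m) / eps) * eps
    return pi

def make_test(T_bar, f, pi):
    # T(theta, theta') = T_bar(f_{pi(theta)}, f_{pi(theta')})
    def T(theta, theta_prime):
        return T_bar(f(pi(theta)), f(pi(theta_prime)))
    return T

# Grid widths eps_j = t_j (R_bar_j n)^{-1/alpha_j} for hypothetical t, R_bar, alpha:
n = 100
t, R_bar, alpha = np.array([1.0, 1.0]), np.array([2.0, 2.0]), np.array([2.0, 2.0])
eps = t * (R_bar * n) ** (-1.0 / alpha)
pi = make_projection(m=[0.0, 0.0], eps=eps)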
Basic ideas. {#SectionHeuristiqueDim2}
------------
For the sake of simplicity, we first consider the case $d = 2$. We shall build a decreasing sequence $(\Theta_i)_i$ of rectangles by induction. When there exists ${\boldsymbol{\theta}}_0 \in \Theta$ such that $s = f_{{\boldsymbol{\theta}}_0}$, these rectangles $\Theta_i$ can be interpreted as confidence sets for ${\boldsymbol{\theta}}_0$. Their construction is strongly inspired by the heuristics of Section \[SectionHeuristique\].
We set $ \Theta_1 = \Theta$. Assume that $\Theta_i= [a_1,b_1] \times [a_2,b_2]$ and let us explain how we can build a confidence set $\Theta_{i+1} = [a_1,b_1] \times [a_2',b_2']$ with $a_2',b_2'$ satisfying $b_2'-a_2' < b_2-a_2$.
We begin by building by induction two preliminary finite sequences $({\boldsymbol{\theta}}^{(j)})_{1 \leq j \leq N}$, $({\boldsymbol{\theta}}'^{(j)})_{1 \leq j \leq N}$ of elements of ${\mathbb{R}}^2$. Let ${\boldsymbol{\theta}}^{(1)} = (a_1,a_2)$ be the bottom left-hand corner of $\Theta_i$ and ${\boldsymbol{\theta}}'^{(1)} = (a_1,b_2)$ be the top left-hand corner of $\Theta_i$. Let $\bar{r}_1 ({\boldsymbol{\theta}}^{(1)},{\boldsymbol{\theta}}'^{(1)})$, $\bar{r}_2 ({\boldsymbol{\theta}}^{(1)},{\boldsymbol{\theta}}'^{(1)})$, $\bar{r}_1 ({\boldsymbol{\theta}}'^{(1)},{\boldsymbol{\theta}}^{(1)})$, $\underline{r}_2 ({\boldsymbol{\theta}}'^{(1)},{\boldsymbol{\theta}}^{(1)})$ be positive numbers such that the rectangles $$\begin{aligned}
\mathcal{R}_1 &=& [a_1, a_1+\bar{r}_1 ({\boldsymbol{\theta}}^{(1)},{\boldsymbol{\theta}}'^{(1)})] \times [a_2, a_2+ \bar{r}_2 ({\boldsymbol{\theta}}^{(1)},{\boldsymbol{\theta}}'^{(1)})]\\
\mathcal{R}_1' &=& [a_1, a_1+\bar{r}_1 ({\boldsymbol{\theta}}'^{(1)},{\boldsymbol{\theta}}^{(1)})] \times [b_2-\underline{r}_2 ({\boldsymbol{\theta}}'^{(1)},{\boldsymbol{\theta}}^{(1)}), b_2]\end{aligned}$$ are respectively included in the Hellinger balls $${\mathcal{B}}\big({\boldsymbol{\theta}}^{(1)}, \kappa^{1/2} h (f_{{\boldsymbol{\theta}}^{(1)}}, f_{{\boldsymbol{\theta}}'^{(1)}}) \big) \quad \text{and} \quad {\mathcal{B}}\big({\boldsymbol{\theta}}'^{(1)}, \kappa^{1/2} h (f_{{\boldsymbol{\theta}}^{(1)}}, f_{{\boldsymbol{\theta}}'^{(1)}}) \big).$$ See (\[eqDefinitionBouleHel\]) for the precise definition of these balls.
We define ${\boldsymbol{\theta}}^{(2)},{\boldsymbol{\theta}}'^{(2)} \in {\mathbb{R}}^2$ as follows $$\begin{aligned}
{\boldsymbol{\theta}}^{(2)} &=&
\begin{cases}
{\boldsymbol{\theta}}^{(1)} + (\bar{r}_1 ({\boldsymbol{\theta}}^{(1)},{\boldsymbol{\theta}}'^{(1)}), 0) & \text{if $T ({{\boldsymbol{\theta}}^{(1)}},{{\boldsymbol{\theta}}'^{(1)}}) \geq 0$} \\
{\boldsymbol{\theta}}^{(1)} & \text{otherwise}
\end{cases} \\
{\boldsymbol{\theta}}'^{(2)} &=&
\begin{cases}
{\boldsymbol{\theta}}'^{(1)} + (\bar{r}_1 ({\boldsymbol{\theta}}'^{(1)},{\boldsymbol{\theta}}^{(1)}), 0) & \text{if $T ({{\boldsymbol{\theta}}^{(1)}},{{\boldsymbol{\theta}}'^{(1)}}) \leq 0$} \\
{\boldsymbol{\theta}}'^{(1)} & \text{otherwise.}
\end{cases}\end{aligned}$$ Here is an illustration.
[Figure: the rectangle $\Theta_i$ with the small rectangle $\mathcal{R}_1$ in its bottom left-hand corner; ${\boldsymbol{\theta}}^{(1)}$ sits at the bottom left-hand corner, ${\boldsymbol{\theta}}^{(2)}$ on the bottom edge at the right-hand side of $\mathcal{R}_1$, and ${\boldsymbol{\theta}}'^{(1)} = {\boldsymbol{\theta}}'^{(2)}$ at the top left-hand corner.]
It is worthwhile to notice that in this figure, the heuristics of Section \[SectionHeuristique\] suggest that ${\boldsymbol{\theta}}_0$ belongs to $\Theta_i \setminus \mathcal{R}_1$.
Now, if the first component of ${\boldsymbol{\theta}}^{(2)} = (\theta^{(2)}_1, \theta^{(2)}_2)$ is larger than or equal to $b_1$, that is if $\theta^{(2)}_1 \geq b_1$, we set $N = 1$ and stop the construction of the vectors ${\boldsymbol{\theta}}^{(i)}$, ${\boldsymbol{\theta}}'^{(i)}$. Similarly, if $\theta'^{(2)}_1 \geq b_1$, we set $N = 1$ and stop the construction of the ${\boldsymbol{\theta}}^{(i)}$, ${\boldsymbol{\theta}}'^{(i)}$.
If $\theta^{(2)}_1 < b_1$ and $\theta'^{(2)}_1 < b_1$, we consider positive numbers $\bar{r}_1 ({\boldsymbol{\theta}}^{(2)},{\boldsymbol{\theta}}'^{(2)})$, $\bar{r}_2 ({\boldsymbol{\theta}}^{(2)},{\boldsymbol{\theta}}'^{(2)})$, $\bar{r}_1 ({\boldsymbol{\theta}}'^{(2)},{\boldsymbol{\theta}}^{(2)})$, $\underline{r}_2 ({\boldsymbol{\theta}}'^{(2)},{\boldsymbol{\theta}}^{(2)})$ such that the rectangles $$\begin{aligned}
\mathcal{R}_2 &=& [\theta_1^{(2)}, \theta_{1}^{(2)}+ \bar{r}_1 ({\boldsymbol{\theta}}^{(2)},{\boldsymbol{\theta}}'^{(2)})] \times [a_2, a_2+\bar{r}_2 ({\boldsymbol{\theta}}^{(2)},{\boldsymbol{\theta}}'^{(2)})]\\
\mathcal{R}_2' &=& [\theta_{1}'^{(2)}, \theta_{1}'^{(2)}+\bar{r}_1 ({\boldsymbol{\theta}}'^{(2)},{\boldsymbol{\theta}}^{(2)})] \times [b_2-\underline{r}_2 ({\boldsymbol{\theta}}'^{(2)},{\boldsymbol{\theta}}^{(2)}), b_2]\end{aligned}$$ are respectively included in the Hellinger balls $${\mathcal{B}}\big({\boldsymbol{\theta}}^{(2)}, \kappa^{1/2} h (f_{{\boldsymbol{\theta}}^{(2)}}, f_{{\boldsymbol{\theta}}'^{(2)}}) \big) \quad \text{and} \quad {\mathcal{B}}\big({\boldsymbol{\theta}}'^{(2)}, \kappa^{1/2} h (f_{{\boldsymbol{\theta}}^{(2)}}, f_{{\boldsymbol{\theta}}'^{(2)}}) \big).$$ We then define ${\boldsymbol{\theta}}^{(3)},{\boldsymbol{\theta}}'^{(3)} \in {\mathbb{R}}^2$ by $$\begin{aligned}
{\boldsymbol{\theta}}^{(3)} &=&
\begin{cases}
{\boldsymbol{\theta}}^{(2)} + (\bar{r}_1 ({\boldsymbol{\theta}}^{(2)},{\boldsymbol{\theta}}'^{(2)}), 0) & \text{if $T ({{\boldsymbol{\theta}}^{(2)}},{{\boldsymbol{\theta}}'^{(2)}}) \geq 0$} \\
{\boldsymbol{\theta}}^{(2)} & \text{otherwise}
\end{cases} \\
{\boldsymbol{\theta}}'^{(3)} &=&
\begin{cases}
{\boldsymbol{\theta}}'^{(2)} + (\bar{r}_1 ({\boldsymbol{\theta}}'^{(2)},{\boldsymbol{\theta}}^{(2)}), 0) & \text{if $T ({{\boldsymbol{\theta}}^{(2)}},{{\boldsymbol{\theta}}'^{(2)}}) \leq 0$} \\
{\boldsymbol{\theta}}'^{(2)} & \text{otherwise.}
\end{cases}\end{aligned}$$ If $\theta_{1}^{(3)} \geq b_1$ or if $\theta_{1}'^{(3)} \geq b_1$, we stop the construction and set $N = 2$. Otherwise, we repeat this step to build the vectors ${\boldsymbol{\theta}}^{(4)}$ and ${\boldsymbol{\theta}}'^{(4)}$.
We repeat these steps until the construction stops. Let $N$ be the integer for which $\theta_{1}^{(N+1)} \geq b_1$ or $\theta_{1}'^{(N+1)} \geq b_1$. We then define $$\begin{aligned}
a_2' &=&
\begin{cases}
a_2 + \min_{1 \leq j \leq N} \bar{r}_2 ({\boldsymbol{\theta}}^{(j)},{\boldsymbol{\theta}}'^{(j)}) & \text{if $\theta_1^{(N+1)} \geq b_1$} \\
a_2 & \text{otherwise}
\end{cases} \\
b_2' &=&
\begin{cases}
b_2 - \min_{1 \leq j \leq N} \underline{r}_2 ({\boldsymbol{\theta}}'^{(j)},{\boldsymbol{\theta}}^{(j)}) & \text{if $\theta_{1}'^{(N+1)} \geq b_1$}\\
b_2 & \text{otherwise}
\end{cases}\end{aligned}$$ and set $\Theta_{i+1} = [a_1,b_1] \times [a_2',b_2']$.
[Figure: the rectangle $\Theta_i$ covered along its bottom edge by the rectangles $\mathcal{R}_1, \mathcal{R}_2, \mathcal{R}_4, \mathcal{R}_5$ (anchored at ${\boldsymbol{\theta}}^{(1)}$, ${\boldsymbol{\theta}}^{(2)}$, ${\boldsymbol{\theta}}^{(3)} = {\boldsymbol{\theta}}^{(4)}$, ${\boldsymbol{\theta}}^{(5)}$) and along part of its top edge by $\mathcal{R}_3'$ (anchored at ${\boldsymbol{\theta}}'^{(1)} = {\boldsymbol{\theta}}'^{(2)} = {\boldsymbol{\theta}}'^{(3)}$, with ${\boldsymbol{\theta}}'^{(4)}$ at its right-hand end).]
[Figure: the new rectangle $\Theta_{i+1}$, obtained from $\Theta_i$ by removing a horizontal strip along its bottom edge.]
In this figure, the set $$\Theta_i \setminus \left( \mathcal{R}_1 \cup \mathcal{R}_2 \cup \mathcal{R}_3' \cup \mathcal{R}_4 \cup \mathcal{R}_5 \right)$$ is a confidence set for ${\boldsymbol{\theta}}_0$. The set $\Theta_{i+1}$ is the smallest rectangle containing this confidence set.
#### Remark 1.
We define $\Theta_{i+1}$ as a rectangle to make the procedure easier to implement.
#### Remark 2.
By using a similar strategy, we can also build a confidence set $\Theta_{i+1}$ of the form $\Theta_{i+1} = [a_1',b_1'] \times [a_2,b_2]$ where $a_1',b_1'$ are such that $b_1'-a_1' < b_1-a_1$.
We shall build the rectangles $\Theta_i$ until their diameters become sufficiently small. The estimator we shall consider will be the center of the last rectangle built.
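The heuristics above translate into a short routine. The sketch below is ours and deliberately simplified (it is not Algorithm \[algoConstructionDimQuelquonqueAvant\] given in the next subsection): it performs one shrinkage of $\Theta_i$ from below and from above, assuming user-supplied functions `T` for the test (\[eqDefinitionTDimD\]) and `r_bar(theta, theta_p)`, `r_under(theta, theta_p)` returning the radii $(\bar{r}_1, \bar{r}_2)$ and $(\underline{r}_1, \underline{r}_2)$ associated with the corresponding Hellinger balls.

```python
import numpy as np

def shrink_from_bottom_and_top(rect, T, r_bar, r_under):
    # rect = (a1, b1, a2, b2) with a1 < b1; theta walks along the bottom edge and
    # theta_p along the top edge, the tests deciding which of the two advances.
    a1, b1, a2, b2 = rect
    theta, theta_p = np.array([a1, a2]), np.array([a1, b2])
    bottom_gains, top_gains = [], []       # heights of the rectangles R_j and R'_j
    while theta[0] < b1 and theta_p[0] < b1:
        rb, rb_p = r_bar(theta, theta_p), r_bar(theta_p, theta)
        bottom_gains.append(rb[1])
        top_gains.append(r_under(theta_p, theta)[1])
        t = T(theta, theta_p)
        if t >= 0:                         # theta_0 believed to be far from theta
            theta = theta + np.array([rb[0], 0.0])
        if t <= 0:                         # theta_0 believed to be far from theta_p
            theta_p = theta_p + np.array([rb_p[0], 0.0])
    new_a2 = a2 + min(bottom_gains) if theta[0] >= b1 else a2
    new_b2 = b2 - min(top_gains) if theta_p[0] >= b1 else b2
    return (a1, b1, new_a2, new_b2)
```

Alternating this step with its horizontal counterpart (Remark 2) and stopping once both side lengths fall below $\eta_1$ and $\eta_2$ yields the centre of the final rectangle as the estimate.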
Procedure.
----------
In the general case, that is when $d \geq 2$, we build a finite sequence of rectangles $(\Theta_i)_i$ of $\Theta = \prod_{j=1}^d [m_j,M_j]$. We consider $\kappa > 0$ and, for every rectangle ${\mathcal{C}}= \prod_{j=1}^{d} [a_j,b_j] \subset\Theta$, all vectors ${\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in {\mathcal{C}}$ and all $j \in \{1,\dots,d\}$, we introduce positive numbers $\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$, $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ such that $$\begin{aligned}
\qquad {\mathcal{C}}\bigcap \prod_{j=1}^d \left[ \theta_j - \underline{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}'), \theta_j+\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') \right] \subset {\mathcal{B}}\big({\boldsymbol{\theta}}, \kappa^{1/2} h (f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'}) \big) \label{eqInclusionRC1}.\end{aligned}$$ We also consider for all $j \in \{1,\dots,d\}$, $\underline{R}_{{\mathcal{C}},j} \geq \underline{R}_j$ such that $$\begin{aligned}
\label{eqMinorationRCj}
h^2 \left(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'} \right) \geq \sup_{1 \leq j \leq d} \underline{R}_{{\mathcal{C}},j} |\theta_j - \theta'_j|^{\alpha_j} \quad \text{for all ${\boldsymbol{\theta}}$, ${\boldsymbol{\theta}}' \in {\mathcal{C}}$.}
\end{aligned}$$ We finally consider for all $j \in \{1,\dots,d\}$, an one-to-one map $\psi_j$ from $\{1,\dots,d-1\}$ into $\{1,\dots,d\}\setminus \{j\}$.
We set $\Theta_1 =\Theta$. Given $\Theta_i$, we define $\Theta_{i+1}$ by using the following algorithm.
$\Theta_i = \prod_{j=1}^d [a_j, b_j]$ Choose ${k} \in \{1,\dots,d\}$ such that $$\underline{R}_{\Theta_i, k} (b_{k} - a_{k})^{\alpha_k} = \max_{1 \leq j \leq d} \underline{R}_{\Theta_i,j} (b_{j} - a_{j})^{\alpha_j}$$ ${\boldsymbol{\theta}}= (\theta_1,\dots,\theta_d) \leftarrow (a_1,\dots,a_d)$, ${\boldsymbol{\theta}}' = (\theta_1',\dots,\theta_d') \leftarrow {\boldsymbol{\theta}}$ and $\theta'_{{k}} \leftarrow b_{k}$ ${\varepsilon_j} \leftarrow \bar{r}_{\Theta_i,j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ and $\varepsilon_j'\leftarrow \bar{r}_{\Theta_i,j} ({\boldsymbol{\theta}}',{\boldsymbol{\theta}})$ for all $j \neq k$ $\varepsilon_{k} \leftarrow (b_k - a_k)/2$ and $\varepsilon_k' \leftarrow (b_k - a_k)/2$ $\text{Test} \leftarrow T({\boldsymbol{\theta}},{\boldsymbol{\theta}}') $ For all $j$, $\bar{r}_j \leftarrow \bar{r}_{\Theta_i,j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') $, $\bar{r}_j' \leftarrow \bar{r}_{\Theta_i,j} ({\boldsymbol{\theta}}',{\boldsymbol{\theta}})$, $\underline{r}_j' \leftarrow \underline{r}_{\Theta_i,j} ({\boldsymbol{\theta}}',{\boldsymbol{\theta}})$ $\varepsilon_{\psi_{k}(1)} \leftarrow \bar{r}_{\psi_k(1)} $ $\varepsilon_{\psi_k(j)} \leftarrow \min (\varepsilon_{\psi_k(j)}, \bar{r}_{\psi_k(j)})$ for all $j \in \{2,\dots, d-1\}$ $\varepsilon_{k} \leftarrow \min (\varepsilon_{k}, \bar{r}_{k})$ $J \leftarrow \left\{1 \leq j \leq d -1,\; \theta_{\psi_{k}(j)} + \varepsilon_{\psi_{k}(j)} < b_{\psi_{k}(j)} \right\}$ $\mathfrak{j}_{\text{min}} \leftarrow \min J$ $\theta_{\psi_{k}(j)} \leftarrow a_{\psi_{k}(j)}$ for all $j \leq \mathfrak{j}_{\text{min}} - 1$ $\theta_{\psi_{k}(\mathfrak{j}_{\text{min}})} \leftarrow \theta_{\psi_{k}(\mathfrak{j}_{\text{min}} )} + \varepsilon_{\psi_{k}(\mathfrak{j}_{\text{min}})}$ $\mathfrak{j}_{\text{min}} \leftarrow d$ $\varepsilon_{\psi_{k}(1)}' \leftarrow \bar{r}'_{\psi_k(1)}$ $\varepsilon_{\psi_{k}(j)}' \leftarrow \min (\varepsilon_{\psi_{k}(j)}', \bar{r}'_{\psi_{k}(j)})$ for all $j \in \{2,\dots, d-1\}$ $\varepsilon_{k}' \leftarrow \min (\varepsilon_{k}', \underline{r}'_{k})$ $J' \leftarrow \left\{1 \leq j' \leq d -1,\; \theta_{\psi_{k}(j')}' + \varepsilon_{\psi_{k}(j')}' < b_{\psi_{k}(j')}\right\}$ $\mathfrak{j}_{\text{min}}' \leftarrow \min J'$ $\theta_{\psi_{k}(j)}' \leftarrow a_{\psi_{k}(j)}$ for all $j \leq \mathfrak{j}_{\text{min}}' - 1$ $\theta_{\psi_{k}(\mathfrak{j}_{\text{min}} ')}' \leftarrow \theta_{\psi_{k}(\mathfrak{j}_{\text{min}} ')}' + \varepsilon_{\psi_{k}(\mathfrak{j}_{\text{min}} ')}'$ $\mathfrak{j}_{\text{min}}' \leftarrow d$
$a_{k} \leftarrow a_{k} + \varepsilon_{k}$ $b_{k} \leftarrow b_{k} - \varepsilon_{k}' $ $\Theta_{i+1} \leftarrow \prod_{j=1}^{d} [a_j, b_j] $ $\Theta_{i+1}$
We now consider $d$ positive numbers $\eta_1,\dots,\eta_d$ and use the algorithm below to build our estimator ${\boldsymbol{\hat{\theta}}}$.
Set $a_j = m_j$ and $b_j = M_j$ for all $j \in \{1,\dots,d\}$ $i \leftarrow 0$ $i \leftarrow i + 1$ Build $\Theta_i$ and set $a_1,\dots,a_d$, $b_1,\dots,b_d$ such that $\prod_{j=1}^d [a_j,b_j] = \Theta_i$ $${\boldsymbol{\hat{\theta}}}= \left(\frac{a_1 + b_1}{2}, \dots, \frac{a_d + b_d}{2} \right)$$
The parameters $\kappa$, $t_j$, $\eta_j$, $\bar{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$, $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ can be interpreted as in dimension $1$. We have introduced a new parameter $\underline{R}_{{\mathcal{C}},j}$ whose role is to control the Hellinger distance more accurately in order to make the procedure faster. Sometimes, the computation of this parameter is difficult in practice, in which case we can avoid it by proceeding as follows. For all ${\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in \Theta$, $$\begin{aligned}
h^2 \left(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'} \right) \geq \sup_{1 \leq j \leq d} \underline{R} |\theta_j - \theta'_j|^{\alpha_j}
\end{aligned}$$ where $\underline{R } = \min_{1 \leq j \leq d} \underline{R}_j$, which means that we can always assume that $\underline{R}_j$ is independent of $j$. Choosing $\underline{R}_{\Theta_i, j} = \underline{R}$ simplifies the only line where this parameter is involved (line 1 of Algorithm \[algoConstructionDimQuelquonqueAvant\]). It becomes $$(b_{k} - a_{k})^{\alpha_k} = \max_{1 \leq j \leq d} (b_{j} - a_{j})^{\alpha_j}$$ and $k$ can be calculated without computing $\underline{R}$.
Risk bound. {#SectionPropEstimateurDimD}
-----------
Suitable values of the parameters lead to a risk bound for our estimator ${\boldsymbol{\hat{\theta}}}$.
\[ThmPrincipalDimQuelquonque\] Suppose that Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] holds. Let $\bar{\kappa}$ be defined by (\[eqEsperanceTest\]), and assume that $\kappa \in (0, \bar{\kappa})$ and that, for all $j \in \{1,\dots,d\}$, $t_j \in (0,d^{1/\alpha_j}]$, $$\epsilon_j = t_j (\bar{R}_j n)^{-1/\alpha_j}, \quad \eta_j \in [\epsilon_j, d^{1/\alpha_j} (\bar{R}_j n)^{-1/\alpha_j}].$$ Suppose that, for all rectangles ${\mathcal{C}}$ and all ${\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in {\mathcal{C}}$, the numbers $\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$, $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ are such that (\[eqInclusionRC1\]) holds.
Then, for all $\xi > 0$, the estimator ${\boldsymbol{\hat{\theta}}}$ built by Algorithm \[algoConstructionDimQuelquonque\] satisfies $${\mathbb{P}\,}\left[ C h^2(s,f_{{\boldsymbol{\hat{\theta}}}}) \geq h^2(s, {\mathscr{F}}) + \frac{d}{n} + \xi \right] \leq e^{- n \xi}$$ where $C > 0$ depends only on $\kappa$, $(\bar{R}_j/\underline{R}_j)_{1 \leq j \leq d}$, $(\alpha_j)_{1 \leq j \leq d}$, $(t_j)_{1 \leq j \leq d}$.
#### Remark.
A look at the proof of the theorem shows that Theorem \[ThmPrincipalDimQuelquonqueDansOverview\] ensues from this theorem when $t_j = d^{1/\alpha_j}$ and $\eta_j = \epsilon_j$.
Choice of $\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ and $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The parameters $\bar{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$, $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ are involved in the procedure and must be calculated. They may be chosen arbitrarily provided that the rectangle $$\begin{aligned}
\qquad {\mathcal{C}}\bigcap \prod_{j=1}^d \left[ \theta_j - \underline{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}'), \theta_j+\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') \right] \end{aligned}$$ is included in the Hellinger ball ${\mathcal{B}}\big({\boldsymbol{\theta}}, \kappa^{1/2} h (f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'}) \big) $. Indeed, the theoretical properties of the estimator given by the preceding theorem do not depend on these values.
However, the numerical complexity of the algorithm strongly depends on these parameters. The algorithm computes fewer tests when $\bar{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$, $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ are large, and it is thus in our interest to define them as large as possible. In the cases where a direct computation of these numbers is difficult, we may use a strategy similar to the one adopted in the unidimensional case (Section \[SectionDefinitionRminBarreDim1\]).
#### First way.
We may consider $(\bar{R}_{{\mathcal{C}},1},\dots,\bar{R}_{{\mathcal{C}},d}) \in \prod_{j=1}^d (0,\bar{R}_j]$ such that $$\begin{aligned}
\label{eqDefintionRhojDimD}
h^2 \left(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'} \right) \leq \sup_{1 \leq j \leq d} \bar{R}_{{\mathcal{C}},j} |\theta_j - \theta'_j|^{\alpha_j} \quad \text{for all ${\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in {\mathcal{C}}$} \end{aligned}$$ and define them by $$\begin{aligned}
\label{DefinitionRDimensionQuelquonque2}
\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') = \underline{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') = \left( (\kappa / \bar{R}_{{\mathcal{C}},j}) h^2(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'}) \right)^{1/\alpha_j}.\end{aligned}$$ One can verify that this definition implies (\[eqInclusionRC1\]).
#### Second way.
An alternative definition that does not involve the Hellinger distance is $$\begin{aligned}
\label{DefinitionRDimensionQuelquonque1}
\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') = \underline{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') = \left( \kappa /\bar{R}_{{\mathcal{C}},j} \sup_{1 \leq k \leq d} \underline{R}_{{\mathcal{C}},k} |\theta'_k - \theta_k|^{\alpha_k} \right)^{1/\alpha_j}.\end{aligned}$$ Similarly, one can check that (\[eqInclusionRC1\]) holds.
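Both choices are straightforward to implement once the moduli and, for the first one, the Hellinger distance are available. The sketch below is ours; `h2` is assumed to be a user-supplied function returning $h^2(f_{{\boldsymbol{\theta}}},f_{{\boldsymbol{\theta}}'})$.

```python
import numpy as np

def radii_first_way(theta, theta_p, kappa, R_bar_C, alpha, h2):
    # (DefinitionRDimensionQuelquonque2):
    # r_j = ((kappa / R_bar_{C,j}) h^2(f_theta, f_theta'))^{1/alpha_j}
    R_bar_C, alpha = np.asarray(R_bar_C, float), np.asarray(alpha, float)
    return (kappa / R_bar_C * h2(theta, theta_p)) ** (1.0 / alpha)

def radii_second_way(theta, theta_p, kappa, R_bar_C, R_under_C, alpha):
    # (DefinitionRDimensionQuelquonque1): replaces h^2 by its lower bound
    # sup_k R_under_{C,k} |theta'_k - theta_k|^{alpha_k}.
    theta, theta_p = np.asarray(theta, float), np.asarray(theta_p, float)
    R_bar_C, R_under_C, alpha = (np.asarray(x, float) for x in (R_bar_C, R_under_C, alpha))
    bound = np.max(R_under_C * np.abs(theta_p - theta) ** alpha)
    return (kappa / R_bar_C * bound) ** (1.0 / alpha)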
The complexity of our procedure can be upper-bounded as soon as $\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ and $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ are large enough.
\[PropCalculComplexiteDimenQuelquonque\] Suppose that the assumptions of Theorem \[ThmPrincipalDimQuelquonque\] are fulfilled and that, for all $j \in \{1,\dots,d\}$, all rectangles ${\mathcal{C}}$ and all ${\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in {\mathcal{C}}$, the numbers $\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$, $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ are larger than $$\begin{aligned}
\label{eqSurretRDimD}
\left( \kappa /\bar{R}_{{\mathcal{C}},j} \sup_{1 \leq k \leq d} \underline{R}_{{\mathcal{C}},k} |\theta'_k - \theta_k|^{\alpha_k} \right)^{1/\alpha_j}\end{aligned}$$ where the $\underline{R}_{{\mathcal{C}},j} $ and $\bar{R}_{{\mathcal{C}},j}$ are respectively such that (\[eqMinorationRCj\]) and (\[eqDefintionRhojDimD\]) hold and such that $\underline{R}_{{\mathcal{C}},j} \geq \underline{R}_j$ and $\bar{R}_{{\mathcal{C}},j} \leq \bar{R}_j$.
Then, the number of tests computed to build the estimator ${\boldsymbol{\hat{\theta}}}$ is smaller than $$4 \left[\prod_{j=1}^d \left(1 + \left( {\bar{R}_j} / {(\kappa \underline{R}_j)}\right)^{1/\alpha_j} \right) \right] \left[\sum_{j=1}^d \max \left\{1, \log \left( \frac{M_j-m_j}{\eta_j} \right) \right\} \right].$$
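For orientation, this bound is easy to evaluate numerically. With the purely illustrative values $d = 2$, $\alpha_j = 2$, $\bar{R}_j/\underline{R}_j = 4$, $\kappa = 1/2$, $M_j - m_j = 10$ and $\eta_j = 10^{-6}$ (these numbers are not taken from our examples), it is of the order of a few thousand tests:

```python
import math

def max_tests(kappa, alpha, R_bar, R_under, widths, eta):
    # Upper bound of the proposition on the number of computed tests.
    prod = 1.0
    for a, rb, ru in zip(alpha, R_bar, R_under):
        prod *= 1.0 + (rb / (kappa * ru)) ** (1.0 / a)
    logs = sum(max(1.0, math.log(w / e)) for w, e in zip(widths, eta))
    return 4.0 * prod * logs

print(max_tests(0.5, [2, 2], [4, 4], [1, 1], [10, 10], [1e-6, 1e-6]))  # roughly 1.9e3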
Simulations for multidimensional models {#SectionSimulationDimD}
=======================================
In this section, we complete the simulation study of Section \[SectionSimuDim1\] by dealing with multidimensional models.
Models.
-------
We propose to work with the following models.
${\mathscr{F}}= \left\{f_{(m,\sigma)}, \, (m,\sigma) \in [-5, 5]\times [1/5,5]\right\} $ where $$f_{(m,\sigma)} (x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left(- \frac{(x-m)^2}{2 \sigma^2} \right) \quad \text{ for all $x \in {\mathbb{R}}$.}$$
${\mathscr{F}}= \left\{f_{(m,\sigma)}, \, (m,\sigma) \in [-5, 5]\times [1/5,5]\right\} $ where $$f_{(m,\sigma)} (x) = \frac{ \sigma}{\pi \left((x-m)^2 + \sigma^2\right)} \quad \text{for all $x \in {\mathbb{R}}$.}$$
${\mathscr{F}}= \left\{f_{(a,b)}, \, (a,b) \in [0.6, 10]\times [0.1,20]\right\} $ where $$f_{(a,b)} (x) =\frac{b^{a}}{ \Gamma(a)} x^{a - 1} e^ {-b x} {\mathbbm{1}}_{[0,+\infty)}(x) \quad \text{for all $x \in {\mathbb{R}}$}$$ where $\Gamma$ is the Gamma function.
${\mathscr{F}}= \left\{f_{(a,b)}, \, (a,b) \in [0.7, 20]\times [0.7,20]\right\} $ where $$f_{(a,b)} (x) = \frac{1}{B(a,b)} x^{a - 1} (1-x)^{b - 1} {\mathbbm{1}}_{[0,1]} (x) \quad \text{for all $x \in {\mathbb{R}}$}$$ where $B(a,b)$ is the Beta function.
${\mathscr{F}}= \left\{f_{(m,\lambda)}, \, (m,\lambda) \in [-1, 1]\times [1/5,5]\right\} $ where $$f_{(m,\lambda)} (x) = \lambda e^{- \lambda (x - m)} {\mathbbm{1}}_{[m,+\infty)} (x) \quad \text{for all $x \in {\mathbb{R}}$.}$$
${\mathscr{F}}= \left\{f_{(m,r)}, \, (m,r) \in [-0.5, 0.5]\times [0.1,2]\right\} $ where $$f_{(m,r)} (x) = r^{-1} {\mathbbm{1}}_{[m,m + r]} (x) \quad \text{for all $x \in {\mathbb{R}}$.}$$
We shall use our procedure with $t_j = 0$ for all $j \in \{1,\dots,d\}$ (that is, $\Theta_{\text{dis}} = \Theta$, ${\mathscr{F}_{\text{dis}}}= {\mathscr{F}}$ and $T ({{\boldsymbol{\theta}}}, {{\boldsymbol{\theta}}'}) = \bar{T}(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'})$) and with $\kappa = 0.9 \bar{\kappa}$, $\eta_j = (M_j-m_j) 10^{-6}$. In order to avoid technicalities, we postpone to Section \[SectionImplementationProcedure\] the values of $\underline{R}_{{\mathcal{C}},j}$, $\bar{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$, $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ chosen in this simulation study.
Simulations when $s \in {\mathscr{F}}$. {#simulations-when-s-in-mathscrf.}
---------------------------------------
We simulate $N = 10^4$ independent samples $(X_1,\dots,X_n)$ according to a density $s \in {\mathscr{F}}$ and use our procedure to estimate $s$ on each of the samples. In Examples 1, 2, 5 and 6, the density is $s = f_{(0,1)}$; in Example 3, $s = f_{(2,3)}$; and in Example 4, $s = f_{(3,4)}$. The results are the following.
$n = 25$ $n = 50$ $n = 75$ $n = 100$
-------------- --------------------------------------------------------------------------------------- ------------------- ------------------- -------------------- -------------------
Example 1 $\widehat{R}_{10^4}({\boldsymbol{\hat{\theta}}})$ 0.011 0.0052 0.0034 0.0025
              $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.011   0.0052   0.0034   0.0025
              $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   $10^{-4}$   $6 \cdot 10^{-5}$   $-5 \cdot 10^{-8}$   $3 \cdot 10^{-8}$
              $\widehat{\text{std}}_{10^4} ({{\boldsymbol{\hat{\theta}}}})$   0.012   0.0055   0.0035   0.0026
              $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.012   0.0055   0.0035   0.0026
  Example 2   $\widehat{R}_{10^4}({\boldsymbol{\hat{\theta}}})$   0.011   0.0052   0.0034   0.0026
              $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.011   0.0052   0.0034   0.0026
              $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   $10^{-8}$   $10^{-8}$   $-10^{-9}$   $4 \cdot 10^{-8}$
              $\widehat{\text{std}}_{10^4} ({{\boldsymbol{\hat{\theta}}}})$   0.011   0.0052   0.0035   0.0026
              $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.011   0.0052   0.0035   0.0026
  Example 3   $\widehat{R}_{10^4}({\boldsymbol{\hat{\theta}}})$   0.011   0.0052   0.0034   0.0025
              $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.011   0.0052   0.0034   0.0025
              $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   $2 \cdot 10^{-4}$   $2 \cdot 10^{-5}$   $10^{-7}$   $10^{-7}$
              $\widehat{\text{std}}_{10^4} ({{\boldsymbol{\hat{\theta}}}})$   0.011   0.0053   0.0035   0.0026
              $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.011   0.0053   0.0035   0.0026
  Example 4   $\widehat{R}_{10^4}({\boldsymbol{\hat{\theta}}})$   0.011   0.0052   0.0034   0.0025
              $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.011   0.0052   0.0034   0.0025
              $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   $2 \cdot 10^{-4}$   $10^{-5}$   $2 \cdot 10^{-7}$   $2 \cdot 10^{-7}$
              $\widehat{\text{std}}_{10^4} ({{\boldsymbol{\hat{\theta}}}})$   0.011   0.0053   0.0035   0.0026
              $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.011   0.0053   0.0035   0.0026
  Example 5   $\widehat{R}_{10^4}({\boldsymbol{\hat{\theta}}})$   0.025   0.012   0.0082   0.0063
              $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.025   0.012   0.0083   0.0063
              $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.020   -0.0020   -0.0073   0.0012
              $\widehat{\text{std}}_{10^4} ({{\boldsymbol{\hat{\theta}}}})$   0.025   0.012   0.0079   0.0061
              $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.021   0.011   0.0070   0.0053
  Example 6   $\widehat{R}_{10^4}({\boldsymbol{\hat{\theta}}})$   0.040   0.019   0.013   0.0098
              $\widehat{R}_{10^4}(\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.039   0.020   0.013   0.010
              $\mathcal{\widehat{R}}_{10^4,\text{rel}} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.010   -0.015   -0.018   -0.016
              $\widehat{\text{std}}_{10^4} ({{\boldsymbol{\hat{\theta}}}})$   0.033   0.016   0.011   0.0080
              $\widehat{\text{std}}_{10^4} (\boldsymbol{\tilde{\theta}}_{\text{mle}})$   0.027   0.014   0.0093   0.0069
The risk of our estimator is very close to that of the m.l.e; in the first four examples, they are even almost indistinguishable. As in dimension 1, this can be explained by the fact that the first four models are regular enough to ensure that our estimator is very close to the maximum likelihood one.
To see this, let, for $c \in \{0.99,0.999, 1\}$, $q_{c}$ be the $c$-quantile of the random variable $$\max \left\{ \big|\hat{\theta}_1 - \tilde{\theta}_{\text{mle},1} \big|, \big|\hat{\theta}_2 - \tilde{\theta}_{\text{mle},2} \big| \right\}$$ and let $\hat{q}_c$ be its empirical version based on the $10^4$ samples. These empirical quantiles are very small, as shown in the table below.
$n = 25$ $n = 50$ $n = 75$ $n = 100$
----------- -------------------- ------------------- ------------------- ------------------- ---------------------
Example 1 $\hat{q}_{0.99}$ $9 \cdot 10^{-7}$ $9 \cdot 10^{-7}$ $9 \cdot 10^{-7}$ $9\cdot 10^{-7}$
              $\hat{q}_{0.999} $   0.023   $10^{-6}$   $9\cdot 10^{-7}$   $10^{-6}$
$\hat{q}_{1} $ 0.22 0.072 $10^{-6}$ $ 10^{-6}$
Example 2 $\hat{q}_{0.99}$ $4 \cdot 10^{-7}$ $4 \cdot 10^{-7}$ $4 \cdot 10^{-7}$ $4 \cdot 10^{-7}$
              $\hat{q}_{0.999} $   $5 \cdot 10^{-7}$   $5 \cdot 10^{-7}$   $5 \cdot 10^{-7}$   $5 \cdot 10^{-7}$
$\hat{q}_{1}$ $5 \cdot 10^{-7}$ $5 \cdot 10^{-7}$ $5 \cdot 10^{-7}$ $5 \cdot 10^{-7}$
Example 3 $\hat{q}_{0.99}$ $7 \cdot 10^{-7}$ $7 \cdot 10^{-7}$ $7 \cdot 10^{-7}$ $7 \cdot 10^{-7}$
              $\hat{q}_{0.999} $   $9 \cdot 10^{-7}$   $8 \cdot 10^{-7}$   $8 \cdot 10^{-7}$   $8 \cdot 10^{-7}$
$\hat{q}_{1} $ $1.5$ $0.29$ $ 10^ {-6}$ $ 9 \cdot 10^ {-7}$
Example 4 $\hat{q}_{0.99}$ $10^{-6}$ $10^{-6}$ $ 10^{-6}$ $ 10^{-6}$
$\hat{q}_{0.999} $ $2\cdot 10^{-6}$ $10^{-6}$ $10^{-6}$ $10^{-6}$
$\hat{q}_{1}$ $1.6$ $0.27$ $2 \cdot 10^{-6}$ $2 \cdot 10^{-6}$
Simulations when $s \not \in {\mathscr{F}}$. {#SectionSimuRobustesseDimD}
--------------------------------------------
Contrary to the maximum likelihood estimator, our estimator possesses robustness properties. The goal of this section is to illustrate them.
Suppose that we observe $n = 100$ i.i.d. random variables $X_1,\dots, X_n$ whose distribution we wish to estimate by using a Gaussian model $${\mathscr{F}}= \left\{f_{(m,\sigma)}, \, (m,\sigma) \in [-10,10]\times [0.5,10]\right\}$$ where $f_{(m,\sigma)}$ is the density of a Gaussian random variable with mean $m$ and variance $\sigma^2$. The preceding section shows that when the unknown underlying density $s$ belongs to ${\mathscr{F}}$, our estimator is as good as the m.l.e. We now consider $p \in [0,1]$ and define $s = s_p$ where $$s_p (x) = (1-p) f_{(-5,1)}(x) +p f_{(5,1)}(x) \quad \text{for all $x \in {\mathbb{R}}$.}$$ This density belongs to the model only if $p = 0$ or $p = 1$, and we are interested in comparing our estimator to the m.l.e when $p \neq 0$ and $p \neq 1$.
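The experiment is easy to reproduce for the m.l.e alone. The sketch below is ours (it is not the implementation behind the figure): it draws samples from $s_p$, fits the Gaussian m.l.e, projects it onto $[-10,10]\times [0.5,10]$ and evaluates $h^2(s_p, f_{(m,\sigma)})$ by numerical integration with `scipy`.

```python
import numpy as np
from scipy.integrate import quad

def s_p(x, p):
    # Mixture density (1 - p) f_(-5,1) + p f_(5,1).
    phi = lambda u: np.exp(-u * u / 2.0) / np.sqrt(2.0 * np.pi)
    return (1 - p) * phi(x + 5.0) + p * phi(x - 5.0)

def hellinger2(p, m, sigma):
    # h^2(s_p, f_(m,sigma)) = 1 - int sqrt(s_p f_(m,sigma)); the integration range
    # [-30, 30] comfortably covers both mixture components.
    f = lambda x: np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    bc, _ = quad(lambda x: np.sqrt(s_p(x, p) * f(x)), -30.0, 30.0, limit=200)
    return 1.0 - bc

def mle_risk(p, n=100, N=200, seed=0):
    rng = np.random.default_rng(seed)
    risks = []
    for _ in range(N):
        x = rng.normal(np.where(rng.random(n) < p, 5.0, -5.0), 1.0)
        m = float(np.clip(x.mean(), -10.0, 10.0))
        sigma = float(np.clip(x.std(), 0.5, 10.0))   # m.l.e of sigma (1/n version), projected
        risks.append(hellinger2(p, m, sigma))
    return float(np.mean(risks))

print(mle_risk(0.05))   # typically much larger than h^2(s_p, F) for small positive p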
We then proceed as in Section \[SectionRobustessDim1\]. For many values of $p \in [0,1]$, we simulate $N = 1000$ samples of $100$ random variables with density $s_p$ and measure the quality of the estimator $\boldsymbol{\tilde{\theta}}$ by $$\begin{aligned}
\widehat{R}_{p,N} (\boldsymbol{\tilde{\theta}}) = \frac{1}{1000} \sum_{i=1}^{1000} h^2 (s_p, f_{\boldsymbol{\tilde{\theta}}^{(p,i)}}) \end{aligned}$$ where $\boldsymbol{\tilde{\theta}}^{(p,i)}$ is the value of $\boldsymbol{\tilde{\theta}}$ corresponding to the $i^{\text{\tiny th}}$ sample whose density is $s_p$. We compute this function for $\boldsymbol{\tilde{\theta}} \in \{\boldsymbol{\tilde{\theta}}_{\text{mle}}, {\boldsymbol{\hat{\theta}}}\}$ and obtain the graph below.
![Red: $p \mapsto H^2(s_p, {\mathscr{F}})$. Blue: $p \mapsto \widehat{R}_{p,1000} ({{\boldsymbol{\hat{\theta}}}}) $. Green: $p \mapsto \widehat{R}_{p,1000} (\boldsymbol{\tilde{\theta}}_{\text{mle}}) $.](FonctionRobustesseLoiNormalen100Arxiv.eps)
This figure shows that the risk of our estimator is smaller than that of the m.l.e when $p$ is close to $0$ or $1$ (say, $p \leq 0.2$ or $p \geq 0.8$) and is similar otherwise. For the Gaussian model, our estimator may thus be interpreted as a robust version of the m.l.e.
Proofs
======
A preliminary result. {#SectionProcedureGenerique}
---------------------
In this section, we show a result that will allow us to prove Theorems \[ThmPrincipalDim1\] and \[ThmPrincipalDimQuelquonque\]. Given $\Theta' \subset \Theta$, we recall that, in this paper, ${\text{diam}}\Theta'$ stands for $${\text{diam}}\Theta' = \sup_{{\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in \Theta'} h^2(f_{{\boldsymbol{\theta}}},f_{{\boldsymbol{\theta}}'}).$$
\[ThmPrincipal\] Suppose that Assumption \[HypSurLeModeleQuelquonqueDebutDimD\] holds. Let $\kappa \in (0,\bar{\kappa})$, $N \in {\mathbb{N}}^{\star}$ and let $\Theta_1, \dots, \Theta_N$ be $N$ non-empty subsets of $\Theta$ such that $\Theta_1 =\Theta$. For all $j \in \{1,\dots,d\}$, let $t_j$ be an arbitrary number in $(0,d^{1/\alpha_j}]$ and $$\epsilon_j = t_j (\bar{R}_j n)^{-1/\alpha_j}.$$ Assume that for all $i \in \{1,\dots, N-1\}$, there exists $L_i \geq 1$ such that for all $\ell \in \{1,\dots,L_i\}$ there exist two elements ${\boldsymbol{\theta}}^{(i,\ell)} \neq {\boldsymbol{\theta}}'^{(i,\ell)} $ of $\Theta_i$ such that $$\Theta_i \setminus \bigcup_{\ell=1}^{L_i} B^{(i,\ell)} \subset \Theta_{i+1} \subset \Theta_i$$ where $B^{(i,\ell)} $ is the set defined by $$\begin{aligned}
B^{(i,\ell)} =
\begin{cases}
A^{(i,\ell)} & \text{if $T ({{\boldsymbol{\theta}}^{(i,\ell)}},{{\boldsymbol{\theta}}'^{(i,\ell)}}) > 0$} \\
A'^{(i,\ell)} & \text{if $T ({{\boldsymbol{\theta}}^{(i,\ell)}},{{\boldsymbol{\theta}}'^{(i,\ell)}}) < 0$} \\
A^{(i,\ell)} \bigcup A'^{(i,\ell)} & \text{if $T ({{\boldsymbol{\theta}}^{(i,\ell)}},{{\boldsymbol{\theta}}'^{(i,\ell)}}) = 0$}
\end{cases}\end{aligned}$$ where $ A^{(i,\ell)}$ and $ A'^{(i,\ell)}$ are the Hellinger balls defined by $$\begin{aligned}
A^{(i,\ell)} &=& \left\{ {\boldsymbol{\theta}}'' \in \Theta_i, \, h^2 (f_{{\boldsymbol{\theta}}''}, f_{{\boldsymbol{\theta}}^{(i,\ell)}}) \leq \kappa h^2 (f_{{\boldsymbol{\theta}}^{(i,\ell)}}, f_{{\boldsymbol{\theta}}'^{(i,\ell)}}) \right\} \\
A'^{(i,\ell)} &=& \left\{ {\boldsymbol{\theta}}'' \in \Theta_i, \, h^2 (f_{{\boldsymbol{\theta}}''}, f_{{\boldsymbol{\theta}}'^{(i,\ell)}}) \leq \kappa h^2 (f_{{\boldsymbol{\theta}}^{(i,\ell)}}, f_{{\boldsymbol{\theta}}'^{(i,\ell)}}) \right\}\end{aligned}$$ and where $T$ is the functional defined by (\[eqDefinitionTDimD\]). Suppose moreover that there exists $\kappa_0 > 0$ such that $$\kappa_0 {\text{diam}}(\Theta_i) \leq \inf_{1 \leq \ell \leq L_i} h^2(f_{{\boldsymbol{\theta}}^{(i,\ell)}},f_{{\boldsymbol{\theta}}'^{(i,\ell)}}) \quad \text{for all $i \in \{1,\dots,N-1\}$}.$$ Then, for all $\xi > 0$, $${\mathbb{P}\,}\left[ C \inf_{{\boldsymbol{\theta}}\in \Theta_N} h^2(s,f_{{\boldsymbol{\theta}}}) \geq h^2(s, {\mathscr{F}}) + \frac{D_{{\mathscr{F}}}}{n} + \xi \right] \leq e^{- n \xi}$$ where $C > 0$ depends only on $\kappa, \kappa_0$, where $$\begin{aligned}
D_{{\mathscr{F}}} = \max \left\{d, \sum_{j=1}^d \log \left(1 + t_j^{-1} \left( (d/ \boldsymbol{\bar{\alpha}}) (c \bar{R}_j / \underline{R}_j) \right)^{1/\alpha_j}\right) \right\}\end{aligned}$$ and where $c$ depends only on $\kappa$.
This result says that if $(\Theta_i)_{1 \leq i \leq N}$ is a finite sequence of subsets of $\Theta$ satisfying the assumptions of the theorem, then there exists an estimator ${\boldsymbol{\hat{\theta}}}$ with values in $\Theta_N$ whose risk can be upper bounded by $${\mathbb{P}\,}\left[ C h^2(s,f_{{\boldsymbol{\hat{\theta}}}}) \geq h^2(s, {\mathscr{F}}) + \frac{D_{{\mathscr{F}}}}{n} + \xi \right] \leq e^{- n \xi}.$$ We shall show that algorithms \[AlgorithmDim1\] and \[algoConstructionDimQuelquonque\] correspond to suitable choices of sets $\Theta_i$.
Proof of Theorem \[ThmPrincipal\].
----------------------------------
Let ${\boldsymbol{\theta}}_0 \in\Theta$ be such that $$h^2(s, f_{{\boldsymbol{\theta}}_0}) \leq h^2(s, {\mathscr{F}}) + 1/n.$$ Define $C_{\kappa}$ such that $(1 + \sqrt{C_{\kappa}})^2 = \kappa^{-1}$ and $\varepsilon \in (1/\sqrt{2},1)$ such that $$\frac{\left(1 + \min \left( \frac{1 - \varepsilon}{2}, \varepsilon - \frac{1}{\sqrt{2}} \right) \right)^4 (1 + \varepsilon) + \min \left( \frac{1 - \varepsilon}{2}, \varepsilon - \frac{1}{\sqrt{2}} \right) }{1 - \varepsilon - \min \left( \frac{1 - \varepsilon}{2}, \varepsilon -\frac{1}{\sqrt{2}} \right)} = C_{\kappa}.$$ We then set $$\begin{aligned}
\beta &=& \min \big\{ (1 - \varepsilon)/{2}, \varepsilon - 1/{\sqrt{2}} \big\} \\
\gamma &=& (1+\beta) (1 + \beta^{-1}) \left[1-\varepsilon + (1 +\beta) (1+\varepsilon)\right]\\
c &=& 24 \big(2 + \sqrt{2}/6\, (\varepsilon - 1/\sqrt{2} ) \big) / (\varepsilon - 1/\sqrt{2} )^2 \cdot 10^{3} \\
\delta &=& (1+\beta^{-1}) \left[1 - \varepsilon + (1+\beta)^3 (1+\varepsilon) \right] + c (1 + \beta)^2.\end{aligned}$$ The proof of the theorem is based on the lemma below whose proof is delayed to Section \[SectionPreuveLemmeControle\].
\[LemmeControleH\] For all $\xi > 0$, there exists an event $\Omega_{\xi}$ such that ${\mathbb{P}\,}(\Omega_{\xi}) \geq 1 - e^{-n \xi}$ and on which the following assertion holds: if there exists $p \in \{1,\dots,N-1\}$ such that ${\boldsymbol{\theta}}_0 \in \Theta_p$ and such that $$\begin{aligned}
\label{eqControleH}
\gamma h^2(s, f_{{\boldsymbol{\theta}}_0}) + \delta\left(\frac{D_{{\mathscr{F}}}}{n} + \xi \right) < \beta \inf_{\ell \in \{1,\dots,L_p\}} \left(h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}}) + h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}}) \right) \end{aligned}$$ then ${\boldsymbol{\theta}}_0 \in \Theta_{p+1}$.
The result of Theorem \[ThmPrincipal\] is straightforward if ${\boldsymbol{\theta}}_0 \in \Theta_{N}$, and we shall thus assume that ${\boldsymbol{\theta}}_0 \not \in \Theta_N$. Set $$p = \max \left\{i \in \{1,\dots,N-1\}, \, {\boldsymbol{\theta}}_0 \in \Theta_i\right\}.$$ Let ${\boldsymbol{\theta}}_0'$ be any element of $\Theta_N$. Then, ${\boldsymbol{\theta}}_0'$ belongs to $\Theta_p$ and $$\begin{aligned}
h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}_0'}) &\leq& \sup_{{\boldsymbol{\theta}}, {\boldsymbol{\theta}}' \in \Theta_{p}} h^2(f_{{\boldsymbol{\theta}}},f_{{\boldsymbol{\theta}}'}) \\
&\leq& \kappa_0^{-1} \inf_{\ell \in \{1,\dots,L_p\}} h^2(f_{{\boldsymbol{\theta}}^{(p,\ell)}},f_{{\boldsymbol{\theta}}'^{(p,\ell)}}) \\
&\leq& 2 \kappa_0^{-1} \inf_{\ell \in \{1,\dots,L_p\}} \left(h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}}) + h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}}) \right).\end{aligned}$$ By definition of $p$, ${\boldsymbol{\theta}}_0 \in \Theta_p \setminus \Theta_{p+1}$. We then derive from the above lemma that on $\Omega_{\xi}$, $$\begin{aligned}
\beta \inf_{\ell \in \{1,\dots,L_p\}} \left(h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}}) + h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}}) \right) \leq \gamma h^2(s, f_{{\boldsymbol{\theta}}_0}) + \delta \frac{D_{{\mathscr{F}}} + n \xi}{n}.\end{aligned}$$ Hence, $$\begin{aligned}
h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}_0'}) \leq \frac{2}{ \beta \kappa_0} \left( \gamma h^2(s, f_{{\boldsymbol{\theta}}_0}) + \delta\frac{D_{{\mathscr{F}}} + n \xi}{n} \right)\end{aligned}$$ and thus $$\begin{aligned}
h^2(s,f_{{\boldsymbol{\theta}}_0'}) &\leq& 2 h^2 \left(s, f_{{\boldsymbol{\theta}}_0} \right) + 2 h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}_0'}) \\
&\leq& \left(2 + \frac{4 \gamma}{\beta \kappa_0 }\right) h^2(s, f_{{\boldsymbol{\theta}}_0}) + \frac{4 \delta }{n \beta \kappa_0} \left( D_{{\mathscr{F}}} + n \xi \right).\end{aligned}$$ Since $ h^2(s, f_{{\boldsymbol{\theta}}_0}) \leq h^2(s, {\mathscr{F}}) + 1/n$, there exists $C > 0$ such that $$\begin{aligned}
C h^2(s,f_{{\boldsymbol{\theta}}_0'}) \leq h^2(s, {\mathscr{F}}) + \frac{D_{{\mathscr{F}}} }{n} + \xi \quad \text{on $\Omega_{\xi}$.}\end{aligned}$$ This concludes the proof.
### Proof of Lemma \[LemmeControleH\] {#SectionPreuveLemmeControle}
We use the claim below, whose proof is postponed to Section \[SectionPreuveClaimOmegaXi\].
\[ClaimOmegaXi\] For all $\xi > 0$, there exists an event $\Omega_{\xi}$ such that ${\mathbb{P}\,}(\Omega_{\xi}) \geq 1 - e^{-n \xi}$ and on which, for all $f,f' \in {\mathscr{F}_{\text{dis}}}$, $$\left(1 - \varepsilon \right) h^2(s,f' ) + \frac{\bar{T} (f,f') }{\sqrt{2}} \leq \left(1 + \varepsilon \right) h^2(s,f ) + c \frac{ \left( D_{{\mathscr{F}}} + n \xi \right)}{n}$$ (see Section \[SectionDefTestDimd\] for the definition of ${\mathscr{F}_{\text{dis}}}$).
Let $p \in \{1,\dots,N-1\}$ be such that ${\boldsymbol{\theta}}_0 \in \Theta_p$ and (\[eqControleH\]) holds. Let $\ell \in \{1,\dots,L_p\}$. The aim is to show that ${\boldsymbol{\theta}}_0 \not \in B^{(p,\ell)}$. Without loss of generality, we assume that $T ({{\boldsymbol{\theta}}^{(p,\ell)}},{{\boldsymbol{\theta}}'^{(p,\ell)}}) = \bar{T}(f_{\pi({\boldsymbol{\theta}}^{(p,\ell) })}, f_{\pi( {\boldsymbol{\theta}}'^{(p,\ell)})} )$ is non-negative, and prove that ${\boldsymbol{\theta}}_0 \not \in A^{(p,\ell)}$.
On the event $\Omega_{\xi}$, we deduce from the claim $$\left(1 - \varepsilon \right) h^2(s,f_{\pi({\boldsymbol{\theta}}'^{(p,\ell)})} ) \leq \left(1 + \varepsilon \right) h^2(s,f_{\pi({\boldsymbol{\theta}}^{(p,\ell)})} ) + c \frac{ \left( D_{{\mathscr{F}}} + n \xi \right)}{n}.$$ Consequently, by using the triangular inequality and the above inequality $$\begin{aligned}
\left(1 - \varepsilon \right) h^2(f_{{\boldsymbol{\theta}}_0},f_{\pi({\boldsymbol{\theta}}'^{(p,\ell)})} ) &\leq& \left(1 + \beta^{-1} \right) \left(1 - \varepsilon\right) h^2(s,f_{{\boldsymbol{\theta}}_{0}} ) \\
& & \quad + (1 + \beta) \left(1 - \varepsilon \right) h^2(s,f_{\pi({\boldsymbol{\theta}}'^{(p,\ell)})} ) \\
&\leq& \left(1 + \beta^{-1} \right) \left(1 - \varepsilon \right) h^2(s,f_{{\boldsymbol{\theta}}_{0}} ) \\
& & + (1 + \beta) \left[\left(1 + \varepsilon \right) h^2(s,f_{\pi({\boldsymbol{\theta}}^{(p,\ell)})} ) + c \frac{ \left( D_{{\mathscr{F}}} + n \xi \right)}{n} \right].
\end{aligned}$$ Since $h^2(s,f_{\pi({\boldsymbol{\theta}}^{(p,\ell)})} ) \leq (1+\beta^{-1}) h^2(s,f_{{\boldsymbol{\theta}}_0}) + (1+\beta) h^2(f_{{\boldsymbol{\theta}}_0},f_{\pi({\boldsymbol{\theta}}^{(p,\ell)})})$, $$\begin{aligned}
\label{eqPreuveThmPrincipalm}
\left(1 - \varepsilon \right) h^2(f_{{\boldsymbol{\theta}}_0},f_{\pi({\boldsymbol{\theta}}'^{(p,\ell)})} ) &\leq& (1 + \beta^{-1}) \left[1-\varepsilon + (1 +\beta) (1+\varepsilon)\right] h^2(s, f_{{\boldsymbol{\theta}}_0}) \\
& & \quad + (1+\beta)^2 (1 + \varepsilon) h^2(f_{{\boldsymbol{\theta}}_0},f_{\pi({\boldsymbol{\theta}}^{(p,\ell)})}) \nonumber\\
& & \quad + \frac{c (1 + \beta) \left( D_{{\mathscr{F}}} + n \xi \right)}{n}. \nonumber
\end{aligned}$$ Remark now that for all ${\boldsymbol{\theta}}\in\Theta$, $$\begin{aligned}
h^2(f_{{\boldsymbol{\theta}}},f_{\pi({\boldsymbol{\theta}})} ) \leq \sup_{1 \leq j \leq d} \bar{R}_j \epsilon_j^{\alpha_j} \leq d/n.\end{aligned}$$ By using the triangular inequality, $$\begin{aligned}
h^2(f_{{\boldsymbol{\theta}}_0},f_{\pi({\boldsymbol{\theta}}^{(p,\ell)})} ) &\leq& (1+\beta) h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}}) + d (1 + \beta^{-1})/n \\
h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}} ) &\leq& (1+\beta) h^2(f_{{\boldsymbol{\theta}}_0},f_{\pi({\boldsymbol{\theta}}'^{(p,\ell)})}) + d (1+\beta^{-1})/n.\end{aligned}$$ We deduce from these two inequalities and from (\[eqPreuveThmPrincipalm\]) that $$\begin{aligned}
\left(1 - \varepsilon \right) h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}} ) \!\!\!\!\! &\leq& \!\!\!\!\! \gamma h^2(s, f_{{\boldsymbol{\theta}}_0}) + (1+\beta)^4 (1 + \varepsilon) h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}}) \\
& & \!\!\!\!\!\!\!\!\!\! + \frac{d (1+\beta^{-1}) \left[1 - \varepsilon + (1+\beta)^3 (1+\varepsilon) \right] + c (1 + \beta)^2 \left( D_{{\mathscr{F}}} + n \xi \right)}{n}.
\end{aligned}$$ Since $D_{{\mathscr{F}}} \geq d$ and $\delta \geq 1$, $$\begin{aligned}
\left(1 - \varepsilon \right) h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}} ) &\leq& \gamma h^2(s, f_{{\boldsymbol{\theta}}_0}) + \frac{\delta \left( D_{{\mathscr{F}}} + n \xi \right)}{n} \\
& & \quad + (1+\beta)^4 (1 + \varepsilon) h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}}).
\end{aligned}$$ By using (\[eqControleH\]), $$\begin{aligned}
\left(1 - \varepsilon \right) h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}} ) &<& \beta \left(h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}}) + h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}}) \right) \\
& & \quad + (1+\beta)^4 (1 + \varepsilon) h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}})\end{aligned}$$ and thus $$h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}} ) < C_{\kappa} h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}} ).$$ Finally, $$\begin{aligned}
h^2(f_{{\boldsymbol{\theta}}^{(p,\ell)}},f_{{\boldsymbol{\theta}}'^{(p,\ell)}} ) &\leq& \left(h(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}} ) + h(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}} ) \right)^2 \\
&<& \left(1 + \sqrt{C_{\kappa}} \right)^2 h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}}) \\
&<& \kappa^{-1} h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}})\end{aligned}$$ which leads to ${\boldsymbol{\theta}}_0 \not \in A^{(p,\ell)}$ as wished.
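For completeness, the deduction of the bound $h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}} ) < C_{\kappa} h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}} )$ used above is a mere rearrangement: provided $1 - \varepsilon - \beta > 0$, the preceding strict inequality gives $$h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}'^{(p,\ell)}} ) < \frac{\beta + (1+\beta)^4 (1 + \varepsilon)}{1 - \varepsilon - \beta}\, h^2(f_{{\boldsymbol{\theta}}_0},f_{{\boldsymbol{\theta}}^{(p,\ell)}} ),$$ so that it suffices that the constant $C_{\kappa}$ be at least this ratio.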
### Proof of Claim \[ClaimOmegaXi\] {#SectionPreuveClaimOmegaXi}
This claim ensues from the work of [@BaraudMesure]. More precisely, we derive from Proposition 2 of [@BaraudMesure] that for all $f,f' \in {\mathscr{F}_{\text{dis}}}$, $$\left(1 - \frac{1}{\sqrt{2}}\right) h^2(s,f') + \frac{\overline{T} (f,f')}{\sqrt{2}} \leq \left(1 + \frac{1}{\sqrt{2}}\right) h^2(s,f ) + \frac{\overline{T}(f,f') - {\mathbb{E}}\left[\overline{T}(f,f')\right]}{\sqrt{2}}.$$ Let $z = \varepsilon - 1/\sqrt{2} \in (0, 1-1/\sqrt{2})$. We define $\Omega_{\xi}$ by $$\Omega_{\xi} = \bigcap_{f,f' \in {\mathscr{F}_{\text{dis}}}} \left[ \frac{\overline{T}(f,f') - {\mathbb{E}}\left[\overline{T}(f,f')\right]}{ z \left(h^2(s,f ) + h^2(s,f' ) \right) + c (D_{{\mathscr{F}}} + n \xi)/n} \leq \sqrt{2} \right].$$ On this event, we have $$\left(1 - \varepsilon\right) h^2(s,f') + \frac{\overline{T} (f,f')}{\sqrt{2}} \leq \left(1 + \varepsilon \right) h^2(s,f ) + c \frac{D_{{\mathscr{F}}} + n \xi}{n}$$ and it remains to prove that ${\mathbb{P}\,}(\Omega_{\xi}^c) \leq e^{- n \xi}$.
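In more detail (a short verification), on $\Omega_{\xi}$ we have $$\frac{\overline{T}(f,f') - {\mathbb{E}}\left[\overline{T}(f,f')\right]}{\sqrt{2}} \leq z \left(h^2(s,f ) + h^2(s,f' ) \right) + c\, \frac{D_{{\mathscr{F}}} + n \xi}{n},$$ and plugging this bound into the inequality derived from Proposition 2 of [@BaraudMesure], together with $\varepsilon = z + 1/\sqrt{2}$, yields $$\left(1 - \tfrac{1}{\sqrt{2}} - z\right) h^2(s,f') + \frac{\overline{T}(f,f')}{\sqrt{2}} \leq \left(1 + \tfrac{1}{\sqrt{2}} + z\right) h^2(s,f ) + c\, \frac{D_{{\mathscr{F}}} + n \xi}{n},$$ which is the displayed inequality.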
The following claim shows that Assumption 3 of [@BaraudMesure] is fulfilled.
\[ClaimDimMetriqueF2\] Let $$\begin{aligned}
\tau &=& 4 \frac{2 + \frac{n \sqrt{2}}{6} z}{\frac{n^2}{6} z^2} \\
\eta^2_{{\mathscr{F}}} &=& \max \left\{3 d e^{4}, \sum_{j=1}^d \log \left(1 + 2 t_j^{-1} \left( (d/ \boldsymbol{\bar{\alpha}}) (c \overline{R}_j / \underline{R}_j) \right)^{1/\alpha_j}\right) \right\}.\end{aligned}$$ Then, for all $r \geq 2 \eta_{{\mathscr{F}}}$, $$\begin{aligned}
\label{EqControlDimensioNFdis2}
\left|{\mathscr{F}_{\text{dis}}}\cap {\mathcal{B}}_h \left(s, r \sqrt{\tau} \right)\right| \leq \exp (r^2 / 2)\end{aligned}$$ where ${\mathcal{B}}_h(s, r \sqrt{\tau})$ is the Hellinger ball centered at $s$ with radius $r \sqrt{\tau}$ defined by $${\mathcal{B}}_h (s, r \sqrt{\tau}) = \left\{f \in {\mathbb{L}}^1_+ ({\mathbb{X}},\mu), \; h^2 (s,f) \leq r^2 \tau \right\}.$$
We then derive from Lemma 1 of [@BaraudMesure] that for all $\xi > 0$ and $y^2 \geq \tau \left( 4 \eta^2_{{\mathscr{F}}} + n \xi \right)$, $${\mathbb{P}\,}\left[ \sup_{f,f' \in {\mathscr{F}_{\text{dis}}}} \frac{ \left(\overline{T}(f,f') - {\mathbb{E}}\left[\overline{T}(f,f')\right] \right)/ \sqrt{2}}{\left( h^2(s,f ) + h^2(s,f' ) \right) \vee y^2} \geq z \right] \leq e^{- n \xi}.$$ Notice now that $4 \eta_{{\mathscr{F}}}^2 \leq 10^{3} D_{{\mathscr{F}}}$ and $10^{3} \tau \leq c /n$. This means that we can choose $$y^2 = c \left(D_{{\mathscr{F}}} + n \xi \right)/n,$$ which concludes the proof of Claim \[ClaimOmegaXi\].
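The admissibility of this choice of $y^2$ can be checked directly: since $10^{3} \tau \leq c /n$ and $4 \eta_{{\mathscr{F}}}^2 \leq 10^{3} D_{{\mathscr{F}}}$, $$\tau \left( 4 \eta^2_{{\mathscr{F}}} + n \xi \right) \leq \frac{c}{10^{3} n} \left( 10^{3} D_{{\mathscr{F}}} + n \xi \right) \leq \frac{c \left(D_{{\mathscr{F}}} + n \xi \right)}{n} = y^2.$$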
\[Proof of Claim \[ClaimDimMetriqueF2\]\] If ${\mathscr{F}_{\text{dis}}}\cap {\mathcal{B}}_h(s, r \sqrt{\tau}) = \emptyset$, (\[EqControlDimensioNFdis2\]) holds. Otherwise, there exists ${\boldsymbol{\theta}}_0 = (\theta_{0,1},\dots,\theta_{0,d}) \in \Theta_{\text{dis}}$ such that $h^2(s,f_{{\boldsymbol{\theta}}_0}) \leq r^2 \tau$ and thus $$|{\mathscr{F}_{\text{dis}}}\cap {\mathcal{B}}_h (s, r \sqrt{\tau}) | \leq |{\mathscr{F}_{\text{dis}}}\cap {\mathcal{B}}_h (f_{{\boldsymbol{\theta}}_0}, 2 r \sqrt{\tau})|.$$ Now, $$\begin{aligned}
|{\mathscr{F}_{\text{dis}}}\cap {\mathcal{B}}_h (f_{{\boldsymbol{\theta}}_0}, 2 r \sqrt{\tau})| &=& \left|\left\{f_{{\boldsymbol{\theta}}}, \, {\boldsymbol{\theta}}\in \Theta_{\text{dis}}, \, h^2(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}_0}) \leq 4 r^2\tau \right\} \right| \\
&\leq& \left|\left\{{\boldsymbol{\theta}}\in \Theta_{\text{dis}}, \; \forall j \in \{1,\dots,d\}, \; \underline{R}_j |\theta_j - \theta_{0,j} |^{\alpha_j} \leq 4 r^2 \tau \right\} \right|. \end{aligned}$$ Let $k_{0,j} \in {\mathbb{N}}$ be such that $\theta_{0,j} = m_j + k_{0,j} \epsilon_j$. Then, $$\begin{aligned}
|{\mathscr{F}_{\text{dis}}}\cap {\mathcal{B}}_h (f_{{\boldsymbol{\theta}}_0}, 2 r \sqrt{\tau})| &\leq& \prod_{j=1}^d \left|\left\{k_j \in {\mathbb{N}}, \; |k_j - k_{0,j} | \leq \big( {4 r^2 \tau}/{ \underline{R}_j } \big)^{1/\alpha_j} \epsilon_j^{-1} \right\} \right| \\
&\leq& \prod_{j=1}^d \left(1 + 2 \epsilon_j^{-1} \left( {4 r^2 \tau}/{ \underline{R}_j } \right)^{1/\alpha_j} \right).\end{aligned}$$ By using $4 \tau \leq c/n$ and $\epsilon_j = t_j (\overline{R}_j n)^{-1/\alpha_j}$, $$\begin{aligned}
|{\mathscr{F}_{\text{dis}}}\cap {\mathcal{B}}_h (f_{{\boldsymbol{\theta}}_0}, 2 r \sqrt{\tau})| \leq \prod_{j=1}^d \left(1 + 2 t_j^{-1} \left(r^2 c \overline{R}_j/{\underline{R}_j} \right)^{1/\alpha_j}\right).\end{aligned}$$ If $\boldsymbol{\bar{\alpha}} \leq e^{-4}$, one can check that $\eta_{{\mathscr{F}}}^2 \geq 4 d/ \boldsymbol{\bar{\alpha}}$ (since $c \geq 1$ and $t_j^{-1} \geq d^{-1/\alpha_j}$). If now $\boldsymbol{\bar{\alpha}} \geq e^{-4}$, then $\eta_{{\mathscr{F}}}^2 \geq 3 d e^{4}
\geq 3 d/ \boldsymbol{\bar{\alpha}}$. In particular, we always have $r^2 \geq 10 \left( d/ \boldsymbol{\bar{\alpha}}\right)$.
We derive from the weaker inequality $r^2 \geq d/ \boldsymbol{\bar{\alpha}}$ that $$\begin{aligned}
|{\mathscr{F}_{\text{dis}}}\cap {\mathcal{B}}_h (f_{{\boldsymbol{\theta}}_0}, 2 r \sqrt{\tau})| &\leq& \left(\frac{r^2}{ d/ \boldsymbol{\bar{\alpha}} }\right) ^{d/ \boldsymbol{\bar{\alpha}}} \prod_{j=1}^d \left(1 + 2 t_j^{-1} \left( ( d/ \boldsymbol{\bar{\alpha}}) (c \overline{R}_j / \underline{R}_j) \right)^{1/\alpha_j}\right) \\
&\leq& \exp \left( \frac{\log \left(r^2/( d/ \boldsymbol{\bar{\alpha}}) \right)}{r^2/( d/ \boldsymbol{\bar{\alpha}})} r^2\right) \exp \left(\eta_{{\mathscr{F}}}^2 \right).\end{aligned}$$ We then deduce from the inequalities $r^2/(d/ \boldsymbol{\bar{\alpha}}) \geq 10$ and $\eta_{{\mathscr{F}}}^2 \leq r^2/4$ that $$\begin{aligned}
|{\mathscr{F}_{\text{dis}}}\cap {\mathcal{B}}_h (f_{{\boldsymbol{\theta}}_0}, 2 r \sqrt{\tau})| \leq \exp \left( r^2/4\right) \exp \left( r^2/4\right) \leq \exp (r^2/2)\end{aligned}$$ as wished.
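For completeness, the control of the first exponential factor uses the fact that $x \mapsto (\log x)/x$ is non-increasing on $[e, +\infty)$: since $r^2/(d/ \boldsymbol{\bar{\alpha}}) \geq 10$, $$\frac{\log \left(r^2/( d/ \boldsymbol{\bar{\alpha}}) \right)}{r^2/( d/ \boldsymbol{\bar{\alpha}})} \leq \frac{\log 10}{10} \leq \frac{1}{4}.$$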
Proof of Theorem \[ThmPrincipalDim1\].
--------------------------------------
This theorem ensues from the following result.
Suppose that the assumptions of Theorem \[ThmPrincipalDim1\] hold. For all $\xi > 0$, the estimator $\hat{\theta}$ built in Algorithm \[AlgorithmDim1\] satisfies $$\begin{aligned}
{\mathbb{P}\,}\left[ C h^2(s,f_{\hat{\theta}}) \geq h^2(s, {\mathscr{F}}) + \frac{D_{{\mathscr{F}}}}{n} + \xi \right] \leq e^{- n \xi}
\end{aligned}$$ where $C > 0$ depends only on $\kappa$ and $\overline{R}/\underline{R}$, where $$D_{{\mathscr{F}}} = \max \left\{1, \log \left(1 + t^{-1} \big(c \overline{R}/(\alpha \underline{R})\big)^{1/\alpha} \right) \right\}$$ and where $c$ depends on $\kappa$ only. Besides, if $$h^2(f_{\theta_2}, f_{\theta_2'}) \leq h^2(f_{\theta_1}, f_{\theta_1'}) \quad \text{for all $m \leq \theta_1 \leq \theta_2 < \theta_2' \leq \theta_1' \leq M$}$$ then $C$ depends only on $\kappa$.
The theorem follows from Theorem \[ThmPrincipal\] applied with $\Theta_i = [\theta^{(i)}, \theta'^{(i)}]$ and $L_i = 1$. Note that $${\text{diam}}\Theta_i \leq \overline{R} \big(\theta'^{(i)} - \theta^{(i)}\big)^{\alpha} \leq ({\overline{R}}/{\underline{R}}) h^2(f_{\theta^{(i)}}, f_{\theta'^{(i)}} )$$ which implies that the assumptions of Theorem \[ThmPrincipal\] are fulfilled with $\kappa_0 = \underline{R}/\overline{R}$. Consequently, $${\mathbb{P}\,}\left[ C \inf_{\theta \in \Theta_N } h^2(s,f_{\theta}) \geq h^2(s, {\mathscr{F}}) + \frac{D_{{\mathscr{F}}}}{n} + \xi \right] \leq e^{- n \xi}$$ where $\Theta_N = [\theta^{(N)}, \theta'^{(N)}]$ is such that $\theta'^{(N)} - \theta^{(N)} \leq \eta$. Now, for all $\theta \in \Theta_N$, $$\begin{aligned}
h^2(s,f_{\hat{\theta}}) &\leq& 2 h^2(s,f_{\theta}) + 2 h^2(f_{\theta},f_{\hat{\theta}}) \\
&\leq& 2 h^2(s,f_{\theta}) + 2 \overline{R} \eta^{\alpha}\end{aligned}$$ hence, $$\begin{aligned}
h^2(s,f_{\hat{\theta}}) \leq 2 \inf_{\theta \in \Theta_N} h^2(s,f_{\theta}) + 2 /n\end{aligned}$$ which establishes the first part of the theorem. The second part derives from the fact that under the additional assumption, ${\text{diam}}\Theta_i \leq h^2(f_{\theta^{(i)}}, f_{\theta'^{(i)}} )$, which means that the assumptions of Theorem \[ThmPrincipal\] are fulfilled with $\kappa_0 = 1$.
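Concretely, on the complement of the above event (which has probability at least $1 - e^{-n\xi}$), $C \inf_{\theta \in \Theta_N} h^2(s,f_{\theta}) < h^2(s, {\mathscr{F}}) + D_{{\mathscr{F}}}/n + \xi$, and since $D_{{\mathscr{F}}} \geq 1$, $$h^2(s,f_{\hat{\theta}}) \leq 2 \inf_{\theta \in \Theta_N} h^2(s,f_{\theta}) + \frac{2}{n} \leq \left(2 C^{-1} + 2\right) \left( h^2(s, {\mathscr{F}}) + \frac{D_{{\mathscr{F}}}}{n} + \xi \right),$$ which is the claimed probability bound for $h^2(s,f_{\hat{\theta}})$ up to renaming the constant $C$.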
Proof of Proposition \[PropCalculComplexiteDimen1\].
----------------------------------------------------
For all $i \in \{1,\dots,N-1\}$, $$\begin{aligned}
\theta^{(i+1)}&\in& \left\{ \theta^{(i)}, \theta^{(i)} + \min \left(\bar{r} (\theta^{(i)},\theta'^{(i)}) , (\theta'^{(i)} - \theta^{(i)})/2 \right) \right\} \\
\theta'^{(i+1)} &\in& \left\{\theta'^{(i)}, \theta'^{(i)} - \min \left(\underline{r} (\theta^{(i)},\theta'^{(i)}), (\theta'^{(i)} - \theta^{(i)})/2 \right) \right\}.\end{aligned}$$ Since $\bar{r} (\theta^{(i)},\theta'^{(i)})$ and $\underline{r} (\theta^{(i)},\theta'^{(i)})$ are larger than $$(\kappa \underline{R}/ \overline{R})^{1/\alpha} (\theta'^{(i)} - \theta^{(i)} ),$$ we have $$\begin{aligned}
\theta'^{(i+1)} - \theta^{(i+1)} \leq \max \left\{1 - {\left(\kappa \underline{R}/\overline{R} \right)^{1/\alpha}} , {1}/{2} \right\} (\theta'^{(i)} - \theta^{(i)} ). \end{aligned}$$ By induction, we derive that for all $i \in \{1,\dots,N-1\}$, $$\begin{aligned}
\theta'^{(i+1)} - \theta^{(i+1)} \leq \left(\max \left\{1 - {\left(\kappa \underline{R}/\overline{R} \right)^{1/\alpha}} , {1}/{2} \right\} \right)^{i} (M -m).\end{aligned}$$ The procedure thus requires the computation of at most $N$ tests where $N$ is the smallest integer such that $$\left( \max \left\{1 - {\left(\kappa \underline{R}/\overline{R} \right)^{1/\alpha}} , {1}/{2} \right\} \right)^{N} (M-m) \leq \eta,$$ that is, $$N\geq \frac{\log \left( (M- m )/\eta \right)}{ -\log \left[ \max \left\{1 - {\left(\kappa \underline{R}/\overline{R} \right)^{1/\alpha}} , 1/2 \right\} \right]}.$$ We conclude by using the inequality $-1/\log (1-x) \leq 1/x$ for all $x \in (0,1)$.
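To make the conclusion explicit: since $N$ is the smallest such integer, $$N \leq 1 + \frac{\log \left( (M- m )/\eta \right)}{ -\log \left[ \max \left\{1 - \left(\kappa \underline{R}/\overline{R} \right)^{1/\alpha} , 1/2 \right\} \right]} \leq 1 + \max \left\{ \frac{1}{\log 2}, \left({\overline{R}}/({\kappa \underline{R}})\right)^{1/\alpha} \right\} \log \left( \frac{M-m}{\eta} \right),$$ the last step using $-1/\log (1-x) \leq 1/x$ with $x = (\kappa \underline{R}/\overline{R})^{1/\alpha}$ and $-1/\log(1/2) = 1/\log 2$.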
Proofs of Proposition \[PropCalculComplexiteDimenQuelquonque\] and Theorem \[ThmPrincipalDimQuelquonque\].
----------------------------------------------------------------------------------------------------------
### Rewriting of Algorithm \[algoConstructionDimQuelquonque\]. {#SectionReecritureProcedure}
We rewrite the algorithm to introduce some notations that will be essential to prove Proposition \[PropCalculComplexiteDimenQuelquonque\] and Theorem \[ThmPrincipalDimQuelquonque\].
$\Theta_i = \prod_{j=1}^d [a_j^{(i)}, b_j^{(i)}]$ Choose $k^{(i)} \in \{1,\dots,d\}$ such that $$\underline{R}_{\Theta_i, k^{(i)}} \big(b_{{k^{(i)}}}^{(i)} - a_{{k^{(i)}}}^{(i)}\big)^{\alpha_{k^{(i)}}} = \max_{1 \leq j \leq d} \underline{R}_{\Theta_i, j} \big(b_{j}^{(i)} - a_{j}^{(i)}\big)^{\alpha_j}.$$ ${\boldsymbol{\theta}}^{(i,1)} = (a_1^{(i)},\dots,a_d^{(i)})$, ${\boldsymbol{\theta}}'^{(i,1)} = {\boldsymbol{\theta}}^{(i,1)}$ and $\theta'^{(i,1)}_{{k^{(i)}}} = b_{k^{(i)}}^{(i)}$. ${\varepsilon_j}^{(i,0)} = \bar{r}_{\Theta_i,j} ({\boldsymbol{\theta}}^{(i,1)},{\boldsymbol{\theta}}'^{(i,1)})$ and ${\varepsilon_j}'^{(i,0)} = \bar{r}_{\Theta_i,j} ({\boldsymbol{\theta}}'^{(i,1)},{\boldsymbol{\theta}}^{(i,1)})$ for all $j \neq k^{(i)} $ $\varepsilon_{k^{(i)}}^{(i,0)} = (b_{k^{(i)}}^{(i)} - a_{k^{(i)}}^{(i)})/2$ and $\varepsilon_{k^{(i)}}'^{(i,0)} = (b_{k^{(i)}}^{(i)} - a_{k^{(i)}}^{(i)})/2$ ${\boldsymbol{\theta}}^{(i,\ell+1)} = {\boldsymbol{\theta}}^{(i,\ell)}$ and ${\boldsymbol{\theta}}'^{(i,\ell+1)} = {\boldsymbol{\theta}}'^{(i,\ell)}$ $\varepsilon_{\psi_{k^{(i)}}(1)}^{(i,\ell)} = \bar{r}_{\Theta_i,\psi_{k^{(i)}}(1)} ({\boldsymbol{\theta}}^{(i,\ell)},{\boldsymbol{\theta}}'^{(i,\ell)})$ $\varepsilon_{\psi_{k^{(i)}} (j)}^{(i,\ell)} = \min (\varepsilon_{\psi_{k^{(i)}} (j)}^{(i,\ell-1)}, \bar{r}_{\Theta_i,\psi_{k^{(i)}}(j)} ({\boldsymbol{\theta}}^{(i,\ell)},{\boldsymbol{\theta}}'^{(i,\ell)}))$, for all $j \in \{2,\dots, d-1\}$ $\varepsilon_{k^{(i)}}^{(i,\ell)} = \min (\varepsilon_{k^{(i)}}^{(i,\ell-1)}, \bar{r}_{\Theta_i,k^{(i)}} ({\boldsymbol{\theta}}^{(i,\ell)},{\boldsymbol{\theta}}'^{(i,\ell)}))$ $\mathfrak{J}^{(i,\ell)} = \left\{1 \leq j \leq d -1,\; \theta_{\psi_{k^{(i)}}(j)}^{(i,\ell)} + \varepsilon_{\psi_{k^{(i)}}(j)}^{(i,\ell)} < b_{\psi_{k^{(i)}}(j)}^{(i)}\right\}$ $\mathfrak{j}_{\text{min}}^{(i,\ell)} = \min \mathfrak{J}^{(i,\ell)}$ $\theta_{\psi_{k^{(i)}}(j)}^{(i,\ell+1)} = a_{\psi_{k^{(i)}}(j)}^{(i)}$ for all $j \leq \mathfrak{j}_{\text{min}}^{(i,\ell)} - 1$ $\theta_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}^{(i,\ell)})}^{(i,\ell+1)} = \theta_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}^{(i,\ell)})}^{(i,\ell)} + \varepsilon_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}^{(i,\ell)})}^{(i,\ell)}$ $\mathfrak{j}^{(i,\ell)}_{\text{min}} = d$ $\varepsilon_{\psi_{k^{(i)}}(1)}'^{(i,\ell)} = \bar{r}_{\Theta_i,\psi_{k^{(i)}}(1)} ({\boldsymbol{\theta}}'^{(i,\ell)},{\boldsymbol{\theta}}^{(i,\ell)})$ $\varepsilon_{\psi_{k^{(i)}} (j)}'^{(i,\ell)} = \min (\varepsilon_{\psi_{k^{(i)}} (j)}'^{(i,\ell-1)}, \bar{r}_{\Theta_i,\psi_{k^{(i)}}(j)} ({\boldsymbol{\theta}}'^{(i,\ell)},{\boldsymbol{\theta}}^{(i,\ell)}))$, for all $j \in \{2,\dots, d-1\}$ $\varepsilon_{k^{(i)}}'^{(i,\ell)} = \min (\varepsilon_{k^{(i)}}'^{(i,\ell-1)}, {\underline{r}}_{\Theta_i,k^{(i)}} ({\boldsymbol{\theta}}'^{(i,\ell)},{\boldsymbol{\theta}}^{(i,\ell)}))$ $\mathfrak{J}'^{(i,\ell)} = \left\{1 \leq j \leq d -1,\; \theta_{\psi_{k^{(i)}}(j)}'^{(i,\ell)} + \varepsilon_{\psi_{k^{(i)}}(j)}'^{(i,\ell)} < b_{\psi_{k^{(i)}}(j)}^{(i)}\right\}$ $\mathfrak{j}'^{(i,\ell)}_{\text{min}} = \mathfrak{J}'^{(i,\ell)} $ $\theta_{\psi_{k^{(i)}}(j)}'^{(i,\ell+1)} = a_{\psi_{k^{(i)}}(j)}^{(i)}$ for all $j \leq \mathfrak{j}'^{(i,\ell)}_{\text{min}} - 1$
$\theta_{\psi_{k^{(i)}}(\mathfrak{j}'^{(i,\ell)}_{\text{min}})}'^{(i,\ell+1)} = \theta_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}'^{(i,\ell)})}^{(i,\ell)} + \varepsilon_{\psi_{k^{(i)}}(\mathfrak{j}_{\text{min}}'^{(i,\ell)})}'^{(i,\ell)}$ $\mathfrak{j}'^{(i,\ell)}_{\text{min}} = d$ $L_i = \ell$ and quit the loop $a^{(i+1)}_j = a^{(i)}_j$ and $b^{(i+1)}_j = b^{(i)}_j$ for all $j \neq k^{(i)}$ $a_{k^{(i)}}^{(i+1)} = a_{k^{(i)}}^{(i)} + \varepsilon_{k^{(i)} }^{(i,L_i)} $ $b_{k^{(i)}}^{(i+1)} = b_{k^{(i)}}^{(i)} - \varepsilon_{k^{(i)} }'^{(i,L_i)} $ $\Theta_{i+1} = \prod_{j=1}^d [a_j^{(i+1)}, b_j^{(i+1)}]$
$\Theta_1 = \prod_{j=1}^d [a_j^{(1)},b_j^{(1)}] = \prod_{j=1}^d [m_j,M_j]$ Compute $\Theta_{i+1}$ Leave the loop and set ${N} = i$ $${\boldsymbol{\hat{\theta}}}= \left(\frac{a_1^{(N)} + b_1^{(N)}}{2}, \dots, \frac{a_d^{(N)} + b_d^{(N)}}{2} \right)$$
### Proof of Proposition \[PropCalculComplexiteDimenQuelquonque\].
The algorithm computes $\sum_{i=1}^{N} L_i$ tests. Define for all $j \in \{1,\dots,d\}$, $$I_j = \left\{ i \in \{1,\dots,N\} , \, k^{(i)} = j \right\}.$$ Then, $\cup_{j=1}^d I_j = \{1,\dots,N\} $. Since $$\begin{aligned}
\label{eqPreuvePropoCalculComplexite}
\sum_{i=1}^{N} L_i \leq \sum_{j=1}^d |I_j| \sup_{i \in I_j} L_i,\end{aligned}$$ we first bound $|I_j|$ from above. For all $i \in \{1,\dots,N-1\}$, $$\begin{aligned}
b_{j}^{(i+1)} - a_{j}^{(i+1)} &\leq& b_{j}^{(i)} - a_{j}^{(i)} - \min \left(\varepsilon_{j}^{(i, L_i) }, \varepsilon_{j}'^{(i, L_i) }\right) \quad \text{if $i \in I_j$ } \\
b_{j}^{(i+1)} - a_{j}^{(i+1)} &=& b_{j}^{(i)} - a_{j}^{(i)}\quad \text{if $i \not \in I_j$.}\end{aligned}$$ For all $i \in I_j$ and all $\ell \in \{1,\dots,L_i\}$, we derive from (\[eqSurretRDimD\]), from the equalities $\theta'^{(i,\ell)}_j = b_j^{(i)}$ and $\theta^{(i,\ell)}_j = a_j^{(i)}$, and from the inequalities $\underline{R}_{\Theta_i,j} \geq \underline{R}_j$ and $\overline{R}_{\Theta_i,j} \leq \overline{R}_j$ that $$\begin{aligned}
\bar{r}_{\Theta_i,j} ({\boldsymbol{\theta}}^{(i,\ell)},{\boldsymbol{\theta}}'^{(i,\ell)}) &\geq& (\kappa \underline{R}_j/ \overline{R}_j)^{1/\alpha_j} (b_j^{(i)} - a_j^{(i)} ) \label{eqMinorationr1} \\
\underline{r}_{\Theta_i,j} ({\boldsymbol{\theta}}'^{(i,\ell)},{\boldsymbol{\theta}}^{(i,\ell)}) &\geq& (\kappa \underline{R}_j/ \overline{R}_j)^{1/\alpha_j} (b_j^{(i)} - a_j^{(i)}) \label{eqMinorationr2}.\end{aligned}$$ Consequently, $$\begin{aligned}
\min \left(\varepsilon_{j}^{(i, L_i) }, \varepsilon_{j}'^{(i, L_i) }\right) \geq \min \left\{{(b_j^{(i)}-a_j^{(i)})}/{2} , (\kappa \underline{R}_j/\overline{R}_j)^{1/\alpha_j} (b_j^{(i)} - a_j^{(i)}) \right\}.\end{aligned}$$ We then have, $$\begin{aligned}
b_{j}^{(i+1)} - a_{j}^{(i+1)} &\leq& \max \left(1/2, 1 - \left({\kappa \underline{R}_j}/\overline{R}_j \right)^{1/\alpha_{j}} \right) \big(b_j^{(i)} - a_j^{(i)}\big) \quad \text{when $i \in I_j$} \\
b_{j}^{(i+1)} - a_{j}^{(i+1)} &=& b_j^{(i)} - a_j^{(i)} \quad \text{when $i \not \in I_j$.}\end{aligned}$$ Let $n_j$ be any integer such that $$\left(\max \left\{1/2, 1 - \left({\kappa \underline{R}_j/\overline{R}_j} \right)^{1/\alpha_{j}} \right\} \right)^{ n_j} \leq \eta_j/(M_j-m_j).$$ If $|I_j| > n_j$, then for $i = \max I_j$, $$\begin{aligned}
b_{j}^{(i)} - a_{j}^{(i)} &\leq& \left( \max \left\{1/2, 1 - \left({\kappa \underline{R}_j} /\overline{R}_j\right)^{1/\alpha_{j}} \right\} \right)^{|I_j| - 1} (M_j-m_j) \\
&\leq& \left(\max \left\{1/2, 1 - \left({\kappa \underline{R}_j} /\overline{R}_j\right)^{1/\alpha_{j}} \right\} \right)^{ n_j} (M_j-m_j) \end{aligned}$$ and thus $ b_{j}^{(i)} - a_{j}^{(i)} \leq \eta_j$. This is impossible because $i \in I_j$ implies that $b_{j}^{(i)} - a_{j}^{(i)} > \eta_j$. Consequently, $|I_j| \leq n_j$. We then set $n_j$ as the smallest integer larger than $$\frac{\log \left(\eta_j / (M_j-m_j) \right)}{ \log \left(\max \left\{1/2, 1 - \left({\kappa \underline{R}_j} /\overline{R}_j\right)^{1/\alpha_{j}} \right\} \right)}.$$ By using the inequality $-1/\log (1-x) \leq 1/x$ for all $x \in (0,1)$, we obtain $$\begin{aligned}
|I_j| \leq 1 + \max \left( 1/\log 2, \left(\kappa \underline{R}_j /\overline{R}_j\right)^{-1/\alpha_{j}} \right) \log \left( \frac{M_j-m_j}{\eta_j} \right).\end{aligned}$$ We now roughly bound from above the right-hand side of this inequality: $$\begin{aligned}
|I_j| \leq 2 \left( 1 + \left( \overline{R}_j/(\kappa \underline{R}_j) \right)^{1/\alpha_{j}} \right) \left( 1 \vee \log \left( \frac{M_j-m_j}{\eta_j} \right) \right).\end{aligned}$$ We recall that our aim is to bound from above $\sum_{i=1}^{N} L_i$. Thanks to (\[eqPreuvePropoCalculComplexite\]), it remains to upper bound $ \sup_{i \in I_j} L_i$. This ensues from the following lemma.
\[LemmeDansPreuveComplexiteAlgo\] Let $$\mathcal{L} = \left\{1 \leq \ell \leq L_i, \; T({\boldsymbol{\theta}}^{(i,\ell)},{\boldsymbol{\theta}}'^{(i,\ell)}) \geq 0 \right\} \quad \text{and} \quad \mathcal{L}' = \left\{1 \leq \ell \leq L_i, \; T({\boldsymbol{\theta}}^{(i,\ell)},{\boldsymbol{\theta}}'^{(i,\ell)}) \leq 0 \right\}.$$ Then, $$\begin{aligned}
|\mathcal{L} | &\leq& \prod_{k \in \{1,\dots,d\} \setminus \{ k^{(i)}\}} \left[ 1 + \left( {\overline{R}_k}/({\kappa \underline{R}_k})\right)^{1/\alpha_k}\right] \\
|\mathcal{L}'| &\leq& \prod_{k \in \{1,\dots,d\} \setminus \{ k^{(i)}\}} \left[ 1 + \left( {\overline{R}_k}/({\kappa \underline{R}_k})\right)^{1/\alpha_k}\right].\end{aligned}$$
Since $\{1,\dots,L_i\} \subset \mathcal{L} \cup \mathcal{L}'$, we obtain $$\sum_{j=1}^d |I_j| \sup_{i \in I_j} L_i \leq 4 \left[\sum_{j=1}^d \left( 1 \vee \log \left( \frac{M_j-m_j}{\eta_j} \right)\right)\right] \left[\prod_{j=1}^d \left(1 + \left( {\overline{R}_j}/({\kappa \underline{R}_j})\right)^{1/\alpha_j} \right) \right],$$ which completes the proof.
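For the reader's convenience, the last display is obtained as follows: for every $i \in I_j$ one has $k^{(i)} = j$, so the lemma and the inclusion $\{1,\dots,L_i\} \subset \mathcal{L} \cup \mathcal{L}'$ give $$\sup_{i \in I_j} L_i \leq |\mathcal{L}| + |\mathcal{L}'| \leq 2 \prod_{k \neq j} \left[ 1 + \left( {\overline{R}_k}/({\kappa \underline{R}_k})\right)^{1/\alpha_k}\right],$$ and combining this with the bound on $|I_j|$ obtained above yields $$|I_j| \sup_{i \in I_j} L_i \leq 4 \left( 1 \vee \log \left( \frac{M_j-m_j}{\eta_j} \right) \right) \prod_{k=1}^d \left[ 1 + \left( {\overline{R}_k}/({\kappa \underline{R}_k})\right)^{1/\alpha_k}\right];$$ it then remains to sum over $j$.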
### Proof of Lemma \[LemmeDansPreuveComplexiteAlgo\].
Without loss of generality and for the sake of simplicity, we assume that $k^{(i)} = d$ and $\psi_{d } (j) = j$ for all $j \in \{1,\dots,d-1\}$. Let $\ell_1 < \cdots < \ell_r$ be the elements of $\mathcal{L} $. Define, for all $p \in \{1,\dots,d-1\}$, $k_{p,0} = 0$ and, by induction, for every integer ${\mathfrak{m}}$, $$\begin{aligned}
k_{p,{\mathfrak{m}}+1} =
\begin{cases}
\inf \left\{k > k_{p,{\mathfrak{m}}}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p \right\} & \text{if there exists $k \in \{k_{p,{\mathfrak{m}}} +1,\dots,r \}$ such that $\mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p$} \\
r & \text{otherwise.}
\end{cases}\end{aligned}$$ Let $\mathfrak{M}_p$ be the smallest integer ${\mathfrak{m}}$ for which $k_{p,{\mathfrak{m}}} = r$. Set for all ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_p-1\}$, $$K_{p,{\mathfrak{m}}} = \left\{k_{p,{\mathfrak{m}}}+1, \dots, k_{p,{\mathfrak{m}}+1} \right\}.$$ The cardinality of $K_{p,{\mathfrak{m}}}$ can be upper bounded by the claim below.
\[ClaimMajorationCardinalDeKpm\] For all $p \in \{1,\dots,d-1\}$ and ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_p-1\}$, $$\begin{aligned}
\label{eqMajorationKpm}
|K_{p,{\mathfrak{m}}}| \leq \prod_{k=1}^p \left[ 1 + \left(\frac{\overline{R}_k}{\kappa \underline{R}_k}\right)^{1/\alpha_k} \right]. \end{aligned}$$
Lemma \[LemmeDansPreuveComplexiteAlgo\] follows from the equality $\mathcal{L} = K_{d-1,0}$. The cardinality of $\mathcal{L}'$ can be bounded from above in the same way.
\[Proof of Claim \[ClaimMajorationCardinalDeKpm\]\] The result is proved by induction on $p$. We first prove (\[eqMajorationKpm\]) when $p = 1$.
Let ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_1-1\}$. We have $\theta_1^{(i,\ell_{ k_{1,{\mathfrak{m}}} + 1})} = a_1^{(i)}$ and for $j \in \{ 1,\dots, k_{1,{\mathfrak{m}}+1}-k_{1,{\mathfrak{m}}}-1\}$, $$\theta_1^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j+1})} \geq \theta_1^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j})} + \bar{r}_{\Theta_i,1} \left({\boldsymbol{\theta}}^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j})}, {\boldsymbol{\theta}}'^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j})}\right).$$ Now, $$\begin{aligned}
\bar{r}_{\Theta_i,1} \left({\boldsymbol{\theta}}^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j})}, {\boldsymbol{\theta}}'^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j})}\right) \geq
\left( (\kappa \underline{R}_{\Theta_i,d} / \overline{R}_{\Theta_i,1}) ( b_d^{(i)}-a_d^{(i)} )^{\alpha_d} \right)^{1/\alpha_1}.
\end{aligned}$$ Since $\underline{R}_{\Theta_i,d} (b_d^{(i)}-a_d^{(i)} )^{\alpha_d} \geq \underline{R}_{\Theta_i,1} (b_1^{(i)}-a_1^{(i)} )^{\alpha_1} $, $$\begin{aligned}
\bar{r}_{\Theta_i,1} \left({\boldsymbol{\theta}}^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j})}, {\boldsymbol{\theta}}'^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j})}\right) &\geq& \left( \kappa \underline{R}_{\Theta_i,1} / \overline{R}_{\Theta_i,1} \right)^{1/\alpha_1} ( b_1^{(i)}-a_1^{(i)}) \nonumber \\
&\geq& (\kappa \underline{R}_{1}/ \overline{R}_1)^{1/\alpha_1} ( b_1^{(i)}-a_1^{(i)}) \label{eqMinorationbarrPreuve}.
\end{aligned}$$ This leads to $$\theta_1^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j+1})} \geq \theta_1^{(i,\ell_{ k_{1,{\mathfrak{m}}}+j})} + \left({\kappa \underline{R}_{1}}/{\overline{R}_1}\right)^{1/\alpha_1} (b_1^{(i)} - a_1^{(i)} ).$$ Moreover, $ \theta_1^{(i,\ell_{ k_{1,{\mathfrak{m}}+1}})} \leq b_1^{(i)} $ (because all the ${\boldsymbol{\theta}}^{(i,\ell)}, {\boldsymbol{\theta}}'^{(i,\ell)}$ belong to $\Theta_i$). Consequently, $$a_1^{(i)} + \left(k_{1,{\mathfrak{m}}+1}-k_{1,{\mathfrak{m}}}-1\right) \left({\kappa \underline{R}_{1}}/{\overline{R}_1}\right)^{1/\alpha_1} \left(b_1^{(i)} - a_1^{(i)}\right) \leq b_1^{(i)},$$ which shows the result for $p = 1$.
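Indeed, the last display rewrites (after dividing by $b_1^{(i)} - a_1^{(i)}$) as $$k_{1,{\mathfrak{m}}+1}-k_{1,{\mathfrak{m}}}-1 \leq \left({\overline{R}_1}/({\kappa \underline{R}_{1}})\right)^{1/\alpha_1},$$ that is, $|K_{1,{\mathfrak{m}}}| = k_{1,{\mathfrak{m}}+1}-k_{1,{\mathfrak{m}}} \leq 1 + \left({\overline{R}_1}/({\kappa \underline{R}_{1}})\right)^{1/\alpha_1}$, which is precisely (\[eqMajorationKpm\]) for $p = 1$.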
Suppose now that (\[eqMajorationKpm\]) holds for $p \in \{1,\dots,d-2\}$. We shall show that it also holds for $p+1$. Let ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_{p+1} -1 \}$. We use the claim below whose proof is postponed to Section \[SectionPreuveDesClaims\].
\[ClaimInclusionKpDansKp\] For all ${\mathfrak{m}}\in \{0,\dots, \mathfrak{M}_{p+1}-1\}$, there exists ${\mathfrak{m}}' \in \{0,\dots, \mathfrak{M}_{p}-1\}$ such that $k_{p,{\mathfrak{m}}'+1} \in K_{p+1,{\mathfrak{m}}}$.
The claim says that we can consider the smallest integer ${\mathfrak{m}}_0$ of $\{0,\dots, \mathfrak{M}_p-1\}$ such that $k_{p,{\mathfrak{m}}_0+1} > k_{p+1,{\mathfrak{m}}}$, and the largest integer ${\mathfrak{m}}_1$ of $\{0,\dots, \mathfrak{M}_p-1\}$ such that $k_{p,{\mathfrak{m}}_1+1} \leq k_{p+1,{\mathfrak{m}}+1}$. We define $$\begin{aligned}
I_{{\mathfrak{m}}_0} &=& \left\{k_{p+1,{\mathfrak{m}}}+1,\dots,k_{p,{\mathfrak{m}}_0+1}\right\}\\
I_{{\mathfrak{m}}'} &=& \left\{k_{p,{\mathfrak{m}}'}+1,\dots,k_{p,{\mathfrak{m}}'+1}\right\} \quad \text{for all ${\mathfrak{m}}' \in \{{\mathfrak{m}}_0+1,\dots,{\mathfrak{m}}_1\}$}\\
I_{{\mathfrak{m}}_1+1} &=& \left\{k_{p,{\mathfrak{m}}_1+1}+1,\dots,k_{p+1,{\mathfrak{m}}+1}\right\}.\end{aligned}$$ We then have $$K_{p+1,{\mathfrak{m}}} =\bigcup_{{\mathfrak{m}}'={\mathfrak{m}}_0}^{{\mathfrak{m}}_1+1} I_{{\mathfrak{m}}'}.$$ Notice that for all ${\mathfrak{m}}' \in \{{\mathfrak{m}}_0,\dots,{\mathfrak{m}}_1\}$, $I_{{\mathfrak{m}}'} \subset K_{p,{\mathfrak{m}}'}$. We consider two cases.
- If $k_{p,{\mathfrak{m}}_1+1} = k_{p+1,{\mathfrak{m}}+1}$, then $I_{{\mathfrak{m}}_1+1} = \emptyset$ and thus, by using the above inclusion and the induction assumption, $$\begin{aligned}
|K_{p+1,{\mathfrak{m}}}| \leq ({\mathfrak{m}}_1-{\mathfrak{m}}_0+1) \prod_{k=1}^p \left[ 1 + \left(\frac{\overline{R}_k}{\kappa \underline{R}_k}\right)^{1/\alpha_k}\right].\end{aligned}$$
- If $k_{p,{\mathfrak{m}}_1+1} < k_{p+1,{\mathfrak{m}}+1}$ then ${\mathfrak{m}}_1 + 1\leq \mathfrak{M}_p-1$. Indeed, if this is not true, then ${\mathfrak{m}}_1 = \mathfrak{M}_p-1$, which leads to $k_{p,{\mathfrak{m}}_1+1} = r$ and thus $k_{p+1,{\mathfrak{m}}+1} > r$. This is impossible since $k_{p+1,{\mathfrak{m}}+1}$ is at most $r$ (by definition). Consequently, $I_{{\mathfrak{m}}_1+1} \subset K_{p,{\mathfrak{m}}_1+1}$ and we derive from the induction assumption that $$|K_{p+1,{\mathfrak{m}}}| \leq ({\mathfrak{m}}_1-{\mathfrak{m}}_0+2) \prod_{k=1}^p \left[ 1 + \left(\frac{\overline{R}_k}{\kappa \underline{R}_k}\right)^{1/\alpha_k}\right].$$
We now bound from above ${\mathfrak{m}}_1-{\mathfrak{m}}_0$.
Since for all $k \in \left\{k_{p+1,{\mathfrak{m}}}+1,\dots,k_{p,{\mathfrak{m}}_0+1}-1\right\}$, $\mathfrak{j}_{\text{min}}^{(i,\ell_k)} \leq p$, we have $$\theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}_0+1}})} = \theta_{p+1}^{(i,\ell_{k_{p+1,{\mathfrak{m}}}+1})} = a_{p+1}^{(i)}.$$ Since $\mathfrak{j}_{\text{min}}^{(i,\ell_{ k_{p,{\mathfrak{m}}_0+1}})} = p+1$, $$\theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}_0+1}+1})} \geq \theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}_0+1}})} +
\bar{r}_{\Theta_i,p+1} \left({\boldsymbol{\theta}}^{(i,1)}, {\boldsymbol{\theta}}'^{(i,1)}\right)$$ and thus by using a similar argument as the one used in the proof of (\[eqMinorationbarrPreuve\]), $$\begin{aligned}
\theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}_0+1}+1})} &\geq& \theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}_0+1}})} + \left({\kappa \underline{R}_{p+1}}/{\overline{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right) \\
&\geq& a_{p+1}^{(i)} + \left({\kappa \underline{R}_{p+1}}/{\overline{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right).\end{aligned}$$ Similarly, for all ${\mathfrak{m}}' \in \{{\mathfrak{m}}_0+1,\dots,{\mathfrak{m}}_1\}$ and $k \in \left\{k_{p,{\mathfrak{m}}'}+1,\dots,k_{p,{\mathfrak{m}}'+1}-1\right\}$, $\mathfrak{j}_{\text{min}}^{(i,\ell_k)} \leq p$ and thus $$\theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}'+1}})} = \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}'}+1})}.$$ Moreover, for all ${\mathfrak{m}}' \in \{{\mathfrak{m}}_0+1,\dots,{\mathfrak{m}}_1-1\}$, $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,{\mathfrak{m}}'+1}})} = p+1$ and thus $$\begin{aligned}
\label{EqDansPreuveComplexiteAlgoDimQuel}
\theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}'+1}+1})} &\geq& \theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}'+1}})} + \left({\kappa \underline{R}_{p+1}}/{\overline{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right) \\
&\geq& \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}'}+1})} + \left({\kappa \underline{R}_{p+1}}/{\overline{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right) \nonumber.\end{aligned}$$ This leads to $$\begin{aligned}
\theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}_1}+1})} &\geq& \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0+1 }+1})} + \left({\mathfrak{m}}_1 - {\mathfrak{m}}_0-1 \right) \left({\kappa \underline{R}_{p+1}}/{\overline{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right) \\
&\geq& a_{p+1}^{(i)} + \left({\mathfrak{m}}_1 - {\mathfrak{m}}_0 \right) \left({\kappa \underline{R}_{p+1}}/{\overline{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right).\end{aligned}$$ Two cases arise: either $k_{p,{\mathfrak{m}}_1+1} = k_{p+1,{\mathfrak{m}}+1}$ or $k_{p,{\mathfrak{m}}_1+1} < k_{p+1,{\mathfrak{m}}+1}$.
- If $k_{p,{\mathfrak{m}}_1+1} = k_{p+1,{\mathfrak{m}}+1}$, $$\begin{aligned}
\theta_{p+1}^{(i,\ell_{ k_{p+1,{\mathfrak{m}}+1}})} &=& \theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}_1}+1})} \\
&\geq& a_{p+1}^{(i)} + \left({\mathfrak{m}}_1 - {\mathfrak{m}}_0 \right) \left({\kappa \underline{R}_{p+1}}/{\overline{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right).\end{aligned}$$ Since $ \theta_{p+1 }^{(i,\ell_{ k_{p+1,{\mathfrak{m}}+1}})} \leq b_{p+1}^{(i)}$, we have $${\mathfrak{m}}_1 - {\mathfrak{m}}_0 \leq \left({\overline{R}_{p+1}}/{(\kappa \underline{R}_{p+1})}\right)^{1/\alpha_{p+1}}.$$
- If now $k_{p,{\mathfrak{m}}_1+1} < k_{p+1,{\mathfrak{m}}+1}$, then (\[EqDansPreuveComplexiteAlgoDimQuel\]) also holds for ${\mathfrak{m}}' = {\mathfrak{m}}_1$. This implies $$\begin{aligned}
\theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}_1+1}+1})} \geq a_{p+1}^{(i)} + \left({\mathfrak{m}}_1 - {\mathfrak{m}}_0 + 1 \right) \left({\kappa \underline{R}_{p+1}}/{\overline{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right).\end{aligned}$$ Since $\mathfrak{j}_{\text{min}}^{(i,\ell_k)} \leq p$ for all $k \in \left\{k_{p,{\mathfrak{m}}_1+1},\dots,k_{p+1,{\mathfrak{m}}+1}-1\right\}$, $$\begin{aligned}
\theta_{p+1}^{(i,\ell_{ k_{p+1,{\mathfrak{m}}+1}})} &=& \theta_{p+1}^{(i,\ell_{ k_{p,{\mathfrak{m}}_1+1}+1})} \\
&\geq& a_{p+1}^{(i)} + \left({\mathfrak{m}}_1 - {\mathfrak{m}}_0 + 1 \right) \left({\kappa \underline{R}_{p+1}}/{\overline{R}_{p+1}}\right)^{1/\alpha_{p+1}} \left(b_{p+1}^{(i)} - a_{p+1}^{(i)}\right).\end{aligned}$$ Since $ \theta_{p+1}^{(i,\ell_{ k_{p+1,{\mathfrak{m}}+1}})} \leq b_{p+1}^{(i)}$, $${\mathfrak{m}}_1 - {\mathfrak{m}}_0 + 1 \leq \left({\overline{R}_{p+1}}/{(\kappa \underline{R}_{p+1})}\right)^{1/\alpha_{p+1}}.$$
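In both cases, the bound on $|K_{p+1,{\mathfrak{m}}}|$ obtained at the beginning of this step combines with the above control of ${\mathfrak{m}}_1 - {\mathfrak{m}}_0$ to give $$|K_{p+1,{\mathfrak{m}}}| \leq \left[1 + \left({\overline{R}_{p+1}}/({\kappa \underline{R}_{p+1}})\right)^{1/\alpha_{p+1}}\right] \prod_{k=1}^p \left[ 1 + \left(\frac{\overline{R}_k}{\kappa \underline{R}_k}\right)^{1/\alpha_k}\right] = \prod_{k=1}^{p+1} \left[ 1 + \left(\frac{\overline{R}_k}{\kappa \underline{R}_k}\right)^{1/\alpha_k}\right],$$ which is (\[eqMajorationKpm\]) at rank $p+1$.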
This ends the proof.
### Proof of Theorem \[ThmPrincipalDimQuelquonque\].
The lemma and claim below show that the assumptions of Theorem \[ThmPrincipal\] are satisfied.
\[PreuveLemmeAlgoDimQuelc\] For all $i \in \{1,\dots, N-1\}$, $$\begin{aligned}
\Theta_i \setminus \bigcup_{\ell=1}^{L_i} B^{(i,\ell)} \subset \Theta_{i+1} \subset \Theta_i.\end{aligned}$$
\[ClaimPreuveAlgoDim2Kappa0\] For all $i \in \{1,\dots,N - 1\}$ and $\ell \in \{1,\dots,L_i\}$, $$\kappa_0 {\text{diam}}(\Theta_i) \leq h^2(f_{{\boldsymbol{\theta}}^{(i,\ell)}},f_{{\boldsymbol{\theta}}'^{(i,\ell)}})$$ where $\kappa_0 = \inf_{1 \leq j \leq d} \underline{R}_j/\overline{R}_j$.
We now derive from Theorem \[ThmPrincipal\] that $${\mathbb{P}\,}\left[ C \inf_{{\boldsymbol{\theta}}\in \Theta_N} h^2(s,f_{{\boldsymbol{\theta}}}) \geq h^2(s, {\mathscr{F}}) + \frac{D_{{\mathscr{F}}}}{n} + \xi \right] \leq e^{- n \xi}$$ where $C > 0$ depends only on $\kappa$ and $\sup_{1 \leq j \leq d} \overline{R}_j/\underline{R}_j$. Consequently, with probability larger than $1 - e^{-n \xi}$, $$\begin{aligned}
h^2(s,f_{{\boldsymbol{\hat{\theta}}}}) &\leq& 2 \inf_{{\boldsymbol{\theta}}\in \Theta_N} h^2(s,f_{{\boldsymbol{\theta}}}) + 2 {\text{diam}}\, \Theta_N \\
&\leq& 2 C^{-1} \left(h^2(s, {\mathscr{F}}) + \frac{D_{{\mathscr{F}}}}{n} + \xi\right) + 2 \sup_{1 \leq j \leq d} \overline{R}_j \eta_j^{\alpha_j} \\
&\leq& 2 C^{-1} \left(h^2(s, {\mathscr{F}}) + \frac{D_{{\mathscr{F}}}}{n} + \xi\right) + 2d /n\\
&\leq& C' \left(h^2(s, {\mathscr{F}}) + \frac{d}{n} + \xi\right).\end{aligned}$$ The theorem follows.
### Proof of Lemma \[PreuveLemmeAlgoDimQuelc\].
Since $$\varepsilon_{k^{(i)} }^{(i,L_i)} \leq \frac{b^{(i)}_{k^{(i)} } - a^{(i)}_{k^{(i)} }}{2} \quad \text{and} \quad \varepsilon_{k^{(i)} }'^{(i,L_i)} \leq \frac{b^{(i)}_{k^{(i)} } - a^{(i)}_{k^{(i)} }}{2},$$ we have $\Theta_{i+1} \subset \Theta_i$. We now aim at proving $\Theta_i \setminus \cup_{\ell=1}^{L_i} B^{(i,\ell)} \subset \Theta_{i+1}$.
We introduce the rectangles $$\begin{aligned}
\mathcal{R}^{(i,\ell)} &=& \prod_{q=1}^d \left[\theta_q^{(i,\ell)}, \theta_q^{(i,\ell)} + \varepsilon_{q}^{(i,\ell)} \right] \\
\mathcal{R}'^{(i,\ell)} &=& \prod_{q=1}^{k^{(i)}-1} \left[\theta_q'^{(i,\ell)}, \theta_q'^{(i,\ell)} + \varepsilon_{q}'^{(i,\ell)} \right] \times\left[\theta_{k^{(i)}}'^{(i,\ell)} - \varepsilon_{k^{(i)}}'^{(i,\ell)} , \theta_{k^{(i)}}'^{(i,\ell)} \right] \times \prod_{q=k^{(i)}+1}^{d} \left[\theta_q'^{(i,\ell)}, \theta_q'^{(i,\ell)} + \varepsilon_{q}'^{(i,\ell)} \right] \end{aligned}$$ and we set $$\begin{aligned}
\mathcal{R}''^{(i,\ell)}=
\begin{cases}
\mathcal{R}^{(i,\ell)}& \text{if $T ({{\boldsymbol{\theta}}^{(i,\ell)}},{{\boldsymbol{\theta}}'^{(i,\ell)}}) > 0$} \\
\mathcal{R}'^{(i,\ell)}& \text{if $T ({{\boldsymbol{\theta}}^{(i,\ell)}},{{\boldsymbol{\theta}}'^{(i,\ell)}}) < 0$} \\
\mathcal{R}^{(i,\ell)} \bigcup \mathcal{R}'^{(i,\ell)} & \text{if $T ({{\boldsymbol{\theta}}^{(i,\ell)}},{{\boldsymbol{\theta}}'^{(i,\ell)}}) = 0$}.
\end{cases}\end{aligned}$$ We derive from (\[eqInclusionRC1\]) that $\Theta_i \cap \mathcal{R}''^{(i,\ell)} \subset B^{(i,\ell)}$. It is then sufficient to show $$\Theta_i \setminus \bigcup_{\ell=1}^{L_i} \mathcal{R}''^{(i,\ell)} \subset \Theta_{i+1}.$$ For this purpose, note that either $T({\boldsymbol{\theta}}^{(i,L_i)},{\boldsymbol{\theta}}'^{(i,L_i)}) \geq 0$ or $T({\boldsymbol{\theta}}^{(i,L_i)},{\boldsymbol{\theta}}'^{(i,L_i)}) \leq 0$. In what follows, we assume that $T({\boldsymbol{\theta}}^{(i,L_i)},{\boldsymbol{\theta}}'^{(i,L_i)}) \geq 0$ but a similar proof can be made if $T({\boldsymbol{\theta}}^{(i,L_i)},{\boldsymbol{\theta}}'^{(i,L_i)})$ is non-positive. Without loss of generality, and for the sake of simplicity, we suppose as in the proof of Lemma \[LemmeDansPreuveComplexiteAlgo\] that $k^{(i)} = d$ and $\psi_{d} (j) = j$ for all $j \in \{1,\dots,d-1\}$. Let $$\mathcal{L} = \left\{1 \leq \ell \leq L_i, \; T({\boldsymbol{\theta}}^{(i,\ell)},{\boldsymbol{\theta}}'^{(i,\ell)}) \geq 0 \right\}$$ and $\ell_1 < \cdots < \ell_r$ be the elements of $\mathcal{L} $. We have $$\Theta_{i+1} = \prod_{q=1}^{d-1} \left[a_q^{(i)}, b_q^{(i)}\right] \times \left[a_d^{(i)} + \varepsilon_d^{(i,L_i)} , b_d^{(i)} \right]$$ and it is sufficient to prove that $$\prod_{q=1}^{d-1} \left[a_q^{(i)}, b_q^{(i)}\right] \times \left[a_d^{(i)}, a_d^{(i)} + \varepsilon_d^{(i,L_i)} \right] \subset \bigcup_{k=1}^{r} \mathcal{R}^{(i, \ell_k)}.$$ For this, remark that for all $k \in \{1,\dots,r\}$, $\theta_d^{(i,\ell_{k})} = a_d^{(i)} $ and thus $$\mathcal{R}^{(i, \ell_k)} = \prod_{q=1}^{d-1} \left[ \theta_q^{(i,\ell_{k})}, \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right] \times \left[ a_d^{(i)}, a_d^{(i)} + \varepsilon_{d}^{(i,\ell_k)} \right].$$ By using the fact that the sequence $ (\varepsilon_{d}^{(i,\ell_k)})_k$ is non-increasing, $$\left[a_d^{(i)}, a_d^{(i)} + \varepsilon_d^{(i,L_i)} \right] \subset \bigcap_{k=1}^{r}\left[ a_d^{(i)}, a_d^{(i)} + \varepsilon_{d}^{(i,\ell_k)} \right].$$ This means that it is sufficient to show that $$\begin{aligned}
\label{eqInclusionPreuveDimenQ}
\prod_{q=1}^{d-1} \left[a_q^{(i)}, b_q^{(i)}\right] \subset \bigcup_{k=1}^{r} \prod_{q=1}^{d-1} \left[ \theta_q^{(i,\ell_{k})} , \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right].\end{aligned}$$ Let us now define (as in the proof of Lemma \[LemmeDansPreuveComplexiteAlgo\]), for all $p \in \{1,\dots,d-1\}$, $k_{p,0} = 0$ and, by induction, for every integer ${\mathfrak{m}}$, $$\begin{aligned}
k_{p,{\mathfrak{m}}+1} =
\begin{cases}
\inf \left\{k > k_{p,{\mathfrak{m}}}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p \right\} & \text{if there exists $k \in \{k_{p,{\mathfrak{m}}} +1,\dots,r \}$ such that $\mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p$} \\
r & \text{otherwise.}
\end{cases}\end{aligned}$$ Let $\mathfrak{M}_p$ be the smallest integer ${\mathfrak{m}}$ such that $k_{p,{\mathfrak{m}}} = r$. Then set, for all ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_p-1\}$, $$K_{p,{\mathfrak{m}}} = \left\{k_{p,{\mathfrak{m}}}+1, \dots, k_{p,{\mathfrak{m}}+1} \right\}.$$ We shall use the claim below (whose proof is postponed to Section \[SectionPreuveDesClaims\]).
\[ClaimPreuveAlgoDimen2\] Let ${\mathfrak{m}}' \in \{0,\dots,\mathfrak{M}_{p+1}-1\}$, $p \in \{1,\dots,d-1\}$. There exists a subset ${\mathcal{M}}$ of $\{0,\dots,\mathfrak{M}_p-1\}$ such that $$K'_{p} = \left\{k_{p,{\mathfrak{m}}+1}, \, {\mathfrak{m}}\in {\mathcal{M}}\right\} \subset K_{p+1,{\mathfrak{m}}'}$$ and $$\left[a_{p+1}^{(i)}, b_{p+1}^{(i)}\right] \subset \bigcup_{ k \in K_p'} \left[ \theta_{p+1}^{(i,\ell_{k})}, \theta_{p+1}^{(i,\ell_{k})} + \varepsilon_{p+1}^{(i,\ell_{k })} \right].$$
We prove by induction on $p$ the following result. For all $p \in \{1,\dots,d-1\}$, and all ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_p-1\}$, $$\begin{aligned}
\label{EqInclusionDansPreuve}
\prod_{q=1}^{p} \left[a_q^{(i)}, b_q^{(i)}\right] \subset \bigcup_{k\in K_{p,{\mathfrak{m}}}} \prod_{q=1}^p \left[ \theta_q^{(i,\ell_{k})}, \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right]\end{aligned}$$ Note that [(\[eqInclusionPreuveDimenQ\])]{} ensues from this inclusion when $p = d -1$ and ${\mathfrak{m}}= 0$.
We first prove (\[EqInclusionDansPreuve\]) for $p = 1$ and all ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_1-1\}$. For all $k \in \{k_{1,{\mathfrak{m}}} +1, \dots, k_{1,{\mathfrak{m}}+1} - 1\}$, $\mathfrak{j}_{\text{min}}^{(i,\ell_{k})} \leq 1$ and thus $$\theta_1^{(i,\ell_{k+1})} \in \left\{ \theta_1^{(i,\ell_{k})} , \theta_1^{(i,\ell_{k})} + \varepsilon_{1}^{(i,\ell_{k})}\right\}.$$ This implies that the set $$\bigcup_{k=k_{1,{\mathfrak{m}}}+1}^{k_{1,{\mathfrak{m}}+1}} \left[ \theta_1^{(i,\ell_{k})}, \theta_1^{(i,\ell_{k})} + \varepsilon_{1}^{(i,\ell_{k})} \right]$$ is an interval. Now, $\theta_1^{(i,\ell_{k_{1,{\mathfrak{m}}}+1})} = a_{1}^{(i)}$ and $\theta_1^{(i,\ell_{k_{1,{\mathfrak{m}}+1}})} + \varepsilon_{1}^{(i,\ell_{k_{1,{\mathfrak{m}}+1}})}\geq b_1^{(i)}$ since $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{1,{\mathfrak{m}}+1}})} > 1$. Hence, $$[a_{1}^{(i)}, b_1^{(i)}] \subset \bigcup_{k=k_{1,{\mathfrak{m}}}+1}^{k_{1,{\mathfrak{m}}+1}} \left[ \theta_1^{(i,\ell_{k})}, \theta_1^{(i,\ell_{k})} + \varepsilon_{1}^{(i,\ell_{k})} \right],$$ which establishes (\[EqInclusionDansPreuve\]) when $p = 1$.
Let now $p \in \{1,\dots,d-2\}$ and assume that for all ${\mathfrak{m}}\in \{0,\dots, \mathfrak{M}_p-1\}$, $$\begin{aligned}
\prod_{q=1}^{p} \left[a_q^{(i)}, b_q^{(i)}\right] \subset \bigcup_{k\in K_{p,{\mathfrak{m}}}} \prod_{q=1}^p \left[ \theta_q^{(i,\ell_{k})},\theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right].\end{aligned}$$ Let ${\mathfrak{m}}' \in \{0,\dots , \mathfrak{M}_{p+1}-1\}$. We shall show that $$\begin{aligned}
\prod_{q=1}^{p+1} \left[a_q^{(i)}, b_q^{(i)}\right] \subset \bigcup_{k \in K_{p+1,{\mathfrak{m}}'}} \prod_{q=1}^{p+1} \left[ \theta_q^{(i,\ell_{k})}, \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right].\end{aligned}$$ Let $\boldsymbol{x} \in \prod_{q=1}^{p+1} \left[a_q^{(i)}, b_q^{(i)}\right]$. By using Claim \[ClaimPreuveAlgoDimen2\], there exists ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_{p}-1\}$ such that $$x_{p+1} \in \left[ \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})}, \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})} \right]$$ and such that $k_{p,{\mathfrak{m}}+1} \in K_{p+1,{\mathfrak{m}}'}$. By using the induction assumption, there exists $k \in K_{p,{\mathfrak{m}}}$ such that $$(x_1,\dots,x_p) \in \prod_{q=1}^p \left[ \theta_q^{(i,\ell_{k})} , \theta_q^{(i,\ell_{k})} + \varepsilon_{q}^{(i,\ell_k)} \right].$$ Since $k \in K_{p,{\mathfrak{m}}}$, $\theta_{p+1}^{(i,\ell_{k})} = \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})} $ and $ \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})} \leq \varepsilon_{p+1}^{(i,\ell_{k})}$. Hence, $$x_{p+1} \in \left[ \theta_{p+1}^{(i,\ell_{k})} , \theta_{p+1}^{(i,\ell_{k})} + \varepsilon_{p+1}^{(i,\ell_{k})} \right].$$ We finally use the claim below to show that $k \in K_{p+1,{\mathfrak{m}}'}$, which concludes the proof.
\[ClaimPreuveAlgoDimen2deuxieme\] Let ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_p-1\}$ and ${\mathfrak{m}}' \in \{0,\dots, \mathfrak{M}_{p+1}-1\}$. If $k_{p,{\mathfrak{m}}+1} \in K_{p+1,{\mathfrak{m}}'}$, then $K_{p,{\mathfrak{m}}} \subset K_{p+1,{\mathfrak{m}}'}$.
### Proof of the claims. {#SectionPreuveDesClaims}
\[Proof of Claim \[ClaimInclusionKpDansKp\].\] The set $\{{\mathfrak{m}}' \in \{0,\dots, \mathfrak{M}_p-1\}, \; k_{p,{\mathfrak{m}}'+1} \leq k_{p+1,{\mathfrak{m}}+1} \}$ is non empty and we can thus define the largest integer ${\mathfrak{m}}'$ of $\{0,\dots, \mathfrak{M}_p-1\}$ such that $ k_{p,{\mathfrak{m}}'+1} \leq k_{p+1,{\mathfrak{m}}+1} $. We then have $$k_{p,{\mathfrak{m}}'} = \sup \left\{k < k_{p,{\mathfrak{m}}'+1}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_k)} > p \right\}.$$ Since $k_{p,{\mathfrak{m}}'} < k_{p+1,{\mathfrak{m}}+1}$, $$\begin{aligned}
k_{p,{\mathfrak{m}}'} &=& \sup \left\{k < k_{p+1,{\mathfrak{m}}+1} , \; \mathfrak{j}_{\text{min}}^{(i,\ell_k)} > p \right\} \\
&\geq& \sup \left\{k < k_{p+1,{\mathfrak{m}}+1} , \; \mathfrak{j}_{\text{min}}^{(i,\ell_k)} > p + 1 \right\} \\
&\geq& k_{p+1,{\mathfrak{m}}}.\end{aligned}$$ Hence, $k_{p,{\mathfrak{m}}'+1} \geq k_{p,{\mathfrak{m}}'} + 1 \geq k_{p+1,{\mathfrak{m}}} + 1$. Finally, $k_{p,{\mathfrak{m}}'+1} \in K_{p,{\mathfrak{m}}}$.
\[Proof of Claim \[ClaimPreuveAlgoDim2Kappa0\]\] Let $i \in \{1,\dots,N-1\}$ and $\ell \in \{1,\dots,L_i\}$. Then, $$\begin{aligned}
{\text{diam}}(\Theta_i) &\leq& \sup_{1 \leq j \leq d} \overline{R}_{\Theta_i,j} \big(b_{j}^{(i)} - a_{j}^{(i)} \big)^{\alpha_j} \\
&\leq& \left( \sup_{1 \leq j \leq d}\frac{\overline{R}_{\Theta_i,j}}{\underline{R}_{\Theta_i,j}} \right) \sup_{1 \leq j \leq d} \underline{R}_{\Theta_i,j} \big(b_{j}^{(i)} - a_{j}^{(i)} \big)^{\alpha_j} \\
&\leq& \left( \sup_{1 \leq j \leq d}\frac{\overline{R}_{\Theta_i,j}}{\underline{R}_{\Theta_i,j}} \right) \underline{R}_{\Theta_i,k^{(i)}} \big(b_{{k^{(i)}}}^{(i)} - a_{{k^{(i)}}}^{(i)} \big)^{\alpha_{k^{(i)}}}.\end{aligned}$$ Now, $\theta^{(i,\ell)}_{k^{(i)}} = a_{k^{(i)}}^{(i)}$ and $\theta'^{(i,\ell)}_{k^{(i)}} = b_{k^{(i)}}^{(i)}$ and thus $$\begin{aligned}
{\text{diam}}(\Theta_i) &\leq& \left( \sup_{1 \leq j \leq d}\frac{\overline{R}_{\Theta_i,j}}{\underline{R}_{\Theta_i,j}} \right) \underline{R}_{\Theta_i,k^{(i)}} \big(\theta'^{(i,\ell)}_{{k^{(i)}}} - \theta^{(i,\ell)}_{{k^{(i)}}} \big)^{\alpha_{k^{(i)}}} \\
&\leq& \left( \sup_{1 \leq j \leq d}\frac{\overline{R}_{\Theta_i,j}}{\underline{R}_{\Theta_i,j}} \right) \sup_{1 \leq j \leq d} \underline{R}_{\Theta_i,j} \big(\theta'^{(i,\ell)}_{j} - \theta^{(i,\ell)}_{j}\big)^{\alpha_j} \\
&\leq& \left( \sup_{1 \leq j \leq d}\frac{\overline{R}_{\Theta_i,j}}{\underline{R}_{\Theta_i,j}} \right) h^2(f_{{\boldsymbol{\theta}}^{(i,\ell)}},f_{{\boldsymbol{\theta}}'^{(i,\ell)}}).\end{aligned}$$ We conclude by using $\overline{R}_{\Theta_i,j}/ \underline{R}_{\Theta_i,j} \leq \overline{R}_{j}/\underline{R}_{j}$.
\[Proof of Claim \[ClaimPreuveAlgoDimen2\].\] Thanks to Claim \[ClaimInclusionKpDansKp\], we can define the smallest integer ${\mathfrak{m}}_0$ of $\{0,\dots,\mathfrak{M}_{p}-1\}$ such that $k_{p,{\mathfrak{m}}_0 + 1} \in K_{p+1,{\mathfrak{m}}'}$, and the largest integer ${\mathfrak{m}}_1$ of $\{0,\dots,\mathfrak{M}_{p}-1\}$ such that $k_{p,{\mathfrak{m}}_1 + 1} \in K_{p+1,{\mathfrak{m}}'}$. Define now $${\mathcal{M}}= \left\{{\mathfrak{m}}_0,{\mathfrak{m}}_0+1,\dots,{\mathfrak{m}}_1\right\}.$$ Note that for all ${\mathfrak{m}}\in \{{\mathfrak{m}}_0,\dots,{\mathfrak{m}}_1\}$, $k_{p,{\mathfrak{m}}+1} \in K_{p+1,{\mathfrak{m}}'}$ (this ensues from the fact that the sequence $(k_{p,{\mathfrak{m}}})_{{\mathfrak{m}}}$ is increasing).
Let ${\mathfrak{m}}\in \{0,\dots,\mathfrak{M}_p-1\}$ be such that $k_{p,{\mathfrak{m}}} \in K_{p+1,{\mathfrak{m}}'}$ and $k_{p,{\mathfrak{m}}} \neq k_{p+1,{\mathfrak{m}}'+1}$. Then $ \mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,{\mathfrak{m}}}})} \leq p + 1$ and since $ \mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,{\mathfrak{m}}}})} > p $, we also have $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,{\mathfrak{m}}}})} = p + 1$. Consequently, $$\theta_{p+1}^{ \left(i,\ell_{k_{p,{\mathfrak{m}}}+1}\right)} = \theta_{p+1}^{ \left(i,\ell_{k_{p,{\mathfrak{m}}}}\right)} + \varepsilon_{p+1}^{ \left(i,\ell_{k_{p,{\mathfrak{m}}}}\right)}.$$ Now, $\theta_{p+1}^{ \left(i,\ell_{k_{p,{\mathfrak{m}}}+1}\right)} = \theta_{p+1}^{ (i,\ell_{k_{p,{\mathfrak{m}}+1}})}$ since $k_{p,{\mathfrak{m}}}+1$ and $k_{p,{\mathfrak{m}}+1}$ belong to $K_{p,{\mathfrak{m}}}$. The set $$\left[ \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}}})} , \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}} })} \right] \bigcup \left[ \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})}, \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1} })} \right]$$ is thus the interval $$\left[ \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}}})} , \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1} })} \right].$$ We apply this argument for each ${\mathfrak{m}}\in \{{\mathfrak{m}}_0+1,\dots,{\mathfrak{m}}_1\}$ to derive that the set $$I = \bigcup_{{\mathfrak{m}}={\mathfrak{m}}_0}^{{\mathfrak{m}}_1} \left[ \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})} , \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}+1} })} \right]$$ is the interval $$I = \left[ \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0+1}})} , \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_1+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_1+1} })} \right].$$ The claim is proved if we show that $$\left[a_{p+1}^{(i)}, b_{p+1}^{(i)}\right] \subset I.$$ Since $I$ is an interval, it remains to prove that $a_{p+1}^{(i)} \in I$ and $b_{p+1}^{(i)} \in I$.
We begin to show $a_{p+1}^{(i)} \in I$ by showing that $ a_{p+1}^{(i)} = \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0+1}})}$. If $k_{p+1,{\mathfrak{m}}'} = 0$, then ${\mathfrak{m}}' = 0$ and ${\mathfrak{m}}_0 = 0$. Besides, since $1$ and $k_{p,1}$ belong to $K_{p,0}$, we have $\theta_{p+1}^{(i,\ell_{k_{p,1}})} = \theta_{p+1}^{(i,\ell_{1})}$. Now, $\theta_{p+1}^{(i,\ell_{1})} = a_{p+1}^{(i)}$ and thus $a_{p+1}^{(i)} \in I$. We now assume that $k_{p+1,{\mathfrak{m}}'} \neq 0$. Since $k_{p,{\mathfrak{m}}_0} \leq k_{p+1,{\mathfrak{m}}'}$, there are two cases.
- First case: $k_{p,{\mathfrak{m}}_0} = k_{p+1,{\mathfrak{m}}'}$. We then have $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,{\mathfrak{m}}_0}})} > p + 1$ and thus $\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0}+1})}= a_{p+1}^{(i)}$. Since $k_{p,{\mathfrak{m}}_0+1}$ and $k_{p,{\mathfrak{m}}_0}+1$ belong to $K_{p,{\mathfrak{m}}_0}$, $\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0+1}})} = \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0}+1})}$ and thus $\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0+1}})} = a_{p+1}^{(i)}$ as wished.
- Second case: $k_{p,{\mathfrak{m}}_0} + 1 \leq k_{p+1,{\mathfrak{m}}'}$. Then, $k_{p+1,{\mathfrak{m}}'} \in K_{p,{\mathfrak{m}}_0}$ and thus $$\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0}+1})} = \theta_{p+1}^{(i,\ell_{k_{p+1,{\mathfrak{m}}'}})}.$$ Since $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p+1,{\mathfrak{m}}'}})} > p + 1$, we have $\theta_{p+1}^{(i,\ell_{k_{p+1,{\mathfrak{m}}'}} )} + \varepsilon_{p+1}^{(i,\ell_{k_{p+1,{\mathfrak{m}}'}} )} \geq b_{p+1}^{(i)}$. By using the fact that the sequence $( \varepsilon_{p+1}^{(i,\ell_{k})})_k$ is decreasing, we then deduce $$\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0} + 1})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0} + 1} )} \geq b_{p+1}^{(i)}$$ and thus $\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,{\mathfrak{m}}_0} + 1})} > p + 1$. This establishes that $$\begin{aligned}
\label{EqPreuveThetap}
\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0} + 2})}= a_{p+1}^{(i)}.\end{aligned}$$ Let us now show $k_{p,{\mathfrak{m}}_0} + 2 \leq k_{p,{\mathfrak{m}}_0+1}$. Otherwise, $k_{p,{\mathfrak{m}}_0} + 2 \geq k_{p,{\mathfrak{m}}_0+1} + 1 $ and thus $k_{p,{\mathfrak{m}}_0} + 1 \geq k_{p,{\mathfrak{m}}_0+1}$ which means that $k_{p,{\mathfrak{m}}_0} + 1 = k_{p,{\mathfrak{m}}_0+1}$ (we recall that $(k_{p,{\mathfrak{m}}})_{{\mathfrak{m}}}$ is an increasing sequence of integers). Since we are in the case where $ k_{p,{\mathfrak{m}}_0}+1 \leq k_{p+1,{\mathfrak{m}}'}$, we have $ k_{p,{\mathfrak{m}}_0+1} \leq k_{p+1,{\mathfrak{m}}'}$ which is impossible since $ k_{p,{\mathfrak{m}}_0+1} \in K_{p+1,{\mathfrak{m}}'}$.
Consequently, since $k_{p,{\mathfrak{m}}_0} + 2 \leq k_{p,{\mathfrak{m}}_0+1}$, we have $k_{p,{\mathfrak{m}}_0} + 2 \in K_{p,{\mathfrak{m}}_0}$ and thus $\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0+1} })}= \theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0}+2 })}$. We then deduce from (\[EqPreuveThetap\]) that $\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_0+1} })} = a_{p+1}^{(i)}$ as wished.
We now show that $b_{p+1}^{(i)} \in I$ by showing that $\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_1+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_1+1} })} \geq b_{p+1}^{(i)}$. If ${\mathfrak{m}}_1 = \mathfrak{M}_p-1$, $$\theta_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_1+1}})} + \varepsilon_{p+1}^{(i,\ell_{k_{p,{\mathfrak{m}}_1+1} })} = \theta_{p+1}^{(i,\ell_{r})} + \varepsilon_{p+1}^{(i,\ell_{r })} = \theta_{p+1}^{(i,L_i)} + \varepsilon_{p+1}^{(i,L_{i })}.$$ Since $\mathfrak{j}_{\text{min}}^{(i, L_i)} = d$, we have $\theta_{p+1}^{(i,L_i)} + \varepsilon_{p+1}^{(i,L_{i })} \geq b_{p+1}^{(i)}$ which proves the result.
We now assume that ${\mathfrak{m}}_1 < \mathfrak{M}_p-1$. We first prove that $k_{p,{\mathfrak{m}}_1+1} = k_{p+1,{\mathfrak{m}}'+1}$. If this equality does not hold, we derive from the inequality $k_{p,{\mathfrak{m}}_1+1} \leq k_{p+1,{\mathfrak{m}}'+1} < k_{p,{\mathfrak{m}}_1+2}$ that $k_{p,{\mathfrak{m}}_1+1} + 1 \leq k_{p+1,{\mathfrak{m}}'+1}$ and thus $k_{p+1,{\mathfrak{m}}'+1} \in K_{p,{\mathfrak{m}}_1+1}$. Since $\mathfrak{j}_{\text{min}}^{(i, \ell_{k_{p+1,{\mathfrak{m}}'+1}})} > p +1$, we have $$\theta_{p+1}^{(i,\ell_{k_{p+1,{\mathfrak{m}}'+1}} )} + \varepsilon_{p+1}^{(i,\ell_{k_{p+1,{\mathfrak{m}}'+1}} )} \geq b_{p+1}^{(i)}.$$ Hence, $$\theta_{p+1}^{(i,\ell_{(k_{p,{\mathfrak{m}}_1+1})+1 })} + \varepsilon_{p+1}^{(i,\ell_{(k_{p,{\mathfrak{m}}_1+1})+1 } )} \geq b_{p+1}^{(i)} \quad \text{which implies} \quad \mathfrak{j}_{\text{min}}^{(i,\ell_{(k_{p,{\mathfrak{m}}_1+1})+1 })} > p + 1.$$ Since $$k_{p+1,{\mathfrak{m}}'+1} = \inf \left\{k > k_{p+1,{\mathfrak{m}}'} , \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p + 1\right\}$$ and $k_{p,{\mathfrak{m}}_1+1} + 1 > k_{p+1,{\mathfrak{m}}'}$, we have $ k_{p+1,{\mathfrak{m}}'+1} \leq k_{p,{\mathfrak{m}}_1+1} + 1 $. Moreover, since $ k_{p+1,{\mathfrak{m}}'+1} \geq k_{p,{\mathfrak{m}}_1+1} + 1 $, we have $k_{p,{\mathfrak{m}}_1+1} + 1 = k_{p+1,{\mathfrak{m}}'+1}$. Consequently, $$k_{p,{\mathfrak{m}}_1+2} = \inf \left\{k > k_{p,{\mathfrak{m}}_1+1} , \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p \right\} = k_{p+1,{\mathfrak{m}}'+1}.$$ This is impossible because $ k_{p+1,{\mathfrak{m}}'+1} < k_{p,{\mathfrak{m}}_1+2}$, which finally implies that $k_{p,{\mathfrak{m}}_1+1} = k_{p+1,{\mathfrak{m}}'+1}$.
We then deduce from this equality that $$\mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p,{\mathfrak{m}}_1+1}})} = \mathfrak{j}_{\text{min}}^{(i,\ell_{k_{p+1,{\mathfrak{m}}'+1}})} > p + 1.$$ Hence $ \theta_{p+1}^{(i, \ell_{k_{p,{\mathfrak{m}}_1+1} } )} + \varepsilon_{p+1}^{(i, \ell_{k_{p,{\mathfrak{m}}_1+1} } )} \geq b_{p+1}^{(i)} $ and thus $b_{p+1}^{(i)} \in I$. This ends the proof.
\[Proof of Claim \[ClaimPreuveAlgoDimen2deuxieme\].\] We have $$\begin{aligned}
k_{p+1,{\mathfrak{m}}'} = \sup \left\{ k < k_{p+1,{\mathfrak{m}}'+1}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p + 1 \right\}.\end{aligned}$$ Since $k_{p,{\mathfrak{m}}+1} > k_{p+1,{\mathfrak{m}}'}$, $$\begin{aligned}
k_{p+1,{\mathfrak{m}}'} &=& \sup \left\{ k < k_{p,{\mathfrak{m}}+1}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p + 1 \right\}\\
&\leq& \sup \left\{ k < k_{p,{\mathfrak{m}}+1}, \; \mathfrak{j}_{\text{min}}^{(i,\ell_{k})} > p \right\}\\
&\leq& k_{p,{\mathfrak{m}}}.\end{aligned}$$ We then derive from the inequalities $k_{p+1,{\mathfrak{m}}'} \leq k_{p,{\mathfrak{m}}}$ and $k_{p,{\mathfrak{m}}+1} \leq k_{p+1,{\mathfrak{m}}'+1}$ that $K_{p,{\mathfrak{m}}} \subset K_{p+1,{\mathfrak{m}}'}$.
Appendix: implementation of the procedure when $d \geq 2$ {#SectionImplementationProcedure}
=======================================================
We provide in the following sections the values of $\underline{R}_{{\mathcal{C}},j}$, $\bar{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ and $\underline{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ that we have used in the simulation study of Section \[SectionSimulationDimD\]. We do not claim that they minimize the number of tests to be computed. The number of tests that have been computed in the simulation study with these choices of parameters may be found in Section \[SectionVitesseProcedureDimD\].
Example 1.
----------
In the case of the Gaussian model, it is worthwhile to notice that the Hellinger distance between two densities $f_{(m,\sigma)}$ and $f_{(m',\sigma')}$ can be made explicit: $$h^2 \left(f_{(m,\sigma)}, f_{(m',\sigma')}\right) = 1 - \sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} \exp \left(- \frac{(m-m')^2}{4 (\sigma^2 + \sigma'^2)}\right).$$ For all $\xi > 0$, a sufficient condition to have $ h^2 (f_{(m,\sigma)}, f_{(m',\sigma')}) \leq \xi $ is thus $$\sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} \geq \sqrt{1-\xi} \quad \text{and} \quad \exp \left(- \frac{(m-m')^2}{4 (\sigma^2 + \sigma'^2)}\right) \geq \sqrt{1-\xi}.$$ One then deduces that the rectangle $$\begin{aligned}
& & \hspace{-2cm} \left[ m - 2 \frac{1-\sqrt{2\xi-\xi^2}}{1-\xi} \sqrt{\log \left(\frac{1}{1-\xi}\right) } \sigma, m +2 \frac{1-\sqrt{2\xi-\xi^2}}{1-\xi} \sqrt{\log \left(\frac{1}{1-\xi}\right) } \sigma \right] \\
& & \qquad \times \left[\frac{1-\sqrt{2\xi-\xi^2}}{1-\xi}\sigma , \frac{1+\sqrt{2\xi-\xi^2}}{1-\xi}\sigma \right]\end{aligned}$$ is included in the Hellinger ball $$\left\{(m',\sigma') \in {\mathbb{R}}\times (0,+\infty), \, h^2 (f_{(m,\sigma)}, f_{(m',\sigma')}) \leq \xi \right\}.$$ Given ${\boldsymbol{\theta}}= (m,\sigma)$, ${\boldsymbol{\theta}}' = (m', \sigma')$, we can then define $\underline{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$, $\bar{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ by $$\begin{aligned}
\underline{\boldsymbol{r}}_{{\mathcal{C}}} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') &=& \left(2 \frac{1-\sqrt{2\xi-\xi^2}}{1-\xi} \sqrt{\log \left(\frac{1}{1-\xi}\right) } \sigma, \frac{-\xi+\sqrt{2\xi-\xi^2}}{1-\xi}\sigma \right) \\
\bar{\boldsymbol{r}}_{{\mathcal{C}}} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') &=& \left( 2 \frac{1-\sqrt{2\xi-\xi^2}}{1-\xi} \sqrt{\log \left(\frac{1}{1-\xi}\right) }\sigma, \frac{\xi+\sqrt{2\xi-\xi^2}}{1-\xi}\sigma\right)\end{aligned}$$ where $\xi = \kappa H^2 (f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'}) $.
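For illustration only, the closed-form Hellinger distance and the vectors $\underline{\boldsymbol{r}}_{{\mathcal{C}}}$ and $\bar{\boldsymbol{r}}_{{\mathcal{C}}}$ above translate into a few lines of code. The sketch below is written in Python (our choice of language, not necessarily the one used for the simulations), the function names are ours, and we assume that $H^2(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'})$ denotes the squared Hellinger distance $h^2(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'})$:

```python
import numpy as np

def hellinger2_gauss(m, s, mp, sp):
    # Squared Hellinger distance between the Gaussian densities
    # f_(m, s) and f_(mp, sp), using the closed-form expression above.
    return 1.0 - np.sqrt(2.0 * s * sp / (s ** 2 + sp ** 2)) * \
        np.exp(-(m - mp) ** 2 / (4.0 * (s ** 2 + sp ** 2)))

def r_vectors_gauss(theta, theta_p, kappa):
    # Vectors r_low and r_up of Example 1, with xi = kappa * H^2(f_theta, f_theta').
    (m, s), (mp, sp) = theta, theta_p
    xi = kappa * hellinger2_gauss(m, s, mp, sp)
    c = np.sqrt(2.0 * xi - xi ** 2)
    half_width = 2.0 * (1.0 - c) / (1.0 - xi) * np.sqrt(np.log(1.0 / (1.0 - xi))) * s
    r_low = (half_width, (-xi + c) / (1.0 - xi) * s)
    r_up = (half_width, (xi + c) / (1.0 - xi) * s)
    return r_low, r_up
```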
We now consider a rectangle $\mathcal{C} = \left[\underline{m}_0, \bar{m}_0 \right] \times \left[\underline{\sigma}_0, \bar{\sigma}_0\right]$ of ${\mathbb{R}}\times (0,+\infty)$ and aim at choosing $\boldsymbol{\underline{R}}_{{\mathcal{C}}} = \left(\underline{R}_{{\mathcal{C}},1}, \underline{R}_{{\mathcal{C}},2}\right) $. For all $(m,\sigma)$, $(m', \sigma') \in \mathcal{C}$, $$\begin{aligned}
h^2 \left(f_{(m,\sigma)}, f_{(m',\sigma')}\right) = 1 - \sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} + \sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} \left[1 - \exp \left(- \frac{(m-m')^2}{4 (\sigma^2 + \sigma'^2)}\right)\right].\end{aligned}$$ Yet, $$\begin{aligned}
1 - \sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} = \frac{\left(\sigma'-\sigma\right)^2}{ \left(\sqrt{\sigma^2 + \sigma'^2} + \sqrt{2 \sigma \sigma'} \right) \sqrt{\sigma^2 + \sigma'^2}} \geq \frac{\left(\sigma'-\sigma\right)^2}{4 \bar{\sigma}_0^2}
\end{aligned}$$ and $$\sqrt{\frac{2\sigma \sigma' }{\sigma^2 + \sigma'^2}} \geq \sqrt{\frac{2 \bar{\sigma}_0 \underline{\sigma}_0}{ \bar{\sigma}^2_0+ \underline{\sigma}^2_0 }}.$$ Moreover, $$\begin{aligned}
1 - \exp \left(- \frac{(m-m')^2}{4 (\sigma^2 + \sigma'^2)}\right) \geq \frac{1 - e^{-{(\bar{m}_0-\underline{m}_0)^2}/{(8 \bar{\sigma}_0^2)}}}{ (\bar{m}_0-\underline{m}_0)^2} (m'-m)^2.\end{aligned}$$ In particular, we have proved that $$h^2 \left(f_{(m,\sigma)}, f_{(m',\sigma')}\right) \geq \max \left\{ \sqrt{\frac{2 \bar{\sigma}_0 \underline{\sigma}_0}{ \bar{\sigma}^2_0+ \underline{\sigma}^2_0 }} \frac{1 - e^{-{(\bar{m}_0-\underline{m}_0)^2}/{(8 \bar{\sigma}_0^2)}}}{ (\bar{m}_0-\underline{m}_0)^2} (m'-m)^2, \frac{1}{4 \bar{\sigma}^2_0} (\sigma - \sigma')^2\right\}$$ which means that we can take $$\begin{aligned}
\boldsymbol{\underline{R}}_{{\mathcal{C}}} = \left(\sqrt{\frac{2 \bar{\sigma}_0 \underline{\sigma}_0}{ \bar{\sigma}^2_0+ \underline{\sigma}^2_0 }} \frac{1 - e^{-{(\bar{m}_0-\underline{m}_0)^2}/{(8 \bar{\sigma}_0^2)}}}{ (\bar{m}_0-\underline{m}_0)^2} , \frac{1}{4 \bar{\sigma}_0^2}\right).\end{aligned}$$
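Under the same conventions as before, this lower bound can be sketched as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def R_low_gauss(m_low, m_up, s_low, s_up):
    # Componentwise constants (R_C,1, R_C,2) of Example 1 for the
    # rectangle C = [m_low, m_up] x [s_low, s_up].
    dm = m_up - m_low
    R1 = np.sqrt(2.0 * s_up * s_low / (s_up ** 2 + s_low ** 2)) * \
        (1.0 - np.exp(-dm ** 2 / (8.0 * s_up ** 2))) / dm ** 2
    R2 = 1.0 / (4.0 * s_up ** 2)
    return R1, R2
```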
Example 2.
----------
In the case of the Cauchy model, the Hellinger distance cannot be made explicit. However, we can use Theorem 7.6 of [@Ibragimov1981] (chapter 1) to show that for all $m \in {\mathbb{R}}$, $\sigma > 0$, $$h^2 \left(f_{(0,1)},f_{(m,1)}\right) \leq m^2/16 \quad \text{and} \quad h^2 \left(f_{(0,1)},f_{(0,\sigma)}\right) \leq (\log^2 \sigma)/16.$$ Now, $$\begin{aligned}
h \left(f_{(m,\sigma)},f_{(m',\sigma')}\right) &\leq& h \left(f_{(m,\sigma)},f_{(m',\sigma)}\right) + h \left(f_{(m',\sigma)},f_{(m',\sigma')}\right) \\
&\leq& h \left(f_{(0,1)},f_{((m'-m)/\sigma,1)}\right) + h \left(f_{(0,1)},f_{(0,\sigma'/\sigma)}\right) \\
&\leq& \frac{|m'-m|}{4 \sigma} + \frac{|\log (\sigma'/\sigma)|}{4 }.\end{aligned}$$ For all $\xi > 0$, one then deduces that the rectangle $$\left[ m- 2 \sigma \sqrt{\xi} , m +2 \sigma \sqrt{\xi} \right] \times \left[\sigma e^{- 2 \sqrt{ \xi}}, \sigma e^{2 \sqrt{\xi}} \right]$$ is included in the Hellinger ball $$\left\{(m',\sigma') \in {\mathbb{R}}\times (0,+\infty), \, h^2 (f_{(m,\sigma)}, f_{(m',\sigma')}) \leq \xi \right\}.$$ This provides the values of $\underline{r}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$ and $\bar{{r}}_{{\mathcal{C}},j} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}')$: given ${\mathcal{C}}\subset \Theta$, ${\boldsymbol{\theta}}= (m,\sigma)$, ${\boldsymbol{\theta}}' = (m', \sigma') \in {\mathcal{C}}$, we can take $$\begin{aligned}
\underline{\boldsymbol{r}}_{{\mathcal{C}}} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') &=& \left(2 \sigma \sqrt{{ \kappa H^2 (f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'})}} , \sigma - \sigma e^{- 2 \sqrt{ \kappa H^2 (f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'}) }} \right)\\
\bar{\boldsymbol{r}}_{{\mathcal{C}}} ({\boldsymbol{\theta}},{\boldsymbol{\theta}}') &=& \left(2 \sigma \sqrt{{ \kappa H^2 (f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'})}} , \sigma e^{2 \sqrt{ \kappa H^2 (f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'}) }} - \sigma\right).
\end{aligned}$$ For every rectangle $\mathcal{C} \subset {\mathbb{R}}\times (0,+\infty)$ we choose $\underline{R}_{{\mathcal{C}},1} = \underline{R}_{{\mathcal{C}},2}$. Notice that this choice makes it easy to find the number $k$ that appears at line 1 of Algorithm \[algoConstructionDimQuelquonqueAvant\] since the equation then becomes $$b_{k} - a_{k} = \max_{1 \leq j \leq 2} (b_{j} - a_{j}).$$
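As for the Gaussian model, these vectors translate directly into code. A minimal sketch (same conventions as in Example 1, with $\kappa H^2(f_{{\boldsymbol{\theta}}}, f_{{\boldsymbol{\theta}}'})$ supplied externally, for instance by numerical integration, since the Hellinger distance has no closed form here):

```python
import numpy as np

def r_vectors_cauchy(theta, kappa_H2):
    # Vectors r_low and r_up of Example 2; kappa_H2 stands for
    # kappa * H^2(f_theta, f_theta'), computed elsewhere.
    m, s = theta
    root = np.sqrt(kappa_H2)
    r_low = (2.0 * s * root, s - s * np.exp(-2.0 * root))
    r_up = (2.0 * s * root, s * np.exp(2.0 * root) - s)
    return r_low, r_up
```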
Example 3.
----------
Let $\xi > 0$, $a, b > 0$ and ${\mathcal{C}}$ be the rectangle ${\mathcal{C}}= [a_1, a_2] \times [b_1, b_2] \subset (0,+\infty)^2$. We aim at finding a rectangle $\mathcal{R}$ containing $(a,b)$ such that $${\mathcal{C}}\cap \mathcal{R} \subset \left\{(a',b') \in (0,+\infty)^2, \, h^2 (f_{(a,b)}, f_{(a',b')}) \leq \xi \right\}.$$ For this, notice that for all positive numbers $a', b'$, $$\begin{aligned}
h^2 \left(f_{(a,b)}, f_{(a',b')}\right) \leq 2 h^2 \left(f_{(a,b)}, f_{(a,b')}\right) + 2 h^2 \left(f_{(a,b')}, f_{(a',b')} \right).\end{aligned}$$ Now, $$\begin{aligned}
h^2 \left(f_{(a,b)}, f_{(a,b')} \right) = 1 - \left(\frac{2 \sqrt{b b'}}{b+b'}\right)^{a}.\end{aligned}$$ Let $\Gamma'$ be the derivative of the Gamma function $\Gamma$ and $\psi$ be the derivative of the digamma function $\Gamma'/\Gamma$. We derive from Theorem 7.6 of [@Ibragimov1981] that $$\begin{aligned}
h^2 \left(f_{(a,b')}, f_{(a',b')}\right) \leq \frac{(a'-a)^2}{8} \sup_{t \in [\min(a,a'), \max(a,a')]} \psi (t).\end{aligned}$$ The function $\psi$ being non-increasing, $$\begin{aligned}
h^2 \left(f_{(a,b')}, f_{(a',b')}\right) \leq \begin{cases}
1/8 \psi (a) (a' - a)^2 & \text{if $a' \geq a$} \\
1/8 \psi(a_1) (a' - a)^2 & \text{if $a' < a$.}
\end{cases}\end{aligned}$$ We deduce from the above inequalities that we can take $$\mathcal{R} = \left[a - \sqrt{\frac{2}{\psi (a_1)} \xi}, a + \sqrt{\frac{2}{\psi (a)} \xi} \right] \times \left[ \frac{2-\xi'^2- 2 \sqrt{1-\xi'^2}}{\xi'^2} b, \frac{2-\xi'^2+ 2 \sqrt{1-\xi'^2}}{\xi'^2} b \right]$$ where $\xi' = (1 - \xi/4)^{1/a}$. For every rectangle $\mathcal{C}' \subset (0,+\infty)^2$ we define $\underline{R}_{{\mathcal{C}}',1} = \underline{R}_{{\mathcal{C}}',2}$.
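The rectangle $\mathcal{R}$ of this example only involves the function $\psi$, which is available in standard scientific libraries as the trigamma function. A minimal sketch (assuming scipy; the function name is ours):

```python
import numpy as np
from scipy.special import polygamma

def rectangle_example3(a, b, a1, xi):
    # Rectangle R of Example 3; psi is the derivative of the digamma
    # function, i.e. the trigamma function polygamma(1, .).
    psi = lambda t: polygamma(1, t)
    xi_p = (1.0 - xi / 4.0) ** (1.0 / a)  # xi' in the text
    disc = np.sqrt(1.0 - xi_p ** 2)
    a_int = (a - np.sqrt(2.0 * xi / psi(a1)), a + np.sqrt(2.0 * xi / psi(a)))
    b_int = ((2.0 - xi_p ** 2 - 2.0 * disc) / xi_p ** 2 * b,
             (2.0 - xi_p ** 2 + 2.0 * disc) / xi_p ** 2 * b)
    return a_int, b_int
```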
Example 4.
----------
As in the preceding example, we consider $\xi > 0$, $a, b > 0$ and the rectangle ${\mathcal{C}}= [a_1, a_2] \times [b_1, b_2] \subset (0,+\infty)^2$. Our aim is to find a rectangle $\mathcal{R}$ containing $(a,b)$ such that $${\mathcal{C}}\cap \mathcal{R} \subset \left\{(a',b') \in (0,+\infty)^2, \, h^2 (f_{(a,b)}, f_{(a',b')}) \leq \xi \right\}.$$ For all positive numbers $a', b'$, $$\begin{aligned}
h^2 \left(f_{(a,b)}, f_{(a',b')}\right) \leq 2 h^2 \left(f_{(a,b)}, f_{(a,b')}\right) + 2 h^2 \left(f_{(a,b')}, f_{(a',b')} \right).\end{aligned}$$ We derive from Theorem 7.6 of [@Ibragimov1981] that $$\begin{aligned}
h^2 \left(f_{(a,b)}, f_{(a,b')} \right) \leq \frac{ (b' - b)^2 }{8} \sup_{t \in [\min(b,b'), \max(b,b')]} \left| \psi \left( t \right) - \psi \left(a + t \right) \right|\end{aligned}$$ where $\psi$ is defined in the preceding example. By using the monotonicity of the function $t \mapsto \psi (t) - \psi (a+t)$, we deduce that if $b' > b_1$, $$\begin{aligned}
h^2 \left(f_{(a,b)}, f_{(a,b')} \right) \leq \begin{cases}
1/8 \left( \psi (b) - \psi (a+b) \right) (b' - b)^2 & \text{if $b' \geq b$} \\
1/8 \left( \psi (b_1) - \psi (a+b_1)\right) (b' - b)^2 & \text{if $b' < b$.}
\end{cases}\end{aligned}$$ Similarly, $$\begin{aligned}
h^2 \left(f_{(a,b')}, f_{(a',b')} \right) \leq \frac{ (a' - a)^2 }{8} \sup_{t \in [\min(a,a'), \max(a,a')]} \left| \psi \left( t \right) - \psi \left(b' + t \right) \right|.\end{aligned}$$ Hence, if $a' \in [a_1,a_2]$ and $b' \in [b_1,b_2]$, $$\begin{aligned}
h^2 \left(f_{(a,b')}, f_{(a',b')} \right) \leq
\begin{cases}
1/8 \left( \psi (a) - \psi (a+b_2) \right) (a' - a)^2 & \text{if $a' \geq a$} \\
1/8 \left( \psi (a_1) - \psi (a_1+b_2)\right) (a' - a)^2 & \text{if $a' < a$.}
\end{cases}\end{aligned}$$ We deduce from the above inequalities that we can take $$\begin{aligned}
\mathcal{R} &=& \left[a - \sqrt{\frac{{ 2 \xi}}{ \psi (a_1) - \psi (a_1 +b_2)} }, a + \sqrt{\frac{{ 2 \xi}}{ \psi (a) - \psi (a +b_2)} } \right] \\
& & \qquad \times \left[b - \sqrt{\frac{{ 2 \xi}}{ \psi (b_1) - \psi (a +b_1)} }, b + \sqrt{\frac{{ 2 \xi}}{ \psi (b) - \psi (a +b)} } \right].\end{aligned}$$ As in the two previous examples, we take $\underline{R}_{{\mathcal{C}}',1} = \underline{R}_{{\mathcal{C}}',2}$ for every rectangle ${\mathcal{C}}' \subset (0,+\infty)^2$.
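As in the previous example, the rectangle $\mathcal{R}$ can be computed with the trigamma function. A minimal sketch (same assumptions as above):

```python
import numpy as np
from scipy.special import polygamma

def rectangle_example4(a, b, a1, b1, b2, xi):
    # Rectangle R of Example 4, for (a, b) in C = [a1, a2] x [b1, b2];
    # psi is again the trigamma function.
    psi = lambda t: polygamma(1, t)
    a_int = (a - np.sqrt(2.0 * xi / (psi(a1) - psi(a1 + b2))),
             a + np.sqrt(2.0 * xi / (psi(a) - psi(a + b2))))
    b_int = (b - np.sqrt(2.0 * xi / (psi(b1) - psi(a + b1))),
             b + np.sqrt(2.0 * xi / (psi(b) - psi(a + b))))
    return a_int, b_int
```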
Example 5.
----------
For all $m,m' \in {\mathbb{R}}$, $\lambda, \lambda' > 0$, $$h^2 \left(f_{(m,\lambda)},f_{(m',\lambda')}\right) =
\begin{cases}
1 - \frac{2 \sqrt{\lambda \lambda'}}{\lambda + \lambda'} e^{- \frac{\lambda}{2} |m' - m| } & \text{if $m' \geq m$} \\
1 - \frac{2 \sqrt{\lambda \lambda'}}{\lambda + \lambda'} e^{ - \frac{\lambda'}{2} |m' - m|} & \text{if $m' \leq m$.}
\end{cases}$$ We consider $\xi > 0$ and aim at finding $\mathcal{R}$ containing $(m,\lambda)$ such that $$\mathcal{R} \subset \left\{(m',\lambda') \in {\mathbb{R}}\times (0,+\infty), \, h^2 (f_{(m,\lambda)}, f_{(m',\lambda')}) \leq \xi \right\}.$$ Notice that if $m' \geq m$ and if $$\begin{aligned}
\frac{2 \sqrt{\lambda \lambda'}}{\lambda + \lambda'} \geq \sqrt{1-\xi} \quad \text{and} \quad e^{- \frac{\lambda}{2} (m'-m)} \geq \sqrt{1-\xi} \end{aligned}$$ then $h^2 (f_{(m,\lambda)}, f_{(m',\lambda')}) \leq \xi $. Similarly, if $m' \leq m$ and if $$\begin{aligned}
\frac{2 \sqrt{\lambda \lambda'}}{\lambda + \lambda'} \geq \sqrt{1-\xi} \quad \text{and} \quad e^{-\frac{\lambda'}{2} (m-m')} \geq \sqrt{1-\xi} \end{aligned}$$ then $h^2 (f_{(m,\lambda)}, f_{(m',\lambda')}) \leq \xi $. We can then take $$\mathcal{R} = \left[m- \frac{ 1-\xi}{1 + \xi + 2 \sqrt{\xi}} \frac{ \log \left(1/ (1-\xi) \right)}{\lambda}, m +\frac{ \log \left(1/ (1-\xi) \right)}{\lambda}\right] \times \left[ \frac{1 + \xi - 2 \sqrt{\xi}}{ 1 - \xi }\lambda, \frac{1 + \xi + 2 \sqrt{\xi}}{ 1-\xi }\lambda \right].$$
Let now $\mathcal{C}' = \left[\underline{m}_0, \bar{m}_0 \right] \times \left[\underline{\lambda}_0, \bar{\lambda}_0\right]$ be a rectangle of ${\mathbb{R}}\times (0,+\infty)$. By proceeding as in the Gaussian model, we can define $\boldsymbol{\underline{R}}_{{\mathcal{C}}'} $ by $$\begin{aligned}
\boldsymbol{\underline{R}}_{{\mathcal{C}}'} = \left(\underline{R}_{{\mathcal{C}}',1}, \underline{R}_{{\mathcal{C}}',2}\right) = \left(\frac{2 \sqrt{\bar{\lambda}_0 \underline{\lambda}_0}}{ \bar{\lambda}_0+ \underline{\lambda}_0 } \frac{1- e^{-\underline{\lambda}_0 (\bar{m}_0-\underline{m}_0)/2}} {\bar{m}_0 - \underline{m}_0} , \frac{1}{8 \bar{\lambda}_0^2}\right).\end{aligned}$$
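For completeness, a minimal sketch of the explicit Hellinger distance of this model and of $\boldsymbol{\underline{R}}_{{\mathcal{C}}'}$ (same conventions as before; the function names are ours):

```python
import numpy as np

def hellinger2_example5(m, lam, mp, lamp):
    # Squared Hellinger distance of Example 5 (explicit formula above).
    rate = lam if mp >= m else lamp
    return 1.0 - 2.0 * np.sqrt(lam * lamp) / (lam + lamp) * \
        np.exp(-rate * abs(mp - m) / 2.0)

def R_low_example5(m_low, m_up, l_low, l_up):
    # Lower-bound constants for the rectangle [m_low, m_up] x [l_low, l_up].
    R1 = 2.0 * np.sqrt(l_up * l_low) / (l_up + l_low) * \
        (1.0 - np.exp(-l_low * (m_up - m_low) / 2.0)) / (m_up - m_low)
    R2 = 1.0 / (8.0 * l_up ** 2)
    return R1, R2
```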
Example 6.
----------
For all $m, m' \in {\mathbb{R}}$, $r, r' > 0$, $$\begin{aligned}
h^2 \left(f_{(m,r)},f_{(m',r')}\right) = 1 - \frac{\left(\min \{m +r , m'+r' \} - \max \{m,m'\}\right)_+}{\sqrt{r r'}}\end{aligned}$$ where $(\cdot)_+$ is the positive part of $(\cdot)$. We consider $\xi \in (0,\bar{\kappa})$, and aim at finding a rectangle $\mathcal{R}$ containing $(m,r)$ such that $$\mathcal{R} \subset \left\{(m',r') \in (0,+\infty)^2, \, h^2 (f_{(m,r)}, f_{(m',r')}) \leq \xi \right\}.$$ For this, we first assume that $m' \leq m + r$ and $m' + r' \geq m$ to ensure that $$h^2 \left(f_{(m,r)},f_{(m',r')}\right) = 1 - \frac{ \min \{m +r , m'+r' \} - \max \{m,m'\} }{\sqrt{r r'}}.$$ Several cases must be considered:
- If $m' \leq m$ and $m'+r' \geq m + r$, a sufficient condition for $h^2 \left(f_{(m,r)},f_{(m',r')}\right) \leq \xi$ is $$r' \leq \frac{1}{(1-\xi)^2} r.$$
- If $m' \geq m$ and $m'+r' \leq m + r$, a sufficient condition is $$r' \geq (1-\xi)^2 r.$$
- If $m' \leq m$ and if $m' + r' \leq m + r$, a sufficient condition is $$m - m' \leq \left( \sqrt{r'} - (1-\xi) \sqrt{r} \right) \sqrt{r'},$$ which holds when $$r' \geq (1 - \xi/2)^2 r \quad \text{and} \quad |m' - m| \leq \xi/2\sqrt{1-\xi/2} r.$$
- If $m' \geq m$ and if $m' + r' \geq m + r$, a sufficient condition is $$m' - m \leq \left(\sqrt{r} - (1-\xi) \sqrt{r'} \right) \sqrt{r}.$$ This condition is fulfilled when $$r' \leq \frac{1}{(1 - \xi/2)^2} r \quad \text{and} \quad |m' - m| \leq \frac{\xi}{2 - \xi} r.$$
We can verify that if $(m',r')$ belongs to the rectangle $$\begin{aligned}
\mathcal{R} = \left[m - \frac{\xi \sqrt{2 - \xi}}{2 \sqrt{2}} r, m+ \frac{\xi}{2-\xi} r \right] \times \left[ \left(1 - \xi/2 \right)^2 r, \frac{r}{\left(1 - \xi/2 \right)^2} \right]\end{aligned}$$ then $m' \leq m + r$ and $m' + r' \geq m$ (since $\xi \leq \bar{\kappa}$). The rectangle $\mathcal{R}$ is therefore suitable. For every rectangle $\mathcal{C}' \subset (0,+\infty)^2$, we choose in this example $\underline{R}_{{\mathcal{C}}',1} = \underline{R}_{{\mathcal{C}}',2}$.
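A minimal sketch of the explicit Hellinger distance and of the rectangle $\mathcal{R}$ of this example (same conventions as before; recall that $\mathcal{R}$ is only used for $\xi \leq \bar{\kappa}$):

```python
import numpy as np

def hellinger2_example6(m, r, mp, rp):
    # Squared Hellinger distance of Example 6 (explicit formula above).
    overlap = max(min(m + r, mp + rp) - max(m, mp), 0.0)
    return 1.0 - overlap / np.sqrt(r * rp)

def rectangle_example6(m, r, xi):
    # Rectangle R of Example 6 (valid for xi <= kappa_bar, see the text).
    m_int = (m - xi * np.sqrt(2.0 - xi) / (2.0 * np.sqrt(2.0)) * r,
             m + xi / (2.0 - xi) * r)
    r_int = ((1.0 - xi / 2.0) ** 2 * r, r / (1.0 - xi / 2.0) ** 2)
    return m_int, r_int
```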
Speed of the procedure. {#SectionVitesseProcedureDimD}
-----------------------
By way of indication, we give below the number of tests that have been calculated in the simulation study of Section \[SectionSimulationDimD\].
$n = 25$ $n = 50$ $n = 75$ $n = 100$
----------- -------------- -------------- -------------- --------------
Example 1 1602 (117) 1577 (72) 1570 (60) 1567 (52)
Example 2 2935 (90) 2937 (76) 2938 (69) 2938 (64)
Example 3 9082 (1846) 8700 (1183) 8569 (934) 8511 (800)
Example 4 10411 (778) 10272 (461) 10236 (357) 10222 (304)
Example 5 6691 (296) 6699 (210) 6715 (175) 6726 (158)
Example 6 32614 (1238) 33949 (1211) 34822 (1190) 35397 (1177)
[^1]
[^1]: Acknowledgements: the author acknowledges the support of the French Agence Nationale de la Recherche (ANR), under grant Calibration (ANR 2011 BS01 010 01). We are thankful to Yannick Baraud for his valuable suggestions and careful reading of the paper.
---
author:
- 'F. Govoni'
- 'M. Murgia'
- 'M. Markevitch'
- 'L. Feretti'
- 'G. Giovannini'
- 'G.B. Taylor'
- 'E. Carretti'
date: 'Received; accepted'
title: 'A search for diffuse radio emission in the relaxed, cool-core galaxy clusters A1068, A1413, A1650, A1835, A2029, and Ophiuchus'
---
[ We analyze sensitive, high-dynamic-range observations to search for extended, diffuse radio emission in relaxed and cool-core galaxy clusters. ]{} [We performed deep 1.4 GHz Very Large Array observations of A1068, A1413, A1650, A1835, A2029, and complemented our dataset with archival observations of Ophiuchus. ]{} [We find that, in the central regions of A1835, A2029, and Ophiuchus, the dominant radio galaxy is surrounded by diffuse low-brightness radio emission that takes the form of a mini-halo. We detect no diffuse emission in A1650, at the surface brightness level of the other mini-halos. We find low-significance indications of diffuse emission in A1068 and A1413, although further investigation, possibly with data of higher signal-to-noise ratio, would be required to classify them as mini-halos. In the Appendix, we report on the serendipitous detection of a giant radio galaxy with a total spatial extension of $\sim$1.6 Mpc. ]{}
Introduction
============
There is now firm evidence that the intra-cluster medium (ICM) consists of a mixture of hot plasma, magnetic fields, and relativistic particles. While the baryonic content of the galaxy clusters is dominated by the hot ($T\simeq 2 - 10$ keV) intergalactic gas, whose thermal emission is observed in X-rays, a fraction of clusters also exhibit megaparsec-scale radio halos (see e.g., Feretti & Giovannini 2008, Ferrari et al. 2008, and references therein for reviews). Radio halos are diffuse, low-surface-brightness ($\simeq 10^{-6}$ Jy/arcsec$^2$ at 1.4 GHz), steep-spectrum[^1] ($\alpha > 1$) sources, permeating the central regions of clusters, produced by synchrotron radiation of relativistic electrons with energies of $\simeq 10$ GeV in magnetic fields with $B\simeq 0.5-1\;\mu$G. Radio halos represent the strongest evidence of large-scale magnetic fields and relativistic particles throughout the intra-cluster medium.
Radio halos are typically found in clusters that display significant evidence for an ongoing merger (e.g., Buote 2001, Govoni et al. 2004). Recent cluster mergers were proposed to play an important role in the reacceleration of the radio-emitting relativistic particles, thus providing the energy to these extended sources (e.g., Brunetti et al. 2001, Petrosian 2001).
To date, about 30 radio halos are known (e.g., Giovannini & Feretti 2000, Bacchi et al. 2003, Govoni et al. 2001, Venturi et al. 2007, 2008, Giovannini et al. in preparation). Because of their extremely low surface brightness and large angular extent ($>$10$'$ at a redshift z$\le$0.1), radio halos are most appropriately studied at low spatial resolution. Several radio halos were detected by Giovannini et al. (1999) in the NRAO VLA Sky Survey (NVSS; Condon et al. 1998) and by Kempner & Sarazin (2001) in the Westerbork Northern Sky Survey (WENSS; Rengelink et al. 1997), whose relatively large beams provide the sensitivity to large-scale emission necessary to identify these elusive sources.
A major merger event is expected to disrupt cooling cores and create disturbances that are readily visible in an X-ray image of the cluster. Therefore, the merger scenario predicts the absence of large-scale radio halos in symmetric cooling-core clusters. However, a few cooling-core clusters exhibit signs of diffuse synchrotron emission that extends far from the dominant radio galaxy at the cluster center, forming what is referred to as a mini-halo. These diffuse radio sources are extended on a moderate scale (typically $\simeq$ 500 kpc) and, in common with large-scale halos, have a steep spectrum and a very low surface brightness.
Because of the combination of their small angular size and the strong radio emission of the central radio galaxy, mini-halos are difficult to detect: their detection requires data of much higher dynamic range and resolution than those of available surveys. As a consequence, our current observational knowledge on mini-halos is limited to only a handful of clusters (e.g., Perseus: Burns et al. 1992; A2390: Bacchi et al. 2003; RXJ1347.5-1145: Gitti et al. 2007), and their origin and physical properties are still poorly known. The study of radio emission from the center of cooling-core clusters is of great importance not only for understanding the feedback mechanism involved in the energy transfer between the AGN and the ambient medium (e.g., McNamara & Nulsen 2007) but also for understanding the formation process of the non-thermal mini-halos. The energy released by the central AGN may also play a role in the formation of these extended structures (e.g. Fujita et al. 2007).
On the other hand, the radiative lifetime of the relativistic electrons in mini-halos is of the order of $\simeq$10$^7$$-$10$^8$ yrs, much shorter than the time necessary for them to diffuse from the central radio galaxy to the mini-halo periphery. Thus, relativistic electrons must be reaccelerated and/or injected in-situ with high efficiency in mini-halos. Gitti et al. (2002) suggested that the mini-halo emission is due to a relic population of relativistic electrons reaccelerated by MHD turbulence via Fermi-like processes, the necessary energetics being supplied by the cooling flow. In support of mini-halo emission being triggered by the central cooling flow, Gitti et al. (2004) found a trend between the radio power of mini-halos and the cooling flow power. Although mini-halos are usually found in cooling-core clusters with no evidence of major mergers, signatures of minor-merging activities and gas-sloshing mechanisms in clusters containing mini-halos (e.g., Gitti et al. 2007, Mazzotta & Giacintucci 2008) have been revealed, suggesting that turbulence related to minor mergers could also play a role in the electron acceleration.
Alternatively, Pfrommer & En[ß]{}lin (2004) proposed that relativistic electrons in mini-halos are of secondary origin and thus continuously produced by the interaction of cosmic ray protons with the ambient, thermal protons.
Cassano et al. (2008) found that the synchrotron emissivity (energy per unit volume, per unit time, per unit frequency) of mini-halos is about a factor of 50 higher than that of radio halos. In the framework of the particle re-acceleration scenario, they suggested that an extra amount of relativistic electrons would be necessary to explain the higher radio emissivity of mini-halos. These electrons could be provided by the central radio galaxy or be of secondary origin.
To search for new extended diffuse radio emission in relaxed and cool-core galaxy clusters, we performed deep observations of A1068, A1413, A1650, A1835, and A2029, carried out with the Very Large Array at 1.4 GHz, and complemented our data set with a VLA archival observation of Ophiuchus. Here, we present the new mini-halos that we identified in these data. In Murgia et al. (submitted, hereafter Paper II), we quantitatively investigate the radio properties of these new sources and compare them with the radio properties of a statistically significant sample of mini-halos and halos already known in the literature, for which high quality VLA radio images at 1.4 GHz are available.
The radio observations and data reduction are described in Sect. 2. For each cluster, in Sect. 3, we investigate the possible presence of a central mini-halo. In Sect. 4, we discuss the interplay between the mini-halos and the cluster X-ray emission. In Sect. 5, we analyze a possible connection between the central cD galaxy and the surrounding mini-halo. Finally, our conclusions are presented in Sect. 6.
Throughout this paper, we assume a $\Lambda$CDM cosmology with $H_0$ = 71 km s$^{-1}$Mpc$^{-1}$, $\Omega_m$ = 0.27, and $\Omega_{\Lambda}$ = 0.73.
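For reference, the angular scales and luminosity distances used in this paper (see Table 1 below) follow directly from this cosmology. A minimal sketch, assuming the astropy package (our choice of tool, not necessarily the one used for the original calculations), reproduces them up to rounding:

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# H0 = 71 km/s/Mpc, Omega_m = 0.27; Omega_Lambda = 0.73 follows from flatness.
cosmo = FlatLambdaCDM(H0=71, Om0=0.27)

for name, z in [("A1068", 0.1375), ("A1413", 0.1427), ("A1650", 0.0843),
                ("A1835", 0.2532), ("A2029", 0.0765), ("Ophiuchus", 0.028)]:
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    d_l = cosmo.luminosity_distance(z)
    print(f"{name:10s} z={z:.4f}  scale={scale:.2f}  D_L={d_l:.2f}")
```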
VLA observations and data reduction
===================================
To investigate the presence of diffuse, extended, radio emission in relaxed systems, we analyzed new and archival VLA data of cooling-core clusters. The list of the clusters is reported in Table 1, while the details of the observations are described in Table 2.
  Cluster       z        kpc/$''$   $D_L$ (Mpc)
  ----------- --------- ---------- -------------
  A1068         0.1375   2.40        641.45
  A1413         0.1427   2.48        667.98
  A1650         0.0843   1.56        379.24
  A1835         0.2532   3.91        1267.62
  A2029         0.0765   1.43        342.24
  Ophiuchus     0.028    0.55        120.84

  \[tab1\]
  Cluster     RA (J2000)   DEC (J2000)     Frequency (MHz)   Bandwidth (MHz)   Config.   Obs. Time (hours)   Date                Note
  ----------- ------------ --------------- ----------------- ----------------- --------- ------------------- ------------------- ------------------
  A1068       10 40 47.0   $+$39 57 18.0   1465/1415         25                C         3.4,1.0,5.7         2006 Oct 21,23,28
  A1413       11 55 19.0   $+$23 24 30.0   1465/1415         25                C         8.0                 2006 Oct 23
  A1650       12 58 41.0   $-$01 45 25.0   1465/1415         25                C         7.0                 2006 Oct 22
  A1835       14 01 02.0   $+$02 51 30.0   1465/1385         50                D         4.2                 2003 March 10
  A1835       $''$         $''$            1465/1415         25                C         0.7,4.7             2006 Oct 23,28
  A1835       14 01 02.1   $+$02 52 42.7   1465/1665         25                A         2                   1998 Apr 23         archive (AT211)
  A2029       15 10 58.0   $+$05 45 42.0   1465/1385         50                D         4.7                 2003 March 09
  A2029       $''$         $''$            1465/1415         50                C         6.3                 2006 Dec 26
  A2029       15 10 56.1   05 44 42.6      1465/1515         25                A         0.1                 1993 Jan 14         archive (AL252)
  Ophiuchus   17 12 31.9   $-$23 20 32.6   1452/1502         25                D         0.5                 1990 Jan 25         archive (AC261)

  \[data\]
We selected a small sample of 5 relaxed galaxy clusters: A1068, A1413, A1650, A1835, and A2029. They were selected on the basis of their extremely regular cluster-scale X-ray morphology and lack of evidence of a recent major merger. All have been well observed by the Chandra satellite. The X-ray data indicate that they all have cool, dense cores (see references in sections on individual clusters below). As often observed in these clusters (e.g., Markevitch et al. 2003), all exhibit signs of the sloshing of dense gas around their cD galaxy, the presumed center of the cluster gravitational potential. We also selected these cooling core clusters because of their intermediate redshifts (z$>$0.05). This selection criterion ensures that a mini-halo of 500 kpc in extent would have an angular size smaller than 10$'$. Therefore, there would be no significant missing flux for the shortest of Very Large Array (VLA) baselines. We observed A1068, A1413, and A1650 at 1.4 GHz with the VLA in C configuration, while A1835 and A2029 were observed both in C and D configurations. These data provided an excellent combination of resolution and sensitivity in studying the large-scale cluster emission.
Calibration and imaging were performed with the NRAO Astronomical Image Processing System (AIPS) package. The data were calibrated in both phase and amplitude. The phase calibration was completed using nearby secondary calibrators, observed at intervals of $\sim$20 minutes. The flux-density scale was calibrated by observing 3C286. Brightness images were produced following the standard procedures: Fourier-Transform, Clean, and Restore. Self-calibration was applied to remove residual phase variations.
In C array, the 1$\sigma$ rms noise reached in our observations ranges from 0.022 mJy/beam (in A2029) to 0.045 mJy/beam (in A1650), while in D array the rms noise is 0.025 mJy/beam in A1835 and 0.031 mJy/beam in A2029. The cluster A2029 also has the highest peak intensity, and in this case we reached an extraordinarily high dynamic range. The dynamic range, defined as the ratio of the peak brightness to the rms noise, was 10850:1 and 14450:1 in the C and D configurations, respectively.
To isolate the mini-halo emission from the radio emission of the central cD galaxy, we retrieved high-resolution archival observations of A1835 and A2029 at 1.4 GHz with the VLA in A configuration (programs AT211 and AL252, respectively).
While a wide-field X-ray image of the Ophiuchus cluster from ROSAT PSPC suggests merging on a cluster-wide scale, the Chandra X-ray image indicates that its cool core is not significantly disturbed (apart from the usual sloshing, Ascasibar & Markevitch 2006); we therefore decided to include this interesting cluster in our sample.
We retrieved an archival observation of the Ophiuchus cluster at 1.4 GHz with the VLA in D configuration (program AC261). After precessing the original data to J2000 coordinates, the data were calibrated in phase and amplitude, and a cluster radio image was obtained following standard procedures. The phase calibration was completed using the nearby secondary calibrator $1730-130$, observed at intervals of $\sim$10 minutes, while the flux-density scale was calibrated by observing 3C286. In this observation, we achieved a 1$\sigma$ rms noise level of 0.1 mJy/beam. At the distance of the Ophiuchus cluster, a mini-halo of 500 kpc in extent would have an angular size of about 15$'$. This value is close to the maximum angular size observable at 1.4 GHz by the VLA in D configuration.
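As a quick check of the quoted angular extent (using the 0.55 kpc/$''$ scale of Table 1):

```python
# A mini-halo 500 kpc in extent at the distance of the Ophiuchus cluster.
extent_kpc = 500.0
scale_kpc_per_arcsec = 0.55
print(extent_kpc / scale_kpc_per_arcsec / 60.0)  # ~15 arcmin
```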
Results
=======
For each cluster, we report the results obtained in this campaign of observations. We find that in the central regions of A1835, A2029, and Ophiuchus, the dominant radio galaxy is surrounded by diffuse low-brightness radio emission that we identify as a mini-halo. Low-significance indications of diffuse emission are present in A1068 and A1413, although their unambiguous classification as mini-halos would require further investigation. Finally, we detect no diffuse emission in A1650. We report the radio flux densities of both the mini-halos and the central point sources in A1835, A2029, and Ophiuchus, calculated by using the fit procedure described in Paper II; for more details on the flux density calculation, see that paper.
Abell 1068
----------
A1068 is a relaxed cluster that has a peaked X-ray surface brightness profile, a declining temperature gradient, and an increasing gas metallicity toward the cluster center (Wise et al. 2004, McNamara et al. 2004), exhibiting many features commonly seen in other cooling-core clusters.
![image](fig1_lr.ps){width="15cm"}
Figure 1 shows the central cluster radio emission compared with optical and X-ray images. In the top left panel, we present the optical cluster image with a radio contour plot overlaid. The total intensity radio contours are from the 1.4 GHz image with a resolution of 15$''$, while the optical image was taken from the POSS2 red plate in the Optical Digitalized Sky Survey [^2]. The field of view of this image is about 3.5$'$. This panel suggests that diffuse emission on a scale larger than 100 kpc may be present around the central radio emission. The central radio emission exhibits two peaks. The first coincides with the cluster cD galaxy, while the second is located toward the south-west at a distance of about 15$''$ and is associated with another bright cluster galaxy. The separation of these two radio galaxies is shown in the top right panel of Fig. 1, where the higher resolution ($\simeq$5$''$) FIRST image (Becker, White & Helfand 1995) is overlaid on the HST image. The radio emission associated with the cD galaxy has a flux density of 8.5$\pm$0.6 mJy, which corresponds to a power $L_{1.4GHz}=4.2\times10^{23}$W/Hz. It is marginally resolved and extends to the north-west. The south-western source shows a head-tail morphology and has a flux density of 10.4$\pm$0.6 mJy.
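The radio powers quoted in this paper follow from the measured flux densities and the luminosity distances of Table 1 through the usual monochromatic relation $L_{1.4\,\rm GHz} = 4\pi D_L^2 S_{1.4\,\rm GHz}$ (we assume here that no k-correction is applied, which reproduces the quoted values up to rounding). A minimal sketch:

```python
import numpy as np

MPC_IN_M = 3.0857e22   # metres per megaparsec
JY = 1.0e-26           # W m^-2 Hz^-1 per jansky

def radio_power(flux_mjy, d_l_mpc):
    # Monochromatic radio power L = 4 pi D_L^2 S (no k-correction).
    return 4.0 * np.pi * (d_l_mpc * MPC_IN_M) ** 2 * flux_mjy * 1.0e-3 * JY

print(radio_power(8.5, 641.45))   # ~4.2e23 W/Hz, as quoted for the cD galaxy of A1068
```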
To highlight the possible diffuse large-scale emission associated with the intra-cluster medium, we subtracted the flux density of the two central sources, as measured in the FIRST survey, from that measured in our lower resolution image. The total radio emission estimated in the central region of the cluster from our map was 22.3$\pm$0.7 mJy. Therefore, 3.4$\pm$1.1 mJy, which corresponds to a power $L_{1.4GHz}=1.7\times10^{23}$W/Hz, appears to be associated with diffuse low-brightness emission. However, we note that the FIRST subtraction may be uncertain. A possible variation in the core-component flux density and/or any absolute calibration error between the FIRST and this dataset could cause an under- or over-subtraction of flux, and therefore the residual flux associated with the diffuse emission must be interpreted with caution.
In the bottom panel, our radio image is overlaid on the Chandra X-ray emission of the cluster in the $0.5-4$ keV band. The X-ray emission appears elongated in the north-west south-east direction. The low surface brightness radio emission that is possibly associated with cluster diffuse emission appears to have the same orientation as both the cluster X-ray emission and the central cD galaxy.
In conclusion, a hint of diffuse emission has been detected in our data around the dominant cluster galaxy in A1068. However, given the uncertainty related to comparing datasets, this result requires further confirmation.
We note that in the field of view of this observation, we detected a giant radio galaxy that is not related to A1068, whose properties are shown in Appendix A.
Abell 1413
----------
![image](fig2_lr.ps){width="15cm"}
In the top left panel of Fig. 2, our deep radio observation at 1.4 GHz with a resolution of 15$''$ is overlaid on the POSS2 red plate image. The field of view of the image is about 3$'$. A diffuse low-surface-brightness emission of about 1.5$'$ ($\simeq 220$ kpc) in size was detected in close proximity to the cluster center. However, the radio-optical overlay clearly indicates that the peak of the central radio emission is offset to the east with respect to the central cD galaxy.
To investigate in detail the central radio emission of the cluster, in the top right panel of Fig. 2 we show a high-resolution zoom of this region. In this panel, the image at 1.4 GHz taken from the FIRST survey with a resolution of $\simeq$5$''$ is overlaid on the HST image. The FIRST survey confirms that no discrete radio emission is associated with the central cD. We estimated a 3$\sigma$ flux limit of 0.45 mJy for point-source radio emission associated with the central cD galaxy, which corresponds to a power $L_{1.4GHz}<2.4\times10^{22}$W/Hz. A faint (flux density of 2.9$\pm$0.7 mJy), unresolved radio source coinciding with another galaxy to the east is detected.
To estimate the flux density of diffuse emission on large scales, we subtracted the flux density of the faint radio source detected in the FIRST survey from that measured in our lower resolution image. The total flux density of the central cluster emission estimated from our map was 4.8$\pm$0.2 mJy. Therefore 1.9$\pm$0.7 mJy, which corresponds to a power $L_{1.4GHz}=1.0\times10^{23}$W/Hz, is indicative of diffuse emission, but the low significance ($<3\sigma$) requires further observations. As mentioned in the case of A1068, the FIRST flux density subtraction must also be considered with caution.
In the X-ray band, A1413 has been observed by XMM-Newton (Pratt & Arnaud 2002) and Chandra (Baldi et al. 2007, Vikhlinin et al. 2005). It exhibits a regular morphology and no evidence of recent merging. In the bottom panel of Fig. 2, the previous radio image is overlaid on the Chandra X-ray emission of the cluster in the $0.5-4$ keV band. The orientation of the X-ray emission is well aligned with that of the central cD galaxy, while the diffuse radio emission is offset from the X-ray peak and seems elongated in a slightly different direction. However, this could be due to confusion with the nearby unrelated discrete source visible in the FIRST image. As in the case of A1068, further observations are needed to confirm the presence of a mini-halo and the possible discrepancy between the radio and the X-ray peak.
A peculiarity of the extended emission in A1413 is that the cD galaxy, located in the middle of this putative mini-halo, does not contain a compact radio source, at least at the FIRST sensitivity level. Another cluster with a similar characteristic is A2142 (Giovannini & Feretti 2000). The properties of these clusters may suggest the presence of “relic” mini-halos in which the central cD galaxy is switched-off, while the diffuse mini-halo continues to emit. Observations at lower frequency might help in understanding this case.
Abell 1650
----------
![image](fig3_lr.ps){width="15cm"}
Studies of this cluster in X-ray (Takahashi & Yamashita 2003, Donahue et al. 2005) pointed out that there is no clear evidence of substructures, suggesting that the cluster has not experienced a major merger in the recent past, and is in a relaxed state. However, while the central cooling time of this cluster is shorter than the Hubble time and it has a strong metallicity gradient, it does not have a significant temperature gradient close to its center.
A1650 contains a single optically luminous central cD galaxy that is known to be radio quiet (Burns 1990). The FIRST survey does not detect any radio emission coincident with this dominant cluster galaxy.
We imaged this cluster at 1.4 GHz, reaching a sensitivity at 1$\sigma$ of 0.045 mJy/beam. In Fig. 3, the central cluster radio emission is overlaid on both the POSS2 and the Chandra image. We note that our deep image reveals the presence of a faint, unresolved, radio source in coincidence with the cD galaxy. Its flux density is 0.44$\pm$0.09 mJy, which corresponds to a power $L_{1.4GHz}=7.6\times10^{21}$W/Hz. However, at this sensitivity level, this cluster does not exhibit any sign of a mini-halo. We estimated a 3$\sigma$ flux limit of 1.35 mJy for a mini-halo 300 kpc in size, which corresponds to a power of $L_{1.4GHz}<2.3\times10^{22}$W/Hz.
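An upper limit of this kind is commonly estimated as $3\sigma$ per beam scaled by the square root of the number of independent beams covering the putative source; the sketch below reproduces a value close to the quoted one, but the beam size and the angular scale at the cluster redshift are assumptions made for illustration and are not taken from the text.

```python
import math

sigma_beam = 0.045        # mJy/beam, 1-sigma sensitivity quoted above
beam_fwhm = 16.0          # arcsec (assumed beam size)
kpc_per_arcsec = 1.6      # assumed scale at the cluster redshift (z ~ 0.08)

radius_arcsec = 0.5 * 300.0 / kpc_per_arcsec      # 300 kpc mini-halo
beam_area = 1.133 * beam_fwhm**2                  # Gaussian beam area, arcsec^2
n_beams = math.pi * radius_arcsec**2 / beam_area  # beams covering the source

limit = 3 * sigma_beam * math.sqrt(n_beams)
print(f"3-sigma limit ~ {limit:.2f} mJy over ~{n_beams:.0f} beams")
# ~1.3 mJy, close to the 1.35 mJy quoted in the text
```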
At the periphery of the A1650 cluster, we note that we detected an unusually round radio galaxy, whose properties are shown in Appendix B.
Abell 1835
----------
The X-ray emission of A1835 has been studied with [*XMM*]{} (Peterson et al. 2001, Majerowicz et al. 2002) and [*Chandra*]{} (Schmidt et al. 2001, Markevitch 2002), and both sets of observations provided evidence that the cluster has a relatively cool ($3-4$ keV) inner core surrounded by a hotter ($8-9$ keV) outer envelope. Overall, the X-ray image displays a relaxed morphology, although substructures have been detected at the cluster center.
![image](fig4_lr.ps){width="15cm"}
In Fig. 4, we display the central cluster radio emission, obtained with the VLA in C configuration, superimposed on optical and X-ray images. In the top left panel, we present the optical cluster image with a radio contour plot overlaid. The total intensity radio contours are at 1.4 GHz with a resolution of 16$''$, while the optical image was extracted from the POSS2 red plate of the Digitized Sky Survey. The field of view of this image is about 4$'$. Our image clearly displays evidence of extended emission around the central radio core.
The peak of the central radio emission is coincident with the cluster cD galaxy. The emission as detected at higher resolution is shown in the top right panel of Fig. 4, in which we overlay the high resolution radio image from the VLA archive (project AT211) on the HST image. The sensitivity of the image is 0.037 mJy/beam and the resolution is $1.36''\times1.30''$.
In the bottom left panel, our 16$''$ resolution radio image is overlaid on the Chandra X-ray emission of the cluster in the $0.5-4$ keV band. The diffuse radio emission appears extended to several hundreds of kiloparsecs and appears well mixed with the thermal plasma. The round morphology of the diffuse emission also coincides well with the morphology of the thermal gas.
The bottom right panel displays a large field of view of the total intensity radio contours of A1835 at 1.4 GHz with a FWHM of 53$''\times 53''$. This image was obtained with the VLA in D configuration. Despite the presence of several radio galaxies located close to the cluster center, a new mini-halo in A1835 is clearly evident at this lower resolution.
The mini-halo flux density is $6.0\pm 0.8$ mJy, which corresponds to a power $L_{1.4GHz}=1.2\times10^{24}$W/Hz. The central radio source has a flux density of $32.2\pm 0.8$ mJy, which corresponds to a power $L_{1.4GHz}=6.2\times10^{24}$W/Hz, in agreement with B[î]{}rzan et al. (2008).
Abell 2029
----------
The nearby, well-studied cluster of galaxies A2029 is one of the most optically regular rich clusters known and is dominated by a central ultra-luminous cD galaxy (Dressler 1978, Uson et al. 1991).
The hot cluster A2029 has been extensively studied in X-rays (e.g., Buote & Canizares 1996, Sarazin et al. 1998, Lewis et al. 2002, Clarke et al. 2004, Vikhlinin et al. 2005, Bourdin & Mazzotta 2008). On scales $r>100-200$ kpc, it is one of the most regular and relaxed clusters known (e.g., Buote & Tsai 1996). However, in the image of the cool dense core, Clarke et al. (2004) observed a subtle spiral structure. Along with several sharp brightness edges, it indicates ongoing gas sloshing that is most likely indicative of past subcluster infall episodes (Ascasibar & Markevitch 2006). As in many cool-core clusters with central radio sources, some small-scale X-ray structure is also apparently connected with the central radio galaxy (Clarke et al. 2004).
In the top left panel of Fig. 5, our deep radio observation at 1.4 GHz with a resolution of 16$''$, obtained with the VLA in C configuration, is overlaid on the POSS2 red plate image. The field of view of the image is about 7$'$. A large-scale diffuse emission, most likely a mini-halo, is located around the central PKS1509+59 source, which is coincident with the cluster cD galaxy, although the centroid of the diffuse emission is offset from that source. The mini-halo is far more extended than the central radio galaxy and appears to have a filamentary morphology slightly elongated in a north-east to south-west direction. The relatively high resolution of our new image ensures that the detected diffuse emission is real and not due to a blend of discrete sources. A sign of diffuse, extended emission in this cluster was also found by Markovi[ć]{} et al. (2004) in a VLA observation at 74 MHz.
To distinguish more effectively between the mini-halo emission and the radio emission of the central cD galaxy, in the top right panel of Fig. 5, we show a high resolution zoom of the cluster central region. In this panel, the image at 1.4 GHz with a resolution of $1.62'' \times 1.35''$ taken from an archive data set (AL252) is overlaid on the HST image. As shown in this panel, the radio source PKS1509+59 associated with the central cD galaxy has a distorted morphology and two oppositely directed jets. The source has also a very steep integrated spectrum and a high rotation measure as discussed in detail by Taylor et al. (1994).
In the bottom left panel of Fig. 5, the previous radio image is overlaid on the Chandra X-ray emission of the cluster in the $0.5-4$ keV band. We retrieved archival X-ray data for comparison of the gas distribution and the cluster radio emission. The mini-halo appears elongated in the same direction as both the cluster X-ray emission and the central cD galaxy.
The bottom right panel shows the total intensity radio contours of A2029 at 1.4 GHz for a larger field of view with a FWHM of 53$''\times 53''$. This image was obtained with the VLA in D configuration. The mini-halo in A2029 is even clearer in this lower resolution image.
The mini-halo flux density is $18.8\pm 1.3$ mJy, which corresponds to a power $L_{1.4GHz}=2.6\times10^{23}$W/Hz, while the central radio source has a flux density of $480.0\pm 19.5$ mJy, which corresponds to a power $L_{1.4GHz}=6.7\times10^{24}$W/Hz.
![image](fig5_lr.ps){width="15cm"}
Ophiuchus
---------
Ophiuchus is a nearby, rich cluster located 12 deg from the Galactic center. In X-rays, it is one of the hottest clusters known.
![image](fig6_lr.ps){width="15cm"}
\[Ophi\]
Suzaku data (Fujita et al. 2008) and the archival Chandra data indicate that this hot cluster has a cool, dense core. The archival wide-field ROSAT PSPC image suggests an ongoing unequal-mass merger, but the Chandra image indicates that the core is still largely undisturbed, although it exhibits the prototypical cold fronts caused by gas sloshing (Ascasibar & Markevitch 2006). Eckert et al. (2008) detected, with INTEGRAL, X-ray emission at high energies in excess of a thermal plasma spectrum with the cluster’s mean temperature. This emission may be of non-thermal origin, caused, for example, by inverse Compton scattering of cosmic microwave background photons by relativistic electrons (see e.g., Rephaeli et al. 2008, Petrosian et al. 2008, and references therein for reviews). It would be particularly interesting to investigate the presence of relativistic electrons in the intergalactic medium of this cluster because, assuming that the synchrotron emission and the hard X-ray excess are cospatial and produced by the same population of relativistic electrons, their detection would allow the determination of the cluster magnetic field.
We analyzed a VLA archive observation at 1.4 GHz in D configuration. In the left panel of Fig. 6, the radio observation at 1.4 GHz with a resolution of $91.4'' \times 40.4''$ is overlaid on the POSS2 red plate image. The asymmetric beam is due to the far southern declination of the cluster. The field of view of the image is about 20$'$. It is evident that a large-scale low surface brightness diffuse emission, which is probably a mini-halo, is located around the central cD galaxy. We note that we may have missed some extended flux due to the large angular extension of the mini-halo and/or because of the poor uv coverage in the short VLA observation (only 0.5 hours).
The mini-halo flux density, calculated on the basis of the fit analysis described in Paper II, is $106.4 \pm 10.4$ mJy, which corresponds to a power $L_{1.4GHz}=1.9\times10^{23}$W/Hz. The same analysis was used to estimate the radio flux density of the central cD galaxy, since no high resolution data for this cluster is available in the VLA archive. We obtained a flux density of $29\pm 2$ mJy, which corresponds to a power $L_{1.4GHz}=5.1\times10^{22}$W/Hz.
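For reference, the conversion from the fitted flux densities to the quoted monochromatic powers is $L_{1.4GHz} = 4\pi D_L^2 S$ (neglecting the k-correction); a minimal sketch is given below, where the luminosity distance adopted for Ophiuchus is an assumed value and not a number taken from the text.

```python
import math

MPC_IN_M = 3.0857e22

def radio_power(flux_mJy, D_L_Mpc):
    """Monochromatic radio power L = 4*pi*D_L^2*S, k-correction neglected."""
    S = flux_mJy * 1e-29                                # 1 mJy = 1e-29 W m^-2 Hz^-1
    return 4 * math.pi * (D_L_Mpc * MPC_IN_M) ** 2 * S  # W/Hz

# Ophiuchus, assuming D_L ~ 120 Mpc (z ~ 0.028):
print(f"mini-halo : {radio_power(106.4, 120):.1e} W/Hz")  # ~2e23, cf. the quoted 1.9e23
print(f"central cD: {radio_power(29.0, 120):.1e} W/Hz")   # ~5e22, cf. the quoted 5.1e22
```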
In the right panel of Fig. 6, our radio image is overlaid on the Chandra X-ray emission of the cluster in the $0.5-4$ keV band. The peak of the X-ray emission is coincident with the position of the cD galaxy. Although the Chandra detector does not cover the entire cluster X-ray extension, a connection between the mini-halo and the X-ray emission is evident; this connection is investigated further in the next section.
Comparison between mini-halos and the X-ray emitting gas
========================================================
By superimposing the radio and X-ray data presented in previous sections, we emphasized the similarity between the radio mini-halo emission and the cluster X-ray morphologies of A2029, A1835, and Ophiuchus. Because of the large angular extension of Ophiuchus, it is possible to perform a quantitative comparison between the radio and X-ray brightness. Here we compare the Chandra count rate image in the $0.5-4$ keV band, with the VLA radio image, corrected for the primary beam attenuation.
We first constructed a square grid covering a region containing both the radio mini-halo and the X-ray emission, from which we excluded areas containing discrete radio sources. The grid cell size was chosen to be as large as the radio beam ($\simeq$ 90$''$). In the statistical analysis, all the discrete sources were excluded by masking them out. All pixels lying in blanked areas or close to the edges of the chips were also excluded from the statistics. A scatter plot of the radio versus X-ray brightness is presented in the top panel of Fig. 7. Each point represents the mean brightness in each cell of the grid, while the error bars indicate the corresponding statistical error. The close similarity between radio and X-ray structures in Ophiuchus is demonstrated by the correlation between these two parameters. We fitted the data with a power law relation of the type: $I_{1.4 GHz}\propto I_X^{1.51\pm 0.04}$. We then present the same correlation, this time obtained by azimuthally averaging the radio and the X-ray emission in annuli (see middle panel of Fig. 7). The data are fitted by a power law relation of the type $I_{1.4 GHz}\propto I_X^{1.6\pm 0.1}$, which is consistent within the errors with the scatter plot analysis. The correlation indicates that the radial decline in the non-thermal radio component is slightly steeper than that of the thermal X-ray component, as can be seen in the bottom panel of Fig. 7.
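The quoted slope is obtained by fitting a power law to the cell-averaged brightnesses; a minimal sketch of such a fit, done by linear least squares in log--log space, is shown below on synthetic data (the real inputs would be the 90$''$ grid averages with discrete sources masked).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the grid-cell averages: values drawn to follow
# I_radio ∝ I_X^1.5 with ~10% log-normal scatter.
I_x = np.logspace(-3, -1, 60)                             # X-ray count rate (arb. units)
I_r = 5.0 * I_x**1.5 * rng.lognormal(0.0, 0.1, I_x.size)  # radio brightness (arb. units)

# Power-law fit via linear least squares in log-log space
slope, intercept = np.polyfit(np.log10(I_x), np.log10(I_r), 1)
print(f"recovered slope: {slope:.2f}")  # ~1.5; the Ophiuchus data give 1.51 +/- 0.04
```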
This trend may be explained by secondary models of mini-halos, where a super-linear power law relation is expected (Dolag & En[ß]{}lin 2000, Pfrommer et al. 2008). However, this result must be treated with caution because, as previously mentioned, we may miss part of the radio emission on the largest angular scales, which would reduce the slope of that relation. Moreover, especially in cooling-core clusters, the slope of the radio versus X-ray brightness relation may be affected by the strongly variable temperature across the cluster.
It is interesting to compare the energetics of the mini-halos with that of the thermal X-ray emitting gas. It has become clear that AGNs play a crucial role in reheating the gas in the cores of clusters (e.g., McNamara & Nulsen 2007). This decelerates the cooling and condensing of thermal gas onto the cD galaxy at the center of the cluster. The details of the reheating are still not well understood, since the volume occupied by the radio jets and lobes is small compared to that of the cool core. The mini-halo, however, appears well-matched to the cluster core in extent, even corresponding in elongation and surface brightness in some cases (e.g., Ophiuchus, A2029).
Applying standard minimum energy arguments to the radio emission from the mini-halos and assuming that $k/f=1$ (where $k$ is the ratio of the relativistic particle energy to that in electrons emitting synchrotron radiation, and $f$ is the volume filling factor of magnetic field and relativistic particles), we infer a typical equipartition magnetic field strength of 0.6 $\mu$G (but see Paper II for more detailed calculations), and a total energy of roughly $6 \times 10^{57}$ ergs for a typical mini-halo of radius 150 kpc. If we adopt a timescale for resupply of $10^7$ years, then we derive an energy dissipation rate of $2 \times 10^{43}$ erg/s. This is a factor of 50 smaller than the $10^{45}$ erg/s required to balance cooling in the X-rays (Fabian 1994). A possible solution is to increase the value of $k/f$. Dunn, Fabian & Taylor (2005) measured a wide range of $k/f$ values in clusters, with values for the mini-halos typically $\sim$100, though they did not single out that class of objects. Since the total energy increases only as $(k/f)^{4/7}$, increasing $k/f$ by a factor of 100 only increases the total energy by a factor $\sim$14, leaving a factor $\sim$3.5 unexplained. Given the various approximations employed, this is fairly close to the energy required. However, it raises the questions of (1) what process supplies energy to the mini-halos and (2) how the mini-halos couple with the thermal gas. Even if the mini-halos are not involved in the heating of the cluster, they could be tracing where the heating is going on. One possibility is that the mechanism that heats the cluster could also deposit some energy into relativistic electrons that then radiate in the radio.
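The numbers in this energy budget can be checked with a few lines of arithmetic, using the $(k/f)^{4/7}$ scaling of the minimum energy quoted above:

```python
# Order-of-magnitude check of the energy budget discussed above.
YEAR = 3.15e7                           # seconds

E_minihalo = 6e57                       # erg, minimum-energy estimate for k/f = 1
P_minihalo = E_minihalo / (1e7 * YEAR)  # resupply over 10^7 yr
print(f"dissipation ~ {P_minihalo:.1e} erg/s")             # ~2e43 erg/s

P_cooling = 1e45                        # erg/s needed to balance cooling
print(f"shortfall ~ {P_cooling / P_minihalo:.0f}x")        # ~50x

boost = 100 ** (4 / 7)                  # energy boost when k/f rises from 1 to 100
print(f"(k/f)^(4/7) for k/f = 100: ~{boost:.0f}")          # ~14
print(f"unexplained factor ~ {P_cooling / (P_minihalo * boost):.1f}")  # ~3.5-4
```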
![Comparison of the radio and X-ray brightness in the Ophiuchus galaxy cluster. The scattered plot of the radio intensity at 1.4 GHz versus the Chandra X-ray count rate is shown in the top panel. Each point represents the average in a rectangular region of $90''\times90''$ as shown in the inset. The middle panel presents the same correlation this time obtained by azimuthally averaging the radio and the X-ray emission in annuli as shown in the inset. The normalized X-ray and radio radial profiles are shown in the bottom panel. All pixels lying in blanked areas or close to the edges of the chips have been excluded from the statistics.[]{data-label="RadX"}](fig7_lr.ps){height="18cm"}
Comparison between the mini-halos and the central cD radio galaxies
===================================================================
As shown by Burns (1990), a large number of cooling core clusters contain powerful radio sources associated with central cD galaxies. Best et al. (2007) showed that brightest group and cluster galaxies are more likely to host a radio-loud AGN than other galaxies of the same stellar mass.
Since mini-halos are located at the center of cooling core clusters, where a radio-loud AGN is in general present, a possible link between the radio emission of the cDs and the mini-halos is interesting to investigate in the framework of the models attempting to explain the formation of mini-halos.
In Fig. 8, we plot the radio power at 1.4 GHz of the mini-halos versus those of the central cD galaxies. In addition to data for A1835, A2029 and Ophiuchus we plot data for RXJ1347.5-1145 (Gitti et al. 2007), A2390 (Bacchi et al. 2003), and Perseus (Pedlar et al. 1990). All fluxes are calculated in a consistent way from the fit procedure presented in Paper II.
The comparison between the radio power of mini-halos and that of the central cD galaxy indicates that there is a weak tendency for more powerful mini-halos to host stronger central radio sources. We recall that in a few clusters with cooling flows, the cD galaxy can be a low-power radio source or even radio quiet. This is indicative of recurrent radio activity, with a duty cycle of the AGN that is shorter than the cooling time and shorter than the mini-halo lifetime. Therefore, we do not expect a strong connection between cD and mini-halo radio power, and the position of Ophiuchus in the diagram implies that this cluster is undergoing a low radio activity phase of the central cD. We stress that further studies and more robust statistical analyses are necessary to establish whether the mini-halo emission is directly triggered by the central cD galaxy.
![Comparison between the radio power at 1.4 GHz of the mini-halos with that of the central cD galaxies, for the clusters analyzed in this work. All the fluxes are derived from the fit procedure described in Paper II.](fig8_lr.ps){width="8cm"}
\[cD\]
Conclusions
===========
Mini-halos in clusters are still poorly understood sources. They are a rare phenomenon, which has been found so far in only a few clusters. A larger number of mini-halo discoveries and more information about their physical properties are necessary to discriminate between the different mechanisms suggested for transferring energy to the relativistic electrons that power the radio emission.
To search for new mini-halos, we have analyzed deep radio observations of A1068, A1413, A1650, A1835, A2029, and Ophiuchus, carried out with the Very Large Array at 1.4 GHz.
We have found that at the center of the clusters A1835, A2029, and Ophiuchus, the dominant radio galaxy is surrounded by a diffuse low surface brightness mini-halo. The relatively high resolution of our new images ensures that the detected diffuse emission is real and not due to a blend of discrete sources.
We analyzed the interplay between the mini-halos and the cluster X-ray emission. We identified a similarity between the shape of the radio mini-halo emission and the cluster X-ray morphology of A2029, A1835, and Ophiuchus. We note that, although all these clusters are considered to be relaxed systems, when analyzed in detail they are found to contain peculiar X-ray features at the cluster center, which are indicative of a link between the mini-halo emission and some minor merger activity. Because of the large angular extension of Ophiuchus, it is possible to perform a point-to-point comparison of the radio and X-ray brightness distributions. The close similarity between radio and X-ray structures in this cluster is demonstrated by the correlation between these two parameters. We fitted the data with a power law relation of the type $I_{1.4 GHz}\propto I_X^{1.51\pm 0.04}$.
A hint of diffuse emission at the center of A1068 and A1413 is present with low significance ($2-3 \sigma$), and further investigation is needed before these sources can be classified as mini-halos. In addition, in the field of view of one of our observations, we report the serendipitous detection of a giant radio galaxy, located at RA=10$h$39$m$30$s$ DEC=39$^{\circ}$47$'$19$''$, with a total extension of $\sim$1.6 Mpc.
Finally, the comparison between the radio power of mini-halos and that of the central cD galaxy reveals that there is a weak tendency for the more powerful mini-halos to host stronger central radio sources.
Discriminating between the different scenarios proposed for mini-halo formation is difficult using present radio data. We note that the radio--X-ray connection in the Ophiuchus cluster may support secondary models. On the other hand, the analyses of the clusters studied here appear to indicate that mini-halos are not a common phenomenon in relaxed systems, a result that is in closer agreement with primary models.
FG and MM thank the hospitality of the Harvard-Smithsonian Center for Astrophysics where most of this work was done. Support was provided by Chandra grants GO5-6123X and GO6-7126X, NASA contract NAS8-39073, and the Smithsonian Institution. This research was partially supported by ASI-INAF I/088/06/0 - High Energy Astrophysics and PRIN-INAF2005. We are grateful to the referee Pasquale Mazzotta for very useful comments that improved this paper. We would like to thank Rossella Cassano and Chiara Ferrari for helpful discussions. The National Radio Astronomy Observatory (NRAO) is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. Based on photographic data obtained using The UK Schmidt Telescope. The UK Schmidt Telescope was operated by the Royal Observatory Edinburgh, with funding from the UK Science and Engineering Research Council, until 1988 June, and thereafter by the Anglo-Australian Observatory. Original plate material is copyright (c) of the Royal Observatory Edinburgh and the Anglo-Australian Observatory. The plates were processed into the present compressed digital form with their permission. The Digitized Sky Survey was produced at the Space Telescope Science Institute under US Government grant NAG W-2166. This research has made use of the NASA/IPAC Extragalactic Data Base (NED) which is operated by the JPL, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
Serendipitous detection of a giant radio galaxy south-west of A1068
===================================================================
The left panel of Fig. A.1 shows a large field of view of the radio emission around A1068 overlaid on the archive XMM observation. The XMM image is in the 0.2-5.0 keV band and has been convolved with a Gaussian of $\sigma =20''$. The image shows a sky region south-west of A1068. The cluster central emission is located in the top left corner of the figure.
At a projected distance of about 17$'$ from A1068, we detected an extraordinarily extended radio galaxy, which does not belong to the cluster, showing a wide-angle head-tail morphology. The core of the head-tail radio galaxy is located at RA=10$h$39$m$30$s$ DEC=39$^{\circ}$47$'$19$''$ and is coincident with a bright galaxy at a redshift $z=0.0929$ (taken from the Sloan Survey). At the distance of the head tail radio galaxy, 1$''$ corresponds to 1.71 kpc. Therefore, its total extension reaches about 1.6 Mpc in size. The radio emission shows a clearly unresolved core, two jets with some bright knots, and fading extended lobes. The southern jet is brighter than the northern one. Moreover, it presents a well collimated shape that bends sharply to the east at about 3$'$ ($\simeq$ 300 kpc) from the core. The morphology of the northern jet is not as clear since it is only partially visible, although it seems to bend more gradually than the southern jet. The difference in brightness and shape of the two jets may indicate that the radio source appears shortened by projection effects. On the north side, we see a diffuse low brightness lobe, while the southern lobe is only partially detected. Because of its large extension, we may be missing flux from the entire source, and the flux density of the radio galaxy calculated in our image must therefore be considered as a lower limit. Its total flux density is $>$ 72 mJy, which corresponds to a power $L_{1.4GHz} > 1.5\times10^{24}$W/Hz. From the NVSS, we estimated a flux density of $75\pm2$ mJy.
In the radio/X-ray overlay, we note an extended X-ray source, which may be a small cluster or a group hosting this giant radio galaxy. The presence of a cluster is confirmed in the optical band, since it belongs to the large new catalog of galaxy clusters (Koester et al. 2007) extracted from the Sloan Digital Sky Survey optical imaging data (York et al. 2000). The interaction of the radio plasma with the thermal intracluster medium may explain its distorted morphology.
To identify the optical counterpart of the giant radio source, the right panel of Fig. A.1 shows a zoomed image of the central part of the radio galaxy overlaid on the optical image taken from the POSS2 red plate. The identification seems unambiguous.
![image](figA1_lr.ps){width="15cm"}
Serendipitous detection of an unusual radio galaxy in A1650
===========================================================
Coincident with the galaxy 2MASX J12583829-0134290 (z=0.086217), located at RA=$12h58m38.2s$ DEC=$-01^{\circ}34'29''$, we detected an unusual radio galaxy. This radio galaxy was also observed by Owen et al. (1993) in a 1.4 GHz VLA survey of Abell clusters of galaxies. The left panel of Fig. B.1 shows the radio emission with a resolution of 16$''$. The source appears to consist of a core and, to the north-west, a round structure with sharp edges. The flux density of the overall structure is 135$\pm$4 mJy. The right panel of Fig. B.1 shows the image taken from the FIRST survey overlaid on the POSS2 image. The higher resolution image shows that the round structure we detected at lower resolution is not due to the radio emission from another galaxy. It may be a NAT (narrow-angle tailed) source. The faint extended emission visible at higher resolution could be the ends of the two tails. A similar source was detected in A3921 (Ferrari et al. 2006): a NAT source associated with a galaxy whose diffuse component is a partly detached pair of tails from an earlier period of activity of the galaxy.
The Chandra field of view does not extend to the radio galaxy. We retrieved the larger field of view XMM observation from the archive but no diffuse X-ray emission was detected at the location of the radio galaxy.
![image](figB1_lr.ps){width="15cm"}
\[A1650\_ball\]
Ascasibar, Y., & Markevitch, M. 2006, , 650, 102
Bacchi M., Feretti L., Giovannini G., Govoni, F. 2003, A&A, 400, 465
Baldi, A., Ettori, S., Mazzotta, et al., 2007, , 666, 835
Becker, R. H., White, R. L., & Helfand, D. J. 1995, , 450, 559
Best, P. N., von der Linden, A., Kauffmann, G., Heckman, T. M., & Kaiser, C. R. 2007, , 379, 894
B[î]{}rzan, L., McNamara, B. R., Nulsen, P. E. J., et al. 2008, , 686, 859
Bourdin, H., & Mazzotta, P. 2008, , 479, 307
Brunetti, G., Setti, G., Feretti, L., & Giovannini, G. 2001, , 320, 365
Buote, D. A. 2001, , 553, L15
Buote, D.A., & Canizares, C.R. 1996, , 457, 565
Buote, D. A., & Tsai, J. C. 1995, , 452, 522
Burns, J.O. 1990, , 99, 14
Burns, J.O., Sulkanen, M.E., Gisler, G.R., & Perley, R.A. 1992, , 388, L49
Cassano, R., Gitti, M., & Brunetti, G. 2008, , 486, L31
Clarke, T.E., Blanton, E.L., & Sarazin, C.L. 2004, , 616, 178
Condon J.J., Cotton W.D., Greisen E.W. et al., 1998, AJ, 115, 1693
Dolag, K., & En[ß]{}lin, T. A. 2000, , 362, 151
Donahue, M., Voit, G. M., O’Dea, C. P., Baum, S. A., & Sparks, W. B. 2005, , 630, L13
Dressler A., 1978, ApJ, 223, 765
Dunn, R. H., Fabian, A. C., & Taylor, G. B. 2005, MNRAS, 364, 1343
Eckert, D., Produit, N., Paltani, et al. 2008, , 479, 27
Fabian, A.C. 1994, ARA&A, 32, 277
Feretti, L., & Giovannini, G. 2008, A Pan-Chromatic View of Clusters of Galaxies and the Large-Scale Structure, 740, 143
Ferrari, C.; Hunstead, R. W.; Feretti, L.; Maurogordato, S.; Schindler, S. 2006, A&A 457, 21
Ferrari, C., Govoni, F., Schindler, S., Bykov, A.M., & Rephaeli, Y. 2008, Space Science Reviews, 134, 93
Fujita, Y., Kohri, K., Yamazaki, R., & Kino, M. 2007, , 663, L61
Fujita, Y., et al. 2008, , 60, 1133
Giovannini, G., Tordi, M., & Feretti, L. 1999, New Astronomy, 4, 141
Giovannini, G., & Feretti, L. 2000, New Astronomy, 5, 335
Gitti, M., Brunetti, G., & Setti, G. 2002, , 386, 456
Gitti, M., Brunetti, G., Feretti, L., & Setti, G. 2004, , 417, 1
Gitti, M., Ferrari, C., Domainko, W., Feretti, L., & Schindler, S. 2007, , 470, L25
Govoni, F., Feretti, L., Giovannini, G., B[ö]{}hringer, H., Reiprich, T.H., & Murgia, M. 2001, , 376, 803
Govoni, F., Markevitch, M., Vikhlinin, A., VanSpeybroeck, L., Feretti, L., & Giovannini, G. 2004, , 605, 695
Kempner, J. C., & Sarazin, C. L. 2001, , 548, 639
Koester, B.P., McKay, T.A., Annis, J., et al. 2007, , 660, 239
Lewis, A.D., Stocke, J.T., & Buote, D.A. 2002, , 573, L13
Majerowicz, S., Neumann, D. M., & Reiprich, T. H. 2002, , 394, 77
Markevitch, M. 2002, arXiv:astro-ph/0205333
Markevitch, M., Vikhlinin, A., & Forman, W. R. 2003, Astronomical Society of the Pacific Conference Series, 301, 37
Markovi[ć]{}, T., Owen, F.N., & Eilek, J.A. 2004, The Riddle of Cooling Flows in Galaxies and Clusters of galaxies, 61
Mazzotta, P., & Giacintucci, S. 2008, , 675, L9
McNamara, B.R., Wise, M.W., Murray, S.S. 2004, , 601, 173
McNamara, B. R., & Nulsen, P. E. J. 2007, , 45, 117
Owen, F.N., White, R.A., & Ge, J. 1993, , 87, 135
Pedlar, A., Ghataure, H. S., Davies, R. D., et al. 1990, , 246, 477
Peterson, J.R., Paerels, F.B., Kaastra, J.S., et al. 2001, , 365, L104
Petrosian, V. 2001, , 557, 560
Petrosian, V., Bykov, A., & Rephaeli, Y. 2008, Space Science Reviews, 134, 191
Pfrommer, C., & En[ß]{}lin, T. A. 2004, , 413, 17
Pfrommer, C., En[ß]{}lin, T. A., & Springel, V. 2008, , 385, 1211
Pratt, G.W., & Arnaud, M. 2002, , 394, 375
Rephaeli, Y., Nevalainen, J., Ohashi, T., & Bykov, A. M. 2008, Space Science Reviews, 134, 71
Rengelink, R. B., Tang, Y., de Bruyn, A. G., et al. 1997, , 124, 259
Sarazin, C.L., Wise, M.W., & Markevitch, M.L. 1998, , 498, 606
Schmidt, R.W., Allen, S.W., & Fabian, A.C. 2001, , 327, 1057
Takahashi, S., & Yamashita, K. 2003, , 55, 1105
Taylor, G.B., Barton, E.J., & Ge, J. 1994, , 107, 1942
Uson J.M., Boughn S.P., Kuhn J.R., 1991, 369, 46
Wise, M.W., McNamara, B.R., Murray, S.S., 2004, , 601, 184
York, D.G., Adelman, J., Anderson, J.E., et al., 2000, , 120, 1579
Venturi, T., Giacintucci, S., Brunetti, G., et al. 2007, , 463, 937
Venturi, T., Giacintucci, S., Dallacasa, D., et al. 2008, , 484, 327
Vikhlinin, A., Markevitch, M., Murray, S.S., Jones, C., Forman, W., & Van Speybroeck, L. 2005, , 628, 655
[^1]: S($\nu$)$\propto \nu^{- \alpha}$
[^2]: See http://archive.eso.org/dss/dss
---
abstract: 'It is shown that in a simple coupler where one of the waveguides is subject to controlled losses of the electric field, it is possible to observe optical analogs of the linear and nonlinear quantum Zeno effects. The phenomenon consists in a *counter-intuitive* enhancement of transparency of the coupler with the increase of the dissipation and represents an optical analogue of the quantum Zeno effect. An experimental realization of the phenomenon based on the use of chalcogenide glasses is proposed. The system allows for observation of the cross-over between the linear and nonlinear Zeno effects, as well as effective manipulation of light transmission through the coupler.'
author:
- 'F. Kh. Abdullaev$^{1}$, V. V. Konotop$^{1,2}$, and V. S. Shchesnovich$^{3}$'
title: Linear and nonlinear Zeno effects in an optical coupler
---
Introduction
============
Decay of a quantum system, either because it is in a metastable state or due to its interaction with an external system (say, with a measuring apparatus), is one of the fundamental problems of quantum mechanics. Already more than fifty years ago it was proven that the decay of a quantum metastable system is, in general, non-exponential [@Khalfin; @Degasperis] (see also the reviews [@rev; @Khalfin_rev]). Ten years later, in Ref. [@Misra] it was pointed out that a quantum system undergoing frequent measurements does not decay at all in the limit of infinitely frequent measurements. This remarkable phenomenon was termed by the authors the quantum “Zeno’s paradox”. The Zeno’s paradox, i.e. the total inhibition of the decay, requires, however, unrealistic conditions and shows up in practice only as the Zeno effect, i.e. the decrease of the decay rate by frequent observations, either pulsed or continuous. The Zeno effect was observed experimentally by studying the decay of continuously counted beryllium ions [@beryl], the escape of cold atoms from an accelerating optical lattice [@zeno_lat], the control of spin motion by circularly polarized light [@zeno_spin], the decay of the externally driven mixture of two hyperfine states of rubidium atoms [@zeno_BEC], and the production of cold molecular gases [@zeno_molec]. There is also the opposite effect, i.e. the acceleration of the decay by observations, termed the anti-Zeno effect, which is even more ubiquitous in quantum systems [@Zeno].
It was argued that the quantum Zeno and anti-Zeno effects can be explained from the purely dynamical point of view, without any reference to the projection postulate of the quantum mechanics [@FPJPhyA]. In this respect, in Ref. [@BKPO; @SK] it is shown that the Zeno effect can be understood within the framework of the mean field description, when the latter can be applied, thus providing the link between purely quantum and classical systems.
The importance of the Zeno effect goes beyond the quantum systems. An analogy between the quantum Zeno effect and the decay of light in an array of optical waveguides was suggested in Ref. [@Longhi]. Namely, the authors found an exact solution which showed a non-exponential decay of the field in one of the waveguides. Modeling of the quantum Zeno effect in the limit of frequent measurements using down conversion of light in a sliced nonlinear crystal was considered in Ref. [@Reh]. The effect has been mimicked by the wave process in a $\chi^{(2)}$ coupler with a linear and a nonlinear arm, since in the strong coupling limit the pump photons propagate in the nonlinear arm without decay. The analogy between the inhibition of losses of molecules and the enhanced reflection of light from a medium with a very high absorption was also noticed in [@zeno_BEC].
Meanwhile, in the mean-field models explored in Refs. [@BKPO; @SK] inter-atomic interactions play an important role, leading to nonlinear terms in the resulting dynamical equations. In turn, the nonlinearity introduces qualitative differences in the Zeno effect, in particular dramatically reducing the decay rate [@SK] compared to the case of noninteracting atoms. This phenomenon, the enhancement of the effect by the inter-atomic interactions, was termed in Ref. [@SK] the [*nonlinear Zeno effect*]{} (since, when the nonlinearity is negligible, it reduces to the usual linear Zeno effect).
Mathematically, the mean-field description of a Bose-Einstein condensate (BEC) and of the light propagation in Kerr-type media are known to have many similarities, due to the same (Gross-Pitaevskii or nonlinear Schrödinger) equation describing both phenomena. [Furthermore, the linear Zeno effect is observable not only in the purely quantum setting, but also in the mean-field approximation [@SK]. This immediately suggests that detecting the Zeno dynamics is possible in classical systems, and in particular in nonlinear optics, thus offering new possibilities for managing light [@com1]. Namely, one can expect the counter-intuitive reduction of attenuation of the total field amplitude (which would correspond to the reduction of losses of atoms in the BEC case) by increasing the losses in some parts of the system (an analogy to increasing the removal rate of atoms in the case of BEC). ]{}
To report on a very basic system where analogs of the linear and nonlinear Zeno effects can be observed and exploited is the main goal of the present paper. More specifically, we explore the mathematical analogy of the semi-classical dynamics of a BEC in a double well potential subject to removal of atoms [@SK], with light propagation in a nonlinear optical coupler, in which one of the arms is subject to controllable losses.
The paper is organized as follows. First in Sec. \[sec:two\_examp\] we consider two well known models of dissipative oscillators, which illustrate the classical analogues of the Zeno phenomenon (originally introduced in the quantum measurement theory). Next, in Sec. \[sec:experiment\] we discuss possible experimental settings allowing observation of the phenomenon in optics. In Sec. \[sec:NonLinZeno\] the theory of the optical nonlinear Zeno effect is considered in detail. Sec. \[sec:lin\_nonlin\] is devoted to a comparative analysis of the linear and nonlinear Zeno effects. The outcomes are summarized in the Conclusion.
Two trivial examples. {#sec:two_examp}
=====================
Before going into the details of the optical system, let us first give a simple insight into the purely classical origin of the phenomenon of inhibition of the field attenuation by strong dissipation. First, we recall the well-known fact that an increase of the dissipation $\alpha$ of an overdamped ($\alpha\gg \omega$) oscillator $\ddot{x} + \alpha \dot{x}+\omega^2 x=0$ results in a decrease of the attenuation of the oscillations. Indeed, the decay rate $R\approx \omega^2/\alpha$ approaches zero when the dissipation coefficient $\alpha$ goes to infinity. But the amplitude of oscillations in this case is also nearly zero. However, the coupling of another linear oscillator to the dissipative one, $$\ddot{x}_1+\alpha\dot{x}_1+\omega^2 x_1+\kappa x_2=0,
\qquad
\ddot{x}_2+ \omega^2 x_2+\kappa x_1=0,$$ allows one to observe the inhibition of attenuation due to strong dissipation by following a finite amplitude $x_2$. Indeed, the characteristic equation, $$\lambda=\frac{\lambda^4+2\lambda^2\omega^2+\kappa^2-\omega^4}{\alpha(\lambda^2+\omega^2)},$$ evidently has the small root $\lambda\approx (\kappa^2-\omega^4)/ (\alpha\omega^2)$, which appears for $\alpha\gg \kappa^2/\omega^2-\omega^2>0$. Thus, one of the dynamical regimes of the system is characterized by a decay rate which goes to zero in the overdamped case; moreover, the relation between the amplitudes of the damped and undamped oscillators reads $|x_1/x_2|\to \omega^2/\kappa<1$ as $\alpha\to\infty$. In other words, strong dissipation in one of the oscillators can attenuate the energy decay in the whole system. On the other hand, the last example illustrates that if the coupling is of the same order as the eigenfrequencies of the subsystems, the energy is distributed between the two subsystems in approximately equal parts. This does not allow for further decrease of the decay rate of the energy, because a large part of it is concentrated in the damped subsystem.
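A minimal numerical illustration of this point is sketched below (with illustrative parameter values, not the particular regime singled out above): as the damping of oscillator 1 grows, the amplitude remaining in the undamped oscillator 2 at a fixed time becomes larger, i.e. the decay slows down.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, alpha, omega, kappa):
    """Two linearly coupled oscillators; only the first one is damped."""
    x1, v1, x2, v2 = y
    return [v1, -alpha * v1 - omega**2 * x1 - kappa * x2,
            v2, -omega**2 * x2 - kappa * x1]

omega, kappa = 1.0, 0.5                 # illustrative values
y0 = [0.0, 0.0, 1.0, 0.0]               # energy initially in the undamped oscillator
t_eval = np.linspace(0.0, 200.0, 2001)

for alpha in (0.5, 5.0, 50.0):
    sol = solve_ivp(rhs, (0.0, 200.0), y0, args=(alpha, omega, kappa),
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    tail = np.abs(sol.y[2][t_eval > 150.0]).max()   # residual amplitude of x2
    print(f"alpha = {alpha:5.1f}:  max |x2| for t > 150 is {tail:.3f}")
```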
[The phenomenon described above for the linear oscillators can be viewed as a classical analog of the linear Zeno effect.]{} The nonlinearity changes the situation dramatically. This case, however, no longer allows for a complete analytical treatment, and that is why we now turn to the specific nonlinear system, which will be studied numerically. We consider an optical coupler composed of two Kerr-type waveguides, one arm of which is subject to relatively strong field losses. We will show that such a coupler mimics the quantum Zeno effect, allowing one to follow, in a simple optical experiment, the cross-over between the linear [(weak intensities)]{} and nonlinear [(strong intensities)]{} Zeno effects, thus providing a deep analogy between the effect of dissipation in the classical and quantum systems. In particular, we will also show that [*strong losses of the field in one of the waveguides can significantly enhance the transmittance of the coupler as a whole*]{}.
The coupler and a possible experimental setting. {#sec:experiment}
================================================
The optical fields in the two tunnel-coupled nonlinear optical fibers [@Jensen82] (alternatively, one can consider two linearly coupled waveguides [@planar]) are described by the system
\[sys\] $$\begin{aligned}
\label{sys_a}
-i\frac{da_1}{dz} =(\beta_1 +i\alpha_1)a_1 \pm \gamma |a_1|^2 a_1 + \kappa a_2,\\
\label{sys_b}
-i\frac{da_2}{dz} =(\beta_2 +i\alpha_2)a_2 \pm \gamma |a_2|^2 a_2 + \kappa a_1.\end{aligned}$$
Here $a_{1,2}$ are the properly normalized fields in each arm of the coupler, $\kappa$ is the coupling coefficient measuring the spatial overlap between the channels, the upper and lower signs correspond to the focusing ($+$) and defocusing ($-$) media, $\beta_j $ ($j=1,2$) are the modal propagation constants of the cores, $\gamma = 2\pi n_2/\lambda A_{eff}$, $n_2$ is the Kerr nonlinearity parameter, $A_{eff}$ is the effective cross section of the fiber, $\lambda$ is the wavelength, and the loss coefficient $\alpha_{j} >0$ stands for the field absorption in the $j$th waveguide.
Our aim is to employ the manageable losses, i.e. a control over the coefficients $\alpha_{1,2}$, in order to observe different regimes of the light transmission through the coupler. Since in optics one cannot easily manipulate $z$, i.e. the length of the coupler, we are interested in realizing different dynamical regimes with a single given coupler (rather than using several couplers having different characteristics). This contrasts with the BEC case, where the propagation variable $z$ corresponds to time (see e.g. [@SK]) and can be easily varied. For this reason the most suitable experimental setting could be a coupler whose properties strongly depend on the wavelength of the input beam (alternatively one can consider flexible change of the optical properties using temporal gradients, active doping, etc.).
An experimentally feasible realization of the described nonlinear directional coupler can be based on the use of the $As_2 Se_3$ chalcogenide glass. For this material the intrinsic nonlinearity can be up to three orders of magnitude greater than that of pure silica fibers [@Taeed; @Chremmos; @Ruan]. More specifically, one can consider the material losses in the chalcogenide glasses, where the Kerr nonlinearity parameter is $n_2 =1.1 \cdot 10^{-13}$cm$^2$/W, that is, about 400 times the nonlinearity of the fused silica fiber. However, what is even more important for our aims is that the absorption rate of at least one of the coupler arms can be changed dramatically during the experiment. Say, in the mentioned chalcogenide glass $\alpha$ can be on the order of a few dB/m and is very sensitive to the wavelength. Thus, a practical control over the absorption can be performed by using the dependence of the loss coefficients $\alpha_{1,2}$ on the wavelength of the incident light.
To implement this idea, it is necessary to produce the two arms of the coupler using chalcogenide glasses of different types. In particular, one can consider the standard sulphide fiber in one arm of the coupler and the lowest loss sulphide fibers [@Aggarwal] in the other arm. Such sulphide fibers have a particularly narrow attenuation peak at the wavelength $\lambda_0\approx 3\mu$m. The behavior of the absorption coefficient $\alpha$ in the vicinity of this peak can be modeled by the Lorentzian curve (here we use the experimental results reported in [@Loren]): $$\begin{aligned}
\label{Lorenz}
\alpha_{1}(\lambda) = \alpha_{1,0} + \frac{\alpha_{1,1} \Gamma^2}{(\lambda- \lambda_0)^2 + \Gamma^2},\end{aligned}$$ where $\Gamma \approx (0.5\div 1)\,\mu$m, $\alpha_{1,0} \approx 0.5$dB/m, and $\alpha_{1,1} \sim 5$dB/m for a usual sulphide fiber. Varying the wavelength about $\lambda_0\approx 3\mu$m in the interval $\lambda_0\pm 0.5\mu$m, the loss can be varied in the standard sulphide fiber by $(0.5\div 5)$dB/m, and in the lowest loss sulphide fiber in the interval $(0.05\div 0.2)$dB/m. Even larger attenuation can be achieved for the chalcogenide fibers 30Ge-10As-30Se-30Te, where the peak attenuation is on the order of $(5\div 30)$dB/m, observed for $\lambda \approx 4.5\mu$m.
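The wavelength dependence of Eq. (\[Lorenz\]) is easy to tabulate; in the sketch below the parameter values are the representative numbers quoted above, with $\Gamma$ fixed, for definiteness, at $0.75\,\mu$m inside the quoted range.

```python
import numpy as np

def alpha1(lam_um, lam0=3.0, Gamma=0.75, a10=0.5, a11=5.0):
    """Lorentzian loss model of Eq. (Lorenz): alpha_1 in dB/m, wavelengths in microns."""
    return a10 + a11 * Gamma**2 / ((lam_um - lam0)**2 + Gamma**2)

for lam in np.linspace(2.5, 3.5, 11):
    print(f"lambda = {lam:.2f} um  ->  alpha_1 ~ {alpha1(lam):.2f} dB/m")
```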
The nonlinear optical Zeno effect. {#sec:NonLinZeno}
==================================
Thus we consider the situation when one of the waveguides (waveguide 1) is subjected to controllable losses (as discussed above), while the second one (waveguide 2) is operating in the transparency regime, i.e. when $\alpha_1 \gg
\alpha_2$. Respectively, we simplify the problem setting in what follows $\alpha_2=0$.
We start with the estimate of the effective losses, designated below as $\widetilde{\alpha}_2$, in the transparent arm of the coupler, which occur due to the energy exchange between the arms. For $z\gg \alpha_1^{-1}$, one can adiabatically eliminate $a_1$ from the system (\[sys\]). Moreover, assuming that $\alpha_1 \gg \kappa$ we obtain: $|a_1|^2 \approx
\frac{\widetilde{\alpha}_2}{\alpha_1}|a_2|^2\ll|a_2|^2$ and $$-i\frac{d a_2}{dz} \approx \left(i\widetilde{\alpha}_2 + \widetilde{\beta}_2 +
\widetilde{\gamma}|a_2|^2\right)a_2.
\label{EQa2}$$ Here $\widetilde{\beta}_2 = \beta_2 +
\widetilde{\alpha}_2 (\beta_2-\beta_1)/\alpha_1$ and $\widetilde{\gamma}
= \gamma\left(1+ \widetilde{\alpha}_2/\alpha_1\right)$ , with the effective $z$-dependent attenuation rate: $$\widetilde{\alpha}_2 = \frac{\alpha_1\kappa^2}{\left( \beta_2-\beta_1 +
\gamma |a_2|^2\right)^2 + \alpha_1^2 }.
\label{newrate}$$ First of all, we observe that $\widetilde{\alpha}_2$ decays with increase of the difference $\beta_2-\beta_1$ or the nonlinearity (the term $\gamma|a_2|^2$ in the denominator). This behavior is natural because the difference in the propagation constants $\beta_{1,2}$ results in incomplete energy transfer between the arms, whereas the nonlinearity effectively acts as an additional amplitude-dependent detuning. In practical terms, however, the effect due to the constant linear detuning is negligible, because $\beta_2-\beta_1$ is typically too small, whereas the nonlinearity can result in an appreciable effect. Thus, the effective attenuation rate $\widetilde{\alpha}_2$ decays either with increase of the absorption $\alpha_1\to\infty$ (the linear Zeno effect), or (for given losses $\alpha_1$) with the intensity of the light in the transparent arm, tending to zero in the formal limit $ |a_2|^2\to \infty$ (the nonlinear Zeno effect).
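The content of Eq. (\[newrate\]) can be made explicit with a few lines of code: measuring $\alpha_1$, the detuning $\beta_2-\beta_1$, and the nonlinear term $\gamma|a_2|^2$ in units of $\kappa$, the algebraic expression peaks when $\alpha_1$ equals the magnitude of the total detuning and then falls off as $1/\alpha_1$, while a large nonlinear term lowers it at every $\alpha_1$; the parameter values below are illustrative.

```python
import numpy as np

def alpha2_eff(alpha1, detuning=0.0, nonlin=0.0):
    """Effective attenuation rate of Eq. (newrate); alpha1, the detuning
    beta2-beta1 and the nonlinear term gamma*|a2|^2 are in units of kappa."""
    return alpha1 / ((detuning + nonlin)**2 + alpha1**2)

alpha1 = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
for nonlin in (0.0, 5.0, 20.0):   # gamma*|a2|^2 / kappa: linear vs nonlinear regime
    rates = alpha2_eff(alpha1, nonlin=nonlin)
    print(f"gamma|a2|^2/kappa = {nonlin:4.1f}:",
          "  ".join(f"{r:.4f}" for r in rates))
# With zero detuning and nonlinearity the rate falls as 1/alpha1 (linear Zeno effect);
# the nonlinear term lowers it further at every alpha1 (nonlinear Zeno effect).
```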
Moreover, the anti-Zeno effect, i.e. the increase of the attenuation with the increase of the loss coefficient $\alpha_1$ can be observed merely due to the presence of a strong nonlinearity. Such an effect, however, is not counter-intuitive in our setup. In fact, it is rather trivial: for $\gamma
|a_2|^2\gg \alpha_1$ Eq. (\[newrate\]) tells us that the ratio $|a_1|^2/|a_2|^2
\approx \widetilde{\alpha}_2/\alpha_1$ is independent of $\alpha_1$, which means that, in the Zeno regime $z\gg \alpha_1^{-1}$, increasing the loss coefficient must increase the actual attenuation.
In order to perform the complete numerical study of the coupler we introduce the real amplitudes and phase of the fields: $a_j = \rho_j
\exp(i\phi_j)$, $j=1,2$, the relative difference in the energy flows in the two arms $F = (|a_1|^2 - |a_2|^2)/(|a_1|^2 + |a_2|^2)$, the total energy flow in the coupler $Q = (|a_1|^2 + |a_2|^2)/P_0$, normalized to the input flow $P_0 = |a_{10}|^2 + |a_{20}|^2$, as well as the phase mismatch $\phi = \phi_1
- \phi_2$. Then the original system (\[sys\]) is reduced to
\[sys1\] $$\begin{aligned}
\label{sys1_a}
&&F_Z = -g (1-F^2) + 2\sqrt{1-F^2}\sin(\phi), \\
\label{sys1_b}
&&\phi_Z = \frac{(\beta_1 -\beta_2)}{\kappa} \pm 4\delta F Q - 2\frac{F}{\sqrt{1-F^2}}\cos(\phi),\\
\label{sys1_c}
&&Q_Z = - g Q(1+F)\end{aligned}$$
where $g=\alpha_1/\kappa$, $\delta = P_0/P_c$, and the distance is normalized to the linear coupling length $L = 1/\kappa$, i.e. $Z =z/L = z\kappa$. Here we also introduced the critical power $P_c = 4\kappa/\gamma$, which separates the regime of periodic energy exchange between the arms for $P < P_c$ from the localization of energy in one of the waveguides for $P > P_c$ [@Jensen82]. For a fiber based on the chalcogenide glass described above, the critical power is $P_c\sim 1$W and the coupling length $L$ varies in the interval $(0.1 \div 1)$m.
Notice that mathematically the system (\[sys1\]) coincides with the one describing a BEC in a double-well trap subject to elimination of atoms from one of the wells [@SK]. The coupler thus mimics the nonlinear Zeno effect of a BEC in a double-well trap, where time is replaced by the propagation distance in the coupler and the intensities of the electric fields in the arms of the coupler correspond to the numbers of quantum particles in the potential wells. The system (\[sys1\]) also resembles the evolution of a Bose-Hubbard dimer with a non-Hermitian Hamiltonian [@Korch].
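A minimal sketch of how Eqs. (\[sys1\]) can be integrated numerically is given below (focusing sign, zero propagation-constant mismatch); the launch conditions and parameter values are illustrative assumptions and are not those used to produce the figures discussed in the next section.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupler(Z, y, g, delta, dbeta=0.0, sign=+1.0):
    """Reduced coupler equations (sys1) for F, phi, Q (sign=+1: focusing medium)."""
    F, phi, Q = y
    s = np.sqrt(max(1.0 - F**2, 1e-12))   # guard against |F| -> 1
    dF = -g * (1.0 - F**2) + 2.0 * s * np.sin(phi)
    dphi = dbeta + sign * 4.0 * delta * F * Q - 2.0 * (F / s) * np.cos(phi)
    dQ = -g * Q * (1.0 + F)
    return [dF, dphi, dQ]

# Launch most of the light into the transparent arm (F(0) = -0.9), in phase, Q(0) = 1.
y0 = [-0.9, 0.0, 1.0]
Z_end = 2.0
for g, delta in [(10.0, 0.0), (10.0, 2.0)]:   # linear vs nonlinear Zeno regime
    sol = solve_ivp(coupler, (0.0, Z_end), y0, args=(g, delta),
                    rtol=1e-8, atol=1e-10, dense_output=True)
    F_out, _, Q_out = sol.sol(Z_end)
    print(f"g = {g:.0f}, delta = {delta:.0f}:  Q(2) = {Q_out:.2f},  F(2) = {F_out:+.2f}")
# The run with delta = 2 retains more power, illustrating the nonlinear Zeno effect.
```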
Linear [*vs*]{} nonlinear Zeno effects. {#sec:lin_nonlin}
=======================================
Passing to the numerical study of the system (\[sys1\]), we estimate that for a length $L$ on the order of one meter the value of the absorption coefficient $g$ can be changed in chalcogenide fibers by up to $20$ times. In the empirical formula (\[Lorenz\]) the values of the dimensionless parameters are as follows: $g_{1,0}=\alpha_{1,0}/\kappa \sim 1$ and $g_{1,1}=\alpha_{1,1}/\kappa \sim (10\div 20)$, while $\delta = P_0/P_c$ is in the interval $(0 \div 2.5)$.
Our main results are summarized in Fig. \[FG1\]. In panels (a) and (b), where we show the dependence of the output signal [*vs*]{} the coupler length, three different regimes are evident. At small distances from the coupler input, $Z\lesssim 0.2$, the standard [*exponential*]{} decay occurs. This stage does not depend significantly on the intensity of the input pulse (i.e. on $\delta$ in our notations). However, at larger distances, $0.2\lesssim Z\lesssim 2$, the system clearly reveals [*power-like decay*]{}. The power of the decay, however, appears to be sensitive to the magnitude of the input power, i.e. to the nonlinearity of the system. The decay is much stronger at lower powers ($\delta\approx0$), corresponding to the linear Zeno effect, and much weaker for input intensities above the critical value ($\delta=2$); this regime may be termed the [*nonlinear*]{} Zeno effect. In all the cases the output beam is concentrated in the fiber/waveguide without losses \[Fig. \[FG1\]a\] and the output power is still sufficiently high, above 70% of the input power. We also notice that, while we have chosen a relatively large $g$, the phenomenon is also observable (although less pronounced) for lower levels of the light absorption.
In practice, however, Figs. \[FG1\] (a) and (b) will not correspond to a real experiment with an optical coupler, because in the standard settings its length, i.e. $Z$, is fixed. Instead, as we mentioned above, observation of the Zeno effects can be achieved by varying the wavelength of the light. From panel (b) one concludes that the best observation of the phenomenon can be achieved at some intermediate lengths of the coupler, where, on the one hand, the power-like decay is already established and, on the other hand, the output power is still high so that the system is still in the nonlinear regime (we do not show the transition to the linear regime, which for the data used in Fig. \[FG1\] occurs approximately at $Z\approx 0.3$). In our case the coupler lengths satisfying the above requirements correspond to the interval $0.2\lesssim Z\lesssim 2$. Respectively, choosing $Z=2$, in panels (c) and (d) we show how the output intensity depends on the wavelength of the incident beam, which can be manipulated experimentally. In particular, in panel (d) one clearly observes the linear Zeno effect as a dramatic increase of the output power (the transparency window of the coupler) exactly at the peak attenuation (see the dashed curve) achieved at the wavelength $\lambda_0$, as well as practically lossless propagation of the field in the nonlinear case (cf. the solid and dashed curves). Remarkably, for the strongly nonlinear case we also observe a local increase of the output power, which, however, is preceded by a small decay of the power. The local decay of the intensity appears when the input power is approximately equal to the critical one ($\delta\approx 1$).
So far, however, we have considered the case of zero phase mismatch between the two arms of the coupler. In Fig. \[FG2\] we show the dependence on the input phase mismatch between the two cores. One observes that the input phase mismatch does not destroy the phenomenon, but it can affect the output energy flow by about 10% (the relative energy distribution remaining practically unchanged).
Conclusions
===========
To conclude, we have shown that by using a simple optical coupler subjected to wavelength-dependent absorption of the light in one of the arms, one can observe the linear and nonlinear Zeno effects. The phenomenon consists in an increase of the output energy with increasing absorption coefficient in one of the arms. The linear Zeno effect shows an especially strong dependence on the wavelength of the input signal, [as this is expected from the design of the system]{}. The nonlinear Zeno effect, observed at intensities above the critical one, is characterized by a much larger transparency of the system, [and consequently]{} is accompanied by a much weaker dependence on the input wavelength.
It is interesting to mention that, recently, the effect of light localization in a linear coupler with strong losses in one waveguide has been observed [@DOC1]. The authors attributed this phenomenon to the [PT]{}-symmetric configuration of their passive coupler (to which it can be reduced by a proper change of variables). Since the presence of the nonlinearity rules out the aforementioned change of variables, the present work proposes an alternative explanation of the experiment reported in [@DOC1], and moreover, shows that this is a quite general phenomenon (not necessarily related to the [PT]{}-symmetry), which can be observed in linear and nonlinear systems and which opens new possibilities for manipulating the transmission of light by means of controllable absorption, making it either intensity or wavelength dependent.
The authors gratefully acknowledge stimulating discussions with Alex Yulin. FKA and VVK were supported by the 7th European Community Framework Programme under the grant PIIF-GA-2009-236099 (NOMATOS). VSS was supported by the CNPq of Brasil.
[99]{}
L. A. Khalfin, Dokl. Akad. Nauk SSSR [**115**]{}, 277 (1957) \[Sov. Phys. Dokl. [**2**]{}, 232 (1958)\]; Zh. Eksp. Teor. Fiz. [**33**]{}, 1371 (1958) \[Sov. Phys. JETP [**6**]{}, 1053 (1958)\].
A. Degasperis, L. Fonda, and G. C. Ghirardi, Nuovo Cimento, [**21**]{}, 471 (1974)
L. Fonda, G. C. Ghirardi, and A. Rimini, Rep. Prog. Phys., [**41**]{}, 587 (1978).
L. A. Khalfin, Usp. Fiz. Nauk [**160**]{}, 185 (1990).
B. Misra and E. C. G. Sudarshan, J. Math. Phys. [**18**]{}, 756 (1977).
W. M. Itano, D. J. Heinzen, J. J. Bollinger, and D. J. Wineland, Phys. Rev. A [**41**]{}, 2295 (1990)
M. C. Fischer, B. Gutiérrez-Medina, and M. G. Raizen, Phys. Rev. Lett. [**87**]{}, 040402 (2001)
T. Nakanishi, K. Yamane, and M. Kitano, Phys. Rev. A [**65**]{}, 013404 (2001)
E. W. Streed et al., Phys. Rev. Lett. [**97**]{}, 260402 (2006)

N. Syassen et al., Science [**320**]{}, 1329 (2008)
see e.g. P. Facchi, H. Nakazato, and S. Pascazio, Phys. Rev. Lett. [**86**]{}, 2699 (2001); A.G. Kofman and G. Kurizki, Nature (London) **405**, 546 (2000), Phys. Rev. Lett. [**87**]{}, 270405 (2001); P. Facchi and S. Pascazio, Phys. Rev. Lett. [**89**]{}, 080401 (2002); A. Barone, G. Kurizki, and A.G. Kofman, Phys. Rev. Lett. [**92**]{}, 200403 (2004); I. E. Mazets, G. Kurizki, N. Katz, and N. Davidson, Phys. Rev. Lett. [**94**]{}, 190403 (2005).
P. Facchi and S. Pascazio, J. Phys. A: Math. Theor. **41**, 493001 (2008).
V. A. Brazhnyi, V. V. Konotop, V. M. Pérez-García, and H. Ott, Phys. Rev. Lett. [**102**]{}, 144101 (2009)
V. S. Shchesnovich, and V. V. Konotop, Phys. Rev. A [**81**]{}, 053611 (2010).
S. Longhi, Phys. Rev. Lett. [**97**]{}, 110402 (2006).
J. Rehacek, J. Perina, P. Facchi, S. Pascazio and L. Mista, Jr., Optics and Spectroscopy. [**91**]{}, 501 (2001).
Notice that the optical analogy of the linear quantum Zeno effect described in [@Longhi] was based on the existence of evanescent modes, i.e. on a different physical principle, and had a very different physical manifestation.
S. M. Jensen, IEEE J. Quantum Electron. [**18**]{}, 1580 (1982).
D. N. Christodoulides and R. I. Joseph, Opt. Lett. [**13**]{}, 794 (1988)
V.G. Taeed et al., Optics Express, [**15**]{}, 9205 (2007).
I.D. Chremmos, G. Kakarantas, and M. K. Usunoglu, Opt. Comm. [**251**]{}, 339 (2005).
Y. Ruan et al. Opt. Lett. [**30**]{}, 2605 (2005).
I.D. Aggarwal and J.S. Sanghera, J. Optoelectr. Adv. Mat. [**4**]{}, 665 (2002).
J.S. Sanghera et al., J. Optoelectr. Adv. Mat. [**8**]{}, 2148 (2006).
E. M. Graefe, H. J. Korsch, and A. E. Niederle, Phys. Rev. Lett. [**101**]{}, 150408 (2008).
A. Guo et al. Phys. Rev. Lett. [**103**]{}, 093902 (2009).
---
abstract: 'Knowledge mobilization and translation describes the process of moving knowledge from research and development (R&D) labs into environments where it can be put to use. There is increasing interest in understanding mechanisms for knowledge mobilization, specifically with respect to academia and industry collaborations. These mechanisms include funding programs, research centers, and conferences, among others. In this paper, we focus on one specific knowledge mobilization mechanism, the CASCON conference, the annual conference of the IBM Centre for Advanced Studies (CAS). The mandate of CAS when it was established in 1990 was to foster collaborative work between the IBM Toronto Lab and university researchers from around the world. The first CAS Conference (CASCON) was held in 1991, one year after CAS was formed. The focus of this annual conference was, and continues to be, bringing together academic researchers, industry practitioners, and technology users in a forum for sharing ideas and showcasing the results of the CAS collaborative work. We collected data about CASCON for the past 25 years, including information about papers, technology showcase demos, workshops, and keynote presentations. The resulting dataset, called “CASCONet”[^1], is available for analysis and integration with related datasets. Using CASCONet, we analyzed interactions between R&D topics and changes in those topics over time. Results of our analysis show how the domain of knowledge being mobilized through CAS has evolved over time. By making CASCONet available to others, we hope that the data can be used in additional ways to understand knowledge mobilization and translation in this unique context.'
author:
- |
Dixin Luo, Kelly Lyons\
Faculty of Information, University of Toronto, Toronto, ON, Canada\
[$\{$dixin.luo,kelly.lyons$\}$@utoronto.ca]{}
bibliography:
- 'bare\_conf.bib'
title: 'CASCONet: A Conference dataset'
---
knowledge mobilization; knowledge translation; CASCON; CASCONet; computer science and engineering; topic models; time series analysis;
Introduction
============
There is increasing interest in understanding how knowledge transfer and mobilization takes place. At the same time, the number of available datasets and accessible analysis tools is growing. Many efforts have been made to make conference datasets available and new techniques have been developed for analyzing conference data for the purpose of understanding outcomes such as knowledge mobilization. Vasilescu et al. [@vasilescu2013historical] present a dataset of software engineering conferences that contains historical data about the publications and the composition of program committees for eleven well-established conferences. This historical data is intended to assist conference steering committees or program committee chairs in assessing their selection process or to help prospective authors decide on conferences to which they should submit their work. Hayat and Lyons analyzed the social structure of the CASCON conference paper co-authorship network and proposed potential actions that might be taken to further develop the CASCON community [@hayat2010evolution]. They also analyzed the co-authorship ego networks of the ten most central authors in twenty-four years of papers published in the proceedings of CASCON using social network analysis and proposed a typology that differentiates three styles of co-authorship [@hayat2017typology]. Solomon presented an in-depth analysis of past and present publishing practices in academic computer science conference and journal publications (from DBLP) to suggest the establishment of a more consistent publishing standard [@solomon2009programmers]. Many other datasets about conference and journal publications have also been proposed, e.g., the NIPS dataset [@perrone2016poisson], the Microsoft Academic Graph (MAG) [@sinha2015overview], and the AMiner database[^2]. Interesting analyses have been proposed and carried out on some of these datasets. For example, a relatively new topic model is proposed in [@perrone2016poisson] and verified on the NIPS dataset: the dynamics of topics on NIPS over time are analyzed quantitatively, e.g., standard neural networks (“NNs backpropagation”) were extremely popular until the early 90s; however, after this, papers on this topic went through a steady decline, only to increase in popularity later on. Moreover, the popularity of deep architectures and convolutional neural networks (“deep learning”) steadily increased over these 29 years, to the point that deep learning was the most popular among all topics in NIPS in 2015. A heterogeneous entity graph of publications is proposed in MAG [@sinha2015overview], which has the potential to improve academic information retrieval and recommendation systems.
In this paper, we describe a specific conference dataset and demonstrate how analyses performed on that dataset can provide insights into mechanisms of knowledge transfer. We consider the CASCON conference, the annual conference of the IBM Centre for Advanced Studies (CAS). The mandate of CAS when it was established in $1990$ was to foster collaborative work between IBM Toronto and university researchers from around the world [@perelgut1997overview]. It is a unique knowledge mobilization and translation environment, specifically designed to facilitate the transfer of technology from university research into IBM products and processes. The CASCONet dataset presented in this paper is unique in that it includes not only data about authors and papers but also data about all aspects of the CASCON conference.
![image](Schema.png){width="0.9\linewidth"}
The first CAS Conference (CASCON) was held in $1991$, one year after CAS was formed. The focus of this conference was, and continues to be, bringing together researchers, government employees, industry practitioners, and technology users in a forum for sharing ideas and results of CAS collaborative work [@perelgut1997overview]. The CASCON conference is an interesting object of study because it is an annual conference of the IBM Center for Advanced Studies (CAS), a unique center for knowledge mobilization and translation. Furthermore, rather than focusing on a narrow topic area in computer science research, CASCON’s mandate is broader, covering many topics in computer science and software engineering with a focus on industry/university collaborations. It is therefore interesting to understand what kinds of unique knowledge mobilization structures can be identified by analyzing data about CASCON. The central data element of the CASCONet dataset is “person” and each person’s role in CASCON activities is described through the data. CASCONet includes the author role and provides title, author, and publication year for over 800 CASCON papers. The workshop chair role includes workshop title, workshop chair, and year. The keynote data associates people (presenters) with keynote titles and years. Finally, the demos data links demo presenters to titles and years. The people, papers, themes of the workshops, topics of the keynote presentations, and the products and tools presented in the demos over the past 25 years reflect the evolving processes and knowledge mobilization in the CASCON community and may provide a glimpse into the field of computer science (advanced methods and techniques, challenges and urgent problems, innovation, and applications) over time. As an example of the kinds of analyses that can be performed on this dataset, we present basic statistics of CASCON and analyze the temporal dynamics of topics presented at CASCON. We believe that this dataset and analyses such as these may provide researchers of computer science and social science with a new resource to study the co-evolution of academic and industry communities.
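To make the structure of CASCONet concrete, the sketch below shows one possible way to load the four role tables and join them on the central “person” element. The file names and column names are illustrative assumptions for this sketch only; they are not the exact layout of the published GitHub release.

```python
# Minimal sketch: load the four CASCONet role tables and stack them into one long
# table keyed on "person". File and column names are assumptions for illustration.
import pandas as pd

ROLE_FILES = {
    "author": "papers.csv",
    "workshop_chair": "workshops.csv",
    "keynote_speaker": "keynotes.csv",
    "demo_presenter": "demos.csv",
}

def load_casconet(path="CASCONet"):
    """Return one long table with columns (person, title, year, role)."""
    frames = []
    for role, fname in ROLE_FILES.items():
        df = pd.read_csv(f"{path}/{fname}")
        df["role"] = role
        frames.append(df[["person", "title", "year", "role"]])
    return pd.concat(frames, ignore_index=True)

if __name__ == "__main__":
    records = load_casconet()
    # people who appear in all four roles (cf. the four such people reported below)
    roles_per_person = records.groupby("person")["role"].nunique()
    print(roles_per_person[roles_per_person == 4].index.tolist())
```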
Properties of CASCONet
======================
The first CASCON took place in $1991$. More than $1500$ researchers, technologists, developers, and decision makers attend CASCON each year. CASCONet $(1991 - 2016)$ contains data about a total of $2517$ people who have written $846$ papers, presented $1212$ demos, delivered $107$ keynote presentations, and organized $796$ workshops. Fig. \[fig:schema\] shows the schema of CASCONet.
Topic                    Top 10 words
------------------------ ----------------------------------------------------------------------------------------------------
Users and Business       User, Internet, task, trust, model, resource, web, information, database, search
Cloud and Web Services   Cloud, web, service, application, design, development, integral, user, code, URL
Systems                  System, model, distributed, computing, database, user, management, paper, application, information
Programs and Code        Program, performance, compiler, language, parallel, class, oriented, object, code, Java
Applications             Interaction, application, user, mobile, interface, device, information, visual, tool, support
Networks and Security    Security, network, cache, local, communication, enterprise, layer, server, grid, privacy
Software                 Software, design, analysis, engine, tool, approach, test, development, process, performance
Databases                Database, usage, system, transaction, optimization, query, DB2, data, user, system
Data Analysis            Data, analytic, user, mining, decision, event, distribution, information, business, system
Algorithms               Algorithm, problem, performance, architecture, system, cluster, design, time, schedule, test

  : The ten higher-level topics extracted from CASCON paper titles and the ten words with the highest $p(\mbox{word}|\mbox{topic})$ in each topic.[]{data-label="table:topwords"}
[**Person.**]{} According to the dataset, $24.0\%$ of the people who authored papers at CASCON have published more than one paper in CASCON. The largest number of papers published by a single person is $20$. There are $33$ people whose time span of authoring papers in CASCON is greater than $10$ years. There are only $4$ people who have participated in CASCON in all roles, as author, workshop chair, demo organizer, and keynote speaker. [**Papers.**]{} Fig. \[fig:paperyear\] shows the number of CASCON papers published each year. The CASCON main conference has accepted a relatively stable number of papers each year since $1997$. In the early years, a greater number of papers were accepted to CASCON. Then, in $2006$ and $2007$, in addition to the main conference, an IBM Dublin CAS symposium was held. An “emerging technology track” and a “short paper track” were added in $2013$. In CASCONet, the papers are identified by their types: “technical papers”; “short papers”; “emerging papers”; or “symposium papers”. [**Workshops and demos.**]{} Fig. \[fig:workshopyear\] illustrates the number of workshops held at CASCON each year. Workshops provide attendees with opportunities to learn new technologies, learn about concepts, identify collaboration opportunities, share results, and engage in discussions around specific topics. The dynamics of the number of workshops is quite different from that of the number of papers published each year. The years between $2000$ and $2010$ saw a greater number of workshops than before or since. Fig. \[fig:demoyear\] shows the numbers of demos over the years. Note that no records are available for demos exhibited between $1998$ and $2000$ or for $2010$ and $2011$.
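Statistics such as those above can be reproduced directly from a paper-author table. The sketch below assumes a DataFrame with columns (person, title, year); this layout is an illustrative assumption rather than the exact published schema.

```python
# Sketch of the per-author and per-year statistics quoted above, computed from a
# table with one row per (person, paper). Column names are assumed for illustration.
import pandas as pd

def author_statistics(papers: pd.DataFrame) -> dict:
    per_author = papers.groupby("person").agg(
        n_papers=("title", "count"),
        first_year=("year", "min"),
        last_year=("year", "max"),
    )
    span = per_author["last_year"] - per_author["first_year"]
    return {
        "share_with_multiple_papers": float((per_author["n_papers"] > 1).mean()),
        "max_papers_by_one_person": int(per_author["n_papers"].max()),
        "authors_with_span_over_10_years": int((span > 10).sum()),
        "papers_per_year": papers.groupby("year")["title"].count().to_dict(),
    }
```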
Topics of CASCONet Papers Over Time
===================================
In this section, we establish a topic model for CASCON papers, with the aim to capture the evolution of topics over time, and quantify and analyze the influences between pairs of topics. This is one of many kinds of analyses that can be carried out on the CASCONet dataset.
[**Topic Model of Papers.**]{} The topic model of CASCON papers was extracted using Latent Dirichlet allocation (LDA) [@blei2003latent]. We collected the nouns, verbs, and adjectives in the titles of all the $M=846$ CASCON papers from $1991$ to $2016$, and built a word corpus. $N$ “topics” were extracted as bags of words using LDA, by calculating the conditional probabilities $[p(\mbox{word}|\mbox{topic})]$ and $[p(\mbox{topic}|\mbox{paper})]$. We first set $N=50$ and learned the LDA model. Then, using $[p(\mbox{word}|\mbox{topic})]$ as features of topics, we clustered topics into $N=10$ higher-level topics based on the correlations among the topic features. The $10$ words having the highest conditional probability $p(\mbox{word}|\mbox{topic})$ in each topic are shown in Table \[table:topwords\]. We manually summarized the semantic meaning of topics and labeled them as “users and business”, “cloud and web services”, “systems”, “programs and code”, “applications”, “networks and security”, “software”, “databases”, “data analysis” and “algorithms”. For each paper, we can take the topic with the highest conditional probability as the theme of the paper. We can then calculate the distribution of topics over years by counting the number of papers corresponding to different topics in each year. Fig. \[fig:topdis\] visualizes the distribution of topics over the years. In the early years of CASCON ($1991$-$1995$), we see that “software”, “systems”, and “programs and code” are the predominant topics: papers about these three topics occupy most of the accepted submissions. Between 1991 and 1995, the average percentages of CASCON papers about “software”, “systems”, and “programs and code” are $28.1\%$, $17.7\%$, and $17.8\%$, respectively. With the development of interest in cloud computing [@armbrust2010view], more papers in this area appeared in CASCON starting in $2008$. Moreover, “software” is one of the main topics in nearly all of the $26$ CASCON conferences, while the numbers of papers in “users and business”, “networks and security” and “algorithms” are relatively small but stable over all years.
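The sketch below outlines this two-stage procedure: LDA with $N=50$ topics fitted on the title words, followed by clustering of the topic-word distributions into $10$ higher-level topics. The preprocessing, the hyperparameters, and the use of correlation-based average-linkage clustering are illustrative assumptions; the exact pipeline used for the paper may differ.

```python
# Two-stage topic extraction sketch: (1) LDA with 50 topics on paper titles,
# (2) merge the 50 fine-grained topics into 10 groups by the correlation of
# their word distributions. Hyperparameters are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def extract_topics(titles, n_lda_topics=50, n_groups=10, top_k=10):
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(titles)                    # papers x words
    lda = LatentDirichletAllocation(n_components=n_lda_topics, random_state=0)
    doc_topic = lda.fit_transform(counts)                        # p(topic | paper)
    topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)  # p(word | topic)

    # cluster the fine-grained topics by the correlation of their word profiles
    dist = np.clip(1.0 - np.corrcoef(topic_word), 0.0, None)
    np.fill_diagonal(dist, 0.0)
    groups = fcluster(linkage(squareform(dist, checks=False), method="average"),
                      t=n_groups, criterion="maxclust") - 1

    vocab = np.array(vectorizer.get_feature_names_out())
    top_words = {g: vocab[np.argsort(topic_word[groups == g].mean(axis=0))[::-1][:top_k]].tolist()
                 for g in range(n_groups)}
    return doc_topic, groups, top_words
```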
[**Granger Causality Analysis of Topics.**]{} After identifying the topics hidden in the papers’ titles, we further analyzed the causal relationships among the topics. We developed a dynamic model of topics based on a vector auto-regressive (VAR) model [@lutkepohl2011vector] and analyzed the Granger causality [@granger1969investigating; @basu2015network; @xu2016learning] of topics based on this model. Specifically, we calculated the distributions of the topics over the years according to the results of the LDA model, denoted as $[\bm{p}_1, ..., \bm{p}_T]\in\mathbb{R}^{N\times T}$. Considering $\{\bm{p}_t\}_{t=1}^T$ as a time series, we describe its transition process by learning the following first-order vector auto-regressive model: $$\begin{aligned}
\label{var}
\begin{aligned}
\min_{\bm{A}}~\sum_{t=1}^{T-1}\|\bm{p}_{t+1}-\bm{A}\bm{p}_{t}\|_2^2 +\lambda\|\bm{A}\|_1.
\end{aligned}\end{aligned}$$ Here $\bm{A}=[a_{ij}]$ is a transition matrix, whose element $a_{ij}$ measures the influence the $j$-th topic imposes on the $i$-th topic. If $a_{ij}>0$ ($<0$), the $j$-th topic is understood to trigger (suppress) the $i$-th topic at the next time stamp (in this case, year). $a_{ij}=0$ if, and only if, the $i$-th topic is locally independent of the $j$-th topic. The first term of the objective function minimizes the estimation error of the time series. Furthermore, considering the fact that generally a topic is only related to a subset of topics [@luo2016learning; @luo2015multi], we impose a sparsity-inducing penalty, $\|\bm{A}\|_1=\sum_{i,j}|a_{ij}|$, on the transition matrix $\bm{A}$. The VAR model can be learned effectively by using the alternating direction method of multipliers (ADMM).
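A minimal stand-in for this estimation step is sketched below. It minimizes the objective in Eq. (\[var\]) by proximal gradient descent (ISTA) instead of ADMM, which keeps the code short while targeting the same sparse VAR(1) objective; $P$ is the $N\times T$ matrix of yearly topic distributions.

```python
# Sparse VAR(1) fit for Eq. (var): minimize sum_t ||p_{t+1} - A p_t||^2 + lam*||A||_1.
# Solved here with ISTA (proximal gradient) as a simple stand-in for the ADMM solver.
import numpy as np

def fit_sparse_var(P: np.ndarray, lam: float = 0.1, n_iter: int = 5000) -> np.ndarray:
    X, Y = P[:, :-1], P[:, 1:]                       # p_t and p_{t+1}, both N x (T-1)
    A = np.zeros((P.shape[0], P.shape[0]))
    step = 1.0 / (2.0 * np.linalg.norm(X @ X.T, 2) + 1e-12)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = 2.0 * (A @ X - Y) @ X.T               # gradient of the quadratic term
        Z = A - step * grad
        A = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)   # soft-thresholding
    return A
```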
According to the work in [@basu2015network; @xu2016learning], the transition matrix is actually the adjacency matrix of a Granger causality graph $G(\mathcal{N},\mathcal{E})$ [@granger1969investigating], where the node set $\mathcal{N}=\{1,...,N\}$ contains topics and the edge set $\mathcal{E}$ indicates the Granger causality of topics. For $i,j\in\mathcal{N}$, we say that the $j$-th topic “Granger causes” the $i$-th topic if and only if $j\rightarrow i\in \mathcal{E}$, which is equivalent to $a_{ij}\neq 0$. Fig. \[fig:TransMat\] shows the inferred transition matrix, where the topics are sorted in descending order according to their self-triggering intensity $a_{ii}$, $i=1,...,10$. We find several interesting phenomena.
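Reading the graph off the fitted transition matrix then amounts to thresholding its entries (the discussion below ignores entries with $|a_{ij}|<0.02$). A small illustrative sketch, using networkx purely for convenience, after which we turn to the observed patterns:

```python
# Build the Granger-causality graph G(N, E): an edge j -> i is added whenever the
# magnitude of a_ij exceeds a small threshold; its sign records trigger vs. suppress.
import networkx as nx

def granger_graph(A, topic_names, threshold=0.02):
    G = nx.DiGraph()
    G.add_nodes_from(topic_names)
    for i, target in enumerate(topic_names):
        for j, source in enumerate(topic_names):
            if abs(A[i, j]) >= threshold:
                G.add_edge(source, target, weight=float(A[i, j]))
    return G
```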
- **Endogenous and exogenous topics.** In Fig. \[fig:TransMat\], the element $a_{ij}$ is labeled as black when $|a_{ij}|<0.02$, which indicates that the triggering from $j$ to $i$ is so weak that we can ignore the Granger causality from $j$ to $i$. We find that “cloud and web services”, “systems”, “applications”, “programs and code” and “software” have positive $a_{ii}$ while the other five topics have $a_{ii}=0$. This means that these five topics (“cloud and web services”, “systems”, “applications”, “programs and code” and “software”) are endogenous topics of CASCON, which, once introduced, tend to appear continuously over the years, while the other five topics are exogenous topics influenced by endogenous topics. In other words, for each exogenous topic, its appearance is mainly caused by other topics rather than itself, and its appearance in any given year does not contribute to its future appearance. From this perspective, it seems that CASCON is a conference focused on the fundamental engineering problems of computer science, e.g., systems and software engineering. The higher-level problems, such as user experience, network security, data analysis and so on, seem to be more complementary to the conference. This observation may relate to the focus of the IBM Toronto Lab and knowledge mobilization within the CAS community.
- **Positive and negative triggering patterns.** We do not impose any nonnegative constraints on the optimization problem (\[var\]) so that some of the elements of $\bm{A}$ are negative. We find that most of the $a_{ij}$’s are positive (light green and yellow in Fig. \[fig:TransMat\]) while some of them are negative (dark green ones in Fig. \[fig:TransMat\]). Positive triggering patterns indicate that the corresponding two topics are highly correlated with each other. For example, the “Security” topic is likely triggered by other topics because the security problem is related to many fields of computer science, e.g., system security, cloud security, client information security, and security algorithms. On the other hand, negative suppressing patterns may indicate that the corresponding topics have a competitive relationship in CASCON. It is natural that when the time available for paper presentations at a conference is fixed, the total number of publications is limited as well. As a result, when the number of papers about one topic is dominant at the conference, the submission and acceptance of papers on other topics may be suppressed accordingly. For example, the “systems” topic is a technical topic while the “users and business” topic is a business-related topic. These two topics are competitive: papers on systems tend to be selected over papers on business topics and vice versa. The inverse phenomenon would happen when organizers want to increase business interactions.
![The transition matrix of topics.[]{data-label="fig:TransMat"}](TransMat2.pdf){width="0.7\linewidth"}
Conclusion
==========
CASCON is a long-running conference held by the IBM Canada Lab Centre for Advanced Studies (CAS), which provides us with an interesting dataset, “CASCONet”. In this paper, we introduced the data we collected and presented some preliminary analyses. Our CASCONet dataset is available at <https://github.com/iDBKMTI/CASCONet>. In the future, we will collect more data about IBM CAS and study the correlation between CASCON and the development of CAS itself, in order to better understand the mechanisms of collaboration, knowledge translation, and output.
Acknowledgment {#acknowledgment .unnumbered}
==============
This research was partially funded by an NSERC Strategic Partnership Grant.
[^1]: published on GitHub at <https://github.com/iDBKMTI/CASCONet>
[^2]: <https://aminer.org/>
---
abstract: 'We use the framework of a relativistic constituent quark model to study the semileptonic transitions of the $B_c$ meson into $(\bar c c)$ charmonium states where $(\bar c c)=\eta_c\,(^1S_0),$ $J/\psi\, (^3S_1),$ $\chi_{c0}\, (^3P_0),$$\chi_{c1}\, (^3P_1),$ $h_c\, (^1P_1),$ $\chi_{c2}\, (^3P_2),$ $\psi\, (^3D_2)$. We compute the $q^2$–dependence of all relevant form factors and give predictions for their semileptonic $B_c$ decay modes including also their $\tau$-modes. We derive a formula for the polar angle distribution of the charged lepton in the $(l\nu_l)$ c.m. frame and compute the partial helicity rates that multiply the angular factors in the decay distribution. For the discovery channel $B_c\to J/\psi(\rightarrow \mu^+ \mu^-) l \nu$ we compute the transverse/longitudinal composition of the $ J/\psi$ which can be determined by an angular analysis of the decay $ J/\psi \rightarrow \mu^+ \mu^-$. We compare our results with the results of other calculations.'
author:
- 'Mikhail A. Ivanov'
- 'Juergen G. Körner'
- Pietro Santorelli
title: 'Semileptonic decays of $B_c$ mesons into charmonium states in a relativistic quark model'
---
Introduction {#s:intro}
============
In 1998 the CDF Collaboration reported on the observation of the bottom-charm $B_c$ meson at Fermilab [@CDF]. The $B_c$ mesons were found in an analysis of the semileptonic decays $B_c\to J/\psi l \nu$ with the $J/\psi$ decaying into muon pairs. Values for the mass and the lifetime of the $B_c$ meson were given as $M(B_c)=6.40\pm 0.39\pm 0.13$ GeV and $\tau(B_c)=0.46^{+0.18}_{-0.16}({\rm stat})\pm 0.03({\rm syst})\cdot
10^{-12}$ s, respectively. The first $B_c$ mesons are now also starting to be seen in the Run II data from the Tevatron [@lucchesi04; @cdf04]. Much larger samples of $B_c$ mesons and more information on their decay properties are expected from the current Run II at the Tevatron and from future experiments at the LHC starting in 2007. In particular, this holds true for the dedicated detectors BTeV and LHCb, which are specifically designed for the analysis of B physics and where one expects to see up to $10^{10}$ $B_c$ events per year.
The study of the $B_c$ meson is of great interest due to some of its outstanding features. It is the lowest bound state of two heavy quarks (charm and bottom) with open (explicit) flavor. As far as the bound state characteristics are concerned the $B_c$ meson is quite similar to the $J^{PC}=0^{-+}$ states $\eta_c$ and $\eta_b$ in the charmonium ($c\bar c$-bound state) and the bottomium ($b\bar b$-bound state) sector. However, the $\eta_c$ and $\eta_b$ have hidden (implicit) flavor and decay strongly and electromagnetically whereas the $B_c$-meson decays weakly since it lies below the $B\bar D$-threshold.
The $B_c$ meson and its decays have been widely studied in the literature. The theoretical status of the $B_c$-meson was reviewed in [@Gershtein:1998mb]. The $B_c$ lifetime and decays were studied in the pioneering paper [@Lusignoli:1990ky]. The exclusive semileptonic and nonleptonic (assuming factorization) decays of the $B_c$-meson were calculated in a potential model approach [@Chang:1992pt]. The binding energy and the wave function of the $B_c$-meson were computed by using a flavor-independent potential with the parameters fixed by the $c\bar c$ and $b \bar b$ spectra and decays. The same processes were also studied in the framework of the Bethe-Salpeter equation in [@AMV], and, in the relativistic constituent quark model formulated on the light-front in [@AKNT]. Three-point sum rules of QCD and NRQCD were analyzed in [@KLO] and [@Kiselev:2000pp] to obtain the form factors of the semileptonic decays of $B^+_c\to J/\psi(\eta_c)l^+\nu$ and $B^+_c\to B_s(B_s^\ast)l^+\nu$. As shown by the authors of [@Jenkins], the form factors parameterizing the $B_c$ semileptonic matrix elements can be related to a smaller set of form factors if one exploits the decoupling of the spin of the heavy quarks in the $B_c$ and in the mesons produced in the semileptonic decays. The reduced form factors can be evaluated as an overlap integral of the meson wave-functions which can be obtained, for example, using a relativistic potential model. This was done in [@Colangelo], where the $B_c$ semileptonic form factors were computed and predictions for semileptonic and non-leptonic decay modes were given.
In [@Ivanov:2000aj] we focused on its exclusive leptonic and semileptonic decays which are sensitive to the description of long distance effects. From the semileptonic decays one can obtain results on the corresponding two-body non-leptonic decay processes in the so-called factorization approximation. The calculations have been done within our relativistic constituent quark model based on an effective Lagrangian describing the coupling of hadrons $H$ to their constituent quarks. The relevant coupling strength is determined by the compositeness condition $Z_H=0$ [@SWH; @EI] where $Z_H$ is the wave function renormalization constant of the hadron $H$.
The relativistic constituent quark model was also employed in a calculation of the exclusive rare decays $B_c\to D(D^\ast)\bar l l$ [@Faessler:2002ut] and of the nonleptonic decays $B_c\to D_s \overline {D^0}$ and $B_c\to D_s D^0$ [@Ivanov:2002un]. In the latter case we confirmed that the nonleptonic decays $B_c\to D_s \overline {D^0}$ and $B_c\to D_s D^0$ are well suited to extract the CKM angle $\gamma$ through amplitude relations, as was originally proposed in [@masetti1992; @fleischer2000]. The reason is that the branching fractions into the two channels are of the same order of magnitude.
In this paper we continue the study of $B_c$ decay properties and calculate the branching rates of the semileptonic decays $B_c\to (\bar c c)\,l\nu$ with $(\bar c c)=\eta_c\,(^1S_0),$ $J/\psi\, (^3S_1),$ $\chi_{c0}\, (^3P_0),$ $\chi_{c1}\, (^3P_1),$ $h_c\, (^1P_1),$ $\chi_{c2}\, (^3P_2),$ $\psi\, (^3D_2)$. We compare our results with the results of [@Chang:1992pt; @Chang:2001pm] where it was shown that these decay rates are quite sizable and may be accessible in Run II of the Tevatron and/or the LHC. Two-particle decays of the $B_c$-meson into charmonium states have been studied before in [@Kiselev:2001zb] by using the factorization of hard and soft contributions. The weak decays of the $B_c$-meson to charmonium have been studied in the framework of the relativistic quark model based on the quasipotential approach in [@Ebert:2003cn]. In this paper we compute all form factors of the above semileptonic $B_c$-transitions and give predictions for various semileptonic $B_c$ decay modes including their $\tau$-modes. From a general point of view we would like to remark that the semileptonic decays of the $\tau$-lepton have been studied within perturbative QCD. This has allowed one to determine the strong coupling constant with a high accuracy (see e.g. [@Korner:2000xk]). We have improved on our previous calculation [@Ivanov:2000aj] in that we no longer employ the so-called impulse approximation. In the impulse approximation one assumes that the vertex functions depend only on the loop momentum flowing through the vertex. Dropping the impulse approximation means that the vertex function can also depend on external momenta according to the flow of momentum through the vertex. A comparison with the results for the decays into the para- and ortho-charmonium states $(\bar c c)=\eta_c\,(^1S_0),$ $J/\psi$ $(^3S_1)$ [@Ivanov:2000aj], which were obtained in the impulse approximation, shows a $\approx 10\%$ downward effect in the rates when the impulse approximation is dropped.
Bound state representation of the charmonium states {#s:bound}
===================================================
The charmonium states treated in this paper are listed in Table \[tab:states\]. We have also included the purported $D$–wave state $\psi(3836)$ whose quantum numbers have not been established yet. Table \[tab:states\] also contains the quark currents used to describe the coupling of the respective charmonium states to the charm quarks. The masses of the charmonium states listed in Table \[tab:states\] are taken from the PDG [@Eidelman:2004wy].
-----------------------------------------------------------------------------------------------------------------------------------------------
quantum number               name                 quark current                                                                        mass (GeV)
---------------------------- -------------------- ------------------------------------------------------------------------------------ ----------
$J^{PC}=0^{-+}$ (S=0, L=0)   $^1S_0=\eta_c$       $\bar q\, i\gamma^5\, q $                                                            2.980

$J^{PC}=1^{--}$ (S=1, L=0)   $^3S_1=J/\psi$       $\bar q\,\gamma^\mu\, q $                                                            3.097

$J^{PC}=0^{++}$ (S=1, L=1)   $^3P_0=\chi_{c0}$    $\bar q\, q $                                                                        3.415

$J^{PC}=1^{++}$ (S=1, L=1)   $^3P_1=\chi_{c1}$    $\bar q\, \gamma^\mu\gamma^5\, q $                                                   3.511

$J^{PC}=1^{+-}$ (S=0, L=1)   $^1P_1=h_c(1P)$      $\bar q\, \stackrel{\leftrightarrow}{\partial}^{\,\mu} \gamma^5\, q $                3.526

$J^{PC}=2^{++}$ (S=1, L=1)   $^3P_2=\chi_{c2}$    $(i/2)\,\bar q\, \left(\gamma^\mu \stackrel{\leftrightarrow}{\partial}^{\,\nu} +\gamma^\nu \stackrel{\leftrightarrow}{\partial}^{\,\mu}\right)\,q $   3.557

$J^{PC}=2^{--}$ (S=1, L=2)   $^3D_2=\psi(3836)$   $(i/2)\,\bar q\left(\gamma^\mu\gamma^5 \stackrel{\leftrightarrow}{\partial}^{\,\nu} + \gamma^\nu \gamma^5 \stackrel{\leftrightarrow}{\partial}^{\,\mu} \right)q $   3.836
-----------------------------------------------------------------------------------------------------------------------------------------------

  : The charmonium states $^{2S+1}L_{\,J}$. We use the notation $\stackrel{\leftrightarrow}{\partial}= \stackrel{\rightarrow}{\partial}-\stackrel{\leftarrow}{\partial}$.[]{data-label="tab:states"}
Next we write down the Lagrangian describing the interaction of the charmonium fields with the quark currents. We also give the definition of the one-loop self-energy or mass insertions (called mass functions in the following) $\widetilde\Pi(p^2)$ of the relevant charmonium fields.
We can be quite brief in the presentation of the technical details of our calculation since it is patterned after the calculation presented in [@Ivanov:2000aj] which contains more calculational details. We treat the different spin cases $(S=0,1,2)$ in turn.
$$\begin{aligned}
\label{eq:s=0}
{\cal L}_{\rm \, S=0}(x) &=& \frac{1}{2}\,\phi(x)(\Box-m^2)\phi(x)
+g\,\phi(x)\,J_q(x), \hspace{1cm} \Box=-\partial^\alpha\partial_\alpha.
\\
&&\nonumber\\
\Pi(x-y) &=& i\,g^2\,\langle\, T\left\{J_q(x)\,J_q(y)\right\}\,\rangle_0,
\nonumber\\
\widetilde\Pi(p^2)&=&\int d^4x\, e^{-ipx}\,\Pi(x)\equiv \frac{3\,g^2}{4\pi^2}
\widetilde\Pi_0(p^2),\nonumber\\
&&\nonumber\\
Z &= & 1-\widetilde\Pi^{\,\prime}(m^2)=
1-\frac{3\,g^2}{4\pi^2}\widetilde\Pi^{\,\prime}_0(m^2)=0,\nonumber\\
&&\nonumber\\
J_q(x)&=&\int\!\!\!\int\! dx_1dx_2\, F_{\rm cc}(x,x_1,x_2)\,
\bar q(x_1)\,\Gamma\, q(x_2)\,\qquad (\Gamma=i\,\gamma^5,I),\nonumber\\
F_{cc}(x,x_1,x_2)&=&\delta\left(x-\frac{x_1+x_2}{2}\right)
\Phi_{cc}\left((x_1-x_2)^2\right),\nonumber\\
\Phi_{cc}\left(x^2\right)&=&\int\!\frac{d^4q}{(2\,\pi)^4}\, e^{-iqx}\,
\widetilde\Phi_{cc}\left(-q^2\right).\nonumber\end{aligned}$$
$\widetilde\Pi^{\,\prime}(m^2)$ is the derivative of the mass function $\widetilde\Pi(p^2)$.
$$\begin{aligned}
\label{eq:s=1}
{\cal L}_{\rm\, S=1}(x) &=&-\, \frac{1}{2}\,\phi_\mu(x)(\Box-m^2)\phi^\mu(x)
+g\,\phi_\mu(x)\,J^\mu_q(x),\\
\partial_\mu\phi^\mu(x) &=& 0
\qquad ({\rm leaving \, three \, independent \, components}),
\nonumber\\
&&\nonumber\\
\Pi^{\mu\nu}(x-y) &=& -\,i\,g^2\,
\langle\, T\left\{J^\mu_q(x)\,J^\nu_q(y)\right\}\,\rangle_0,\nonumber\\
\widetilde\Pi^{\mu\nu}(p)&=&\int d^4x\, e^{-ipx}\,\Pi^{\mu\nu}(x)
=g^{\mu\nu}\,\widetilde\Pi^{(1)}(p^2)+p^\mu p^\nu \widetilde\Pi^{(2)}(p^2),
\nonumber\\
\widetilde\Pi^{(1)}(p^2) &\equiv & \frac{3\,g^2}{4\pi^2}\,\widetilde\Pi_1(p^2),
\qquad
Z = 1-\frac{3\,g^2}{4\pi^2}\widetilde\Pi^{\,\prime}_1(m^2)=0,\nonumber\\
&&\nonumber\\
J^\mu_q(x)&=&\int\!\!\!\int\! dx_1dx_2\, F_{\rm cc}(x,x_1,x_2)\,
\bar q(x_1)\,\Gamma^\mu\, q(x_2),\,
\nonumber\\
\Gamma^\mu &=&\gamma^\mu,\,\gamma^\mu\gamma^5,\,
\stackrel{\leftrightarrow}{\partial}^{\,\mu}\gamma^5.
\nonumber\end{aligned}$$
The spin 1 polarization vector $\epsilon^{(\lambda)}_\mu(p)$ satisfies the constraints:
$$\def\arraystretch{2.0}
\begin{array}{ll}
\displaystyle \epsilon^{(\lambda)}_\mu(p) \, p^\mu = 0 &
\hspace*{1cm} {\rm transversality},
\\
\displaystyle \sum\limits_{\lambda=0,\pm}\epsilon^{(\lambda)}_\mu(p)
\epsilon^{\dagger\,(\lambda)}_\nu (p)
=-g_{\mu\nu}+\frac{p_\mu\,p_\nu}{m^2} &
\hspace*{1cm} {\rm completeness},
\\
\displaystyle\epsilon^{\dagger\,(\lambda)}_\mu \epsilon^{(\lambda')\,\mu}
=-\delta_{\lambda \lambda'} &
\hspace*{1cm} {\rm orthonormality}.
\end{array}$$
$$\begin{aligned}
\label{eq:s=2}
{\cal L}_{\rm\, S=2}(x) &=&
\frac{1}{2}\,\phi_{\mu\nu}(x)(\Box-m^2)\phi^{\mu\nu}(x)
+g\,\phi_{\mu\nu}(x)\,J_q^{\mu\nu}(x).\\
\phi^{\mu\nu}(x) &=&\phi^{\nu\mu}(x) , \quad
\partial_\mu\phi^{\mu\nu}(x) = 0, \quad
\phi^{\mu}_{\mu}(x) = 0, \quad
({\rm leaving \,\, 5 \,\,independent\,\, components}),
\nonumber\\
&&\nonumber\\
\Pi^{\mu\nu,\alpha\beta}(x-y) &=&
i\,g^2\,<T\left\{ J^{\mu\nu}_q(x)\,J^{\alpha\beta}_q(y) \right\}>_0,
\nonumber\\
&&\nonumber\\
\widetilde\Pi^{\mu\nu,\alpha\beta}(p)&=&
\int d^4x\, e^{-ipx}\,\Pi^{\mu\nu,\alpha\beta}(x)=\frac{1}{2}
\left(g^{\mu\alpha}\,g^{\nu\beta}+g^{\mu\beta}\,g^{\nu\alpha}\right)\,
\widetilde\Pi^{(1)}(p^2)
\nonumber\\
&+& g^{\mu\nu}\,g^{\alpha\beta}\,\widetilde\Pi^{(2)}(p^2)
+( g^{\mu\nu}\,p^\alpha p^\beta
+g^{\mu\alpha}\,p^\nu p^\beta + g^{\mu\beta}\,p^\nu p^\alpha)\,
\widetilde\Pi^{(3)}(p^2)
+p^\mu p^\nu p^\alpha p^\beta\,\widetilde\Pi^{(4)}(p^2) ,
\nonumber\\
&&\nonumber\\
\widetilde\Pi^{(1)}(p^2) &\equiv & \frac{3\,g^2}{4\pi^2}\,\widetilde\Pi_2(p^2),
\qquad
Z = 1-\frac{3\,g^2}{4\pi^2}\widetilde\Pi^{\,\prime}_2(m^2)=0,\nonumber\\
&&\nonumber\\
J^{\mu\nu}_q(x)&=&\int\!\!\!\int\! dx_1dx_2\, F_{\rm cc}(x,x_1,x_2)\,
\bar q(x_1)\,\Gamma^{\mu\nu}\, q(x_2),
\nonumber\\
\Gamma^{\mu\nu} &=&\frac{i}{2}\,
\left(\gamma^\mu\stackrel{\leftrightarrow}{\partial}^{\,\nu}
+\gamma^\nu\stackrel{\leftrightarrow}{\partial}^{\,\mu}\right),\,\,\,\,
\frac{i}{2}\,
\left(\gamma^\mu\gamma^5\stackrel{\leftrightarrow}{\partial}^{\,\nu}
+\gamma^\nu\gamma^5\stackrel{\leftrightarrow}{\partial}^{\,\mu}\right).
\nonumber\end{aligned}$$
The spin 2 polarization vector $\epsilon^{(\lambda)}_{\mu\nu}(p)$ satisfies the constraints: $$\def\arraystretch{1.7}
\begin{array}{ll}
\displaystyle\epsilon^{(\lambda)}_{\mu\nu}(p) = \epsilon^{(\lambda)}_{\nu\mu}(p)&
\hspace*{1cm} {\rm symmetry},
\\
\displaystyle\epsilon^{(\lambda)}_{\mu\nu}(p)\, p^\mu = 0 &
\hspace*{1cm} {\rm transversality},
\\
\displaystyle\epsilon^{(\lambda)}_{\mu\mu}(p) = 0 &
\hspace*{1cm} {\rm tracelessness},
\\
\displaystyle\sum\limits_{\lambda=0,\pm 1, \pm 2}\epsilon^{(\lambda)}_{\mu\nu}
\epsilon^{\dagger\,(\lambda)}_{\alpha\beta}
=\frac{1}{2}\left(S_{\mu\alpha}\,S_{\nu\beta}
+S_{\mu\beta}\,S_{\nu\alpha}\right)
-\frac{1}{3}\,S_{\mu\nu}\,S_{\alpha\beta} &
\hspace*{1cm} {\rm completeness},
\\
\displaystyle\epsilon^{\dagger\,(\lambda)}_{\mu\nu} \epsilon^{(\lambda')\,\mu\nu}
=\delta_{\lambda \lambda'} &
\hspace*{1cm} {\rm orthonormality},
\end{array}$$ where $$S_{\mu\nu} = -g_{\mu\nu}+\frac{p_\mu\,p_\nu}{m^2}.$$
We use the local representation of the quark propagator when calculating the Fourier transforms of the mass functions. The local quark propagator is given by $$\label{local}
S_q(x-y) = \langle\,T\{q(x)\,\bar q(y)\}\,\rangle_0=
\int\frac{d^4 k}{(2\,\pi)^4\,i}\,e^{-i k\cdot(x-y)}\,\widetilde S_q(k), \qquad
\widetilde S_q(k)=\frac{1}{m_q-\not\! k}.$$
For the mass functions one needs to calculate the integrals $$\begin{aligned}
\widetilde\Pi_0(p^2) &=& -\,\int\frac{d^4k}{4\,\pi^2\,i}\,
\widetilde\Phi_{cc}^2(-k^2)\,
{\rm Tr}\left[\Gamma\,\widetilde S(k-p/2)\,\Gamma\,\widetilde S(k+p/2)\right],
\\
\Gamma(P,S) &=& i\,\gamma^5,\,\,I.
\\
&&\\
\widetilde\Pi^{\mu\nu}_1(p) &=& \int\frac{d^4k}{4\,\pi^2\,i}\,
\widetilde\Phi_{cc}^2(-k^2)\,
{\rm Tr}\left[\Gamma^\mu\,\widetilde S(k-p/2)\,\Gamma^\nu\,\widetilde S(k+p/2)\right],
\\
\Gamma^\mu(V,A,PV)
&=& \gamma^\mu,\,\,\gamma^\mu\gamma^5,\,\,2\,i\,k^\mu\gamma^5.
\\
&&\\
\widetilde\Pi^{\mu\nu,\alpha\beta}_2(p) &=& \int\frac{d^4k}{4\,\pi^2\,i}\,
\widetilde\Phi_{cc}^2(-k^2)\,
{\rm Tr}\left[\Gamma^{\mu\nu}\,\widetilde S(k-p/2)\,
\Gamma^{\alpha\beta}\,\widetilde S(k+p/2)\right],
\\
\Gamma^{\mu\nu}(T,PT) &=& i\,\left(\gamma^\mu\,k^\nu +\gamma^\nu\,k^\mu\right),
\,\,i\,\left(\gamma^\mu\gamma^5\,k^\nu +\gamma^\nu\gamma^5\,k^\mu\right).\end{aligned}$$
The functional form of the vertex function $\widetilde\Phi_{cc}(-k^2)$ and the quark propagators $\widetilde S_q(k)$ can in principle be determined from an analysis of the Bethe-Salpeter and Dyson-Schwinger equations as was done e.g. in [@Ivanov:1998ms]. In this paper, however, we choose a phenomenological approach where the vertex functions are modelled by a Gaussian form, the size parameters of which are determined by a fit to the leptonic and radiative decays of the lowest lying charm and bottom mesons. For the quark propagators we use the above local representation Eq. (\[local\]).
We represent the vertex function by $$\begin{aligned}
\widetilde\Phi_{cc}(-k^2) &=& e^{s_{cc}\,k^2}, \qquad
s_{cc}=\frac{1}{\Lambda_{cc}^2},\end{aligned}$$ where $\Lambda_{cc}$ parametrizes the size of the charmonium state. The quark propagator can easily be calculated using Feynman parametrization. One has $$\begin{aligned}
\widetilde S_q(k\pm p/2) &=& (m_q+ \not\! k~\pm \not\! p/2)
\int\limits_0^\infty\! d\alpha\, e^{-\alpha\,(m_q^2-(k\pm p/2)^2)}\, .\end{aligned}$$ We then transform to new $\alpha$-variables according to $$\alpha_i \to 2\,s_{cc}\,\alpha_i,$$ and make use of the identity $$\begin{aligned}
\int\!\!\!\int\limits_0^\infty\! d\alpha_1d\alpha_2 \,f(\alpha_1,\alpha_2) &=&
\int\limits_0^\infty\! dt\,t\!
\int\!\!\!\int\limits_0^\infty\! d\alpha_1d\alpha_2 \,
\delta\left(1-\alpha_1-\alpha_2\right) \,f(t\alpha_1,t\alpha_2) \, .\end{aligned}$$ One then obtains $$\label{bracket}
\widetilde\Pi(p) = \left \langle\,\,\frac{1}{c^2_t}\,\int\frac{d^4 k}{\pi^2\,i}\,
e^{(k+c_p\,p)^2/c_t}\,\, \frac{1}{4}\,{\rm Tr}(\cdots)\,\right\rangle \, ,$$ where $$c_t=\frac{1}{2(1+t)s_{cc}}, \qquad c_p=\frac{t}{1+t}\left (\alpha-\frac{1}{2}\right )$$
The symbol $<...>$ stands for the two–fold integral $$\begin{aligned}
\langle\cdots\rangle &=&\int\limits_0^\infty dt\frac{t}{(1+t)^2}
\int\limits_0^1 d\alpha\,e^{-2\,s_{cc}\,z}\,(\cdots),\\
&&\\
z &=& t\,\left[m_c^2-\alpha(1-\alpha)\,p^2\right]
-\frac{t}{1+t}\,\left(\alpha-\frac{1}{2}\right)^2\,p^2\,.\end{aligned}$$ It is then convenient to shift the loop momentum according to $k\to k-c_p\,p$. The ensuing tensor integrals can be expressed by scalar integrals according to $$\begin{aligned}
\int\frac{d^4k}{\pi^2\,i}\,f(-k^2)\,k^\mu k^\nu k^\alpha k^\beta &=&
\frac{1}{24}\,\left(g^{\mu\nu}g^{\alpha\beta}+g^{\mu\alpha}g^{\nu\beta}
+g^{\mu\beta}g^{\nu\alpha}\right)\int\frac{d^4k}{\pi^2\,i}\,f(-k^2)\,k^4\,,
\\
\int\frac{d^4k}{\pi^2\,i}\,f(-k^2)\,k^\mu k^\nu &=&
\frac{1}{4}\,g^{\mu\nu}\int\frac{d^4k}{\pi^2\,i}\,f(-k^2)\,k^2\, .\end{aligned}$$
The remaining scalar integrals can be integrated to give $$\frac{1}{c^2_t}\,\int\frac{d^4 k}{\pi^2\,i}\,e^{k^2/c_t}\,k^{2n}
=(-)^n\,(n+1)!\,c_t^n \, .$$
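As a quick cross-check of this Gaussian loop integral one can Wick-rotate ($k^0\to i\,k^4_E$) and compare the resulting one-dimensional radial integral with the right-hand side. A short numerical sketch (the value chosen for $c_t$ is arbitrary):

```python
# Numerical check of (1/c_t^2) * int d^4k/(pi^2 i) e^{k^2/c_t} k^{2n} = (-1)^n (n+1)! c_t^n.
# After Wick rotation the left-hand side reduces to a radial Euclidean integral.
import math
from scipy.integrate import quad

def lhs(n, c):
    radial, _ = quad(lambda k: k ** (2 * n + 3) * math.exp(-k ** 2 / c), 0.0, math.inf)
    return (-1) ** n * 2.0 * radial / c ** 2

c = 0.7   # arbitrary positive value of c_t
for n in range(4):
    assert abs(lhs(n, c) - (-1) ** n * math.factorial(n + 1) * c ** n) < 1e-6
```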
For the mass functions one finally obtains
$$\begin{array}{lll}
\bar q\,i\,\gamma^5\,q & : &
\widetilde\Pi(p^2)_P=\langle\, 2\,c_t+m^2_c+(1/4-c^2_p)\,p^2\, \rangle \\
\bar q\,q & : &
\widetilde\Pi(p^2)_S=\langle\, 2\,c_t-m^2_c+(1/4-c^2_p)\,p^2\,\rangle \\
\bar q\,\gamma^\mu\,q & : &
\widetilde\Pi(p^2)_V=\langle\, c_t+m^2_c+(1/4-c^2_p)\,p^2\, \rangle \\
\bar q\,\gamma^\mu\gamma^5\,q & : &
\widetilde\Pi(p^2)_A=\langle\, c_t-m^2_c+(1/4-c^2_p)\,p^2\, \rangle \\
\bar q\,\stackrel{\leftrightarrow}{\partial}^{\,\mu}\gamma^5\,q & : &
\widetilde\Pi(p^2)_{PV}=2\,c_t\,
\langle\, 3\,c_t+m^2_c+(1/4-c^2_p)\,p^2\, \rangle \\
\bar q\,
\frac{i}{2}\left(\stackrel{\leftrightarrow}{\partial}^{\,\mu}\gamma^\nu
+\stackrel{\leftrightarrow}{\partial}^{\,\nu}\gamma^\mu\right)\,q & : &
\widetilde\Pi(p^2)_{T}=2\,c_t\,
\langle\, c_t+m^2_c+(1/4-c^2_p)\,p^2\, \rangle \\
\bar q\,
\frac{i}{2}
\left(\stackrel{\leftrightarrow}{\partial}^{\,\mu}\gamma^\nu\gamma^5
+\stackrel{\leftrightarrow}{\partial}^{\,\nu}\gamma^\mu\gamma^5\right)\,q &
: &
\widetilde\Pi(p^2)_{PT}=2\,c_t\,
\langle \,c_t-m^2_c+(1/4-c^2_p)\,p^2\, \rangle \\
\end{array}$$ The mass functions $\widetilde\Pi_I(p^2)$ enter the compositeness condition in the derivative form $$Z_I=1-\frac{3\,g^2_I}{4\,\pi^2}\,\widetilde \Pi_I^\prime(p^2) \, ,$$ where the prime denotes differentiation with respect to $p^2$. The differentiation of the mass functions results in $$\begin{aligned}
\widetilde\Pi^\prime(p^2)_P &=&
\langle\, -2\,s_{cc}\,\hat z\,[\,2\,c_t+m^2_c+(1/4-c^2_p)\,p^2\,]
+1/4-c^2_p\,\,\rangle \\
\widetilde\Pi^\prime(p^2)_S &=&
\langle -2\,s_{cc}\,\hat z\,[\,2\,c_t-m^2_c+(1/4-c^2_p)\,p^2\,]
+1/4-c^2_p\,\,\rangle \\
\widetilde\Pi^\prime(p^2)_V &=&
\langle\, -2\,s_{cc}\,\hat z\,[\,c_t+m^2_c+(1/4-c^2_p)\,p^2\,]
+1/4-c^2_p\,\,\rangle \\
\widetilde\Pi^\prime(p^2)_A &=&
\langle\, -2\,s_{cc}\,\hat z\,[\,c_t-m^2_c+(1/4-c^2_p)\,p^2\,]
+1/4-c^2_p\,\,\rangle \\
\widetilde\Pi^\prime(p^2)_{PV} &=& 2\,c_t\,
\langle -2\,s_{cc}\,\hat z\,[\,3\,c_t+m^2_c+(1/4-c^2_p)\,p^2\,]
+ 1/4-c^2_p \,\,\rangle \\
\widetilde\Pi^\prime(p^2)_{T} &=& 2\,c_t\,
\langle\, -2\,s_{cc}\,\hat z\,[\,c_t+m^2_c+(1/4-c^2_p)\,p^2\,]
+1/4-c^2_p\,\, \rangle \\
\widetilde\Pi^\prime(p^2)_{PT} &=& 2\,c_t\,
\langle\, -2\,s_{cc}\,\hat z\,[\,c_t-m^2_c+(1/4-c^2_p)\,p^2\,]
+1/4-c^2_p\,\,\rangle\end{aligned}$$ where $$\hat z=-t\,\alpha(1-\alpha)-\frac{t}{1+t}\,\left(\alpha-\frac{1}{2}\right)^2.$$
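As an illustration of how these expressions are used, the sketch below evaluates the pseudoscalar mass function through the two-fold integral $\langle\cdots\rangle$ and fixes the coupling $g$ from the compositeness condition $Z=0$. The parameter values ($m_c$, $\Lambda_{cc}$ and the meson mass) are illustrative only and are not the fitted model parameters.

```python
# Sketch: evaluate Pi_0(p^2) = <2 c_t + m_c^2 + (1/4 - c_p^2) p^2> numerically and
# fix the coupling from Z = 1 - 3 g^2/(4 pi^2) * Pi_0'(m^2) = 0.
# Parameter values are illustrative assumptions, not the fitted values of the model.
import math
from scipy.integrate import dblquad

M_C, LAMBDA_CC = 1.7, 2.0                 # assumed charm mass and size parameter (GeV)
S_CC = 1.0 / LAMBDA_CC ** 2

def mass_function_P(p2):
    def integrand(alpha, t):              # inner variable alpha, outer variable t
        c_t = 1.0 / (2.0 * (1.0 + t) * S_CC)
        c_p = t / (1.0 + t) * (alpha - 0.5)
        z = t * (M_C ** 2 - alpha * (1 - alpha) * p2) - t / (1 + t) * (alpha - 0.5) ** 2 * p2
        weight = t / (1 + t) ** 2 * math.exp(-2.0 * S_CC * z)
        return weight * (2.0 * c_t + M_C ** 2 + (0.25 - c_p ** 2) * p2)
    value, _ = dblquad(integrand, 0.0, math.inf, 0.0, 1.0)
    return value

def coupling(m, eps=1e-3):
    """Coupling g fixed by the compositeness condition at the meson mass m (in GeV)."""
    deriv = (mass_function_P(m ** 2 + eps) - mass_function_P(m ** 2 - eps)) / (2.0 * eps)
    return math.sqrt(4.0 * math.pi ** 2 / (3.0 * deriv))

print(coupling(2.98))                     # eta_c-like meson mass
```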
The semileptonic decays {#s:semilep}
========================
Let us first write down the interaction Lagrangian which we need for the calculation of the matrix elements of the semileptonic decays $B_c\to (\bar cc)+l + \bar\nu$. One has $$\begin{aligned}
{\cal L}_{\rm int}(x) &=&
g_{bc}\,B_c^-(x)\cdot J^+_{bc}(x)+g_{cc}\,\phi_{cc}(x)\cdot J_{cc}(x)
+\frac{G_F}{\sqrt{2}}\,V_{bc}\,(\bar c\,O^\mu\, b)\cdot (\bar l\, O_\mu\,\nu),
\\
&&\nonumber\\
J^+_{bc}(x)&=&\int\!\!\!\int\! dx_1dx_2\, F_{\rm bc}(x,x_1,x_2)\cdot
\bar b(x_1)\,i\,\gamma^5\, c(x_2),
\\
J_{cc}(x)&=&\int\!\!\!\int\! dx_1dx_2\, F_{\rm cc}(x,x_1,x_2)\cdot
\bar c(x_1)\,\Gamma_{cc}\, c(x_2),
\\
&&\\
F_{bc}(x,x_1,x_2)&=&\delta(x-c_1\,x_1-c_2\,x_2)
\,\Phi_{bc}\left((x_1-x_2)^2\right),\\
F_{cc}(x,x_1,x_2)&=&\delta\left(x-\frac{x_1+x_2}{2}\right)
\,\Phi_{cc}\left((x_1-x_2)^2\right),\\
&&\\
\Phi\left(x^2\right)&=&\int\!\frac{d^4q}{(2\,\pi)^4} e^{-iqx}
\widetilde\Phi\left(-q^2\right).\end{aligned}$$ Here we adopt the notation: $l=e^-,\,\mu^-,\,\tau^-$, $\bar l=e^+,\,\mu^+,\,\tau^+$, $O^\mu=\gamma^\mu\,(1-\gamma^5)$, $c_1=m_b/(m_b+m_c),\,c_2=m_c/(m_b+m_c)$.
The S-matrix element describing the semileptonic decays $B_c\to (\bar cc)\,+l\bar\nu$ is written as $$\begin{aligned}
S_{B_c\to (\bar cc)} &=& i^3\,g_{bc}\,g_{cc}\,\frac{G_F}{\sqrt{2}}\,V_{bc}\,
\int\!\!\!\int\!\!\!\int\! dxdx_1dx_2\,B_c^-(x)\,\delta(x-c_1\,x_1-c_2\,x_2)\,
\Phi_{bc}\left((x_1-x_2)^2\right)\\
&\times&
\int\!\!\!\int\!\!\!\int\! dydy_1dy_2\,\phi_{cc}(y)\,
\delta\left(y-\frac{y_1+y_2}{2}\right)\,
\Phi_{cc}\left((y_1-y_2)^2\right)
\int\!dz \left(\bar l\,O_\mu\, \nu\right)_z \\
&\times& \langle\, T\left\{\bar b(x_1)\,i\,\gamma^5\,c(x_2)\cdot
\bar c(y_1)\,\Gamma_{cc}\,c(y_2)\cdot
\bar c(z)\,O^\mu\,c(z)\right\}\,\rangle_0.\end{aligned}$$
The matrix element is calculated in the standard manner. We have $$\begin{aligned}
T_{B_c\to (\bar cc)}(p_1,p_2,k_l,k_\nu) &=&
i\,(2\pi)^4\,\delta(p_1-p_2-k_l-k_\nu)\,
M_{B_c\to (\bar cc)}(p_1,p_2,k_l,k_\nu),\\
&&\\
M_{B_c\to (\bar cc)}(p_1,p_2,k_l,k_\nu) &=&
\frac{G_F}{\sqrt{2}}\,V_{bc}\,{\cal M}^\mu(p_1,p_2)\,
\bar u_l(k_l)\,O^\mu\,u_\nu(k_\nu),\\
&&\\
{\cal M}^\mu(p_1,p_2) &=& -\,\frac{3\,g_{bc}\,g_{cc}}{4\,\pi^2}\,
\int\!\frac{d^4k}{\pi^2\,i}\,\widetilde\Phi_{bc}\left(-(k+c_2\,p_1)^2\right)\,
\widetilde\Phi_{cc}\left(-(k+p_2/2)^2\right)\,\\
&\times& \frac{1}{4}\,{\rm Tr}\left[\,i\,\gamma^5\,\widetilde S_c(k)\,
\Gamma_{cc}\,\widetilde S_c(k+p_2)\,O^\mu\,\widetilde S_b(k+p_1)\,\right]\end{aligned}$$ where $p_1$ and $p_2$ are the $B_c$ and $(\bar cc)$ momenta, respectively. The spin coupling structure of the $(\bar{c} c)$–states is given by $$\Gamma_{cc}=i\,\gamma^5,\,\, I,\,\,
\epsilon^{\dagger}_\nu\gamma^\nu,\,\,
\epsilon^{\dagger}_\nu\gamma^\nu\gamma^5,\,\,
-\,2\,i\,\epsilon^{\dagger\,\nu} k_\nu\gamma^5,\,\,
2\,\epsilon^{\dagger}_{\nu\alpha}\,k^\nu \gamma^\alpha,\,\,
2\,\epsilon^{\dagger}_{\nu\alpha}\,k^\nu \gamma^\alpha\gamma^5.$$
The calculation of the transition matrix elements $M^\mu$ proceeds along similar lines as in the case of the mass functions. For the scalar vertex functions one has $$\begin{aligned}
\widetilde\Phi_{bc}(-(k+c_2\,p_1)^2) &=& e^{s_{bc}\,(k+c_2\,p_1)^2}, \qquad
s_{bc}=\frac{1}{\Lambda_{bc}^2},
\\
\widetilde\Phi_{cc}(-(k+p_2/2)^2) &=& e^{s_{cc}\,(k+p_2/2)^2}, \qquad
s_{cc}=\frac{1}{\Lambda_{cc}^2},
\\&&\\
\widetilde S_q(k+p) &=& (m_q+\not\! k + \not\! p)
\int\limits_0^\infty\! d\alpha\, e^{-\alpha\,(m_q^2-(k+p)^2)}.\end{aligned}$$ Again we shift the parameters $\alpha_i (i=1,2,3)$ according to $$\begin{aligned}
\alpha_i &\to & (s_{bc}+s_{cc})\,\alpha_i, \\
&&\\
\int\limits_0^\infty\! d^3\alpha \,
f(\alpha_1,\alpha_2,\alpha_3) &=&
\int\limits_0^\infty\! dt\,t^2
\int\limits_0^\infty\! d^3\alpha \,
\delta\left(1-\sum_{i=1}^3\alpha_i\right) \,
f(t\alpha_1,t\alpha_2,t\alpha_3)\end{aligned}$$ where $d^3\alpha=d\alpha_1 d\alpha_2 d\alpha_3$. One then obtains $$\begin{aligned}
{\cal M}^\mu(p_1,p_2) &=&
\left\langle\,\,\frac{1}{c^2_t}\,\int\frac{d^4 k}{\pi^2\,i}\,
e^{(k+c_{p_1}\,p_1+c_{p_2}\,p_2)^2/c_t}\,\,
\frac{1}{4}\,{\rm Tr}(\cdots)\,\right\rangle \, , \\
&&\\
c_t&=&\frac{1}{(s_{bc}+s_{cc})(1+t)}, \\
c_{p_1}&=&\frac{c_2\,w_{bc}+t\,\alpha_1}{1+t}, \qquad
c_{p_2} = \frac{w_{cc}/2+t\,\alpha_2}{1+t},\\
w_{bc} &=&\frac{s_{bc}}{s_{bc}+s_{cc}}, \qquad
w_{cc} = \frac{s_{cc}}{s_{bc}+s_{cc}}.\end{aligned}$$ where the symbol $<...>$ is related to the corresponding symbol $<...>$ defined in Sec. \[s:bound\] Eq. (\[bracket\]). In the present case the symbol $<...>$ stands for the four-fold integral $$\begin{aligned}
\langle\cdots\rangle &=&(s_{bc}+s_{cc})\cdot
\int\limits_0^\infty\! dt\,\frac{t^2}{(1+t)^2}
\int\limits_0^1\, d^3\alpha\,
\delta\left(1-\sum_{i=1}^3\alpha_i\right) \,
e^{-\,(s_{bc}+s_{cc})\,z}\,(\cdots)\, ,\end{aligned}$$ where $$\begin{aligned}
z &=& t\,\left[(\alpha_2+\alpha_3)\,m_c^2+\alpha_1\,m_b^2
-\alpha_1\alpha_3\,p_1^2-\alpha_2\alpha_3\,p_2^2-\alpha_1\alpha_2\,q^2\right]
\\
&+&\frac{1}{1+t}\,\left\{\frac{}{}
p_1^2\,\left[\,t\,w_{bc}\,c_2\,(2\,\alpha_1+\alpha_2-c_2)
+t\,w_{cc}\,\alpha_1/2-t\,\alpha_1\,(\alpha_1+\alpha_2)
+\,w_{bc}\,c_2\,(w_{cc}/2-c_2+w_{bc}\,c_2)\right]
\right.\\
&+&
p_2^2\,\left[\,t\,w_{bc}\,c_2\,\alpha_2
+t\,w_{cc}\,(\alpha_1/2+\alpha_2-1/4)\,-t\,\alpha_2\,(\alpha_1+\alpha_2)
+w_{cc}/4\,(2\,w_{bc}\,c_2-1+w_{cc}\,)\right] \\
&+&
\left.
\,q^2\,\left[\,t\,(-w_{bc}\,c_2\,\alpha_2-w_{cc}\,\alpha_1/2+\alpha_1\alpha_2)
-w_{bc}\,w_{cc}\,c_2/2\,\right]
\frac{}{}\right\}.\end{aligned}$$ One then shifts the loop momentum according to $k\to k-c_{p_1}\,p_1-c_{p_2}\,p_2$.
Our final results are given in terms of a set of invariant form factors defined by $$\begin{aligned}
{\cal M}^\mu(\,B_c\to (\bar cc)_{\,S=0}\,) &=&
P^\mu\,F_+(q^2)+q^\mu\,F_-(q^2),\label{ff0}\\
&&\nonumber\\
{\cal M}^\mu(\,B_c\to (\bar cc)_{\,S=1} \,) &=&
\frac{1}{m_{B_c}+m_{cc}}\,\epsilon^\dagger_\nu\,
\left\{\,
-\,g^{\mu\nu}\,Pq\,A_0(q^2)+P^\mu\,P^\nu\,A_+(q^2)
+q^\mu\,P^\nu\,A_-(q^2)
\right.\nonumber\\
&&
\left.
+i\,\varepsilon^{\mu\nu\alpha\beta}\,P_\alpha\,q_\beta\,V(q^2)\right\},
\label{ff1}\\
&&\nonumber\\
{\cal M}^\mu(B_c\to (\bar cc)_{\,S=2} ) &=&
\epsilon^\dagger_{\nu\alpha}\,
\left\{\,
g^{\mu\alpha}\,P^\nu\,T_1(q^2)
+P^\nu\,P^\alpha\,\left[\,P^\mu\,T_2(q^2)+q^\mu\,T_3(q^2)\,\right]
\right.\nonumber\\
&&
\left.
+i\,\varepsilon^{\mu\nu\delta\beta}\,P^\alpha\,P_\delta\,q_\beta\,T_4(q^2)
\right\},\label{ff2}\\
&&\nonumber\\
P &=&p_1+p_2, \qquad q=p_1-p_2.
\nonumber\end{aligned}$$ In our results we have dropped an overall phase factor which is irrelevant for the calculation of the decay widths.
The calculation of the traces and invariant integrations is done with the help of FORM [@Vermaseren:2000nd]. For the model parameters (hadron sizes $\Lambda_H$ and constituent quark masses $m_q$) we use the values of [@Ivanov:2003ge]. The numerical evaluation of the form factors is done in FORTRAN.
Angular decay distributions {#s:angdistr}
===========================
Consider the semileptonic decays $B^-_c(p_1)\to (\bar cc)(p_2)+l(k_l)+\bar\nu(k_\nu)$ and $B^+_c(p_1)\to (\bar cc)(p_2)+\bar l(k_l)+\nu(k_\nu).$ Recalling the expression for the matrix elements, one can write $$\begin{aligned}
M_{B^-_c\to\bar cc}(p_1,p_2,k_l,k_\nu) &=&
\frac{G_F}{\sqrt{2}}\,V_{bc}\, {\cal M}_\mu(p_1,p_2)\,
\bar u^{\lambda}_l(\vec k_l)\,O^\mu\,v^{\lambda'}_\nu(\vec k_\nu),
%\hspace{1cm} {\rm lepton\,\, in\,\, the\,\, final\,\, state}
\\
&&\\
M_{B^+_c\to\bar cc}(p_1,p_2,k_l,k_\nu) &=&
\frac{G_F}{\sqrt{2}}\,V_{bc}\, {\cal M}_\mu(p_1,p_2)\,
\bar u^{\lambda}_\nu(\vec k_\nu)\,O^\mu\,v^{\lambda'}_l(\vec k_l),
\\\end{aligned}$$ where $p_1$ and $p_2$ are the $B_c$ and $(\bar cc)$ momenta, respectively.
The angular decay distribution reads $$\label{width2}
\frac{d\Gamma}{dq^2\,d\cos\theta}=
\frac{G_F^2}{(2\pi)^3}|V_{bc}|^2\cdot
\frac{(q^2-\mu^2)\,|{\bf p_2}|}{8\,m_1^2\,q^2}\cdot
L^{\mu\nu}\,H_{\mu\nu}$$ where $\mu$ is the lepton mass and $|{\bf p_2}|=\lambda^{1/2}(m_1^2,m_2^2,q^2)/(2\,m_1)$ is the momentum of the $(\bar cc)$-meson in the $B_c$-rest frame.
$L^{\mu\nu}$ is the lepton tensor given by $$\label{lepton-tensor}
L^{\mu\nu}_\mp =
\frac{1}{8}\, {\rm Tr}\left(O^\mu\not\! k_\nu O^\nu\not\! k_l\right)
=
k_l^\mu\,k_\nu^\nu+k_l^\nu\,k_\nu^\mu
-g^{\mu\nu}\,\frac{q^2-\mu^2}{2}
\pm i\,\varepsilon^{\mu\nu\alpha\beta}\,k_{l\,\alpha}k_{\nu\,\beta},$$ The lepton tensors $L^{\mu\nu}_-$ and $L^{\mu\nu}_+$ refer to the $(l\bar \nu_l)$ and $(\bar l \nu_l)$ cases. They differ in the sign of the parity–odd $\varepsilon$-tensor contribution. The hadron tensor $H_{\mu\nu}={\cal M}_\mu(p_1,p_2)\, {\cal M}^\dagger_\nu(p_1,p_2)$ is given by the corresponding tensor products of the transition matrix elements defined above.
It is convenient to perform the Lorentz contractions in Eq. (\[width2\]) with the help of helicity amplitudes as described in [@Korner:1989qb] and [@Korner:1982bi; @Korner:vg]. First, we define an orthonormal and complete helicity basis $\epsilon^\mu(m)$ with the three spin 1 components orthogonal to the momentum transfer $q^\mu$, i.e. $\epsilon^\mu(m) q_\mu=0$ for $m=\pm,0$, and the spin 0 (time)-component $m=t$ with $\epsilon^\mu(t)= q^\mu/\sqrt{q^2}$.
The orthonormality and completeness properties read $$\begin{aligned}
&&
\epsilon^\dagger_\mu(m)\epsilon^\mu(n)=g_{mn} \hspace{1cm}
(m,n=t,\pm,0),
\nonumber\\
&&\label{orthonorm}\\
&&
\epsilon_\mu(m)\epsilon^{\dagger}_{\nu}(n)g_{mn}=g_{\mu\nu}
\nonumber\end{aligned}$$ with $g_{mn}={\rm diag}\,(\,+\,,\,\,-\,,\,\,-\,,\,\,-\,)$. We include the time component polarization vector $\epsilon^\mu(t)$ in the set because we want to include lepton mass effects in the following.
Using the completeness property we rewrite the contraction of the lepton and hadron tensors in Eq. (\[width2\]) according to $$\begin{aligned}
L^{\mu\nu}H_{\mu\nu} &=&
L_{\mu'\nu'}g^{\mu'\mu}g^{\nu'\nu}H_{\mu\nu}
= L_{\mu'\nu'}\epsilon^{\mu'}(m)\epsilon^{\dagger\mu}(m')g_{mm'}
\epsilon^{\dagger \nu'}(n)\epsilon^{\nu}(n')g_{nn'}H_{\mu\nu}
\nonumber\\
&&\nonumber\\
&=& L(m,n)g_{mm'}g_{nn'}H(m',n')
\label{contraction}\end{aligned}$$ where we have introduced the lepton and hadron tensors in the space of the helicity components $$\begin{aligned}
\label{hel_tensors}
L(m,n) &=& \epsilon^\mu(m)\epsilon^{\dagger \nu}(n)L_{\mu\nu},
\hspace{1cm}
H(m,n) = \epsilon^{\dagger \mu}(m)\epsilon^\nu(n)H_{\mu\nu}.\end{aligned}$$ The point is that the two tensors can be evaluated in two different Lorentz systems. The lepton tensors $L(m,n)$ will be evaluated in the $\bar l \nu$ or $ l \bar \nu$–c.m. system whereas the hadron tensors $H(m,n)$ will be evaluated in the $B_c$ rest system.
Hadron tensor
-------------
In the $B_c$ rest frame one has $$\begin{aligned}
p^\mu_1 &=& (\,m_1\,,\,\,0,\,\,0,\,\,0\,)\,,
\nonumber\\
p^\mu_2 &=& (\,E_2\,,\,\,0\,,\,\,0\,,\,\,-|{\bf p_2}|\,)\,,
\\
q^\mu &=& (\,q_0\,,\,\,0\,,\,\,0\,,\,\,|{\bf p_2}|\,)\,,
\nonumber\end{aligned}$$ where $$\begin{aligned}
&&
E_2 = \frac{m_1^2+m_2^2-q^2}{2\,m_1}, \hspace{1cm}
q_0=\frac{m_1^2-m_2^2+q^2}{2\,m_1},\\
&&
E_2+q_0=m_1, \hspace{1cm}
q_0^2=q^2+|{\bf p_2}|^2,\hspace{1cm}
|{\bf p_2}|^2+E_2\,q_0=\frac{1}{2}(m_1^2-m_2^2-q^2).\end{aligned}$$ In the $B_c$-rest frame the polarization vectors of the effective current read $$\begin{aligned}
\epsilon^\mu(t)&=&
\frac{1}{\sqrt{q^2}}(\,q_0\,,\,\,0\,,\,\,0\,,\,\,|{\bf p_2}|\,)
\,,\nonumber\\
\epsilon^\mu(\pm) &=&
\frac{1}{\sqrt{2}}(\,0\,,\,\,\mp 1\,,\,\,-i\,,\,\,0\,)\,,
\label{hel_basis}\\
\epsilon^\mu(0) &=&
\frac{1}{\sqrt{q^2}}(\,|{\bf p_2}|\,,\,\,0\,,\,\,0\,,\,\,q_0\,)\,.
\nonumber\end{aligned}$$ Using this basis one can express the helicity components of the hadronic tensors through the invariant form factors defined in Eqs. (\[ff0\]-\[ff2\]). We treat the three spin cases in turn.
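Before turning to the individual spin cases, the orthonormality and completeness relations (\[orthonorm\]) for the basis (\[hel\_basis\]) can be verified numerically; the masses and the value of $q^2$ below are illustrative numbers only.

```python
# Numerical check of the helicity basis: eps^dagger(m).eps(n) = g_mn and
# sum_mn eps_mu(m) eps^dagger_nu(n) g_mn = g_mu_nu, with metric (+,-,-,-).
import numpy as np

m1, m2, q2 = 6.275, 3.097, 4.0          # illustrative masses (GeV) and q^2 (GeV^2)
p2abs = np.sqrt((m1**2 - (m2 + np.sqrt(q2))**2) * (m1**2 - (m2 - np.sqrt(q2))**2)) / (2 * m1)
q0 = (m1**2 - m2**2 + q2) / (2 * m1)
g = np.diag([1.0, -1.0, -1.0, -1.0])

eps = {
    "t": np.array([q0, 0, 0, p2abs]) / np.sqrt(q2),
    "+": np.array([0, -1, -1j, 0]) / np.sqrt(2),
    "-": np.array([0, +1, -1j, 0]) / np.sqrt(2),
    "0": np.array([p2abs, 0, 0, q0]) / np.sqrt(q2),
}
gmn = {"t": 1.0, "+": -1.0, "-": -1.0, "0": -1.0}   # diag(+,-,-,-) in helicity labels

for m in eps:                                        # orthonormality
    for n in eps:
        val = np.conj(eps[m]) @ g @ eps[n]
        assert np.isclose(val, gmn[m] if m == n else 0.0)

total = sum(gmn[m] * np.outer(eps[m], np.conj(eps[m])) for m in eps)
assert np.allclose(total, g)                         # completeness
```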
(a) Spin $S=0$:
$$\label{helS0a}
H(m,n)= \left(\epsilon^{\dagger \mu}(m){\cal M}_\mu\right)\cdot
\left(\epsilon^{\dagger \nu}(n){\cal M}_\nu\right)^\dagger\equiv
H_mH^\dagger_n$$
The helicity form factors $H_m$ can be expressed in terms of the invariant form factors. One has $$\begin{aligned}
H_t &=& \frac{1}{\sqrt{q^2}}(Pq\, F_+ + q^2\, F_-)\,,
\nonumber\\
H_\pm &=& 0\,,
\label{helS0b}\\
H_0 &=& \frac{2\,m_1\,|{\bf p_2}|}{\sqrt{q^2}} \,F_+ \,.
\nonumber\end{aligned}$$
\(b) Spin $S=1$:\
The nonvanishing helicity form factors are given by $$H_m =\epsilon^{\dagger \mu}(m){\cal M}_{\mu\alpha}\epsilon_2^{\dagger\alpha}(m)
\quad {\rm for} \quad m=\pm,0$$ and $$H_t =\epsilon^{\dagger \mu}(t){\cal M}_{\mu\alpha}\epsilon_2^{\dagger\alpha}(0)$$
As in Eq. (\[helS0a\]) the hadronic tensor is given by $H(m,n)= H_mH^\dagger_n$.
In order to express the helicity form factors in terms of the invariant form factors Eq. (\[ff1\]) one needs to specify the helicity components $\epsilon_2(m)$ $(m=\pm,0)$ of the polarization vector of the $(\bar cc)_{S=1}$ state. They are given by $$\begin{aligned}
\epsilon^\mu_2(\pm) &=&
\frac{1}{\sqrt{2}}(0\,,\,\,\pm 1\,,\,\,-i\,,\,\,0\,)\,,
\nonumber\\
&&\label{pol_S1}\\
\epsilon^\mu_2(0) &=&
\frac{1}{m_2}(|{\bf p_2}|\,,\,\,0\,,\,\,0\,,\,\,-E_2\,)\,.
\nonumber\end{aligned}$$ They satisfy the orthonormality and completeness conditions: $$\begin{aligned}
&&
\epsilon_2^{\dagger\mu}(r)\,\epsilon_{2\mu}(s)=-\delta_{rs},
\nonumber\\
&&\\
&&
\epsilon_{2\mu}(r)\,\epsilon^\dagger_{2\nu}(s)\,\delta_{rs}=
-g_{\mu\nu}+\frac{p_{2\mu}p_{2\nu}}{m_2^2}.\nonumber\end{aligned}$$ The desired relations between the helicity form factors and the invariant form factors are then
$$\begin{aligned}
H_t &=&
\epsilon^{\dagger \mu}(t)\,\epsilon_2^{\dagger \alpha}(0)
\,{\cal M}_{\mu\alpha}
\,=\,
\frac{1}{m_1+m_2}\frac{m_1\,|{\bf p_2}|}{m_2\sqrt{q^2}}
\left(P\cdot q\,(-A_0+A_+)+q^2 A_-\right),
\nonumber\\
H_\pm &=&
\epsilon^{\dagger \mu}(\pm)\,\epsilon_2^{\dagger \alpha}(\pm)
\,{\cal M}_{\mu\alpha}
\,=\,
\frac{1}{m_1+m_2}\left(-P\cdot q\, A_0\mp 2\,m_1\,|{\bf p_2}|\, V \right),
\label{helS1c}\\
H_0 &=&
\epsilon^{\dagger \mu}(0)\,\epsilon_2^{\dagger \alpha}(0)
\,{\cal M}_{\mu\alpha}
\nonumber\\
&=&
\frac{1}{m_1+m_2}\frac{1}{2\,m_2\sqrt{q^2}}
\left(-P\cdot q\,(m_1^2-m_2^2-q^2)\, A_0
+4\,m_1^2\,|{\bf p_2}|^2\, A_+\right).\nonumber\end{aligned}$$
\(c) $(\bar cc)_{S=2}$ states:\
The nonvanishing helicity form factors can be calculated according to $$H_m =\epsilon^{\dagger \mu}(m)\,{\cal M}_{\mu\alpha_1\alpha_2}\,
\epsilon_2^{\dagger\alpha_1\alpha_2}(m)
\quad {\rm for} \quad m=\pm,0$$ and $$H_t =\epsilon^{\dagger \mu}(t)\, {\cal M}_{\mu\alpha_1\alpha_2}\,
\epsilon_2^{\dagger\alpha_1\alpha_2}(0)$$ Again the hadronic tensor is given by $H(m,n)= H_mH^\dagger_n$.
For the further evaluation one needs to specify the helicity components $\epsilon_2(m)$ $(m=\pm 2,\pm 1,0)$ of the polarization tensor of the $(\bar cc)_{S=2}$ state. They are given by $$\begin{aligned}
\epsilon^{\mu\nu}_2(\pm 2) &=&\epsilon^\mu_2(\pm)\, \epsilon^\nu_2(\pm),
\nonumber\\
\epsilon^{\mu\nu}_2(\pm 1) &=&
\frac{1}{\sqrt{2}}\,\left(\epsilon^\mu_2(\pm)\,\epsilon^\nu_2(0)
+\epsilon^\mu_2(0)\,\epsilon^\nu_2(\pm)\right),
\label{pol_S2}\\
\epsilon^{\mu\nu}_2(0) &=&
\frac{1}{\sqrt{6}}\,\left(\epsilon^\mu_2(+)\,\epsilon^\nu_2(-)
+\epsilon^\mu_2(-)\,\epsilon^\nu_2(+)\right)
+\sqrt{\frac{2}{3}}\,\epsilon^\mu_2(0)\,\epsilon^\nu_2(0)\,,
\nonumber\end{aligned}$$ where $\epsilon^\mu_2(r)$ are defined in Eq. (\[pol\_S1\]).
The relations between the helicity form factors $H_m$ and the invariant form factors in Eq. (\[ff2\]) read $$\begin{aligned}
H_t &=& h(t,0)=\sqrt\frac{2}{3}\,
\frac{m_1^2\, |{\bf p_2}|^2}{m_2^2\,\sqrt{q^2}}
\,\left\{T_1+( |{\bf p_2}|^2+E_2\, q_0+m_1\, q_0)\,T_2+q^2\, T_3\right\},
\\
H_\pm &=& h(\pm,\pm)=
\frac{m_1\,|{\bf p_2}|}{\sqrt{2}\,m_2}\,
\left(T_1\mp 2\, m_1|{\bf p_2}|\,T_4 \right),
\nonumber\\
H_0 &=& h(0,0)=
\sqrt\frac{1}{6}\,\frac{m_1\,|{\bf p_2}|}{m_2^2\,\sqrt{q^2}}
\left( (m_1^2-m_2^2-q^2)\, T_1 + 4\,m_1^2\,|{\bf p_2}|^2\,T_2 \right).\end{aligned}$$
Lepton tensor
-------------
The helicity components of the lepton tensors $L(m,n)$ are evaluated in the ($l\bar\nu$)–c.m. system $\vec k_l+\vec k_\nu=0$. One has $$\begin{aligned}
q^\mu &=& (\,\sqrt{q^2}\,,\,\,0\,,\,\,0\,,\,\,0\,)\,,
\nonumber\\
k^\mu_\nu &=&
(\,|{\bf k_l}|\,,\,\, |{\bf k_l}|\sin\theta\cos\chi\,,\,\,
|{\bf k_l}| \sin\theta\sin\chi\,,\,\,|{\bf k_l}| \cos\theta\,)\,,
\label{lepton_basis}\\
k^\mu_l &=& (\,E_l\,,\,\,-|{\bf k_l}|\sin\theta\cos\chi\,,\,\,
-|{\bf k_l}|\sin\theta\sin\chi\,,\,\,-|{\bf k_l}|\cos\theta\,)\,,
\nonumber\end{aligned}$$ with $$E_l=\frac{q^2+\mu^2}{2\,\sqrt{q^2}}, \hspace{1cm}
|{\bf k_l}|=\frac{q^2-\mu^2}{2\,\sqrt{q^2}}.$$ In the ($l\bar \nu$)–c.m. frame the polarization vectors of the effective current are given by $$\begin{aligned}
\label{lepton-pol}
\epsilon^\mu(t) &=& \frac{q^\mu}{\sqrt{q^2}}=(1,0,0,0),
\nonumber\\
\epsilon^\mu(\pm)&=& \frac{1}{\sqrt{2}}\,(0,\mp 1,-i,0),
\label{hel_lep}\\
\epsilon^\mu(0) &=& (0,0,0,1).
\nonumber\end{aligned}$$ Using Eqs. (\[lepton-pol\]) and (\[lepton-tensor\]) it is not difficult to evaluate the helicity representation $L(m,n)$ of the lepton tensor.
In this paper we are not interested in the azimuthal $\chi$–distribution of the lepton pair. We therefore integrate over the azimuthal angle dependence of the lepton tensor. Of course, our formalism is general enough to allow for the inclusion of an azimuthal dependence if needed. After azimuthal integration the differential $(q^2,\cos\theta)$ distribution reads $$\begin{aligned}
\label{distr2}
\frac{d\Gamma}{dq^2d\cos\theta}
&=&\,
\frac{3}{8}\,(1+\cos^2\theta)\cdot \frac{d\Gamma_U}{d q^2}
+\frac{3}{4}\,\sin^2\theta\cdot \frac{d\Gamma_L}{d q^2}
\mp\frac{3}{4}\cos\theta\cdot \frac{d\Gamma_P}{d q^2}
\\
&& \nonumber \\
&+&
\frac{3}{4}\,\sin^2\theta\cdot \frac{d\widetilde\Gamma_U}{d q^2}
+\frac{3}{2}\,\cos^2\theta\cdot \frac{d\widetilde\Gamma_L}{d q^2}
+\frac{1}{2}\,\frac{d\widetilde\Gamma_S}{d q^2}
+3\,\cos\theta\cdot \frac{d\widetilde\Gamma_{SL}}{d q^2},
\nonumber\end{aligned}$$ where we take the polar angle $\theta$ to be the angle between the $\vec p_2$ and the $\vec k_l$ in the lepton-neutrino c.m. system. The upper and lower signs in front of the parity violating (p.v.) contribution refer to the two cases $l^-\bar\nu$ and $l^+\nu$, respectively.
The differential partial helicity rates $d\Gamma_i/dq^2$ and $d\widetilde\Gamma_i/dq^2$ in Eq. (\[distr2\]) are defined by $$\begin{aligned}
\label{partialrates}
\frac{d\Gamma_i}{dq^2} &=& \frac{G^2_F}{(2\pi)^3} \,
|V_{bc}|^2\,\frac{(q^2-\mu^2)^2\, |{\bf p_2}|}{12\,m_1^2\,q^2}\,H_i \, ,\\
&& \nonumber \\
\frac{d\widetilde\Gamma_i}{dq^2} &=&
\frac{\mu^2}{2\,q^2}\,\frac{d\Gamma_i}{dq^2} \, , \nonumber\end{aligned}$$ where we have introduced a standard set of helicity structure functions $H_i$ ($i=U,L,P,S,SL$) given by the following linear combinations of the helicity components of the hadron tensor $H(m,n)=H_m H^\dagger_n$: $$\def\arraystretch{1.5}
\begin{array}{ll}
H_{U}={\rm Re}\left(H_{+} H_{+}^{\dagger}\right)
+{\rm Re}\left(H_{-} H_{-}^{\dagger}\right) &
\hspace*{1cm} {\rm {\bf U}npolarized-transverse} \\
H_{L}={\rm Re}\left(H_{0} H_{0}^{\dagger}\right) &
\hspace*{1cm} {\rm{\bf L}ongitudinal} \\
H_{P}={\rm Re}\left(H_{+} H_{+}^{\dagger}\right)
-{\rm Re}\left(H_{-} H_{-}^{\dagger}\right) &
\hspace*{1cm} {\rm {\bf P}arity-odd} \\
H_{S}=3\,{\rm Re}\left(H_{t} H_{t}^{\dagger}\right) &
\hspace*{1cm} {\rm{\bf S}calar} \\
H_{SL}={\rm Re}\left(H_{t} H_{0}^{\dagger}\right) &
\hspace*{1cm} {\rm{\bf S}calar-{\bf L}ongitudinal \, interference}
\end{array}$$ Note that the helicity amplitudes are real such that the complex conjugation symbol can in fact be dropped.
It is evident that the “tilde” rates $\widetilde{\Gamma}$ in Eq. (\[partialrates\]) do not contribute in the limit of vanishing lepton masses. In the present application this means that they can be neglected for the $e$– and $\mu$–modes and only contribute to the $\tau$–modes.
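For reference, Eq. (\[partialrates\]) and the structure-function definitions above translate directly into code. The Python sketch below is illustrative only: the numerical value of $G_F$ is the standard one (assumed, not quoted in the text), $|V_{bc}|=0.04$ is the value used in Table \[t:widths\], and the helicity amplitudes $H_m$ have to be supplied from Eqs. (\[helS0b\])-(\[helS1c\]).

```python
import numpy as np

G_F  = 1.16637e-5     # Fermi constant in GeV^{-2} (standard value; assumed)
V_bc = 0.04           # |V_bc| as used in Table (t:widths)

def partial_rates(q2, m1, m2, mu, H):
    """Differential partial helicity rates dGamma_i/dq^2 and the 'tilde' rates
    of Eq. (partialrates).  H holds the real helicity amplitudes
    {'t': H_t, '+': H_+, '-': H_-, '0': H_0} at this q^2."""
    q0 = (m1**2 - m2**2 + q2) / (2 * m1)
    p2 = np.sqrt(q0**2 - q2)                          # |p_2|
    # helicity structure functions H_i
    HU  = H['+']**2 + H['-']**2                       # unpolarized-transverse
    HL  = H['0']**2                                   # longitudinal
    HP  = H['+']**2 - H['-']**2                       # parity-odd
    HS  = 3.0 * H['t']**2                             # scalar
    HSL = H['t'] * H['0']                             # scalar-longitudinal interference
    pref = G_F**2 / (2 * np.pi)**3 * V_bc**2 * (q2 - mu**2)**2 * p2 / (12 * m1**2 * q2)
    rates = {'U': pref * HU, 'L': pref * HL, 'P': pref * HP,
             'S': pref * HS, 'SL': pref * HSL}
    tilde = {i: mu**2 / (2 * q2) * r for i, r in rates.items()}
    return rates, tilde
```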
Integrating over $\cos\theta$ one obtains the differential $q^2$ distribution $$\frac{d\Gamma}{dq^2} =
\frac{d\Gamma_U}{d q^2}+\frac{d\Gamma_L}{d q^2}
+\frac{d\widetilde\Gamma_U}{d q^2}+\frac{d\widetilde\Gamma_L}{d q^2}
+\frac{d\widetilde\Gamma_S}{d q^2}.$$ Finally, integrating over $q^2$, one obtains the total rate $$\Gamma =
\Gamma_U+\Gamma_L+\widetilde\Gamma_U+\widetilde\Gamma_L+\widetilde\Gamma_S.$$ In Sec. \[s:NumRes\] we list our predictions for the integrated partial helicity rates $\Gamma_i$ $(i=U, L, P)$ and $\widetilde\Gamma_i$ $(i=U, L, S, SL)$.
To save on notation in the following we shall sometimes use a self-explanatory notation for the differential and integrated partial helicity rates. For example, we write $U$ for either the differential or the integrated helicity rates $d\Gamma_U/dq^2$ and $\Gamma_U$, respectively, and $\widetilde{U}$ for $d\widetilde{\Gamma}_U/dq^2$ and $\widetilde{\Gamma}_U$.
An interesting quantity is the forward-backward asymmetry $A_{FB}$ of the lepton in the ($l\bar\nu$)–c.m. system, which is given by $$\label{forward-backward}
A_{FB}=
\frac{3}{4} \frac{\pm P +4 \widetilde{SL}}{U+\widetilde{U}+L+\widetilde{L}+\widetilde{S}}.$$ In Sec. \[s:NumRes\] we shall give our numerical predictions for the asymmetry $A_{FB}$ for the decay channels under study.
For the discovery channel $B_c\to J/\psi l \nu$ with the $J/\psi$ decaying into muon pairs the transverse/longitudinal composition of the produced $J/\psi$ is of interest. The transverse/longitudinal composition can be determined by a measurement of the angular orientation of the back-to-back muon pairs in the $J/\psi$ rest frame. The relevant angular distribution reads $$\begin{aligned}
\label{distr3}
\frac{d\Gamma}{dq^2d\cos\theta^*}
&=&\,
\frac{3}{8}\,(1+\cos^2\theta^*) \left( \frac{d\Gamma_U}{d q^2} +
\frac{d\widetilde\Gamma_U}{d q^2} \right) \\
&&+ \frac{3}{4}\,\sin^2\theta^* \left( \frac{d\Gamma_L}{d q^2} +
\frac{d\widetilde\Gamma_L}{d q^2} + \frac{d\widetilde\Gamma_S}{d q^2} \right)
\nonumber\end{aligned}$$ where $\theta^*$ is the polar angle of the muon pair relative to the original momentum direction of the $J/\psi$. We have included lepton mass effects so that the angular decay distribution in Eq. (\[distr3\]) can be also used for the $\tau$-mode in this decay. The transverse and longitudinal contributions of the $J/\psi$ are given by ($U + \widetilde{U}$) and ($L + \widetilde{L} +\widetilde{S} $), respectively.
One can define an asymmetry parameter $\alpha^*$ by rewriting Eq. (\[distr3\]) in terms of its $\cos^2\theta^*$ dependence, i.e. $d\Gamma \propto 1+\alpha^*\cos^2\theta^*$. The asymmetry parameter can be seen to be given by $$\label{trans-long}
\alpha^*
=\frac{U+\widetilde{U}-2(L+ \widetilde{L}+\widetilde{S})}{U+\widetilde{U}+2(L + \widetilde{L}
+\widetilde{S})}\,.$$ Our predictions for the asymmetry parameter $\alpha^*$ appear in Sec. \[s:NumRes\].
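As a consistency check of Eqs. (\[forward-backward\]) and (\[trans-long\]), both asymmetries can be recomputed directly from the integrated partial helicity rates. The sketch below uses the $B_c\to J/\psi\,\tau\,\nu$ row of Table \[t:hel\] in Sec. \[s:NumRes\].

```python
# B_c -> J/psi tau nu entries of Table (t:hel), in units of 10^{-15} GeV
U, Ut   = 3.59, 0.73       # Gamma_U, tilde Gamma_U
L, Lt   = 2.35, 0.50       # Gamma_L, tilde Gamma_L
P       = 1.74             # Gamma_P
St, SLt = 0.63, 0.31       # tilde Gamma_S, tilde Gamma_SL

den   = U + Ut + L + Lt + St
afb_m = 0.75 * (+P + 4 * SLt) / den     # l^- (upper sign in Eq. (forward-backward))
afb_p = 0.75 * (-P + 4 * SLt) / den     # l^+ (lower sign)
alpha = (U + Ut - 2 * (L + Lt + St)) / (U + Ut + 2 * (L + Lt + St))

print(round(afb_m, 2), round(afb_p, 2), round(alpha, 2))
# -> 0.29 -0.05 -0.23; A_FB reproduces Table (t:AFB), and alpha^* agrees with
#    the tabulated -0.24 up to the rounding of the input rates.
```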
We have only written out single angle decay distributions in this paper. It is not difficult to write down joint angular decay distributions including also azimuthal correlations in our formalism if necessary.
Numerical results {#s:NumRes}
=================
Let us discuss the model parameters and their determination. Since we consider the decay of the $B_c$-meson into charmonium states only, the adjustable parameters are the constituent masses of charm and bottom quarks and the size parameters of the $B_c$-meson and charmonium states. The values of quark masses were determined in our previous studies (see, for example, [@Ivanov:2003ge]) of the leptonic and semileptonic decays of the low-lying pseudoscalar mesons ($\pi$, $K$, $D$, $D_s$, $B$, $B_s$ and $B_c$). The charm and bottom quark masses were found to be $m_c=1.71$ GeV and $m_b=5.12$ GeV. The value of $\Lambda_{bc}=1.96$ GeV was determined from a fit to the world average of the leptonic decay constant $f_{B_c}=360$ MeV. The value of $\Lambda_{J/\psi}=2.62$ GeV was found from a fit to the experimental value of the radiative decay constant $f_{J/\psi}=405$ MeV which enters the $J/\psi\to e^+ e^-$ decay width ($f_{J/\psi}^{\rm expt}=405\pm 17$ MeV).
In our calculation we are using free quark propagators with an effective constituent quark mass (see, Eq. \[local\]). This imposes a very simple yet important constraint on the relations between the masses of the bound state and their constituents. One has to assume that the meson mass $M_H$ is less than the sum of the masses of their constituents
$$\label{conf}
M_H < m_{q_1} + m_{q_2}$$
in order to avoid the appearance of imaginary parts in physical amplitudes, which are described by the one-loop quark diagrams in our approach. This is satisfied for the low-lying pseudoscalar mesons $\pi$, $K$, $D$, $D_s$, $B$, $B_s$, $B_c$ and $\eta_c$ and also for the $J/\psi$ but is no longer true for the excited charmonium states considered here. We shall therefore employ identical masses for all excited charmonium states $m_{cc}=m_{J/\psi}$=3.097 GeV (except for the $\eta_c$) in our matrix element calculations but use physical masses in the phase space calculation. This is quite a reliable approximation because the hyperfine splitting between the excited charmonium states and $J/\psi$ is not large. For example, the maximum relative error is $(m_{\psi(3836)}-m_{J/\psi})/m_{J/\psi}=0.24$.
The size parameters of the excited charmonium states should be determined from a fit to the available experimental data for the two-photon and the radiative decays as was done for the $J/\psi$-meson. However, the calculation of the matrix elements involving two photons will be very time consuming because one has to introduce the electromagnetic field into the nonlocal Lagrangians in Eqs. \[eq:s=0\]-\[eq:s=2\]. This is done by using the path exponential (see our recent papers [@Ivanov:2003ge] and [@Faessler:2003yf]). The gauging of the nonlocal Lagrangian with spin 2 has not yet been done and is a project all of its own. For the time being, we are calculating the widths of the semileptonic decays $B_c\to (\bar cc) +l\nu$ by assuming identical size parameters for all charmonium states, $\Lambda_{cc}=\Lambda_{J/\psi}=2.62$ GeV.
In order to get a quantitative idea about the invariant form factors we list their $q^2_{\rm min}=0$ and $q^2_{\rm max}=(m_1-m_2)^2$ values in Table \[t:formfactors\].
$q^2$ $F_+$ $F_-$
------------- ----------------- ------- -------
$\eta_c$ 0 0.61 -0.32
$q^2_{\rm max}$ 1.14 -0.61
$\chi_{c0}$ 0 0.40 -1.00
$q^2_{\rm max}$ 0.65 -1.63
: \[t:formfactors\] Predictions for the form factors of the $B_c\to (\bar cc)$ transitions.
$q^2$ $A_0$ $A_+$ $A_-$ $V$
------------- ----------------- -------- ------- ------- -------
$J/\psi$ 0 1.64 0.54 -0.95 0.83
$q^2_{\rm max}$ 2.50 0.97 -1.76 1.53
$\chi_{c1}$ 0 -0.064 -0.39 1.52 -1.18
$q^2_{\rm max}$ 0.46 -0.50 2.36 -1.81
$h_c$ 0 0.44 -1.08 0.52 0.25
$q^2_{\rm max}$ 0.54 -1.80 0.89 0.365
: \[t:formfactors\] Predictions for the form factors of the $B_c\to (\bar cc)$ transitions.
$q^2$ $T_1$ $T_2$,GeV$^{-2}$ $T_3$,GeV$^{-2}$ $T_4$,GeV$^{-2}$
-------------- ----------------- ------- ------------------ ------------------ ------------------
$\chi_{c2}$ 0 1.22 -0.011 0.025 -0.021
$q^2_{\rm max}$ 1.69 -0.018 0.040 -0.033
$\psi(3836)$ 0 0.052 0.0071 -0.036 0.026
$q^2_{\rm max}$ 0.35 0.0090 -0.052 0.038
: \[t:formfactors\] Predictions for the form factors of the $B_c\to (\bar cc)$ transitions.
We put our values of the decay rates in Table \[t:widths\] together with those predicted in other papers. A number of calculations are devoted to the $B_c\to \eta_c l\nu$ and $B_c\to J/\psi l\nu$ decays. All of them predict values of the same order of magnitude. A study of the semileptonic decays of the $B_c$-meson into excited charmonium states was done in [@Chang:1992pt; @Chang:2001pm] within an approach which is quite different from our relativistic quark model. Concerning the electron-modes, there is quite good agreement with [@Chang:1992pt; @Chang:2001pm] in the case of $B_c\to J/\psi e\nu$, $\eta_c e\nu$, $\chi_{c2} e\nu$. Our rates are a factor of 1.5 (1.8) larger for $B_c\to \chi_{c0} e\nu$, $h_c e\nu$ decays and our rate is a factor 1.6 smaller for $B_c\to\chi_{c1} e\nu$ decay. Concerning the $\tau$-modes, there is quite good agreement with [@Chang:1992pt; @Chang:2001pm] in the case of $B_c\to\chi_{c0}\tau\nu$, $h_c\tau\nu$ decays but our rates are almost a factor of two smaller for the other modes $B_c\to\chi_{c1}\tau\nu$, $\chi_{c2}\tau\nu$. The partial rate for $B_c\to J/\psi+l+\nu$ is the largest. The partial rates into the P-wave charmonium states are all of the same order of magnitude and are predicted to occur at $\sim 10\%$ of the most prominent decay $B_c\to J/\psi+l+\nu$. The decays of the $B_c$ into the D-wave charmonium state are suppressed. The $\tau$–modes are generally down by a factor of $\sim 10$ compared to the $e$–modes except for the transitions $B_c\to\eta_c$, $B_c\to J/\psi$ and $B_c\to \psi(3836)$ where the $\tau$–modes are smaller only by a factor of $\sim 3-4$.
In Table \[t:hel\] we list our results for the integrated partial helicity rates $\Gamma_i$ $(i=U,L,P,\widetilde{U}, \widetilde{L}, \widetilde{S},\widetilde{SL})$. They are needed for the calculation of the forward-backward asymmetry parameter $A_{FB}$ and, in the case of the decay $B_c\to J/\psi+l+\nu$, for the calculation of the asymmetry parameter $\alpha^*$ determining the transverse/longitudinal composition of the $J/\psi$ in the decay. The partial “tilde” rates $\widetilde{\Gamma}_i$ are quite tiny for the $e$–mode as expected from Eq. (\[partialrates\]) but are not negligible for the $\tau$–modes. This shows up in the calculated values for $A_{FB}$ in Table \[t:AFB\]. For the decays into spin $0$ states $A_{FB}$ is proportional to $\widetilde{SL}$ and thus tiny for the $e$–mode but nonnegligible for the $\tau$–modes. For the decay into the other spin states one has $A_{FB}(e^-)= - A_{FB}(e^+)$ but $A_{FB}(\tau^-)\ne - A_{FB}(\tau^+)$ as can easily be appreciated by looking at Eq. (\[forward-backward\]). The forward-backward asymmetry can amount up to 40 %. The transverse and the longitudinal pieces of the $J/\psi$ in the decay $B_c\to J/\psi+l+\nu$ are almost equal for both the $e$– and the $\tau$–modes (see Table \[t:hel\]). According to Eq. (\[trans-long\]) this implies that the asymmetry parameter $\alpha^*$ should be close to $- 33 \%$ as is indeed the case as the entries in Table \[t:AFB\] show. For the other two modes involving spin 1 charmonium states the transverse/longitudinal population is quite different. For the transition $B_c\to \chi_{c1}$ the transverse mode dominates by a factor of $\sim 3$ for both the $e$– and $\tau$–modes whereas for the transition $B_c\to h_c$ the longitudinal mode dominates by a factor of $\sim 13$ and $\sim 7$ for the $e$– and $\tau$–modes, respectively.
Taking the central value of the CDF lifetime measurement $\tau(B_c)=0.46
\cdot 10^{-12}$ s [@CDF] and our predictions for the rates into the different charmonium states one finds branching fractions of $\sim 2 \%$ and $\sim 0.7 \%$ for the decays into the two $S$–wave charmonium states $J/\psi$ and $\eta_c$, respectively, and branching fractions of $\sim 0.2 \%$ for the decays into the $P$–wave charmonium states. Considering the fact that there will be a yield of up to $10^{10}$ $B_c$ mesons per year at the Tevatron and the LHC, the semileptonic decays of the $B_c$ mesons into charmonium states studied in this paper offer a fascinating area of future research.
This model [@Chang:1992pt; @Chang:2001pm] [@AMV] [@AKNT] [@KLO] [@Colangelo] [@Ebert:2003cn]
--------------------------------- ------------ -------------------------------- -------- ------------- ---------- -------------- -----------------
$B_c\to\eta_c\, e\,\nu$ 10.7 14.2 11.1 8.6 11$\pm$1 2.1 (6.9) 5.9
$B_c\to\eta_c\,\tau\,\nu$ 3.52 3.3$\pm$0.9
$B_c\to J/\psi\, e\,\nu$ 28.2 34.4 30.2 17.2 28$\pm$5 21.6 (48.3) 17.7
$B_c\to J/\psi \,\tau\,\nu$ 7.82 7$\pm$2
$B_c\to \chi_{c0}\, e\,\nu$ 2.52 1.686
$B_c\to\chi_{c0} \,\tau\,\nu$ 0.26 0.249
$B_c\to \chi_{c1} \, e\,\nu$ 1.40 2.206
$B_c\to \chi_{c1} \,\tau\,\nu$ 0.17 0.346
$B_c\to h_c\, e\,\nu $ 4.42 2.509
$B_c\to h_c\, \tau\,\nu$ 0.38 0.356
$B_c\to \chi_{c2}\, e\,\nu $ 2.92 2.732
$B_c\to \chi_{c2}\, \tau\,\nu$ 0.20 0.422
$B_c\to \psi(3836)\, e\,\nu $ 0.13
$B_c\to \psi(3836)\, \tau\,\nu$ 0.0031
: \[t:widths\] Semileptonic decay rates in units of $10^{-15}$ GeV. We use $|V_{cb}|= 0.04$.
Mode $U$ $\widetilde U$ $L$ $\widetilde L$ $P$ $\widetilde S$ $\widetilde {SL}$
--------------------------------- ----------------- ----------------- ----------------- ----------------- ------------------ ----------------- -------------------
$B_c\to\eta_c\, e\,\nu$ 0 0 10.7 $0.34\,10^{-5}$ 0 $0.11\,10^{-4}$ $0.34\,10^{-5}$
$B_c\to\eta_c\,\tau\,\nu$ 0 0 1.18 0.27 0 2.06 0.42
$B_c\to J/\psi\, e\,\nu$ 14.0 $3.8\,10^{-7}$ 14.2 $0.31\,10^{-5}$ 8.00 $0.88\,10^{-5}$ $0.30\,10^{-5}$
$B_c\to J/\psi \,\tau\,\nu$ 3.59 0.73 2.35 0.50 1.74 0.63 0.31
$B_c\to \chi_{c0}\, e\,\nu$ 0 0 2.52 $0.11\,10^{-5}$ 0 $0.32\,10^{-5}$ $0.11\,10^{-5}$
$B_c\to\chi_{c0} \,\tau\,\nu$ 0 0 0.13 0.037 0 0.089 0.033
$B_c\to \chi_{c1} \, e\,\nu$ 1.08 $0.51\,10^{-7}$ 0.33 $0.11\,10^{-6}$ -0.35 $0.32\,10^{-6}$ $0.11\,10^{-6}$
$B_c\to \chi_{c1} \,\tau\,\nu$ 0.11 0.029 0.024 0.0064 -0.066 $0.42\,10^{-2}$ $0.29\,10^{-2}$
$B_c\to h_c\, e\,\nu $ 0.33 $0.13\,10^{-7}$ 4.10 $0.24\,10^{-5}$ 0.21 $0.72\,10^{-5}$ $0.24\,10^{-5}$
$B_c\to h_c\, \tau\,\nu$ 0.045 0.012 0.13 0.037 0.023 0.15 0.044
$B_c\to \chi_{c2}\, e\,\nu $ 0.98 $0.53\,10^{-7}$ 1.93 $0.10\,10^{-5}$ 0.62 $0.30\,10^{-5}$ $0.99\,10^{-6}$
$B_c\to \chi_{c2}\, \tau\,\nu$ 0.073 0.021 0.066 0.019 0.036 0.025 0.012
$B_c\to \psi(3836)\, e\,\nu $ 0.075 $0.06\,10^{-7}$ 0.052 $0.36\,10^{-7}$ -0.036 $0.10\,10^{-6}$ $0.35\,10^{-7}$
$B_c\to \psi(3836)\, \tau\,\nu$ $0.16\,10^{-2}$ $0.53\,10^{-3}$ $0.66\,10^{-3}$ $0.22\,10^{-3}$ $-0.13\,10^{-2}$ $0.16\,10^{-3}$ $0.11\,10^{-3}$
: \[t:hel\] Partial helicity rates in units of $10^{-15}$ GeV.
Mode $A_{FB}(l^-)$ $A_{FB}(l^+)$ $\alpha^\ast$
--------------------------------- ----------------- ----------------- ---------------
$B_c\to\eta_c\, e\,\nu$ $9.64\,10^{-6}$ $9.64\,10^{-6}$ -
$B_c\to\eta_c\,\tau\,\nu$ 0.36 0.36 -
$B_c\to J/\psi\, e\,\nu$ 0.21 -0.21 -0.34
$B_c\to J/\psi \,\tau\,\nu$ 0.29 -0.05 -0.24
$B_c\to \chi_{c0}\, e\,\nu$ $1.29\,10^{-6}$ $1.29\,10^{-6}$ -
$B_c\to\chi_{c0} \,\tau\,\nu$ 0.38 0.38 -
$B_c\to \chi_{c1} \, e\,\nu$ -0.19 0.19 -
$B_c\to \chi_{c1} \,\tau\,\nu$ -0.24 0.34 -
$B_c\to h_c\, e\,\nu $ 0.036 -0.036 -
$B_c\to h_c\, \tau\,\nu$ 0.39 0.30 -
$B_c\to \chi_{c2}\, e\,\nu $ 0.16 -0.16 -
$B_c\to \chi_{c2}\, \tau\,\nu$ 0.32 0.05 -
$B_c\to \psi(3836)\, e\,\nu $ -0.21 0.21 -0.17
$B_c\to \psi(3836)\, \tau\,\nu$ -0.21 0.41 0.006
: \[t:AFB\] Forward-backward asymmetry $A_{FB}$ and the asymmetry parameter $\alpha^\ast$.
Acknowledgments {#acknowledgments .unnumbered}
===============
M.A.I. appreciates the partial support by the DFG (Germany) under the grant 436RUS17/26/04, the Heisenberg-Landau Program and the Russian Fund of Basic Research (Grant No.04-02-17370).
Appendix: Convention for Dirac $\gamma$-matrices and the antisymmetric tensor in Minkowski space {#appendix-convention-for-dirac-gamma-matrices-and-the-antisymmetric-tensor-in-minkowski-space .unnumbered}
================================================================================================
We use the conventions of Bjorken-Drell. Thus we define the metric tensor and the totally antisymmetric $\varepsilon$-tensor in Minkowski space by $g^{\mu\nu}=g_{\mu\nu}={\rm diag}(+,-,-,-)$ and $\varepsilon_{0123}=
-\varepsilon^{0123}= 1$. For the partial and full contractions of a pair of $\varepsilon$-tensors one finds $$\begin{aligned}
\varepsilon_{\mu_1\mu_2\mu_3\mu_4}\varepsilon^{\nu_1\nu_2\nu_3\mu_4}&=&
-g_{\mu_1}^{\nu_1}g_{\mu_2}^{\nu_2}g_{\mu_3}^{\nu_3}
-g_{\mu_1}^{\nu_2}g_{\mu_2}^{\nu_3}g_{\mu_3}^{\nu_1}
-g_{\mu_1}^{\nu_3}g_{\mu_2}^{\nu_1}g_{\mu_3}^{\nu_2}
\\
&&
+g_{\mu_1}^{\nu_1}g_{\mu_2}^{\nu_3}g_{\mu_3}^{\nu_2}
+g_{\mu_1}^{\nu_2}g_{\mu_2}^{\nu_1}g_{\mu_3}^{\nu_3}
+g_{\mu_1}^{\nu_3}g_{\mu_2}^{\nu_2}g_{\mu_3}^{\nu_1}
\\
\varepsilon_{\mu_1\mu_2\mu_3\mu_4}\varepsilon^{\nu_1\nu_2\mu_3\mu_4}&=&
-\,2\,(g_{\mu_1}^{\nu_1}g_{\mu_2}^{\nu_2}
- g_{\mu_1}^{\nu_2}g_{\mu_2}^{\nu_1})
\\
\varepsilon_{\mu_1\mu_2\mu_3\mu_4}\varepsilon^{\nu_1\mu_2\mu_3\mu_4} &=&
- 6\,g_{\mu_1}^{\nu_1}
\\
\varepsilon_{\mu_1\mu_2\mu_3\mu_4}\varepsilon^{\mu_1\mu_2\mu_3\mu_4} &=& -24\end{aligned}$$ We employ the following definition of the $\gamma^5$-matrix $$\begin{aligned}
&&
\gamma^5 = \gamma_5= i\, \gamma^0\gamma^1\gamma^2\gamma^3
=\frac{i}{24}\varepsilon_{\mu_1\mu_2\mu_3\mu_4}\,
\gamma^{\mu_1}\gamma^{\mu_2}\gamma^{\mu_3}\gamma^{\mu_4}=
\left(
\begin{array}{lr}
0 & I \\
I & 0 \\
\end{array}
\right),
\\
&&\\
&&
{\rm Tr}
\left(\gamma_5\gamma^{\mu_1}\gamma^{\mu_2}\gamma^{\mu_3}\gamma^{\mu_4}\right)
= 4\,i\,\varepsilon^{\mu_1\mu_2\mu_3\mu_4}.\end{aligned}$$ The leptons with negative charge ($l=e^-,\,\mu^-,\,\tau^-$) are referred to as “leptons” whereas the positively charged leptons $\bar l=e^+,\,\mu^+,\,\tau^+$ are referred to as “antileptons”.
[40]{} F. Abe [*et al.*]{} \[CDF Collaboration\], Phys. Rev. D [**58**]{}, 112004 (1998) \[arXiv:hep-ex/9804014\]; F. Abe [*et al.*]{} \[CDF Collaboration\], Phys. Rev. Lett. [**81**]{}, 2432 (1998) \[arXiv:hep-ex/9805034\]. Invited talk by D. Lucchesi at ICHEP04, Beijing, 2004 (to be published). See also, S. Towers, D0Note 4539-CONF, August 13th 2004. S. D’Auria, Talk at the “Fermilab Joint Experimental–Theoretical Seminar”, Dec. 10th 2004. S. S. Gershtein, V. V. Kiselev, A. K. Likhoded, A. V. Tkabladze, A. V. Berezhnoy and A. I. Onishchenko, “Theoretical status of the B(c) meson,” Published in “Progress in heavy quark physics. Proceedings, 4th International Workshop, Rostock, Germany, September 20-22, 1997,” Edited by M. Beyer, T. Mannel and H. Schroder (Rostock, Germany: Univ. Rostock (1998) 272 p). M. Lusignoli and M. Masetti, Z. Phys. C [**51**]{} (1991) 549. C. H. Chang and Y. Q. Chen, Phys. Rev. D [**49**]{}, 3399 (1994). A. Abd El-Hady, J. H. Mu$\tilde {{\rm n}}$oz and J. P. Vary, Phys. Rev. D [**62**]{}, 014019 (2000) \[arXiv:hep-ph/9909406\]; J. F. Liu and K. T. Chao, Phys. Rev. D [**56**]{}, 4133 (1997). A. Y. Anisimov, P. Y. Kulikov, I. M. Narodetsky and K. A. Ter-Martirosian, Phys. Atom. Nucl. [**62**]{}, 1739 (1999) \[Yad. Fiz. [**62**]{}, 1868 (1999)\] \[arXiv:hep-ph/9809249\]. V. V. Kiselev, A. K. Likhoded and A. I. Onishchenko, Nucl. Phys. B [**569**]{}, 473 (2000) \[arXiv:hep-ph/9905359\]. V. V. Kiselev, A. E. Kovalsky and A. K. Likhoded, Nucl. Phys. B [**585**]{}, 353 (2000) \[arXiv:hep-ph/0002127\]. E. Jenkins, M. E. Luke, A. V. Manohar and M. J. Savage, Nucl. Phys. B [**390**]{}, 463 (1993) \[arXiv:hep-ph/9204238\]. P. Colangelo and F. De Fazio, Phys. Rev. D [**61**]{}, 034012 (2000) \[arXiv:hep-ph/9909423\]. M. A. Ivanov, J. G. Körner and P. Santorelli, Phys. Rev. D [**63**]{}, 074010 (2001) \[arXiv:hep-ph/0007169\]. A. Salam, Nuovo Cim. [**25**]{}, 224 (1962); S. Weinberg, Phys. Rev. [**130**]{}, 776 (1963); K. Hayashi [*et al.*]{}, Fort. der Phys. [**15**]{}, 625 (1967). G.V. Efimov and M.A. Ivanov, Int. J. Mod. Phys. [**A4**]{}, 2031 (1989); G.V. Efimov and M.A. Ivanov, “[*The Quark Confinement Model of Hadrons*]{}", IOP Publishing, 1993. A. Faessler, T. Gutsche, M. A. Ivanov, J. G. Korner and V. E. Lyubovitskij, Eur. Phys. J. directC [**4**]{}, 18 (2002) \[arXiv:hep-ph/0205287\]. A. Faessler, T. Gutsche, M. A. Ivanov, V. E. Lyubovitskij and P. Wang, Phys. Rev. D [**68**]{}, 014011 (2003) \[arXiv:hep-ph/0304031\]. M. A. Ivanov, J. G. Körner and O. N. Pakhomova, Phys. Lett. B [**555**]{}, 189 (2003) \[arXiv:hep-ph/0212291\]. M. Masetti, Phys. Lett. B [**286**]{}, 160 (1992). R. Fleischer and D. Wyler, Phys. Rev. D [**62**]{}, 057503 (2000) \[arXiv:hep-ph/0004010\]. C. H. Chang, Y. Q. Chen, G. L. Wang and H. S. Zong, Phys. Rev. D [**65**]{}, 014017 (2002) \[arXiv:hep-ph/0103036\]. V. V. Kiselev, O. N. Pakhomova and V. A. Saleev, J. Phys. G [**28**]{}, 595 (2002) \[arXiv:hep-ph/0110180\]. D. Ebert, R. N. Faustov and V. O. Galkin, Phys. Rev. D [**68**]{}, 094020 (2003) \[arXiv:hep-ph/0306306\]. J. G. Korner, F. Krajewski and A. A. Pivovarov, Phys. Rev. D [**63**]{}, 036001 (2001) \[arXiv:hep-ph/0002166\]. S. Eidelman [*et al.*]{} \[Particle Data Group Collaboration\], Phys. Lett. B [**592**]{}, 1 (2004). M. A. Ivanov, Y. L. Kalinovsky and C. D. Roberts, Phys. Rev. D [**60**]{}, 034018 (1999) \[arXiv:nucl-th/9812063\]. J. A. M. Vermaseren, arXiv:math-ph/0010025. M. A. Ivanov, J. G. Korner and P. Santorelli, Phys. Rev. 
D [**70**]{}, 014005 (2004) \[arXiv:hep-ph/0311300\]. J. G. Körner and G. A. Schuler, Z. Phys. C [**46**]{}, 93 (1990). J. G. Körner, J. H. Kühn and H. Schneider, Phys. Lett. B [**120**]{}, 444 (1983). J. G. Körner, J. H. Kühn, M. Krammer and H. Schneider, Nucl. Phys. B [**229**]{}, 115 (1983).
|
---
abstract: 'Ranked data appear in many different applications, including voting and consumer surveys. A situation often arises in which the data are only partially ranked. Partially ranked data can be regarded as missing data. This paper addresses parameter estimation for partially ranked data under a (possibly) non-ignorable missing mechanism. We propose estimators for both the complete ranking distribution and the missing mechanism, together with a simple estimation procedure. Our estimation procedure leverages a graph regularization in conjunction with the Expectation-Maximization algorithm, and comes with theoretical convergence guarantees. We reduce modeling bias by allowing a non-ignorable missing mechanism, and we control the inherent complexity of such a mechanism by introducing a graph regularization. The experimental results demonstrate that the proposed estimators work well under non-ignorable missing mechanisms.'
address: |
Department of Mathematical Informatics,\
Graduate School of Information Science and Technology,\
The University of Tokyo
author:
- 'Kento Nakamura, Keisuke Yano,'
- Fumiyasu Komaki
bibliography:
- 'NakamuraYanoKomaki.bib'
date: '.'
title: |
Learning partially ranked data\
based on graph regularization
---
Introduction
============
Data commonly come in the form of rankings in preference surveys such as voting and consumer surveys. Asking people to rearrange items according to their preference, we obtain a collection of rankings. Several methods for ranked data have been proposed: [@mallows1957non] proposed a parametric model, now called the Mallows model, and [@diaconis1989generalization] developed a spectral analysis for ranked data. Recently, the analysis of ranked data has gathered much attention in the machine learning community (see [@liu2011learning; @furnkranz2011preference]). See Section \[section: literature\] for more details.
Partially ranked data is often observed in real data analysis. This is because one does not necessarily express his or her preference completely; for example, according to the election records of the American Psychological Association collected in 1980, one-third of the ballots provided full preferences for five candidates, and the rest provided only top-$t$ preferences with $t=1,2,3$ (see Section 2A in [@diaconis1989generalization]). Data are also commonly partially ranked in movie ratings because respondents usually know only a few movie titles among a vast number of movies. Therefore, analyzing partially ranked data efficiently extends the range of application of statistical methods for ranked data.
Partially ranked data can be thought of as missing data. We can naturally consider that there exists a latent complete ranking behind a partial ranking, as discussed in [@lebanon2008non]. The existing studies for partially ranked data make the Missing-At-Random (MAR) assumption, that is, the assumption that the missing mechanism generating partially ranked data is ignorable: under the MAR assumption, [@busse2007cluster] and [@meilua2010dirichlet] leverage an extended distance for partially ranked data, and [@lu2011learning] introduces a probability model for partially ranked data. However, an improper application of the MAR assumption may lead to a relatively large estimation error, as argued in the literature on missing data analysis ([@little2014statistical]). In the statistical sense, if the missing mechanism is non-ignorable, using the MAR assumption is equivalent to using a misspecified likelihood function, which causes significantly biased parameter estimation and prediction. In fact, [@marlin2009collaborative] points out that the MAR assumption is violated in music rankings.
This paper addresses learning the distribution of complete and partial rankings based on partially ranked data under a (possibly) non-ignorable missing mechanism. Our approach includes estimating a missing mechanism. However, estimating a missing mechanism has an intrinsic difficulty. Consider a top-$t$ ranking of $r$ items. Length $t$ characterizes the missing pattern generating a top-$t$ ranking from a complete ranking with $r$ items. Fully parameterizing the missing mechanism requires $r!(r-2)$ free parameters, since the missing mechanism is modeled by $r!$ multinomial distributions with $r-1$ categories each. Note that the number of complete rankings is $r!$. Such a large number of parameters causes over-fitting, especially when the sample size is small. To avoid over-fitting, we introduce an estimation method leveraging the recent graph regularization technique ([@hallac2015network]) together with the Expectation-Maximization (EM) algorithm. The numerical experiments using simulation data as well as applications to real data indicate that our proposed estimation method works well especially under non-ignorable missing mechanisms.
Contribution
------------
In this paper, we propose estimators for the distribution of a latent complete ranking and for a missing mechanism. To this end, we employ both a latent variable model and a recently developed graph regularization. Our proposal has two merits: First, we allow a missing mechanism to be non-ignorable by fully parameterizing it. Second, we reduce over-fitting due to the complexity of missing mechanisms by exploiting a graph regularization method.
![ A latent structure behind partially ranked data when the number of items is four: A ranking is expressed as a list of ranked items. The number located at the $i$-th position of a list represents the label of the $i$-th preference. The top layer shows latent complete rankings with a graph structure. A vertex in the top layer corresponds to a latent complete ranking. An edge in the top layer is endowed with the distance between the complete rankings it connects. The bottom three layers show partial rankings generated according to missing mechanisms. An arrow from the top layer to the bottom three layers corresponds to a missing pattern. The probabilities on the arrows from a complete ranking to the resulting partial rankings correspond to the missing mechanism. []{data-label="fig:concept"}](concept.pdf){width="7cm"}
Our ideas for the construction of the estimators are two-fold. First, we work with a latent structure behind partially ranked data (see Figure \[fig:concept\]). This structure consists of the graph representing complete rankings (in the top layer) and arrows representing missing patterns. In this structure, a vertex in the top layer represents a latent complete ranking; an edge is endowed with the distance between complete rankings; an arrow from the top layer to the bottom layers represents a missing pattern; and a multinomial distribution on the arrows from a complete ranking corresponds to a missing mechanism. Second, we assume that two missing mechanisms become more similar as the associated complete rankings get closer to each other on the graph (in the top layer). These ideas are implemented via the graph regularization method ([@hallac2015network]) under the probability simplex constraint, in combination with the EM algorithm. In addition, we discuss the convergence properties of the proposed method.
The simulation studies as well as applications to real data demonstrate that the proposed method improves on the existing methods under non-ignorable missing mechanisms, and the performance of the proposed method is comparable to those of the existing methods under the MAR assumption.
Literature review {#section: literature}
-----------------
The literature on inference for ranking-related data with (non-ignorable) missing data is relatively scarce. [@Marlin:2007:CFM:3020488.3020521] points out that the MAR assumption does not hold in the context of collaborative filtering. [@Marlin:2007:CFM:3020488.3020521] and [@marlin2009collaborative] propose two estimators based on missing mechanisms. These estimators show higher performance than estimators ignoring the missing mechanism, both in predicting ratings and in suggesting top-$t$ ranked items to users. Using the Plackett-Luce and the Mallows models, [@fahandar2017statistical] introduces a rank-dependent coarsening model for pairwise ranking data. Our study differs from these studies in the type of ranking-related data considered: [@marlin2009collaborative] and [@Marlin:2007:CFM:3020488.3020521] discuss rating data, [@fahandar2017statistical] discusses pairwise ranking data, and this study discusses partially ranked data.
Several methods have been proposed for estimating distributions of partially ranked data ([@beckett1993maximum; @busse2007cluster; @meilua2010dirichlet; @lebanon2008non; @jacques2014model; @caron2014bayesian]). These methods regard partially ranked data as missing data. [@beckett1993maximum] discusses imputing items at the missing positions of a partial ranking by employing the EM algorithm. [@busse2007cluster] and [@meilua2010dirichlet] discuss the clustering of top-$t$ rankings by the existing ranking distances for top-$t$ rankings. [@lebanon2008non] proposes a non-parametric model together with a computationally efficient estimation method for partially ranked data. For the proposal, [@lebanon2008non] exploits the algebraic structure of partial rankings and utilizes the Mallows distribution as a smoothing kernel. [@jacques2014model] proposes a clustering algorithm for multivariate partially ranked data. [@caron2014bayesian] discusses Bayesian non-parametric inferences of top-$t$ rankings on the basis of the Plackett-Luce model. [@caron2014bayesian] does not explicitly rely on the framework that regards partially ranked data as the result of missing data; however, the model discussed in [@caron2014bayesian] is equivalent to that under the MAR assumption. Overall, all previous studies rely on the MAR assumption, whereas our study is the first attempt to estimate the distribution of partially ranked data with a (possibly) non-ignorable missing mechanism.
We work with the graph regularization framework called Network Lasso ([@hallac2015network]). Network Lasso employs the alternating direction method of multipliers (ADMM; see [@boyd2011distributed]) to solve a wide range of regularization-related optimization problems of a graph signal that cannot be solved efficiently by generic optimization solvers. In addition, Network Lasso has a desirable convergence property, cooperates with distributed processing systems, and has been applied to various optimization problems on a graph. We present an application of Network Lasso to missing data analysis. In the application, we coordinate Network Lasso with the probability simplex constraint and the EM algorithm.
Organization
------------
The rest of this paper is organized as follows. Section 2 formulates a probabilistic model of partially ranked data based on a missing mechanism, and introduces a distance-based graph structure for a complete ranking. Section 3 proposes the regularized estimator for both a latent complete ranking and a missing mechanism. We also discuss the convergence properties of the proposed estimation procedure. Section 4 demonstrates the results of simulation studies and real data analysis. Section 5 concludes the paper. The concrete algorithm of the proposed estimation procedure and the proof of the convergence property are provided in Appendices A and B, respectively.
Preliminaries
=============
Notation {#section: notation}
--------
We begin by introducing notation for analyzing partially ranked data. In this paper, we identify a complete ranking of $r$ items $\{1,\ldots,r\}$ with a permutation that maps each item $i\in\{1,\ldots,r\}$ uniquely to a corresponding rank in $\{1,\ldots,r\}$. A top-$t$ ranking is a list of $t$ items out of the $r$ items. We identify a top-$t$ ranking with a permutation that maps each item in a subset of $t$ items uniquely to a corresponding rank in $\{1,\ldots,t\}$.
We denote by $S_{r}$ the collection of all complete rankings of $r$ items. We denote by $\overline{S}_r$ the collection of all top-$t$ rankings with $t$ running through $t=1,\ldots,r-1$. We denote by $t(\tau)$ the length of a partial ranking $\tau\in\overline{S}_{r}$. A tuple of a complete ranking $\pi$ and length $t$ uniquely determines a top-$t$ ranking $\tau$: $\pi^{-1}(i)=\tau^{-1}(i)$ for $i=1,\ldots,t$, where $\pi^{-1}$ and $\tau^{-1}$ denote the inverses of $\pi$ and $\tau$, respectively. We define the collection of complete rankings compatible with a given ranking $\tau\in \overline{S}_{r}$ as $$[\tau] = \{\pi\in S_r \mid \pi^{-1}(i) = \tau^{-1}(i)\ (i=1,\ldots,t(\tau))\}.$$
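To make the notation concrete, the set $[\tau]$ can be enumerated explicitly. The following Python sketch is purely illustrative (the representation of a ranking as its list of ranked items follows Figure \[fig:concept\]; the function name is our own).

```python
from itertools import permutations

def compatible_completions(tau, items):
    """All complete rankings pi in [tau], i.e. permutations whose first t(tau)
    positions agree with the observed top-t ranking tau.  A ranking is a tuple
    of items ordered by rank, e.g. tau = (2, 4) puts item 2 first, item 4 second."""
    rest = [i for i in items if i not in tau]
    return [tuple(tau) + p for p in permutations(rest)]

print(compatible_completions((2, 4), items=[1, 2, 3, 4]))
# -> [(2, 4, 1, 3), (2, 4, 3, 1)]
```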
Partial rankings and a missing mechanism {#section: missing}
----------------------------------------
Next we introduce notation and terminology for a probabilistic model with a missing mechanism.
A probabilistic model for top-$t$ ranked data with a general missing mechanism consists of a probabilistic model generating complete rankings and a model of the missing mechanism. The joint probability of a complete ranking and a missing pattern is decomposed as $$P(t,\pi)=P(t\mid \pi)P(\pi),$$ where $P(\pi)$ determines how a complete ranking is generated, and $P(t \mid \pi)$ specifies a missing pattern conditioned on the latent complete ranking $\pi$. Then, the probability of a top-$t$ ranking $\tau\in\overline{S}_{r}$ is obtained by marginalizing a latent complete ranking out: $$P(\tau)= \sum_{\pi\in[\tau]}P(t(\tau)\mid \pi)P(\pi).$$ Now distributions $P(t\mid\pi)$ and $P(\pi)$ are parameterized by $\phi$ and $\theta$, respectively; hence $P(\tau;\theta,\phi)=\sum_{\pi\in[\tau]}P(t\mid\pi;\phi)P(\pi;\theta)$. We call $\{P(\pi;\theta):\theta\in\Theta\}$ with a parameter space $\Theta$ a complete ranking model and $\{P(t\mid\pi;\phi):\phi\in\Phi\}$ with a parameter space $\Phi$ a missing model. We call $\theta$ a complete ranking parameter and $\phi$ a missing parameter. Given the i.i.d. observations $\tau_{(n)}=\{\tau_{1},\ldots,\tau_{n} \in \overline{S}_{r}\}$, we denote the negative log-likelihood function by $$\begin{aligned}
L(\theta,\phi;\tau_{(n)}) := -\sum_{i=1}^n \log\left[\sum_{\pi\in[\tau_i]}P(t(\tau_i)\mid \pi;\phi)P(\pi;\theta)\right]. \label{likelihood}\end{aligned}$$
Distances, graph structures, and distributions on complete rankings {#section: distance}
-------------------------------------------------------------------
We end this section with introducing distances, graph structures, and distributions on complete rankings.
We endow the class $S_{r}$ of complete rankings with a distance structure as follows. Since we identify the class $S_{r}$ with the class of permutations, we endow $S_{r}$ with the symmetric group structure and its group law $\circ$. Using this identification, we leverage distances on symmetric groups for distances on $S_{r}$. There are many distances on $S_{r}$, such as the Kendall distance, the Spearman rank correlation metric, and the Hamming distance. Among these, the Kendall distance ([@kendall1938new]) has often been used in statistics and machine learning. The Kendall distance between two complete rankings $\pi_1$ and $\pi_2$, $d(\pi_{1},\pi_{2})$, is defined as $$d(\pi_{1},\pi_{2}):= \min_{{\widetilde}{d}}\{{\widetilde}{d}: \pi_{2}=a_{{\widetilde}{d}}\circ\ldots\circ a_{1}\circ \pi_{1},\ a_{1},\ldots,a_{{\widetilde}{d}}\in\mathcal{A}\},$$ where $\mathcal{A}$ is the whole class of adjacent transpositions. This distance is suitable for describing similarity between preferences because the transform of a complete ranking by a single adjacent transposition is just the exchange of the $i$-th and $(i+1)$-th preferences. In this paper, we focus on the Kendall distance $d$ as a distance on $S_{r}$.
There exists a one-to-one correspondence between the distance structure with the Kendall distance and a graph structure. Set the vertex set $V=S_r$ and set the edge set $E=\{\{\pi,\pi'\}:\text{ there exists } b\in\mathcal{A} \text{ such that }\pi=b\circ \pi'\}$. Then the distance structure $(S_{r},d)$ corresponds one-to-one to the graph $G=(V,E)$, since the Kendall distance $d(\pi,\pi')$ is the minimum path length in $G$ between the vertices corresponding to $\pi$ and $\pi'$. Remark that $E$ can be rewritten as $E=\{\{\pi,\pi'\}: d(\pi,\pi')=1 \}$.
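For small $r$, the Kendall distance and the induced graph $G=(V,E)$ can be computed directly; the sketch below (illustrative only, with the same ranking representation as in the previous sketch) counts discordant item pairs, which equals the minimum number of adjacent transpositions.

```python
from itertools import combinations, permutations

def kendall_distance(pi1, pi2):
    """Kendall distance: number of item pairs ranked in opposite order."""
    rank1 = {item: r for r, item in enumerate(pi1)}
    rank2 = {item: r for r, item in enumerate(pi2)}
    return sum((rank1[a] - rank1[b]) * (rank2[a] - rank2[b]) < 0
               for a, b in combinations(pi1, 2))

# the graph G = (V, E) induced by the Kendall distance on S_r
r = 4
V = list(permutations(range(1, r + 1)))
E = [frozenset({p, q}) for p, q in combinations(V, 2) if kendall_distance(p, q) == 1]
```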
We introduce a well-known probabilistic model for complete rankings. The Mallows model ([@mallows1957non]) is one of the most popular probabilistic models for complete rankings. The Mallows model associated with the Kendall distance is defined as $$\left\{P(\pi;\sigma,c) = \frac{\exp\{-c d(\pi,\sigma)\}}{Z(c)}: c>0,\sigma\in S_{r}\right\},$$ where $\sigma$ is a location parameter indicating a representative ranking, $c$ is a concentration parameter indicating a decay rate, and $Z(c)=\sum_{\pi\in S_r}\exp\{-c d(\pi,\sigma)\}$ is a normalizing constant that depends only on $c$. The mixture model of $K\in\mathbb{N}$ Mallows distributions is defined as $$\begin{aligned}
\left\{P(\pi;\bm{c},\bm{\sigma},\bm{w}) = \sum_{k=1}^K w_{k} \frac{\exp\{-c_k d(\pi,\sigma_k)\}}{Z(c_{k})}: c_{k}>0,\ \sigma_{k}\in S_{r},\ w_{k}> 0,\ \sum_{k=1}^{K}w_{k}=1\right\} \label{mallows_mixture}\end{aligned}$$ where $\bm{c} = \{c_k\}_k,\bm{\sigma} = \{\sigma_k\}_k,\bm{w} = \{w_k\}_k$ represent the parameters of each mixture component. The Mallows mixture model has been used for estimation and clustering analysis of ranked data ([@murphy2003mixtures; @busse2007cluster]).
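An illustrative implementation of the Mallows and Mallows mixture probabilities, reusing `kendall_distance` and `V` from the sketch above and using brute-force normalization (adequate only for small $r$), reads as follows.

```python
import numpy as np

def mallows_pmf(V, sigma, c):
    """Mallows distribution on S_r (normalizing constant Z(c) computed by summation)."""
    w = np.array([np.exp(-c * kendall_distance(pi, sigma)) for pi in V])
    return dict(zip(V, w / w.sum()))

def mallows_mixture_pmf(V, sigmas, cs, weights):
    """Mixture of K Mallows distributions, Eq. (mallows_mixture)."""
    comps = [mallows_pmf(V, s, c) for s, c in zip(sigmas, cs)]
    return {pi: sum(w * comp[pi] for w, comp in zip(weights, comps)) for pi in V}

p = mallows_mixture_pmf(V, sigmas=[(1, 2, 3, 4), (4, 3, 2, 1)],
                        cs=[1.0, 0.5], weights=[0.7, 0.3])
assert abs(sum(p.values()) - 1.0) < 1e-12
```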
Proposed Method {#section: method}
===============
In this section, we propose estimators for both complete ranking and missing models together with a simple estimation procedure. Here we assume that the parameterization of the complete ranking and missing models is separable, that is, $\theta$ and $\phi$ are distinct. We use the following missing model $\{P(t\mid \pi; \phi):\phi\in\Phi\}$ to allow a non-ignorable missing mechanism: $$\begin{aligned}
P(t\mid \pi; \phi) &= \phi_{\pi, t} \text{ for } t\in\{1,\ldots,r-1\} \text{ and } \pi\in S_{r},\\
\Phi&=\left\{\phi \in \mathbb{R}^{r!(r-1)}: \sum_{t=1}^{r-1}\phi_{\pi, t} = 1, \phi_{\pi,t}\geq 0, \pi \in S_{r}\right\}.\end{aligned}$$ We make no assumptions on a complete ranking model $\{P(\pi; \theta):\theta\in\Theta\}$.
Estimators and estimation procedure {#section: procedure}
-----------------------------------
We propose the following estimators for $\theta$ and $\phi$: On the basis of i.i.d. observations $\tau_{(n)}=\{\tau_{1},\ldots,\tau_{n} \in \overline{S}_{r}\}$, $$(\hat{\theta}(\tau_{(n)}),\hat{\phi}(\tau_{(n)}))={\mathop{\mathrm{argmin}}}_{\theta\in\Theta, \phi\in\Phi} L_{\lambda}(\theta,\phi;\tau_{(n)}).$$ Here $L_{\lambda}$ with a regularization parameter $\lambda>0$ is defined as $$L_{\lambda}(\theta,\phi;\tau_{(n)})= L(\theta,\phi;\tau_{(n)}) +\lambda\sum_{\{\pi,\pi'\}\in E}\|\phi_{\pi}-\phi_{\pi'}\|_2^2,$$ where $\phi_{\pi}$ with $\pi\in S_{r}$ denotes the vector $(\phi_{\pi, 0},\ldots,\phi_{\pi, (r-1)})$, and recall that $L(\theta,\phi;\tau_{(n)})$ is the negative log-likelihood function (\[likelihood\]) and $E$ is the edge set of the graph induced by the Kendall distance; see Subsections \[section: missing\]-\[section: distance\].
We conduct minimization in the definition of $\hat{\theta},\hat{\phi}$ using the following EM algorithm: At the $(m+1)$-th step, set $$\begin{aligned}
\theta^{m+1}&={\mathop{\mathrm{argmin}}}_{\theta'\in\Theta} L(\theta';\tau_{(n)},q^{m+1}_{(n)}), \label{opttheta}\\
\phi^{m+1}&={\mathop{\mathrm{argmin}}}_{\phi' \in\Phi} L_{\lambda}(\phi';\tau_{(n)},q_{(n)}^{m+1}), \label{optphi}\end{aligned}$$ where for $i\in\{1,\ldots,n\}$, $$\begin{aligned}
q_{i,\pi}^{m+1}&=
\begin{cases}
\phi^{m}_{\pi,t(\tau_{i})}P(\pi;\theta^{m})
\bigg{/}\sum_{\pi'\in[\tau_i]}\phi^{m}_{\pi',t(\tau_{i})}P(\pi';\theta^{m}) & (\pi\in[\tau_i]), \\
0 & (\pi\not\in[\tau_i]),
\end{cases}\label{optq}\\
q_{(n)}^{m+1}&:=\{q_{i,\pi}^{m+1}: i=1,\ldots,n , \pi\in S_{r} \},\\
L(\theta;\tau_{(n)},q^{m+1}_{(n)})&:=-\sum_{i=1}^n\sum_{\pi\in[\tau_i]}q^{m+1}_{i,\pi}\log P(\pi;\theta),\\
L_{\lambda}(\phi;\tau_{(n)},q^{m+1}_{(n)})&:=-\sum_{i=1}^n\sum_{\pi\in[\tau_i]}q^{m+1}_{i,\pi}\log\phi_{\pi, t(\tau_i)} +\lambda \sum_{\{\pi,\pi'\}\in E}\|\phi_{\pi}-\phi_{\pi'}\|_2^2. \label{likelihood_phi}\end{aligned}$$
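For concreteness, the E-step (\[optq\]) can be sketched in code as follows (illustrative only; it reuses `compatible_completions` from Section \[section: notation\], `p_complete` may be, for instance, the Mallows probabilities sketched earlier, and `phi` is any element of $\Phi$ stored as a dictionary).

```python
import numpy as np

def e_step(taus, items, p_complete, phi):
    """Responsibilities q_{i,pi} of Eq. (optq) for observed top-t rankings taus.

    p_complete : dict  pi -> P(pi; theta^m)
    phi        : dict  (pi, t) -> phi^m_{pi, t}
    """
    q = []
    for tau in taus:
        t = len(tau)
        comp = compatible_completions(tau, items)
        w = np.array([phi[(pi, t)] * p_complete[pi] for pi in comp])
        q.append(dict(zip(comp, w / w.sum())))   # q_{i,pi} = 0 outside [tau_i]
    return q
```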
Consider minimizations (\[opttheta\]) and (\[optphi\]). Minimization (\[opttheta\]) depends on the form of a complete ranking model $P(\pi;\theta)$; for example, consider the Mallows model with $\theta=(\sigma,c)$ (see Section \[section: distance\]). In this case, we write down the minimization of $\theta$ at the $(m+1)$-th step as follows: $$\begin{aligned}
\sigma^{m+1}&=\mathop{{\mathop{\mathrm{argmin}}}}_{{\widetilde}\sigma\in S_r}\sum_{i=1}^n\sum_{\pi\in S_r}q_{i,\pi}^{m+1} d(\pi,{\widetilde}\sigma),\\
c^{m+1}&={\mathop{\mathrm{argmin}}}_{{\widetilde}{c}>0}\sum_{i=1}^n\sum_{\pi\in S_r}q_{i,\pi}^{m+1}\{{\widetilde}{c}d(\pi,\sigma^{m+1}) + \log(Z({\widetilde}{c}))\}.\end{aligned}$$ See [@busse2007cluster] for more details. Minimization (\[optphi\]) in the $(m+1)$-th step is conducted using the following iteration: At the $(l+1)$-th step, set $$\begin{aligned}
\phi^{l+1}&={\mathop{\mathrm{argmin}}}_{{\widetilde}{\phi}\in\Phi }L_{\rho}({\widetilde}{\phi},\varphi^{l},u^{l}; q^{m+1}_{(n)}),\label{optvertex}\\
\varphi^{l+1}&={\mathop{\mathrm{argmin}}}_{{\widetilde}{\varphi}\in\mathbb{R}^{r!(r-1)^2}}L_{\rho}(\phi^{l+1},{\widetilde}{\varphi},u^{l}; q^{m+1}_{(n)}),\label{optvarphi}\\
u^{l+1}&=u^{l}+(\phi^{l+1}-\varphi^{l+1}). \label{optu}\end{aligned}$$ Here, $\varphi\in \mathbb{R}^{r!(r-1)^2}$ is the copy variable of $\phi$, $u\in \mathbb{R}^{r!(r-1)^2}$ is the dual variable, and $L_\rho$ is an augmented Lagrangian function with a penalty constant $\rho$ defined as $$\begin{aligned}
L_{\rho}(\phi,\varphi,u;q^{m+1}_{(n)})&=-\sum_{\pi\in V}\sum_{t}[q^{m+1}_{\pi, t}\log \phi_{\pi, t}] \nonumber\\
&\quad+ \sum_{\{\pi,\pi'\}\in E}\left\{\lambda\|\varphi_{\pi, \pi'}-\varphi_{\pi',\pi}\|^2_2-\frac{\rho}{2}(\|u_{\pi, \pi'}\|_2^2+\|u_{\pi',\pi}\|_2^2)\right.\nonumber\\
&\quad\quad\quad\quad\quad\left.+\frac{\rho}{2}(\|\phi_{\pi}-\varphi_{\pi, \pi'}+u_{\pi, \pi'}\|_2^2+\|\phi_{\pi'}-\varphi_{\pi',\pi}+u_{\pi',\pi}\|_2^2)\right\},\label{lagrangian}\end{aligned}$$ where $$\begin{aligned}
q^{m+1}_{\pi, t}:=\sum_{i:t(\tau_i)=t}q_{i,\pi}^{m+1}, \label{qpit}\end{aligned}$$ for all $\pi\in S_{r}$ and $t=1,\ldots,r-1$. Note that $q^{m+1}_{\pi,t}=0$ when $\{i:t(\tau_i)=t\}$ is an empty set. The detailed algorithm is provided in Appendix \[section: algorithm\].
We make the assumption that two complete rankings close to each other in the Kendall distance have smoothly related missing probabilities. This assumption leads to adding a ridge penalty $$\sum_{\substack{\pi,\pi'\in S_r\\d(\pi,\pi')=1}}\|\phi_{\pi}-\phi_{\pi'}\|_2^2
=\sum_{\{\pi,\pi'\}\in E}\|\phi_{\pi}-\phi_{\pi'}\|_{2}^{2}$$ to the negative log-likelihood function. This assumption is reasonable because the Kendall distance measures the similarity of preferences expressed by two rankings.
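With $\phi$ stored as a dictionary mapping each complete ranking to its vector of missing probabilities and $E$ the edge set constructed in Section \[section: distance\], the penalty can be written in code form as follows (illustrative only).

```python
import numpy as np

def graph_penalty(phi, E):
    """sum over edges {pi, pi'} in E of ||phi_pi - phi_pi'||_2^2."""
    total = 0.0
    for edge in E:
        p, q = tuple(edge)
        total += np.sum((np.asarray(phi[p]) - np.asarray(phi[q])) ** 2)
    return total
```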
For top-$t$ ranked data, the MAR assumption is expressed as $$P(t(\tau)\mid\pi;\phi) = P(t(\tau)\mid\pi';\phi),\ (\pi,\pi' \in [\tau]).$$ Then under the MAR assumption, the negative log-likelihood function $L(\theta,\phi;\tau_{(n)})$ is decomposed as $$\begin{aligned}
L(\theta,\phi;\tau_{(n)})&=-\sum_{i=1}^n\log P(\tau_i\mid\pi\in[\tau_i];\phi)-\sum_{i=1}^n\log \left[\sum_{\pi\in[\tau_i]}P(\pi;\theta)\right]\nonumber\\
&=L(\phi;\tau_{(n)})+L(\theta;\tau_{(n)}),\end{aligned}$$ which indicates that the parameter estimation for $\phi$ is unnecessary for estimating $\theta$.
Convergence
-----------
In this subsection, we discuss theoretical guarantees for the two procedures (\[opttheta\])-(\[likelihood\_phi\]) and (\[optvertex\])-(\[optu\]).
It is guaranteed that the sequence $\{L_{\lambda}(\theta^m,\phi^m;\tau_{(n)}):m=1,2,\ldots \}$ obtained using the procedure (\[opttheta\])-(\[likelihood\_phi\]) monotonically decreases, $$L_{\lambda}(\theta^{m},\phi^{m};\tau_{(n)}) \geq L_{\lambda}(\theta^{m+1},\phi^{m+1};\tau_{(n)}), \ m=1,2,\ldots,$$ because the procedure is just the EM algorithm. Introduce a latent assignment variable $z_{(n)}=\{z_i\}_i$ ($z_i\in \mathbb{R}^{|S_r|}$ for every $i = 1,\ldots,n$; $z_{i\pi}=1$ if $\tau_i$ is generated from the complete ranking $\pi$ and $z_{i\pi}=0$ otherwise). Using $z_{(n)}$, we decompose the likelihood function as follows: $$\begin{aligned}
&L_\lambda(\theta,\phi;\tau_{(n)},z_{(n)}) \\
&= -\sum_{i=1}^n\log\left[\prod_{\pi\in[\tau_i]}\{\phi_{\pi, t(\tau_i)}P(\pi;\theta)\}^{z_{i,\pi}}\right] +\lambda\sum_{\{\pi,\pi'\}\in E}\|\phi_{\pi}-\phi_{\pi'}\|_2^2\\
&=\left\{-\sum_{i=1}^n\sum_{\pi\in[\tau_i]}z_{i,\pi}\log P(\pi;\theta)\right\}+\left\{-\sum_{i=1}^n\sum_{\pi\in[\tau_i]}z_{i,\pi}\log\phi_{\pi, t(\tau_i)} +\lambda\sum_{\{\pi,\pi'\}\in E}\|\phi_{\pi}-\phi_{\pi'}\|_2^2\right\}\\
&=L(\theta;\tau_{(n)},z_{(n)})+L_{\lambda}(\phi;\tau_{(n)},z_{(n)}).\end{aligned}$$ On the basis of the decomposition, the standard procedure of the EM algorithm yields the iterative algorithm shown in (\[opttheta\])-(\[likelihood\_phi\]). Note that it depends on a complete ranking model $P(\pi;\theta)$ whether the convergent point of the sequence is a local minimum of $L_{\lambda}(\theta,\phi;\tau_{(n)})$; see Section 3 of [@mclachlan2007algorithm].
Next, it is guaranteed that the sequence $\{\phi^{l}\}_{l}$ obtained using the procedure (\[optvertex\])-(\[optu\]) converges to the global minimum of $L_{\lambda}(\phi;\tau_{(n)},q^{m+1}_{(n)})$ in terms of the objective value.
\[prop: convergence\_admm\] The sequence $\{L_{\lambda}(\phi^l,\tau_{(n)},q^{m+1}_{(n)})\}_{l=1}^{\infty}$ converges to $\min_{\phi}L_{\lambda}(\phi,\tau_{(n)},q^{m+1}_{(n)})$.
The proof is provided in Appendix \[section: proof\]. The basis of the proof is reformulating the optimization problem (\[optphi\]) as an instance of the alternating direction method of multipliers (ADMM; [@boyd2011distributed]). We rewrite the problem (\[optphi\]) as follows: $$\begin{aligned}
\phi=&{\mathop{\mathrm{argmin}}}_{\phi'}&-\sum_{\pi\in V}\sum_{t}[q_{\pi, t}\log \phi'_{\pi, t}] +\lambda \sum_{\{\pi,\pi'\}\in E}\|\phi'_{\pi}-\phi'_{\pi'}\|_2^2\nonumber\\
&&\mathrm{s.t.}\ \sum_t \phi'_{\pi, t} = 1 \ (\forall \pi\in V) \label{RFopt}\end{aligned}$$ where $V$ is the vertex set of the graph defined in Section \[section: distance\] and $q_{\pi, t}=\sum_{i:t(\tau_i)=t}q_{i,\pi}$ as in (\[qpit\]) (we drop the iteration superscript for brevity). Introducing a copy variable $\varphi$ on the edge set, we recast the optimization problem (\[RFopt\]) into an equivalent form: $$\begin{aligned}
\phi,\varphi=&{\mathop{\mathrm{argmin}}}_{\phi',\varphi'}&-\sum_{\pi\in V}\sum_{t}[q_{\pi, t}\log \phi'_{\pi, t}] + \lambda\sum_{\{\pi,\pi'\}\in E}\|\varphi'_{\pi, \pi'}-\varphi'_{\pi',\pi}\|^2_2\label{optphidecomp}\\
&\mathrm{s.t.}&\phi'_{\pi}=\varphi'_{\pi, \pi'} \ (\forall \{\pi,\pi'\}\in E),\nonumber\\
&& \sum_t \phi'_{\pi, t} = 1 \ (\forall \pi\in V).\nonumber\end{aligned}$$ Note that this reformulation follows the idea of [@hallac2015network]. We employ ADMM to solve the optimization of a sum of objective functions of split variables under linear constraints.
Numerical experiments
=====================
In this section, we apply our methods to both simulation studies and real data analysis. In simulation studies, we use the Mallows mixture models (\[mallows\_mixture\]) with two types of missing models. In the real data analysis, we use the election records of the American Psychological Association collected in 1980.
Performance measures
--------------------
We evaluate the performance of several estimators for $\theta$ and $\phi$ in estimating distributions of a latent complete ranking and of a partial ranking. We measure the performance using the following total variation losses: When the true values of a complete ranking and missing parameters are $\theta$ and $\phi$, respectively, the losses of estimators $\hat{\theta}$ and $\hat{\phi}$ are given as $$\begin{aligned}
L_{\mathrm{par}}
&=L_{\mathrm{par}}(\theta,\phi;\hat{\theta},\hat{\phi})
=\sum_{\tau\in \overline{S}_{r}}
|P(\tau;\theta,\phi)-P(\tau;\hat{\theta},\hat{\phi})| \label{Lpar}\\
L_{\mathrm{comp}}
&=L_{\mathrm{comp}}(\theta,\hat{\theta})
=\sum_{\pi\in S_{r}}
|P(\pi;\theta)-P(\pi;\hat{\theta})|.\label{Lcomp}\end{aligned}$$ Losses $L_{\mathrm{par}}$ and $L_{\mathrm{comp}}$ measure the estimation losses for partial and complete ranking distributions, respectively.
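Both losses are plain sums of absolute differences over a finite support and can be computed directly, e.g. as in the following sketch (the two distributions are represented as dictionaries over the same support: all partial rankings for (\[Lpar\]), all complete rankings for (\[Lcomp\])).

```python
def tv_loss(p_true, p_hat):
    """L_par / L_comp of Eqs. (Lpar)-(Lcomp): sum of absolute differences of two
    probability mass functions given as dicts over the same finite support."""
    return sum(abs(p_true[x] - p_hat[x]) for x in p_true)
```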
Method comparison {#section: comparison methods}
-----------------
We compare our estimators with the estimator based on the maximum entropy approach proposed by [@busse2007cluster] and a non-regularized estimator, abbreviated by ME and NR, respectively. In addition, we use the proposed estimator with the regularization parameter selected using two-fold cross-validation based on $L_{\mathrm{par}}$.
We denote the proposed method introduced in section \[section: method\] with the value of regularization parameter $\lambda$ as R$\lambda$ and that with the regularization parameter selected using cross-validation as RCV.
The maximum entropy approach (ME) uses an extended distance between top-$t$ rankings to define an exponential family distribution of a top-$t$ ranking. From the viewpoint of missing data analysis, ME implicitly makes the MAR assumption. For this reason, when evaluating the loss $L_{\mathrm{par}}(\theta,\phi)$ for ME, we estimate the missing model parameter $\phi$ by assuming homogeneous missing probabilities $P(t\mid \pi) = \phi_t\ (\forall \pi\in S_r)$ and using maximum likelihood.
The non-regularized estimator (NR) is the minimizer of the non-regularized likelihood function $L(\theta,\phi; \tau_{(n)})$. The estimation based on the non-regularized likelihood function can be implemented straightforwardly.
Stopping criteria
-----------------
In simulation studies, we use the following stopping criteria and hyperparameters. We terminate the EM iteration when the change of the likelihood of the observable distribution falls below $\epsilon = 1$. We terminate the ADMM iteration when both the primal and dual residuals fall below $\epsilon_p=\epsilon_d=1$ or when the number of iterations exceeds 100. In addition, to avoid being trapped in local minima of the EM objective, we use the following devices. First, we use 10 different initial location parameters in the EM algorithm. Second, we let the location parameter jump from its current value to a different one during the first five iterations of the EM algorithm.
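Schematically, the EM stopping rule and the multiple-restart device can be organized as in the following sketch (ours). Here `e_step`, `m_step`, and `log_lik` are callables standing in for the updates of the proposed method, and the early transitions of the location parameter are omitted for brevity.

```python
import numpy as np

def run_em_with_restarts(e_step, m_step, log_lik, inits, eps=1.0, max_iter=200):
    """Generic EM driver matching the stopping rule above: stop once the increase
    of the observable log-likelihood falls below eps; keep the best of several starts."""
    best_params, best_ll = None, -np.inf
    for params in inits:                       # e.g. 10 different initial location parameters
        prev_ll = -np.inf
        for _ in range(max_iter):
            q = e_step(params)                 # responsibilities given current parameters
            params = m_step(q)                 # includes the regularized update of phi (ADMM)
            ll = log_lik(params)
            if ll - prev_ll < eps:             # change of the likelihood below epsilon = 1
                break
            prev_ll = ll
        if ll > best_ll:
            best_params, best_ll = params, ll
    return best_params
```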
Simulation studies {#section: synthetic}
------------------
We conducted two simulation studies. The data-generating models are as follows: for complete ranking models, we use the Mallows and Mallows mixture models; for missing models, we use a binary missing mechanism in which the only possible missing patterns are that no items are missing or that all but the first item are missing. We parameterize the missing models in such a way that there is a discrepancy between the distribution of a complete ranking generated by the latent Mallows model and the marginal distribution of a partial ranking restricted to $S^{(r-1)}_{r}:=\{\tau:t(\tau)=r-1,\tau\in \overline{S}_{r}\}$. Note that $S^{(r-1)}_{r}$ is identical to $S_{r}$ as a set.
In each simulation, we generate 100 datasets with a sample size of $n=1000$. We set the number of items to $r=5$.
### Tilting the concentration parameter {#section: tilt concentration}
![Boxplots of $L_{\mathrm{par}}$ in (\[Lpar\]) for the dataset tilting the concentration parameter: The compared methods are the maximum entropy approach (ME), the non-regularized estimator (NR), the proposed methods (R1; R10; R100), and its version with cross-validation (RCV). The results with different values of the concentration parameter $c^{\ast}$ are shown in (A)-(C).[]{data-label="synGP"}](synG_10P8.pdf){width="45mm"}
![](synG_10P10.pdf){width="45mm"}
![](synG_10P12.pdf){width="45mm"}
![Boxplots of $L_{\mathrm{comp}}$ in (\[Lcomp\]) for the dataset tilting the concentration parameter: The compared methods are the maximum entropy approach (ME), the non-regularized estimator (NR), the proposed methods (R1; R10; R100), and its version with cross-validation (RCV). The results with different values of the concentration parameter $c^{\ast}$ are shown in (A)-(C).[]{data-label="synGC"}](synG_10C8.pdf){width="50mm"}
![](synG_10C10.pdf){width="50mm"}
![](synG_10C12.pdf){width="50mm"}
In the first simulation study, we use the Mallows model and the missing model that tilts the concentration parameter $c$: the missing model is parameterized by $c^{\ast}>0$ and $R\in [0,1]$ as $$\phi_{\pi} = ( \phi_{\pi,1},\ldots,\phi_{\pi,(r-1)})
= (1-C_{\pi}(c^{\ast},R),0,\ldots,0,C_{\pi}(c^{\ast},R)), \ \pi\in S_{r}$$ where $$C_{\pi}(c^{\ast},R) =\min\left\{1,\frac{Z(c)}{Z(c^{\ast})}R\exp \{-(c^{\ast}-c) d(\pi,\sigma_0)\}\right\}.$$ In this parameterization, the parameter $c^{\ast}$ specifies the degree of concentration of the marginal distribution $P(\tau;\theta,\phi)$ of a partial ranking restricted to $S^{(r-1)}_{r}$: if $\{Z(c)/Z(c^{\ast})\}R\exp \{-(c^{\ast}-c) d(\pi,\sigma_0)\}\leq 1$, then $P(\tau;\theta,\phi)$ has the form of the Mallows distribution with concentration parameter $c^{\ast}$: $$\begin{aligned}
P(\tau;\theta,\phi) &= \sum_{\pi'\in[\tau]}P(t=r-1\mid \pi';\phi)P(\pi';\theta)\\
&=C_{\pi}(c^{\ast},R)\frac{1}{Z(c)}\exp\{-c d(\pi,\sigma_0)\}\\
&= \frac{R}{Z(c^{\ast})}\exp\{-c^{\ast}d(\pi,\sigma_0)\},\end{aligned}$$ where $\pi(i)=\tau(i), i=1,\ldots,r-1$. The parameter $0\leq R\leq 1$ specifies the proportion of partial rankings in $S^{(r-1)}_r$. We set $c = 1$, $R=0.7$, and $c^{\ast} \in \{0.8,1,1.2\}$.
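For concreteness, the missing probabilities of this tilting model can be generated by direct enumeration over $S_5$, as in the sketch below (ours). It assumes that $d(\cdot,\cdot)$ is the Kendall tau distance and that $Z(\cdot)$ is the corresponding Mallows normalizing constant; the function and variable names are illustrative, not taken from the paper.

```python
import math
from itertools import permutations

def kendall_tau(pi, sigma):
    """Number of discordant pairs between two complete rankings given as tuples of items."""
    pos_pi = {item: i for i, item in enumerate(pi)}
    pos_sigma = {item: i for i, item in enumerate(sigma)}
    items = list(pi)
    return sum(
        1
        for a in range(len(items))
        for b in range(a + 1, len(items))
        if (pos_pi[items[a]] - pos_pi[items[b]]) * (pos_sigma[items[a]] - pos_sigma[items[b]]) < 0
    )

def tilted_missing_probs(c, c_star, R, sigma0, r=5):
    """Return {pi: (phi_{pi,1}, 0, ..., 0, phi_{pi,r-1})} for the concentration-tilting model."""
    perms = list(permutations(range(1, r + 1)))
    Z = lambda conc: sum(math.exp(-conc * kendall_tau(p, sigma0)) for p in perms)
    ratio = Z(c) / Z(c_star)
    phi = {}
    for p in perms:
        C = min(1.0, ratio * R * math.exp(-(c_star - c) * kendall_tau(p, sigma0)))
        phi[p] = (1.0 - C,) + (0.0,) * (r - 3) + (C,)
    return phi

# Example: phi = tilted_missing_probs(c=1.0, c_star=1.2, R=0.7, sigma0=(1, 2, 3, 4, 5))
```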
Figures \[synGP\] and \[synGC\] show the results. When $c^{\ast}\neq 1$, the proposed methods outperform ME in both $L_{\mathrm{par}}$ and $L_{\mathrm{comp}}$. When $c^{\ast}=1$, the proposed methods underperform ME. These results reflect the fact that the setting with $c^{\ast}=1$ satisfies the MAR assumption, whereas the settings with $c^{\ast}\neq 1$ do not. For $L_{\mathrm{par}}$, the proposed methods outperform NR regardless of the value of $c^{\ast}$; the differences in $L_{\mathrm{comp}}$ between these methods are only subtle. The performance of the proposed method with the cross-validated regularization parameter (RCV) is comparable with that of the proposed method with the optimal regularization parameter for both $L_{\mathrm{par}}$ and $L_{\mathrm{comp}}$, indicating the utility of cross-validation.
### Tilting the mixture coefficient {#section: tilt mixture}
In the second simulation study, we use the Mallows mixture model with two clusters and the missing model that tilts the mixture coefficient $w$. We instantiate a missing model in which missing probabilities depend on the cluster assignment $k\in\{1,\ldots,K\}$, such that $P(t\mid \pi,z_k=1)=P(t\mid z_k=1)=\phi_{k,t}$, where $z_k=1$ if and only if the assigned cluster is $k$ and $z_k=0$ otherwise. The missing model is then parameterized by mixture weights $w^{\ast}=(w^{\ast}_1,w^{\ast}_2)$ and $R\in [0,1]$ as $$\phi = \left(
\begin{array}{ccc}
\phi_{1,1} & \ldots & \phi_{1,(r-1)}\\
\phi_{2,1} & \ldots & \phi_{2,(r-1)}
\end{array}
\right)=\left(
\begin{array}{ccccc}
1-C_{1}(w^{\ast},R) & 0 &\ldots & 0 & C_{1}(w^{\ast},R)\\
1-C_{2}(w^{\ast},R) & 0 &\ldots& 0 & C_{2}(w^{\ast},R)
\end{array}
\right),$$ where $C_{k}(w^{\ast},R) = (w^{\ast}_k / w_k)R.$ In this parameterization, the parameter $w^{\ast}$ determines the mixture coefficient of the marginal distribution $P(\tau;\theta,\phi)$ of a partial ranking restricted to $S^{(r-1)}_r$; a small construction sketch is given after the parameter list below. We set the parameter values as follows:
- $\bm\sigma=((1,2,3,4,5), (3,2,5,4,1))$, $\bm{c} = (1,1)$, and $w = (0.5,0.5)$;
- $R=0.7$ and $w^{\ast}=\{(0.5,0.5),(0.6,0.4),(0.7,0.3)\}$.
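As referred to above, the missing matrix for this study is straightforward to build from the listed parameter values; the following is our illustration with hypothetical function names.

```python
def cluster_missing_matrix(w, w_star, R, r=5):
    """Rows correspond to clusters k = 1, 2 and columns to t = 1, ..., r-1,
    as in the display above; C_k = (w_star[k] / w[k]) * R."""
    rows = []
    for k in range(len(w)):
        C = (w_star[k] / w[k]) * R
        rows.append([1.0 - C] + [0.0] * (r - 3) + [C])
    return rows

# Example: cluster_missing_matrix(w=(0.5, 0.5), w_star=(0.7, 0.3), R=0.7)
# gives rows approximately [0.02, 0, 0, 0.98] and [0.58, 0, 0, 0.42].
```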
In this simulation study, we additionally use the classification error as the performance measure.
![Boxplots of $L_{\mathrm{par}}$ in (\[Lpar\]) for the dataset tilting the mixture coefficient: The compared methods are the maximum entropy approach (ME), the non-regularized estimator (NR), the proposed methods (R1; R10; R100) and its version with cross-validation (RCV). The results with different values of the mixture coefficient $w^{\ast}$ are shown in (A)-(C).[]{data-label="synCP"}](synC2_10P5.pdf){width="50mm"}
![](synC2_10P6.pdf){width="50mm"}
![](synC2_10P7.pdf){width="50mm"}
![Boxplots of $L_{\mathrm{comp}}$ in (\[Lcomp\]) for the dataset tilting the mixture coefficient: The compared methods are the maximum entropy approach (ME), the non-regularized estimator (NR), the proposed methods (R1; R10; R100) and its version with cross-validation (RCV). The results with different values of the mixture coefficient $w^{\ast}$ are shown in (A)-(C).[]{data-label="synCC"}](synC2_10C5.pdf){width="50mm"}
![](synC2_10C6.pdf){width="50mm"}
![](synC2_10C7.pdf){width="50mm"}
![Boxplots of the classification errors for the dataset tilting the mixture coefficient: The compared methods are the maximum entropy approach (ME), the non-regularized estimator (NR), the proposed methods (R1; R10; R100) and its version with cross-validation (RCV). The results with different values of the mixture coefficient $w^{\ast}$ are shown in (A)-(C).[]{data-label="synCD"}](synC2_10D5.pdf){width="50mm"}
![](synC2_10D6.pdf){width="50mm"}
![](synC2_10D7.pdf){width="50mm"}
Figures \[synCP\]–\[synCD\] show the results. The proposed methods outperform ME when $w_1^{\ast}\neq 0.5$ in terms of $L_{\mathrm{par}}$, when $w_1^{\ast}=0.7$ in terms of $L_{\mathrm{comp}}$, and when $w_1^{\ast}\neq 0.5$ in terms of the classification error. The proposed methods outperform NR in terms of both $L_{\mathrm{par}}$ and $L_{\mathrm{comp}}$, except for $L_{\mathrm{comp}}$ when $w_1^{\ast}=0.7$. As $w_1^{\ast}$ deviates from $0.5$, the classification error of ME increases, whereas the classification errors of the other methods decrease.
Application to real data
------------------------
We apply the proposed method to real data. We use the election records for five candidates collected by the American Psychological Association. Among the 15549 votes cast, only 5141 ranked all candidates ($t=5,4$); 2108 ranked $t=3$ candidates; 2462 ranked $t=2$; and the rest ranked only $t=1$.
![Sample size dependency of the mean of $L_{\mathrm{par}}$ in (\[Lpar\]) for the American Psychological Association dataset: The vertical axis represents the mean value of $L_{\mathrm{par}}$; and the horizontal axis represents the sample size $n$ on the log-scale. The results with $n=100, 500, 1000, 5000, 10000$ are shown. The compared methods are the maximum entropy approach (ME), the non-regularized estimator (NR), the proposed method with cross-validation (RCV).[]{data-label="apa2"}](APA_CV_point.pdf){width="50mm"}
![Boxplots of $L_{\mathrm{par}}$ in (\[Lpar\]) for the American Psychological Association dataset: The compared methods are the maximum entropy approach (ME), the non-regularized estimator (NR), and the proposed method with cross-validation (RCV). The results for different sample sizes $n$ of the training dataset are shown in (A)-(C).[]{data-label="apa"}](APA_CV_232.pdf){width="50mm"}
![](APA_CV_233.pdf){width="50mm"}
![](APA_CV_234.pdf){width="50mm"}
For comparison, we randomly chose several pairs of training and test datasets to measure $L_{\mathrm{par}}$, since we have neither the true values of the model parameters nor the true form of the model. To see the dependence of the estimation performance on the sample size, we used different sizes ($n=100, 500, 1000, 5000, 10000$) for the training datasets, whereas we fixed the size of the test datasets to $n=3000$. We sampled test datasets independently $30$ times and, for each size, sampled training datasets from the remaining data independently $30$ times. In calculating $L_{\mathrm{par}}$, we used the empirical distribution of the test dataset as the true distribution. For the complete ranking model, we used the likelihood of the Mallows mixture model with the number of clusters set to 3, as in [@busse2007cluster]. Since R1 performs poorly in terms of $L_{\mathrm{par}}$ in the simulation study, we excluded R1 from the candidates for two-fold cross-validation.
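The evaluation protocol just described amounts to repeated random splitting and comparison with the empirical distribution of the test set. The sketch below is a simplified version (ours, with illustrative names) in which both the test and training sets are redrawn in every repetition, and partial rankings are assumed to be encoded as fixed-length integer rows (e.g. with 0 marking unranked positions).

```python
import numpy as np

def empirical_distribution(rankings):
    """Empirical probability of each distinct (encoded) partial ranking."""
    vals, counts = np.unique(rankings, return_counts=True, axis=0)
    return {tuple(v): c / len(rankings) for v, c in zip(vals, counts)}

def average_l_par(data, fit, n_train, n_test=3000, n_rep=30, seed=0):
    """Average L_par over repeated random train/test splits; `fit` is any
    estimator returning a dict of estimated partial-ranking probabilities."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_rep):
        idx = rng.permutation(len(data))
        test, rest = data[idx[:n_test]], data[idx[n_test:]]
        train = rest[rng.choice(len(rest), size=n_train, replace=False)]
        p_test, p_hat = empirical_distribution(test), fit(train)
        keys = set(p_test) | set(p_hat)
        losses.append(sum(abs(p_test.get(k, 0.0) - p_hat.get(k, 0.0)) for k in keys))
    return float(np.mean(losses))
```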
Figures \[apa2\] and \[apa\] show the results. When the sample size is small ($n=100, 500$), the proposed method is comparable to ME, and NR performs poorly. When the sample size is moderate ($n=1000$), the proposed method outperforms both ME and NR. When the sample size is large ($n=5000, 10000$), the proposed method outperforms ME and is comparable to NR. These results indicate that accounting for non-ignorable missing mechanisms improves performance when the sample size is sufficiently large, while the graph regularization reduces over-fitting when the sample size is small.
Conclusion
==========
We proposed a regularization method for partially ranked data to prevent modeling bias due to the MAR assumption and to avoid over-fitting due to the complexity of missing models. Our simulation experiments showed that the proposed method improves on the maximum entropy approach ([@busse2007cluster]) under non-ignorable missing mechanisms. They also showed that the proposed method improves on the non-regularized estimator, especially in estimating the distribution of a partial ranking. Our real data analysis suggested that the improvement achieved by the proposed method requires a moderate or large sample size, and that the proposed method is effective in reducing over-fitting when the sample size is small.
We propose two main tasks for future work. The first task is to improve the computational efficiency of our method, which was not a priority in this study. Leveraging partial completion of items (instead of full completion) might be effective for reducing the computational cost. For this purpose, the distance between top-$t$ rankings described in [@busse2007cluster] might be beneficial for constructing the graph. The second task is to develop cross-validation or an information criterion for inferring the distribution of a latent complete ranking. In this study, we employed cross-validation based on the distribution of a partial ranking. When the distribution of a latent complete ranking is of interest, cross-validation based on the distribution of a latent complete ranking would be more suitable. However, constructing such a cross-validation procedure would be difficult because the empirical distribution of a latent complete ranking cannot be obtained directly, a difficulty that arises ubiquitously whenever the EM algorithm is used to estimate latent variables. There have been several derivations of information criteria involving the distribution of latent variables ([@shimodaira1994new; @cavanaugh1998akaike]). We conjecture that these derivations would be useful for inference from partially ranked data.
Algorithms {#section: algorithm}
==========
In this appendix, we provide a concise algorithm to conduct ADMM in (\[optvertex\])-(\[optu\]). In the algorithm, $\lambda$ is the regularization parameter, $\rho$ is the penalty constant, and $\epsilon_p,\epsilon_d$ are two parameters for stopping the algorithm.
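The listing below is only a schematic sketch of ours, not the paper's exact algorithm: the $\phi$-update (\[optvertex\])-(\[optu\]) has the generic form of scaled consensus ADMM [@boyd2011distributed]. The solvers `prox_f` and `prox_g` for the vertex and edge subproblems are left abstract (passed as callables), and the stopping rule mirrors the thresholds $\epsilon_p,\epsilon_d$ and the cap of 100 iterations used in our experiments.

```python
import numpy as np

def admm(prox_f, prox_g, shape, rho=1.0, eps_p=1.0, eps_d=1.0, max_iter=100):
    """Scaled-form ADMM for min f(x) + g(z) subject to x = z.

    prox_f(v, rho): argmin_x f(x) + (rho/2) ||x - v||^2
    prox_g(v, rho): argmin_z g(z) + (rho/2) ||z - v||^2
    Stops when both the primal and dual residual norms fall below
    eps_p and eps_d, or after max_iter iterations.
    """
    x = np.zeros(shape)
    z = np.zeros(shape)
    u = np.zeros(shape)          # scaled dual variable
    for _ in range(max_iter):
        x = prox_f(z - u, rho)   # vertex-side update
        z_old = z
        z = prox_g(x + u, rho)   # edge-side (copy variable) update
        u = u + x - z            # dual ascent step
        primal_res = np.linalg.norm(x - z)
        dual_res = rho * np.linalg.norm(z - z_old)
        if primal_res < eps_p and dual_res < eps_d:
            break
    return x
```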
Proof of Proposition \[prop: convergence\_admm\] {#section: proof}
================================================
*Proof of Proposition \[prop: convergence\_admm\]*: First, we express optimization (\[optphidecomp\]) at the $m$-th step of iteration (\[optphi\]) using an extended-real-valued function as follows: $$\begin{aligned}
\phi,\varphi=&{\mathop{\mathrm{argmin}}}_{\phi',\varphi'}&f(\phi')+g(\varphi')\\
&\mathrm{s.t.} &\phi'_{\pi}=\varphi'_{\pi, \pi'} \ (\forall \{\pi,\pi'\}\in E),\end{aligned}$$ where the functions $f:\mathbb{R}^{r!(r-1)}\rightarrow \mathbb{R}\cup \{\infty\}$ and $g:\mathbb{R}^{r!(r-1)^2}\rightarrow \mathbb{R}$ are defined as $$\begin{aligned}
f(\phi) &= \sum_{\pi\in S_r}\sum_{t=1}^{r-1}f_{\pi,t}(\phi),\\
f_{\pi,t}(\phi)&=\begin{cases}
\infty & (\phi_{\pi,t}\not\in [0,1] \ \text{or}\ (\phi_{\pi,t}=0 \ \text{and}\ q^{m+1}_{\pi,t}\neq 0)),\\
0 & (\phi_{\pi,t}\in [0,1] \ \text{and}\ q_{\pi,t}^{m+1}=0),\\
-q^{m+1}_{\pi, t}\log \phi_{\pi, t} & (\text{otherwise}),
\end{cases}\\
g(\varphi)&=\lambda\sum_{\{\pi,\pi'\}\in E}\|\varphi_{\pi, \pi'}-\varphi_{\pi',\pi}\|^2_2.\end{aligned}$$ Note that the effective domain $\mathrm{dom}(f)=\{\phi\in\mathbb{R}^{r!(r-1)}\mid f(\phi)<\infty\}$ is identical to the parameter space $\Phi=\left\{\phi \in \mathbb{R}^{r!(r-1)}: \sum_{t=1}^{r-1}\phi_{\pi, t} = 1, \phi_{\pi,t}\geq 0, \pi \in S_{r}\right\}$. It suffices to show the following two convergences for the sequence $\{\phi^l,\varphi^l,u^l:l=0,1,\ldots \}$ generated by iteration (\[optvertex\])-(\[optu\]).
- Residual convergence: the primal residual $\bar{r}^l\in\mathbb{R}^{r!(r-1)^2}$ defined by $\bar{r}_{\pi,\pi',t}^l:= \phi_{\pi,t}^l-\varphi_{\pi,\pi',t}^l$ converges to $0$ with respect to $l$: $\lim_{l\rightarrow\infty}\bar{r}^l=0;$
- Objective convergence: the convergence $$\lim_{l\rightarrow\infty}\{f(\phi^l)+g(\varphi^l)\}=\min_{\substack{\phi'\in\mathbb{R}^{r!(r-1)},\varphi'\in\mathbb{R}^{r!(r-1)^2}\\\phi'_{\pi}=\varphi'_{\pi,\pi'},\ (\{\pi,\pi'\}\in E)}}\{f(\phi')+g(\varphi')\}$$ holds.
Objective convergence together with residual convergence implies convergence of the objective function $L_{\lambda}(\phi;\tau_{(n)},q^{m+1}_{(n)})$, because we have $$\begin{aligned}
&\left|L_{\lambda}(\phi^l;\tau_{(n)},q^{m+1}_{(n)})-\min_{\phi'\in\Phi}L_{\lambda}(\phi';\tau_{(n)},q^{m+1}_{(n)})\right|\\
&\leq |f(\phi^l)+g(\phi^l)-f(\phi^l)-g(\varphi^l)|+\left|f(\phi^l)+g(\varphi^l)-\min_{\phi'\in\Phi}L_{\lambda}(\phi';\tau_{(n)},q^{m+1}_{(n)})\right|\\
&= |g(\phi^l)-g(\varphi^l)|+\left|f(\phi^l)+g(\varphi^l)-\min_{\phi'_{\pi}=\varphi'_{\pi,\pi'},\ (\{\pi,\pi'\}\in E)}\{f(\phi')+g(\varphi')\}\right|\\
&\rightarrow 0\ (l\rightarrow \infty),\end{aligned}$$ where it follows from residual convergence and the continuity of $g$ that $|g(\phi^l)-g(\varphi^l)|\rightarrow 0$.
The following is a sufficient condition for objective and residual convergence based on ADMM (see Section 3.2 of [@boyd2011distributed]):
(I) The functions $f$ and $g$ are closed, proper, and convex; \[big1\]
(II) Unaugmented Lagrangian ${\widetilde}{L}_0$ has a saddle point. \[big2\]
Here the unaugmented Lagrangian ${\widetilde}{L}_0$ is defined as $$\begin{aligned}
{\widetilde}{L}_{0}(\phi,\varphi,y;q^{m+1}_{(n)})=f(\phi)+g(\varphi)+ \sum_{\pi\in V}\sum_{\{\pi,\pi'\}\in E}y_{\pi,\pi'}^{\mathrm{T}}(\phi_{\pi}-\varphi_{\pi,\pi'}).\end{aligned}$$ In what follows, we show that conditions (\[big1\]) and (\[big2\]) hold.
*Confirming condition (\[big1\]):* $g$ is clearly a closed, proper, and convex function because $g$ is a nonnegative quadratic function. Each function $f_{\pi,t}\ (\pi\in S_r, t\in \{1,\ldots,r-1\})$ is closed because every level set $V_{\gamma}=\{\phi\in \mathbb{R}^{r!(r-1)}\mid f_{\pi,t}(\phi)\leq\gamma\}$ with $\gamma\in\mathbb{R}$ is a closed set: $$V_{\gamma}=\begin{cases}
(-\infty,\infty)\times\ldots \times[\exp(-\gamma/q^{m+1}_{\pi,t}),1]\times\ldots\times(-\infty,\infty) & (\gamma\geq 0\ \text{and}\ q^{m+1}_{\pi,t}\neq 0),\\
(-\infty,\infty)\times\ldots \times[0,1]\times\ldots\times(-\infty,\infty) & (\gamma\geq 0\ \text{and}\ q^{m+1}_{\pi,t}= 0),\\
\emptyset & (\text{otherwise}).
\end{cases}$$ Therefore, $f$ is closed. $f$ is proper because $f\geq 0>-\infty$ everywhere and $f(\phi)<\infty$ for $\phi\in\mathbb{R}^{r!(r-1)}$ satisfying $\phi_{\pi,t}=1/(r-1),\ \pi\in S_r,t=1,\ldots,r-1$. Each function $f_{\pi,t}$ $(\pi\in S_{r}, t\in \{1,\ldots,r-1\})$ is convex because the effective domain $\mathrm{dom}(f_{\pi,t})$ is a convex set and $\nabla^2f_{\pi,t}(\phi)$ is positive semidefinite for all $\phi\in\mathrm{dom}(f_{\pi,t})$. Therefore, $f$ is convex. Thus, condition (\[big1\]) holds.
*Confirming condition (\[big2\]):* We employ the following sufficient condition for the existence of a saddle point described as Assumption 5.5.1 and Proposition 5.5.6 in Section 5.5 of [@bertsekas2015convex]:
(i) For each $\phi\in\mathbb{R}^{r!(r-1)},\varphi\in\mathbb{R}^{r!(r-1)^2}$, $-{\widetilde}{L}_0(\phi,\varphi,\cdot): \mathbb{R}^{r!(r-1)^2}\rightarrow \mathbb{R}\cup\{\infty\}$ is convex and closed; \[small1\]
(ii) For each $y\in\mathbb{R}^{r!(r-1)^2}$, ${\widetilde}{L}_0(\cdot,\cdot,y): \mathbb{R}^{r!(r-1)}\times\mathbb{R}^{r!(r-1)^2}\rightarrow \mathbb{R}\cup\{\infty\}$ is convex and closed; \[small2\]
(iii) Functions $L^+$ and $L^-$ are proper, where $L^{+}:\mathbb{R}^{r!(r-1)}\times\mathbb{R}^{r!(r-1)^2}\rightarrow \mathbb{R}\cup\{\infty\}$ and $L^{-}:\mathbb{R}^{r!(r-1)^2}\rightarrow \mathbb{R}\cup\{\infty\}$ are defined as $$\begin{aligned}
L^{+}(\phi,\varphi)=\sup_{y\in \mathbb{R}^{r!(r-1)^2}}{\widetilde}{L}_0(\phi,\varphi,y)
\text{ and }
L^{-}(y) = \sup_{\substack{\phi\in\mathbb{R}^{r!(r-1)}\\\varphi\in\mathbb{R}^{r!(r-1)^2}}}-{\widetilde}{L}_0(\phi,\varphi,y);\end{aligned}$$ \[small3\]
(iv) For each $\gamma\in\mathbb{R}$, the level set $\{\phi,\varphi\mid L^+(\phi,\varphi)\leq \gamma\}$ is compact; \[small4\]
(v) For each $\gamma\in\mathbb{R}$, the level set $\{y\mid L^-(y)\leq \gamma\}$ is compact. \[small5\]
Condition (\[small1\]) holds because $-{\widetilde}{L}_0(\phi,\varphi,\cdot)$ is linear in $y$ for $\phi\in\mathrm{dom}(f)$ and identically $-\infty$ for $\phi\not\in\mathrm{dom}(f)$. Condition (\[small2\]) holds because ${\widetilde}{L}_0(\cdot,\cdot,y)$ is the sum of convex and closed functions.
To confirm condition (\[small3\]), we will show that $L^+$ and $L^-$ are proper. Set $\phi^{\ast}\in \mathbb{R}^{r!(r-1)}$ and $\varphi^{\ast}\in\mathbb{R}^{r!(r-1)^2}$ such that $$\begin{aligned}
\phi^{\ast}_{\pi,t} &= 1/(r-1) \text{ for all }\pi\in V, t\in\{1,\ldots,r-1\} \text{ and }\\
\varphi^{\ast}_{\pi,\pi',t} &= 1/(r-1)\text{ for all }\{\pi,\pi'\}\in E, t\in\{1,\ldots,r-1\}.\end{aligned}$$ It follows that $L^+$ is proper since $$\begin{aligned}
L^+(\phi,\varphi) &\geq {\widetilde}{L}_0(\phi,\varphi,0)\geq 0>-\infty \text{ for all } \phi,\varphi
\text{ and}\\
L^+(\phi^{\ast},\varphi^{\ast})&={\widetilde}{L}_0(\phi^{\ast},\varphi^{\ast},0)<\infty \text{ for } (\phi,\varphi) = (\phi^{\ast},\varphi^{\ast}).\end{aligned}$$ It follows that $L^-$ is proper since $$\begin{aligned}
L^-(y)&\geq -{\widetilde}{L}_0(\phi^{\ast},\varphi^{\ast},y)>-\infty\text{ for all }y \text{ and }\\
L^-(0) &= \sup_{\phi,\varphi}\{-f(\phi)-g(\varphi)\}\leq 0<\infty \text{ for }y=0.\end{aligned}$$ Therefore, condition (\[small3\]) is confirmed.
To confirm conditions (\[small4\]) and (\[small5\]), it suffices to show that the level sets are closed and bounded. Since the function obtained by taking the point-wise supremum of a family of closed functions is again closed, both $L^+$ and $L^-$ are closed and thus their level sets are closed. The remaining part of the proof is to show that the level sets of $L^{+}$ and $L^{-}$ are bounded.
We will show that all level sets of $L^+$ are bounded by focusing on its effective domain. We show that the effective domain $\mathrm{dom}(L^+)$ is a subset of the bounded set $$B = \left\{\phi\in\mathbb{R}^{r!(r-1)},\varphi\in\mathbb{R}^{r!(r-1)^2}\mid \phi_{\pi,t}\geq 0, \sum_{t=1}^{r-1}\phi_{\pi,t}=1, \phi_{\pi,t}-\varphi_{\pi,\pi',t}=0,\ \{\pi,\pi'\}\in E\right\},$$ from which it follows that all level sets of $L^{+}$ are bounded. If $\phi_{{\widetilde}\pi,{\widetilde}{t}}-\varphi_{{\widetilde}\pi,{\widetilde}\pi',{\widetilde}{t}}\neq 0$ for some $\{{\widetilde}\pi,{\widetilde}\pi'\}\in E$ and ${\widetilde}{t}\in\{1,\ldots,r-1\}$, we can take a sequence $\{y^n\}_{n=1}^{\infty}\subset \mathbb{R}^{r!(r-1)^2}$ such that $y^n_{{\widetilde}\pi,{\widetilde}\pi',{\widetilde}{t}}=n(\phi_{{\widetilde}\pi,{\widetilde}{t}}-\varphi_{{\widetilde}\pi,{\widetilde}\pi',{\widetilde}{t}})$ for $(\{\pi,\pi'\},t)= (\{{\widetilde}\pi,{\widetilde}\pi'\},{\widetilde}{t})$ and $y^n_{\pi,\pi',t}=0$ otherwise. For the sequences $\{y^n\}$, we have $$\lim_{n\rightarrow\infty}\sum_{\pi\in V}\sum_{\{\pi,\pi'\}\in E}(y_{\pi,\pi'}^{n})^{\mathrm{T}}(\phi_{\pi}-\varphi_{\pi,\pi'})=\lim_{n\rightarrow\infty}n(\phi_{{\widetilde}\pi,{\widetilde}{t}}-\varphi_{{\widetilde}\pi,{\widetilde}\pi',{\widetilde}{t}})^2= \infty,$$ from which we obtain $L^+(\phi,\varphi)=\sup_{y\in\mathbb{R}^{r!(r-1)^2}}{\widetilde}{L}_0(\phi,\varphi,y)=\infty$. Therefore, the effective domain of $L^+(\phi,\varphi)$ is included in the bounded set $B$ and thus all level sets of $L^+$ are bounded.
We will show that all level sets of $L^-$ are bounded by showing that $L^{-}$ is coercive, i.e., for any sequence $\{y^n\}_{n=1}^{\infty}\subset \mathbb{R}^{r!(r-1)^2}$ satisfying $\lim_{n\rightarrow\infty}\|y^n\|_2=\infty$, we have $\lim_{n\rightarrow\infty}L^-(y^n)=\infty.$ For any given sequence $\{y^n\}_n$ satisfying $\lim_{n\rightarrow\infty}\|y^{n}\|_{2}=\infty$, take sequences $\{\phi^n\}_n$ and $\{\varphi^n\}_n$ such that $\phi_{\pi,t}^n=1/(r-1)$ and $\varphi_{\pi,\pi'}^n = \phi_{\pi}^n+y_{\pi,\pi'}^n/\|y_{\pi,\pi'}^n\|_2.$ For sequences $\{\phi^n\}_n$ and $\{\varphi^n\}_n$, we obtain $$\begin{aligned}
&-f(\phi^n)=-\sum_{\pi\in V}\sum_{t=1}^{r-1}q^{m+1}_{\pi, t}\log (r-1)=:-M,\quad 0\leq M<\infty,\\
&-g(\varphi^n)=-\lambda\sum_{\{\pi,\pi'\}\in E}\left\|\frac{y_{\pi,\pi'}^n}{\|y_{\pi,\pi'}^n\|_2}-\frac{y_{\pi',\pi}^n}{\|y_{\pi',\pi}^n\|_2}\right\|^2_2\geq -2\lambda r!(r-1),\\
&-\sum_{\pi\in V}\sum_{\{\pi,\pi'\}\in E}(y_{\pi,\pi'}^{n})^{\mathrm{T}}(\phi_{\pi}^n-\varphi_{\pi,\pi'}^n)=\sum_{\pi\in V}\sum_{\{\pi,\pi'\}\in E}\|y_{\pi,\pi'}^{n}\|_2\geq\|y^n\|_2.\end{aligned}$$ Hence, $L^-$ is coercive since we have $L^-(y^n)\geq -{\widetilde}{L}_0(\phi^n,\varphi^n,y^n)\geq -M-2\lambda r!(r-1)+\|y^n\|_2\rightarrow \infty\ (n\rightarrow\infty)$ for any sequence $\{y^{n}\}_{n=1}^{\infty}$ satisfying $\|y^n\|_2\rightarrow\infty$, and thus all level sets of $L^-$ are bounded.
From the above, conditions (\[small4\]) and (\[small5\]) are satisfied and thus we complete the proof.
---
abstract: |
We study the relationship between the higher order variational eigenvalues of the $p$-Laplacian and the higher order Cheeger constants. The asymptotic behavior of the $k$-th Cheeger constant is investigated. Using methods developed in [@2], we obtain a high-order Cheeger inequality for the $p$-Laplacian on a domain: $h_k^p(\Omega)\leq C \lambda_{k}(p,\Omega)$.
[***Keywords***]{}: [High order Cheeger’s inequality; eigenvalue problem; $p$-Laplacian]{}
author:
- |
[Shumao Liu]{}\
[ School of Statistics and Mathematics]{}\
[ Central University of Finance and Economics]{}\
[Beijing, China, 100081]{}\
[(lioushumao@163.com ) ]{}\
title: ' High-order Cheeger’s inequality on domain '
---
Introduction.
=============
Let $\Omega\subset\mathbb{R}^n$ be a bounded open domain. The minimax of the so-called Rayleigh quotient $$\label{lambda}
\lambda_{k}(p,\Omega)=\inf_{A\in \Gamma_{k,p}}\max_{u\in A}\displaystyle \frac{\int_{\Omega}{|\nabla u|^pdx}}{\int_{\Omega}|u|^pdx},\ (1<p<\infty),$$ leads to a nonlinear eigenvalue problem, where $$\Gamma_{k,p}=\{A\subset {W^{1,p}_0}(\Omega)\backslash \{0\}\mid A\cap\{\|u\|_p=1\}\ \mbox{is compact,}\ A\ \mbox{symmetric,}\ \gamma(A)\geq k\}.$$ The corresponding Euler-Lagrange equation is $$\label{p-laplacian equation}
-\Delta_pu:=-\mbox{div}(|\nabla u|^{p-2}\nabla u)=\lambda|u|^{p-2}u,$$ with Dirichlet boundary condition. This eigenvalue problem has been extensively studied in the literature. When $p=2$, it is the familiar linear Laplacian equation $$\Delta u+\lambda u=0.$$ The solution of this Laplacian equation describes the shape of an eigenvibration, of frequency $\sqrt{\lambda}$, of a homogeneous membrane stretched over the frame $\Omega$. It is well-known that the spectrum of the Laplacian is discrete and that its eigenfunctions form an orthonormal basis of the space $L^2(\Omega)$. For general $1<p<\infty$, the first eigenvalue $\lambda_1(p,\Omega)$ of the $p$-Laplacian $-\Delta_p$ is simple and isolated. The second eigenvalue $\lambda_2(p,\Omega)$ is well-defined and has a “variational characterization"; see [@20]. The corresponding eigenfunction has exactly two nodal domains, cf. [@14]. However, we know little about the higher eigenvalues and eigenfunctions of the $p$-Laplacian when $p\not=2$. It is unknown whether the variational eigenvalues (\[lambda\]) exhaust the spectrum of equation (\[p-laplacian equation\]). In this paper, we only discuss the variational eigenvalues (\[lambda\]). For (\[lambda\]), there are asymptotic estimates, cf. [@17] and [@18]. [@21], [@22], and [@23] discuss the $p$-Laplacian eigenvalue problem as $p\rightarrow\infty$ and $p\rightarrow 1$.
The Cheeger’s constant which was first studied by J.Cheeger in [@9] is defined by $$\label{cheeger inequality}
h(\Omega):=\displaystyle\inf_{D\subseteq\Omega}\frac{|\partial D|}{|D|},$$ with $D$ varying over all smooth subdomains of $\Omega$ whose boundary $\partial D$ does not touch $\partial\Omega$, and with $|\partial D|$ and $|D|$ denoting the $(n-1)$-dimensional and $n$-dimensional Lebesgue measures of $\partial D$ and $D$. We call a set $C\subseteq \overline{\Omega}$ a Cheeger set of $\Omega$ if $\displaystyle\frac{|\partial C|}{|C|}=h(\Omega)$. For more about uniqueness and regularity, we refer to [@11]. Cheeger sets are of significant importance in the modelling of landslides, see [@24], [@25], or in fracture mechanics, see [@26].
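As a simple illustration of definition (\[cheeger inequality\]) (this worked example is ours and is not taken from the references above), consider the ball $B_R\subset\mathbb{R}^n$. The isoperimetric inequality gives $|\partial D|\geq n\,\omega_n^{1/n}|D|^{(n-1)/n}$ for every smooth $D\subseteq B_R$, where $\omega_n$ denotes the volume of the unit ball, so that $$\frac{|\partial D|}{|D|}\geq n\,\omega_n^{1/n}|D|^{-1/n}\geq n\,\omega_n^{1/n}|B_R|^{-1/n}=\frac{n}{R},$$ while concentric balls $B_r$ with $r\rightarrow R$ give $|\partial B_r|/|B_r|=n/r\rightarrow n/R$. Hence $h(B_R)=n/R$, in agreement with the lower bound (\[Faber-Krahn inequality\]) below for $k=1$.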
The classical Cheeger inequality relates the first eigenvalue of the Laplacian and the Cheeger constant (cf. [@3]): $$\lambda_{1}(2,\Omega)\geq \bigg(\frac{h(\Omega)}{2}\bigg)^2\quad\mbox{i.e.}\quad h(\Omega)\leq 2\sqrt{\lambda_{1}(2,\Omega)},$$ which was extended to the $p$-Laplacian in [@12]: $$\lambda_{1}(p,\Omega)\geq \bigg(\frac{h(\Omega)}{p}\bigg)^p.$$ When $p=1$, the first eigenvalue of the $1$-Laplacian is defined by $$\label{1-laplace}
\lambda_{1}(1,\Omega):=\min_{0\not=u\in BV(\Omega)}\displaystyle\frac{\int_{\Omega}|Du|+\int_{\partial\Omega}|u|d\mathcal{H}^{n-1}}{\int_{\Omega}|u|dx},$$ where $BV(\Omega)$ denotes the space of functions of bounded variation in $\Omega$. From [@3], $\lambda_{1}(1,\Omega)=h(\Omega)$. Moreover, problem (\[cheeger inequality\]) and problem (\[1-laplace\]) are equivalent in the following sense: a function $u\in BV(\Omega)$ is a minimizer of (\[1-laplace\]) if and only if almost every level set is a Cheeger set. An important difference between $\lambda_1(p,\Omega)$ and $h_1(\Omega)$ is that the first eigenfunction of the $p$-Laplacian is unique (up to scaling), while the uniqueness of the Cheeger set depends on the topology of the domain. For counterexamples, see [@4 Remark 3.13]. For more results about the eigenvalues of the 1-Laplacian, we refer to [@6] and [@7].
As to the more general Lipschitz domain, we need the following definition of perimeter: $$P_{\Omega}(E):=\sup\bigg{\{}\int_E \mbox{div}\phi dx\bigg{|} \phi\in C^1_c(\Omega, \mathbb{R}^n), |\phi|\leq 1, \mbox{div} \phi\in L^{\infty}(\Omega)\bigg{\}}.$$ For convenience, we denote $|\partial E|:=P_{\Omega}(E)$. The higher order Cheeger constant is defined by $$h_k(\Omega):=\inf\{\lambda\in \mathbb{R}^+|\exists \ E_1,E_2,\cdots,E_k\subseteq \Omega, E_i\cap E_j=\emptyset\ \mbox{for}\ i\not=j,\max_{1,2,\cdots,k}\frac{|\partial E_i|}{|E_i|}\leq \lambda \};$$ if $|E|=0$, we set $\displaystyle\frac{|\partial E|}{|E|}=+\infty$. An equivalent characterization of the higher order Cheeger constant is (see [@4]) $$h_k(\Omega):=\inf_{\mathfrak{D}_k}\max_{i=1,2,\cdots,k}h(E_i),$$where $\mathfrak{D}_k$ is the set of all partitions of $\Omega$ into $k$ subsets. We set $h_1(\Omega):=h(\Omega)$. Obviously, if $R\subseteq \Omega$, then $h_k(\Omega)\leq h_k(R)$.
For the high-order Cheeger constants, there is a conjecture: $$\label{conjecture}
\lambda_{k}(p,\Omega)\geq \bigg(\frac{h_k(\Omega)}{p}\bigg)^p,\qquad \forall\ 1\leq k < +\infty, \ 1< p < +\infty.$$ From [@14 Theorem 3.3], the second variational eigenfunction of $-\Delta_p$ has exactly two nodal domains; see also [@20]. It follows that (\[conjecture\]) holds for $k=1,2.$ We refer to [@4 Theorem 5.4] for more details. However, by Courant’s nodal domain theorem, the $k$-th variational eigenfunction need not have exactly $k$ nodal domains. Therefore, the inequality (\[conjecture\]) on a domain is still an open problem for $k>2$.
In this paper, we obtain an asymptotic estimate for $h_k(\Omega)$, establish a high-order Cheeger inequality for general $k$, and discuss the reversed inequality. To deal with the high-order Cheeger inequality, we impose some restrictions on the domain.
If there exist an $n$-dimensional rectangle $R\subset \Omega$ and constants $c_1,c_2$ independent of $\Omega$ such that $c_1|R|\leq |\Omega|\leq c_2|R|$, we call $R$ a comparable inscribed rectangle of $\Omega$.
In graph theory, when $p=2$ the high-order Cheeger inequality was proved in [@1], and was improved in [@2]. In [@1], using orthogonality of the eigenfunctions of Laplacian in $l_2$ and a random partitioning, they got $$\frac{\lambda_k}{2}\leq \rho_G(k)\leq O(k^2)\sqrt{\lambda_k},$$ where $\rho_G(k)$ is the $k$-way expansion constant, the analog of $h_k$. But, when it comes to the domain case, there is no such random partitioning. Therefore, we adapt the methods in [@2] to get:
\[theorem-1\] Let $\Omega\subset\mathbb{R}^n$ be a bounded domain with a comparable inscribed rectangle. For $1<p<\infty$, we have the following asymptotic estimates: $$\label{my Cheeger inequality}
h_k(\Omega)\leq C {k^{\frac{1}{n}}}\bigg(\frac{\lambda_1(p,\Omega)}{h_1(\Omega)}\bigg)^{\frac{q}{p}},\qquad \forall\ 1\leq k < +\infty,$$where $C$ depends only on $n,p$, and $\frac{1}{p}+\frac{1}{q}=1$.
There are some lower bounds for the first eigenvalue of the $p$-Laplacian; see [@19]. There is also a lower bound in terms of $h_k$ when $\Omega$ is a planar domain of finite connectivity $k$.
Let $(S,g)$ be a Riemannian surface, and let $D\subset S$ be a domain homeomorphic to a planar domain of finite connectivity $k$. Let $F_k$ be the family of relatively compact subdomains of $D$ with smooth boundary and with connectivity at most $k$. Let $$h_k(D):=\inf_{D'\in F_k}\frac{|\partial D'|}{|D'|},$$where $|D'|$ is the area of $D'$ and $|\partial D'|$ is the length of its boundary. Then, $$\lambda_{1}(p,D)\geq \bigg(\frac{h_k(D)}{p}\bigg)^p.$$
The results of theorem \[theorem-1\] generalize the above theorem to more general cases.
As to the reversed inequality, if $\Omega\subset\mathbb{R}^n$ is convex, the following lower bound (the Faber-Krahn inequality) for $h_k(\Omega)$ was proved in [@4]: $$\label{Faber-Krahn inequality}
h_k(\Omega)\geq n\big(\frac{k\omega_n}{|\Omega|}\big)^{\frac{1}{n}},\ \forall\; k=1,2,\cdots,$$where $\omega_n$ is the volume of the unit ball in $\mathbb{R}^n$. Therefore $$\label{Faber-Krahn inequality-2}
0<h_1(\Omega)\leq h_2(\Omega)\leq\cdots\leq h_k(\Omega)\rightarrow +\infty, \mbox{as}\; k\rightarrow\infty.$$ However, for general domain, inequalities (\[Faber-Krahn inequality\]) and (\[Faber-Krahn inequality-2\]) are not true at all for $p>1$. In fact, there are counter-examples in [@15] and [@16] to show that there exist domains such that $h_k(\Omega)\leq c,$ where $c$ depends only on $n$. Meanwhile, $\lambda_{k}(p,\Omega)\rightarrow +\infty$. Therefore, the reversed inequality of (\[my Cheeger inequality\]) is not hold for general domain.
Let us now consider convex domains. By the John ellipsoid theorem (cf. [@27 Theorem 1.8.2]) and the definition of a comparable inscribed rectangle, every convex $\Omega$ admits a comparable inscribed rectangle $R$ such that $c_1|R|\leq |\Omega|\leq c_2|R|$, where $c_1,c_2$ depend only on $n$.
On the other hand, according to [@17] and [@18], for $1<p<+\infty$, there exist $C_1,C_2$ depending only on $p,n$, such that $$\label{weyl's asymptotic}
C_1\bigg(\frac{k}{|\Omega|}\bigg)^{\frac{1}{n}}\leq \lambda_{k}^{\frac{1}{p}}(p,\Omega)\leq C_2 \bigg(\frac{k}{|\Omega|}\bigg)^{\frac{1}{n}},\qquad \forall\ k \in\mathbb{N}.$$
Therefore, if $\Omega$ is a bounded convex domain, combining Theorem \[theorem-1\], (\[Faber-Krahn inequality\]), and (\[weyl’s asymptotic\]), the following inequality holds.
\[theorem-2\] Let $\Omega\subset\mathbb{R}^n$ be a bounded convex domain, then there exist $C_1,C_2$ depending only on $n$, such that $$C_1\bigg(\frac{k}{|\Omega|}\bigg)^{\frac{1}{n}}\leq h_{k}(\Omega)\leq C_2 \bigg(\frac{k}{|\Omega|}\bigg)^{\frac{1}{n}},\qquad \forall\ k \in\mathbb{N}.$$
By the two theorems above, we obtain a two-sided estimate of $h_k(\Omega)$ in terms of $\lambda_{k}(p,\Omega)$.
\[cor-1\] Let $\Omega\subset\mathbb{R}^n$ be a bounded convex domain. Then, for $1<p<\infty$, there exist $C_1,C_2$ depending only on $p,n$, such that $$C_1 \lambda_{k}(p,\Omega)\leq h_k^p(\Omega)\leq C_2 \lambda_{k}(p,\Omega).$$
From [@4], when $\Omega\subset \mathbb{R}^n$ is a Lipschitz domain, we have $$\limsup_{p\rightarrow 1}\lambda_k(p,\Omega)\leq h_k(\Omega).$$
This paper is organized as follows: in Section 2, we establish some variants of Cheeger’s inequalities; Section 3 is devoted to the proofs of Theorem \[theorem-1\] and Theorem \[theorem-2\].
Some variants of Cheeger’s inequalities
=======================================
In this section, we will give several variants of Cheeger’s inequalities. For a subset $S\subseteq \Omega$, define $\displaystyle\phi(S)=\frac{|\partial S|}{|S|}$. The Rayleigh quotient of $\psi$ is defined by $\displaystyle\mathcal{R}(\psi):=\frac{\int_{\Omega}|\nabla\psi|^pdx}{\int_{\Omega}|\psi|^pdx}$. We define the support of $\psi$ by $Supp(\psi)=\{x\in\Omega|\psi(x)\not=0\}$. If $Supp(f)\cap Supp(g)=\emptyset$, we say $f$ and $g$ are disjointly supported. Let $\Omega_\psi(t):=\{x\in\Omega|\psi(x)\geq t\}$ be the level set of $\psi$. For an interval $I=[t_1, t_2]\subseteq \mathbb{R}$, let $|I|=|t_2-t_1|$ denote the length of $I$. For any function $\psi$, we set $\Omega_\psi(I):=\{x\in \Omega|\psi(x)\in I\}$, $\displaystyle\phi(\psi):=\min_{t\in \mathbb{R}}\phi(\Omega_{\psi}(t))$, and $t_{opt}:=\min\{t\in\mathbb{R}|\phi(\Omega_\psi(t))=\phi(\psi)\}$.
\[lemma-1\] For any $\psi\in W_0^{1,p}(\Omega)$, there exists a subset $S\subseteq Supp\psi$ such that $\phi(S)\leq p({\mathcal{R}(\psi)})^{\frac{1}{p}}$.
The proof can be found in the appendix of [@11]; we reproduce it here for the reader’s convenience.
Note that $|\nabla|\psi||\leq |\nabla \psi|$, so we only need to show the conclusion for $\psi\geq 0$. Suppose first that $\omega\in C_0^{\infty}(\Omega)$. Then by the coarea formula and by Cavalieri’s principle $$\int_{\Omega}|\nabla\omega|dx=\int_{-\infty}^{\infty}|\partial\Omega_\omega(t)|dt=\int_{-\infty}^{\infty}\frac{|\partial\Omega_\omega(t)|}{|\Omega_\omega(t)|}|\Omega_\omega(t)|dt$$ $$\label{inequality 1}\geq \inf_t\frac{|\partial \Omega_\omega(t)|}{|\Omega_\omega(t)|}\int_{-\infty}^{+\infty}| \Omega_\omega(t)|dt= \inf_t\frac{|\partial \Omega_\omega(t)|}{|\Omega_\omega(t)|}\int_{\Omega}|\omega|dx=\phi(\Omega_\omega{(t_{opt})})\int_{\Omega}|\omega|dx.$$ Since $C_0^{\infty}(\Omega)$ is dense in $W_0^{1,1}(\Omega)$, the above inequality also holds for $\omega\in W_0^{1,1}(\Omega)$. Define $\Phi(\psi)=|\psi|^{p-1}\psi$. Then Hölder’s inequality implies $$\int_{\Omega}|\nabla\Phi(\psi)|dx=p\int_{\Omega}|\psi|^{p-1}|\nabla\psi|dx\leq p\|\psi\|_p^{p-1}\|\nabla \psi\|_p.$$ Meanwhile, (\[inequality 1\]) applied to $\omega=\Phi(\psi)$ gives $$\int_{\Omega}|\nabla\Phi(\psi)|dx\geq \phi(\Omega_\Phi{(t_{opt})})\int_{\Omega}|\psi|^pdx.$$ Therefore, there exists a subset $S:=\Omega_\Phi{(t_{opt})}\subseteq Supp\psi$ such that $$\phi(S)\leq \frac{\int_{\Omega}|\nabla\Phi(\psi) |dx}{\int_{\Omega}|\Phi(\psi)|dx}\leq \frac{p\|\psi\|_p^{p-1}\|\nabla \psi\|_p}{\int_{\Omega}|\psi|^pdx}=p\frac{\|\nabla \psi\|_p}{\|\psi\|_p}=p({\mathcal{R}(\psi)})^{\frac{1}{p}}.$$
Let $\displaystyle \mathcal{E}_f:=\int_{\Omega}|\nabla f|^pdx$. Then $\displaystyle\mathcal{R}(f)=\frac{\mathcal{E}_f}{\|f\|_p^p}$. To use the classical Cheeger’s inequality for truncated functions, we introduce $\displaystyle \mathcal{E}_f(I):=\int_{\Omega_f(I)}|\nabla f|^pdx$.
\[Lem-2\] For any function $f\in W_0^{1,p}(\Omega)$, and interval $I=[b,a]$ with $a>b\geq 0$, we have $$\mathcal{E}_f(I)\geq \frac{(\phi(f)|\Omega_f(a)||I|)^p}{|\Omega_f(I)|^{\frac{p}{q}}},$$where $\frac{1}{p}+\frac{1}{q}=1$.
We first prove it for $f\in C_0^\infty(\Omega)$. By Coarea formula and Cavalieri’s principle, $$\int_{\Omega_f(I)}|\nabla f|dx=\int_I|\partial \Omega_f(t)|dt=\int_I\frac{|\partial\Omega_f(t)|}{|\Omega_f(t)|}|\Omega_f(t)|dt$$ $$\qquad\qquad\qquad \geq \phi(f)\int_I|\Omega_f(t)|dt\geq \phi(f)|\Omega_f(a)||I|.$$ The Hölder inequality gives $$\bigg(\int_{\Omega_f(I)}|\nabla f|dx\bigg)\leq \bigg(\int_{\Omega_f(I)}|\nabla f|^pdx\bigg)^{\frac{1}{p}}\bigg(\int_{\Omega_f(I)}dx\bigg)^{\frac{1}{q}}=\bigg(\int_{\Omega_f(I)}|\nabla f|^pdx\bigg)^{\frac{1}{p}}|\Omega_f(I)|^{\frac{1}{q}}.$$ Combining above two inequalities, we get this Lemma for $f\in C_0^\infty(\Omega)$. Arguments as in the proof of Lemma \[lemma-1\] give this lemma for $f\in W_0^{1,p}(\Omega)$.
Construction of separated functions
====================================
In this section, we will prove Theorem \[theorem-1\] and Theorem \[theorem-2\]. We use the method developed in [@2] for the high-order Cheeger inequality on graphs. Our proof consists of three steps. First, we deal with the case of an $n$-dimensional rectangle $\Omega=(a_1,b_1)\times(a_2,b_2)\times\cdots\times(a_n,b_n)\subset\mathbb{R}^n$ where only a single variable is varied. Second, we extend this to the case of an $n$-dimensional rectangle where several variables are varied. Finally, we discuss the general domain.
$n$-dimensional rectangle with single variable. {#subsection4.1}
-----------------------------------------------
Let $f\in W_0^{1,p}(\Omega)$ be any non-negative function with $\|f\|_{p}=1$. In this subsection, we discuss $f(x_1,x_2,\cdots,x_l,\cdots,x_n)$ with only the variable $x_l$ varied and the other variables fixed. We denote $\delta:=(\frac{\phi^p(f)}{\mathcal{R}(f)})^{\frac{q}{p}}$. Given $I\subseteq \mathbb{R}^+$, let $L(I):=\displaystyle\int_{\{x\in\Omega| f(x)\in I\}}|f|^pdx.$ We say $I$ is $W$-dense if $L(I)\geq W$. For any $a\in \mathbb{R}^+$, we define $$\label{distance function}
dist(a,I):=\inf_{b\in I}\frac{|a-b|}{b}.$$ The $\varepsilon$-neighborhood of a region $I$ is the set $N_\varepsilon(I):=\{a\in \mathbb{R}^+| dist(a,I)<\varepsilon\}$. If $N_\varepsilon(I_1)\cap N_\varepsilon(I_2)=\emptyset$, we say $I_1,I_2$ are $\varepsilon$-well separated.
\[lem4.1\] Let $I_1,\cdots, I_{2k}$ be a set of $W$-dense and $\varepsilon$-well separated regions. Then, there are $k$ disjointly supported functions $f_1,\cdots,f_k$, each supported on the $\varepsilon$-neighborhood of one of the regions such that $$\mathcal{R}(f_i)\leq \frac{2^{p+1}\mathcal{R}(f)}{k\varepsilon^p W}, \forall 1\leq i\leq k.$$
For any $1\leq i\leq 2k$, we define the truncated function $$f_i(x):=f(x)\max\{0,1-\frac{dist(f(x),I_i)}{\varepsilon}\}.$$ Then $\|f_i\|_p^p\geq L(I_i)$. Noting that the regions are $\varepsilon$-well separated, the functions are disjointly supported. By an averaging argument, there exist $k$ functions $f_1,\cdots, f_k$ (after renaming) satisfy the following. $$\int_{\Omega}|\nabla f_i|^pdx\leq \frac{1}{k}\sum_{j=1}^{2k}\int_{\Omega}|\nabla f_j|^pdx, \quad 1\leq i\leq k.$$ By the construction of distance and $I_i\subset \mathbb{R}^1$, we know that $$\int_{\Omega}|\nabla \max\{0,1-\frac{dist(f(x),I_i)}{\varepsilon}\}|^p|f(x)|^pdx\leq (\frac{1+\varepsilon}{\varepsilon})^p\int_{\Omega}|\nabla f(x)|^pdx.$$ Therefore $$\int_{\Omega}|\nabla f_i(x)|^pdx\leq (1+(\frac{1+\varepsilon}{\varepsilon})^p)\int_{\Omega}|\nabla f(x)|^pdx\leq (\frac{2}{\varepsilon})^p\int_{\Omega}|\nabla f|^pdx.$$ Thus, for $1\leq i\leq k$,$$\mathcal{R}(f_i)=\displaystyle\frac{\|\nabla f_i\|_p^p}{\|f_i\|_p^p}\leq \frac{\displaystyle\sum_{j=1}^{2k}\int_{\Omega}|\nabla f_j|^pdx}{\displaystyle k\min_{i\in[1,2k]}\|f_i\|_p^p}\leq \frac{\displaystyle 2 (2^p\int_{\Omega}|\nabla f|^pdx)}{k\varepsilon^pW}=\frac{ 2^{p+1}\mathcal{R}(f)}{k\varepsilon^pW}.$$
Let $0<\alpha<1$ be a constant that will be fixed later. For $i\in \mathbb{Z}$, we define the interval $I_i:=[\alpha^{i+1},\alpha^i]$. We let $L_i:=L(I_i)$. We partition each interval $I_i$ into $12k$ subintervals of equal length, $$I_{i,j}=[\alpha^i(1-\frac{(j+1)(1-\alpha)}{12k}),\alpha^i(1-\frac{j(1-\alpha)}{12k})],\ \ \mbox{for}\ 0\leq j\leq 12k-1,$$ so that $\displaystyle |I_{i,j}|=\frac{\alpha^i(1-\alpha)}{12k}$. Set $L_{i,j}=L(I_{i,j})$. We say a subinterval $I_{i,j}$ is heavy if $\displaystyle L_{i,j}\geq \frac{c\delta L_{i-1}}{k}$, where $c>0$ is a constant determined later; otherwise we say it is light. We use $\mathcal{H}_i$ to denote the set of heavy subintervals of $I_i$ and $\mathcal{L}_i$ for the set of light subintervals. Let $h_i:=\sharp \mathcal{H}_i$ denote the number of heavy subintervals. If $h_i\geq 6k$, we say $I_i$ is balanced, denoted by $I_i\in \mathcal{B}$.
Using Lemma \[lem4.1\], it is sufficient to find $2k$, $\displaystyle\frac{\delta}{k}$-dense, $\displaystyle\frac{1}{k}$ well-separated regions $R_1,R_2,\cdots, R_{2k}$, such that each region is a union of heavy subintervals. We will use the following strategy: from each balanced interval we choose $2k$ separated heavy subintervals and include each of them in one of the regions. In order to ensure that the regions are well separated, once we include $I_{i,j}\in \mathcal{H}_i $ into a region $R$ we leave the two neighboring subintervals $I_{i,j-1}$ and $I_{i,j+1}$ unassigned, so as to separate $R$ from the rest of the regions. In particular, for all $1\leq a\leq 2k$ and all $I_{i}\in \mathcal{B}$, we include the $(3a-1)$-th heavy subinterval of $I_i$ in $R_a$. $R_a:=\cup_{I_i\in\mathcal{B} }I_{i,a}$. Note that if an interval is balanced, then it has at least $6k$ heavy subintervals and we can include one heavy subinterval in each of the $2k$ regions. Moreover, by the construction of the distance function (\[distance function\]), the regions are $\frac{1-\alpha}{12k}$-well separated. It remains to prove that these $2k$ regions are dense. Let $$\Delta:=\sum_{I_{i}\in \mathcal{B}}L_{i-1}.$$ Then, since each heavy subinterval $I_{i,j}$ has a mass of at least $\frac{c \delta L_{i-1}}{k}$, by the construction all regions are $\frac{c\delta\Delta}{k}$-dense.
Therefore, we have the following lemma.
\[Lem3.2\] There are $k$ disjointly supported functions $f_1,\cdots, f_k$, such that for all $1\leq i\leq k$, $supp(f_i)\subseteq supp(f)$, and $$\mathcal{R}(f_i)\leq \bigg(\frac{24k}{(1-\alpha)}\bigg)^p\frac{2 \mathcal{R}(f)}{c\delta\Delta }, \quad \forall 1\leq i\leq k.$$
Now we just need to lower bound $\Delta$ by an absolute constant.
For any interval $I_i\not\in \mathcal{B}$, $$\mathcal{E}(I_i)\geq \frac{6(\alpha^{p(1+\frac{1}{q})}(1-\alpha))^p\phi^p(f)L_{i-1}}{(12)^p(c\delta)^{\frac{p}{q}}},$$where $\frac{1}{q}+\frac{1}{p}=1$.
Claim: For any light interval $I_{i,j}$,$$\mathcal{E}(I_{i,j})\geq \frac{(\alpha^{p(1+\frac{1}{q})}(1-\alpha))^p\phi^p(f)L_{i-1}}{(c\delta)^{\frac{p}{q}}(12)^pk}.$$
Indeed, observe that $$L_{i-1}=\int_{\Omega_f(I_{i-1})}|f(x)|^pdx\leq |\alpha^{i-1}|^p|\Omega_f(I_{i-1})|\leq |\alpha^{i-1}|^p|\Omega_f(\alpha^i)|.$$ Thus $$|\Omega_f(I_{i,j})|=\int_{\Omega_f(I_{i,j})}dx\leq \int_{\Omega_f(I_{i,j})}\frac{|f(x)|^p}{|\alpha^{i+1}|^p}dx=\frac{1}{(\alpha^{i+1})^p} \int_{\Omega_f(I_{i,j})}|f(x)|^pdx$$ $$\qquad =\frac{1}{(\alpha^{i+1})^p}L_{i,j}\leq \frac{c\delta L_{i-1}}{k(\alpha^{i+1})^p}\leq \frac{c \delta |\alpha^{i-1}|^p|\Omega_f(I_{i-1})|}{k(\alpha^{i+1})^p}\leq \frac{c\delta |\Omega_f(\alpha^i)|}{k\alpha^{2p}}.$$where we use the assumption that $I_{i,j}\in \mathcal{L}_i$. Therefore, by Lemma \[Lem-2\], $$\mathcal{E}(I_{i,j})\geq \frac{(\phi(f)|\Omega_f(\alpha^i)||I_{i,j}|)^p}{|\Omega_f(I_{i,j})|^{\frac{p}{q}}}\geq \frac{(k\alpha^{2p})^{\frac{p}{q}}(\phi(f)|\Omega_f(\alpha^i)||I_{i,j}|)^p}{(c\delta |\Omega_f(\alpha^i)|)^{\frac{p}{q}}}=\frac{(k\alpha^{2p})^{\frac{p}{q}}|\Omega_f(\alpha^i)|(\phi(f)|I_{i,j}|)^p}{(c\delta)^{\frac{p}{q}}}.$$ Note that $|I_{i,j}|=\frac{\alpha^i(1-\alpha)}{12k}$, we have $$\mathcal{E}(I_{i,j})\geq (\frac{k\alpha^{2p}}{c\delta})^{\frac{p}{q}}\frac{L_{i-1}}{|\alpha^{i-1}|^p}\bigg(\phi(f)\frac{\alpha^i(1-\alpha)}{12k}\bigg)^p
=\frac{L_{i-1}(\phi(f)\alpha^{p(1+\frac{1}{q})}(1-\alpha))^p}{k({c\delta})^{\frac{p}{q}}(12)^p}.$$ Therefore, we get the Claim.
Now, since the subintervals are disjoint, $$\mathcal{E}(I_i)\geq \sum_{I_{i,j}\in\mathcal{L}_i}\mathcal{E}(I_{i,j})\geq (12k-h_i) \frac{L_{i-1}(\phi(f)\alpha^{p(1+\frac{1}{q})}(1-\alpha))^p}{k({c\delta})^{\frac{p}{q}}(12)^p}\geq \frac{6L_{i-1}(\phi(f)\alpha^{p(1+\frac{1}{q})}(1-\alpha))^p}{({c\delta})^{\frac{p}{q}}(12)^p},$$where we used the assumption that $I_i$ is not balanced and thus $h_i<6k$.
Now, it is time to lower-bound $\Delta$.
Note that $\|f\|_p=1$.$$\mathcal{R}(f)=\mathcal{E}(f)\geq \sum_{I_i\not\in \mathcal{B}}\mathcal{E}(I_i)\geq \frac{6(\phi(f)\alpha^{p(1+\frac{1}{q})}(1-\alpha))^p}{({c\delta})^{\frac{p}{q}}(12)^p}\sum_{I_i\not\in \mathcal{B}}L_{i-1}.$$ Therefore, $$\sum_{I_i\not\in \mathcal{B}}L_{i-1}\leq \frac{({c\delta})^{\frac{p}{q}}(12)^p\mathcal{R}(f)}{6(\phi(f)\alpha^{p(1+\frac{1}{q})}(1-\alpha))^p}.$$ Set $\alpha=\frac{1}{2}$ and $c^{\frac{p}{q}}:=\frac{3(\alpha^{p(1+\frac{1}{q})}(1-\alpha))^p}{(12)^p}$. From the above inequality and the definition of $\delta$, we get $$\sum_{I_i\not\in \mathcal{B}}L_{i-1}\leq\frac{1}{2}.$$ Note that $1=\|f\|_p^p=\sum_{I_i\in \mathcal{B}}L_{i-1}+\sum_{I_i\not\in \mathcal{B}}L_{i-1}$. Thus, $$\Delta=\sum_{I_i\in \mathcal{B}}L_{i-1}\geq \frac{1}{2}.$$ Then, by Lemma \[Lem3.2\] and the definition of $\delta$, we get $$\mathcal{R}(f_i)\leq \frac{2\mathcal{R}(f)}{c\delta \Delta}(48k)^{p}\leq Ck^{p}\bigg(\frac{\mathcal{R}(f)}{\phi(f)}\bigg)^q.$$
Therefore, we have
\[th4\] For any non-negative function $f\in W_{0}^{1,p}(\Omega)$, there are $k$ disjointly supported functions $f_1,\cdots, f_k$, such that for all $1\leq i\leq k$, $supp(f_i)\subseteq supp(f)$, and $$\mathcal{R}(f_i)\leq Ck^{p}\bigg(\frac{\mathcal{R}(f)}{\phi(f)}\bigg)^q, \quad \forall\ 1\leq i\leq k,$$where $C$ depends only on $p$.
The above arguments can also be used in general dimension $n>1$ without any modification.
General $n$-dimensional cases.
------------------------------
Using arguments as in above subsection, we will first discuss the case of $n$-dimensional rectangle $\Omega=(a_1,b_1)\times(a_2,b_2)\times\cdots(a_n,b_n)\subset\mathbb{R}^n$ with multi-variables changed. Then, we deal with the general domain by comparing the volume of $\Omega$ and the inscribed rectangle.
When $\Omega$ is an $n$-dimensional rectangle $\Omega=(a_1,b_1)\times(a_2,b_2)\times\cdots(a_n,b_n)\subset\mathbb{R}^n$. we get a similar theorem as Theorem\[th4\].
\[thm4.2\] For the first eigenfunction $f\in W_{0}^{1,p}(\Omega)$, there are $k^n$ disjointly supported functions $f_{i,j}(x)$, such that for all $1\leq i\leq k, 1\leq j\leq n$, $supp(f_{i,j}(x))\subseteq supp(f(x))$, and $$\mathcal{R}(f_{i,j}(x))\leq Ck^{p}\bigg(\frac{\mathcal{R}(f)}{\phi(f)}\bigg)^q, \quad \forall\ 1\leq i\leq k,\ 1\leq j\leq n,$$where $C$ depends only on $n, p$.
In the proof of Lemma \[lem4.1\],we set $\theta_{i,j}(x)=\displaystyle\max\{0,1-\frac{dist(f(x_1,\cdots,x_j,\cdots,x_n),I_i)}{\varepsilon}\}$, where $1\leq i\leq k, 1\leq j\leq n$. For each variable, discussing as in subsection \[subsection4.1\], we get $k^n$ support separated functions $f_{i,j}(x)=f(x)\theta_{i,j}(x)$, where $1\leq i\leq k, 1\leq j\leq n$. By the construction, $supp(f_{i,j}(x))\subseteq supp(f(x))$ and $$\mathcal{R}(f_{i,j}(x))\leq Ck^{p}\bigg(\frac{\mathcal{R}(f)}{\phi(f)}\bigg)^q, \quad \forall\ 1\leq i\leq k,\ 1\leq j\leq n,$$where $C$ depends only on $n, p$. Therefore, we get the theorem.
Finally, the case of a general bounded domain $\Omega$ with a comparable inscribed $n$-dimensional rectangle $R\subset\Omega$ can be proved by comparison. More precisely, for the first eigenfunction $f$ of $R$, by Theorem \[thm4.2\], we can find $k^n$ functions $f_{i,j}(x)$. By Lemma \[lemma-1\], we have $k^n$ subsets $S_{i,j}\subset R\subset\Omega$, such that $$\phi(S_{i,j})\leq p(\mathcal{R}(f_{i,j}(x)))^{\frac{1}{p}}\leq C k \bigg(\frac{\mathcal{R}(f)}{\phi(f)}\bigg)^{\frac{q}{p}}.$$ Redefining the subscript, by the definition of $h_k(\Omega)$, we have $$h_k(\Omega)\leq h_k(R)\leq C k^{\frac{1}{n}}\bigg(\frac{\mathcal{R}(f)}{\phi(f)}\bigg)^{\frac{q}{p}}\leq C k^{\frac{1}{n}}\bigg(\frac{\lambda_{1}(p, R)}{h_1(R)}\bigg)^{\frac{q}{p}}.$$ Therefore, we get Theorem \[theorem-1\].
When $\Omega$ is convex, substituting (\[Faber-Krahn inequality\]) and (\[weyl’s asymptotic\]) into the above inequalities, we get $$h_k(\Omega)\leq C k^{\frac{1}{n}}\bigg(\frac{1}{|\Omega|}\bigg)^{\frac{(p-1)q}{np}}=C\bigg(\frac{k}{|\Omega|}\bigg)^{\frac{1}{n}}.$$
Combining (\[Faber-Krahn inequality\]) with the above inequality, we obtain Theorem \[theorem-2\].
Again, using (\[weyl’s asymptotic\]), there exists $C$ such that $$h_k^p(\Omega)\leq C \lambda_{k}(p,\Omega).$$ Thus we prove Corollary \[cor-1\].
[99]{}
J.R. Lee, S.O. Gharan, L.Trevisan. Multi-way spectral partitioning and higher-order Cheeger inequalities. Proceedings of 44th ACM STOC. 2012, pp. 1117-1130. arXiv:1111.1055.
T.C. Kwok, L.C. Lau, Y.T. Lee, S.O. Gharan, L. Trevisan. Improved Cheeger's inequality: Analysis of spectral partitioning algorithms through higher order spectral gap. Proceedings of 45th ACM STOC. 2013, 11-20. arXiv:1301.5584.
B. Kawohl, V. Fridman. Isoperimetric estimates for the first eigenvalue of the $p$-Laplace operator and the Cheeger constant. Comment. Math. Univ. Carolin. 44, 4 (2003), 659-667.
E. Parini. The second eigenvalue of the $p$-Laplacian as $p$ goes to $1$. Int. J. Diff. Eqns., (2010), 1-23.
E. Parini. An introduction to the Cheeger problem. Surveys in Mathematics and its Applications, Volume 6 (2011), 9-22.
K.C. Chang. The spectrum of the $1$-Laplace operator. Preprint.
K.C. Chang. The spectrum of the $1$-Laplace operator and Cheeger constant on graph. Preprint.
R. Osserman. Isoperimetric inequalities and eigenvalues of the Laplacian, Proceedings of the International Congress of Mathematics, Helsinki, (1978).
J. Cheeger. A lower bound for the smallest eigenvalue of the Laplacian, in: Problems in Analysis, A Symposium in Honor of Salomon Bochner, R.C. Gunning, Ed., Princeton Univ. Press, 1970, 195-199.
V. Caselles, M. Novaga, A. Chambolle. Some remarks on uniqueness and regularity of Cheeger sets, Rendiconti del Seminario Matematico della Università di Padova (2010), Volume 123, 191-202.
L. Lefton, D. Wei. Numerical approximation of the first eigenpair of the $p$-Laplacian using finite elements and the penalty method, Numer. Funct. Anal. Optim. 18 (1997), 389-399.
M. Cuesta, D.G. de Figueiredo, J.-P. Gossez. A nodal domain property for the $p$-Laplacian, C.R. Acad. Sci. Paris, t. 330, Série I, (2000), 669-673.
A. Anane, O. Chakrone, M. Filali, B. Karim. Nodal domains for the $p$-Laplacian, Advances in Dynamical Systems and Applications, Volume 2, Number 2 (2007), 135-140.
P. Drábek, S.B. Robinson. On the generalization of the Courant nodal domain theorem, Journal of Differential Equations 181 (2002), 58-71.
J. Xiao. The $p$-Faber-Krahn inequality noted, Around the Research of Vladimir Maz'ya I, International Mathematical Series, Volume 11, 2010, 373-390.
V. Maz'ya. Integral and isocapacitary inequalities, Linear and Complex Analysis, Amer. Math. Soc. Transl. Ser., Amer. Math. Soc., Providence, RI, vol. 226 (2009), 85-107.
L. Friedlander. Asymptotic behaviour of the eigenvalues of the $p$-Laplacian, Communications in Partial Differential Equations, Volume 14, Issue 8-9, 1989, 1059-1069.
J.P. García, I. Peral. Comportement asymptotique des valeurs propres du $p$-laplacien [Asymptotic behaviour of the eigenvalues of the $p$-Laplacian], C.R. Acad. Sci. Paris Sér. I Math. 307 (1988), 75-78.
G. Poliquin. Bounds on the principal frequency of the $p$-Laplacian, arXiv:1304.5131v2 [math.SP], 2014.
A. Anane, N. Tsouli. On the second eigenvalue of the $p$-Laplacian, Nonlinear Partial Differential Equations (Fès, 1994), Pitman Research Notes in Mathematics Series 343, Longman, Harlow, (1996), 1-9.
P. Lindqvist, P. Juutinen. On the higher eigenvalues for the $\infty$-eigenvalue problem, Calculus of Variations and Partial Differential Equations, Volume 23, Issue 2 (2005), 169-192.
M. Belloni, P. Juutinen, B. Kawohl. The $p$-Laplace eigenvalue problem as $p\rightarrow\infty$ in a Finsler metric. J. Eur. Math. Soc. 8 (2006), 123-138.
B. Kawohl, M. Novaga. The $p$-Laplace eigenvalue problem as $p \rightarrow 1$ and Cheeger sets in a Finsler metric, J. Convex Anal. 15 (2008), 623-634.
P. Hild, I.R. Ionescu, Th. Lachand-Robert, I. Rosca. The blocking of an inhomogeneous Bingham fluid. Applications to landslides, ESAIM: Mathematical Modelling and Numerical Analysis (2010), Volume 36, Issue 6, 1013-1026.
I.R. Ionescu, Th. Lachand-Robert. Generalized Cheeger sets related to landslides, Calculus of Variations and Partial Differential Equations, (2005), Volume 23, Issue 2, 227-249.
J.B. Keller. Plate failure under pressure, SIAM Review, 22 (1980), 227-228.
C.E. Gutiérrez. The Monge-Ampère equation, Birkhäuser, Boston-Basel-Berlin, (2001).
|
---
author:
- 'Niral Desai,'
- 'Can Kilic,'
- 'Yuan-Pao Yang,'
- Taewook Youn
bibliography:
- 'bib\_FDM\_xdim.bib'
title: Suppressed flavor violation in Lepton Flavored Dark Matter from an extra dimension
---
Introduction {#sec:intro}
============
While the existence of dark matter (DM) is strongly supported by astronomical observations, its microscopic nature remains a mystery. In the absence of experimental input from particle physics experiments such as the Large Hadron Collider (LHC), direct, or indirect DM detection experiments, models of DM are designed to be simple, and to be compatible with extensions of the Standard Model that are motivated by other considerations. For instance, in models that address the naturalness problem of the scalar sector in the Standard Model (SM) by introducing partner particles that are odd under a $Z_{2}$ symmetry, the DM can be the lightest partner particle, which often leads to its observed relic abundance through thermal production in the early universe. Alternatively, models of asymmetric DM [@Nussinov:1985xr; @Gelmini:1986zz; @Barr:1990ca; @Barr:1991qn; @Kaplan:1991ah; @Kaplan:2009ag; @Petraki:2013wwa; @Zurek:2013wia] allow for a simple connection between DM and the matter/antimatter asymmetry in the SM sector. Axion DM [@PhysRevD.16.1791; @PhysRevLett.38.1440; @PhysRevLett.40.223; @PhysRevLett.40.279] is motivated by its connection to the strong CP problem.
Recently, models of Flavored Dark Matter (FDM) [@MarchRussell:2009aq; @Cheung:2010zf; @Kile:2011mn; @Batell:2011tc; @Agrawal:2011ze; @Kumar:2013hfa; @Lopez-Honorez:2013wla; @Kile:2013ola; @Batell:2013zwa; @Agrawal:2014una; @Agrawal:2014aoa; @Hamze:2014wca; @Lee:2014rba; @Kile:2014jea; @Kilic:2015vka; @Calibbi:2015sfa; @Agrawal:2015tfa; @Bishara:2015mha; @Bhattacharya:2015xha; @Baek:2015fma; @Chen:2015jkt; @Agrawal:2015kje; @Yu:2016lof; @Agrawal:2016uwf; @Galon:2016bka; @Blanke:2017tnb; @Blanke:2017fum; @Renner:2018fhh; @Dessert:2018khu] have been introduced to consider a different type of connection, between DM and the flavor structure of the SM. In FDM models, the DM is taken to transform non-trivially under lepton, quark, or extended flavor symmetries, and it couples to SM fermions at the renormalizable level via a mediator. This coupling is taken to be of the form $$\lambda_{ij}\, \bar{\chi}_{i}\, \psi_{j}\, \phi + {\rm h.c.}, \label{eq:FDMgeneral}$$ where the $\chi_{i}$ represent the DM “flavors”, the $\psi_{j}$ are generations of a SM fermion (such as the right-handed leptons) and $\phi$ is the mediator. Both particle physics and astrophysical signatures of FDM have become active areas of research.
Because of the non-trivial flavor structure of the interaction of equation \[eq:FDMgeneral\], one of the main phenomenological challenges for FDM models is to keep beyond the Standard Model flavor changing processes under control. Indeed, when no specific structure is assumed for the entries in the $\lambda_{ij}$ matrix, the off-diagonal elements can give rise to flavor changing neutral currents (FCNCs) with rates that are excluded experimentally [@TheMEG:2016wtm; @Tanabashi:2018oca]. Most phenomenological studies of FDM models simply assume that the entries in the $\lambda_{ij}$ matrix have a specified form, such as Minimal Flavor Violation (MFV) [@DAmbrosio:2002vsn], in order to minimize flavor violating processes, but it is not clear that there is a UV completion of the FDM model where the MFV structure arises naturally.
In this paper we will adopt a benchmark of lepton-FDM, where the SM fields participating in the FDM interaction of equation \[eq:FDMgeneral\] are the right-handed ($SU(2)$ singlet) leptons, and we will show that in a (flat) five-dimensional (5D) UV completion[^1] of this model, the rates of flavor violating processes can be naturally small. In fact, as we will show, in the region of parameter space where relic abundance and indirect detection constraints are satisfied, the branching fraction for $\mu\rightarrow e\gamma$, which is the leading flavor violating process, is orders of magnitude below the experimental bounds.
We take the DM ($\chi_{i}$) and mediator ($\phi$) fields to be confined to a brane on one end of the extra dimension (the “FDM brane”), and the Higgs field to be confined to a brane on the other end (the “Higgs brane”), while the SM fermion and gauge fields are the zero modes of corresponding 5D bulk fields. In the bulk and on the FDM brane, there exist global $SU(3)$ flavor symmetries for each SM fermion species $\{q_{L}, u_{R}, d_{R}, \ell_{L}, e_{R}\}$, but these symmetries are broken on the Higgs brane. Flavor violation can only arise due to the mismatch between the basis in which the Yukawa couplings and the boundary-localized kinetic terms (BLKTs) [@Georgi:2000ks; @Dvali:2001gm; @Carena:2002me; @delAguila:2003bh; @delAguila:2003kd; @delAguila:2003gv; @delAguila:2006atw] on the Higgs brane are diagonal, and the basis in which the interaction of equation \[eq:FDMgeneral\] on the FDM brane is diagonal. Naively, one may think that no such mismatch can arise, since the FDM interaction starts out proportional to $\delta_{ij}$, and must therefore remain so after any unitary basis transformation. The Higgs brane BLKTs however cause shifts in the normalization of the lepton kinetic terms in a non-flavor universal way, and therefore the basis transformation necessary to bring the fields back into canonically normalized form involves rescalings, which are not unitary. By the time this is done and all interactions on the Higgs brane are brought to diagonal form, the FDM interaction is no longer diagonal. However, the size of the off-diagonal entries can be controlled by adjusting the profiles of the leptons along the extra dimension. In particular, by an appropriate choice of bulk masses, the fermion profiles can be made to peak on either brane, and be exponentially suppressed on the other. In the limit where the lepton profiles are sharply peaked on the FDM brane, the effect of all Higgs brane couplings vanish, and there is no flavor violation. Of course, in that limit the lepton zero modes, which only obtain masses from the Yukawa interactions on the Higgs brane, also become massless. Thus there is a tension between reproducing the correct lepton masses and suppressing lepton flavor violating processes. In the rest of this paper, we will quantitatively study this setup, and show that there are regions in the parameter space where the model can be made consistent with all experimental constraints.
The layout of the paper is as follows: In section \[sec:xdim\], we will introduce the details of the 5D model. Then in section \[sec:constraints\], we will study the impact of constraints from relic abundance, direct and indirect DM detection experiments, flavor violating processes and collider searches on the parameter space of the model. We will conclude in section \[sec:conclusions\].
Details of the model {#sec:xdim}
====================
[**Generalities:**]{} As described in the introduction, we will adopt a benchmark model of lepton-FDM. Since we wish to consider a 5D UV completion, it is convenient to make use of 4 component Dirac spinor notation. We introduce three flavors of DM $$\Psi_{\chi,i}=\begin{pmatrix} \chi_{L,i}\\ \chi_{R,i} \end{pmatrix},$$ and a scalar mediator field $\phi$ with hypercharge $+1$, such that the 4D effective Lagrangian contains an interaction between $\chi_{L,i}$ and the right handed leptons $e_{R,j}$ $$\lambda_{ij}\, \bar{\chi}_{L,i}\, e_{R,j}\, \phi + {\rm h.c.} \label{eq:FDM}$$
This effective interaction arises from an orbifolded flat extra dimension of length $L$, with the FDM brane at $y=0$ and the Higgs brane at $y=L$. As we will see in section \[sec:constraints\], constraints on the resonant production of the Kaluza-Klein (KK) modes of the SM gauge bosons suggest that the KK scale must be $\pi / L \gsim 10~$TeV, but we remark that the KK scale can in principle be much higher ($L^{-1}\lsim M_{\rm Planck,5D}$), which significantly simplifies the cosmological history. We will make no further assumptions about the KK scale.
[**Field Content:**]{} The SM gauge fields and fermions will all be taken to be the zero modes of corresponding 5D fields in the bulk. The boundary conditions for these fields are chosen such that the chiral matter content of the SM arises in the zero modes [@Ponton:2012bi]. In particular, we introduce $$\Psi_{\ell,i}=\begin{pmatrix} \ell_{L,i}\\ \ell_{R,i} \end{pmatrix}, \qquad \Psi_{e,i}=\begin{pmatrix} e_{L,i}\\ e_{R,i} \end{pmatrix},$$ where the SM left-handed ($SU(2)$ doublet) and right-handed ($SU(2)$ singlet) leptons are the zero modes of $\ell_{L,i}$ and $e_{R,j}$, while the zero modes of $\ell_{R,i}$ and $e_{L,j}$ are projected out by the boundary conditions. Additionally, the left-handed quark doublet and two quark singlets $$\Psi_{q,i} = \begin{pmatrix} q_{L,i}\\ q_{R,i} \end{pmatrix}, \qquad \Psi_{u,i} = \begin{pmatrix} u_{L,i}\\ u_{R,i} \end{pmatrix}, \qquad \Psi_{d,i} = \begin{pmatrix} d_{L,i}\\ d_{R,i} \end{pmatrix}$$ exist in the bulk, and the zero modes of $q_{R,i}, u_{L,i},$ and $d_{L,i}$ are projected out similarly to the leptons. The boundary conditions for the SM gauge bosons are chosen such that the $A^{5}$ is projected out for all of them.
The Higgs doublet field $H$ is taken to be confined to the Higgs brane, where the SM Yukawa couplings arise, and the FDM fields $\Psi_{\chi,i}$ and $\phi$ are taken to be confined to the FDM brane, where the FDM interaction of equation \[eq:FDM\] arises.
[**Flavor Structure:**]{} In our model, the bulk and FDM brane respect an exact flavor symmetry ${\mathcal G}_{lepton}=SU(3)_{\ell}\times SU(3)_{e}$ within the lepton sector, under which the $\Psi_{\ell,i}$ transform as $({\bf 3},{\bf 1})$, while the $\Psi_{e,i}$ and $\Psi_{\chi,i}$ transform as $({\bf 1},{\bf 3})$. This symmetry is broken on the Higgs brane. Consequently, the lepton Yukawa couplings are not a-priori assumed to have a special flavor structure. Of course, in the absence of any other source of symmetry breaking, the Yukawa terms can be brought into diagonal form via a change of basis, and a $U(1)^{3}$ symmetry will survive, forbidding any lepton flavor violating processes. However, due to the absence of a flavor symmetry on the Higgs brane, we also need to include BLKTs for the leptons that are off-diagonal, and as we will show below in detail, together with the FDM interaction these generically break the flavor symmetry down to just the overall lepton number ($U(1)_{L}$) such that lepton flavor violating processes are no longer forbidden. As we will show in section \[sec:constraints\] these processes can be well below experimental bounds with a natural choice of parameters in our setup.
The quark sector has a similar flavor symmetry ${\mathcal G}_{quark} = SU(3)_q \times SU(3)_u \times SU(3)_d$ in the bulk and on the FDM brane. ${\mathcal G}_{quark}$ is also broken on the Higgs brane by Yukawa couplings and BLKTs down to overall baryon number ($U(1)_{B}$). Unlike the leptons however, there are no additional interactions for the quarks on the FDM brane, and due to gauge symmetry, the BLKTs are diagonal in the same basis as the kinetic terms. As a result, the only source of quark flavor violation in addition to those already present in the SM arises at loop level due to KK quarks in loops. Since the KK scale can be arbitrarily large, there are no further constraints from flavor violation in the quark sector.
[**KK mode decomposition:**]{} Bulk fermions have mass terms $M_\Psi$, which determine the 5D profiles of the zero modes. In particular, the profiles of the fermion zero modes are proportional to $e^{-M_\Psi x^{5}}$. We will choose the mass parameters such that the right-handed lepton profiles peak towards the FDM brane and they are suppressed at the Higgs brane. This can explain the smallness of the 4D effective $\tau$ Yukawa coupling for $\mathcal O(1)$ values of $M_\Psi L$, as we will show in Section 3. Due to the unbroken flavor symmetries in the bulk, for each of the SM fermions $\{q_{L},u_{R},d_{R},\ell_{L},e_{R}\}$, the three generations have identical profiles, thus the small [*ratios*]{} of Yukawa couplings $y_{e} / y_{\tau}$ and $y_{\mu} / y_{\tau}$ will not be addressed in our model. Explicitly, the KK mode decomposition for a 5D fermion field with a bulk mass $M_{\Psi}$ can be written as: $$\Psi(x^{\mu},x^{5})= \frac{C_{\Psi}}{\sqrt{L}}\, e^{-M_{\Psi} x^{5}}\, \Psi^{0}(x^{\mu})+\sum_{n=1}^{\infty} f_{\Psi,n}(x^{5})\, \Psi^{n}(x^{\mu}), \label{eq:KK}$$ where $\Psi^{0}$ is the zero mode, the coefficient $C_{\Psi}$ is chosen such that the kinetic term for $\Psi^{0}$ is properly normalized, and the $\Psi^{n}$ are the KK modes, with profiles $f_{\Psi,n}(x^{5})$ in the extra dimension. As we will see below, the smallness of lepton flavor violating processes is a consequence of the lepton zero mode profiles being small on the Higgs brane.
[**Interactions:**]{} The bulk Lagrangian contains only the kinetic terms for the gauge fields as well as the kinetic (with minimal gauge coupling) and mass terms for the fermions. The Lagrangian of the lepton sector on the FDM brane includes, in addition to the $\Psi_{\chi}$ and $\phi$ kinetic and mass terms (the $\Psi_{\chi}$ are degenerate at this level due to the flavor symmetry $\mathcal{G}_{lepton}$), the following terms $${\mathcal L}_{y=0} \supset \left(\lambda_{0}\, \delta_{ij}\, \bar{\Psi}_{\chi_{i}} \Psi_{e_{j}}\, \phi + {\rm h.c.}\right) + \alpha^{\ell}_{0}\, \delta_{ij}\, \bar{\Psi}_{\ell_{i}}\, i \partial_{\mu} \bar{\sigma}^{\mu}\, \Psi_{\ell_{j}} + \alpha^{e}_{0}\, \delta_{ij}\, \bar{\Psi}_{e_{i}}\, i \partial_{\mu} \bar{\sigma}^{\mu}\, \Psi_{e_{j}}. \label{eq:FDM5D}$$ Keeping only the zero modes for the leptons, this becomes $$\begin{aligned}
{\mathcal L}_{y=0}& \supset & \left(\lambda_{0}\frac{C_{e}}{\sqrt{L}} \delta_{ij} \bar{\chi}_{L,i} e_{R,j} \phi + {\rm h.c.}\right) \nonumber\\
& & + \alpha_{0}^{\ell}\frac{C_{\ell}^{2}}{L} \delta_{ij} \bar{\ell}_{L,i} i \partial_{\mu} \bar{\sigma}^{\mu} \ell_{L,j} + \alpha_{0}^{e} \frac{C_{e}^{2}}{L} \delta_{ij} \bar{e}_{R,i} i \partial_{\mu} \bar{\sigma}^{\mu} e_{R,j}.
\label{eq:FDMzeromodes}\end{aligned}$$ Thus the effective size of the coupling in the FDM interaction is $$\lambda \equiv \frac{\lambda_{0}}{\sqrt{L}}\, C_{e}. \label{eq:lambda}$$ Assuming the dimensionless quantity $\lambda_{0} / \sqrt{L}$ appearing in the 5D theory to be $\mathcal O(1)$, the 4D effective FDM coupling is therefore not particularly suppressed. Note that for completeness we have included BLKTs on the FDM brane. However, due to the exact flavor symmetry ${\mathcal G}_{lepton}$ there, those are characterized only by the two dimensionless quantities $\alpha_{0}^{\ell,e} / L$, and the flavor structure is proportional to $\delta_{ij}$, just like the couplings of the FDM interaction. The FDM brane BLKTs do not contribute to flavor violating processes. The only effect of $\alpha_{0}^{\ell,e} / L$ is to change the normalization coefficients $C_{\ell}$ and $C_{e}$ when bringing the zero modes to canonically normalized form, but apart from that they have no further role to play in the rest of this paper.
The Lagrangian on the Higgs brane includes, in addition to the SM Higgs kinetic term and potential, the lepton Yukawa couplings, as well as BLKTs: $${\mathcal L}_{y=L} \supset \left(Y^{L}_{0,ij}\, \bar{\Psi}_{\ell_{i}} \Psi_{e_{j}}\, H + {\rm h.c.}\right) + \alpha^{\ell}_{0,ij}\, \bar{\Psi}_{\ell_{i}}\, i \partial_{\mu} \bar{\sigma}^{\mu}\, \Psi_{\ell_{j}} + \alpha^{e}_{0,ij}\, \bar{\Psi}_{e_{i}}\, i \partial_{\mu} \bar{\sigma}^{\mu}\, \Psi_{e_{j}}.$$ Again, concentrating on the zero modes, this becomes $$\begin{aligned}
{\mathcal L}_{y=L} &\supset& \left(Y^{L}_{0,ij} \frac{C_{\ell}C_{e}}{L} e^{-(M_{\ell}+M_{e})L} \bar{\ell}_{L,i} e_{R,j} H + \mathrm{h.c.} \right) \nonumber\\
&&+ \alpha^{\ell}_{0,ij} \frac{C_{\ell}^{2}}{L} e^{-2M_{\ell}L} \bar{\ell}_{L,i} i \partial_{\mu} \bar{\sigma}^{\mu} \ell_{L,j} + \alpha^{e}_{0,ij} \frac{C_{e}^{2}}{L} e^{-2M_{e}L} \bar{e}_{R,i} i \partial_{\mu} \bar{\sigma}^{\mu} e_{R,j}.\end{aligned}$$ We see that the effective Yukawa coupling becomes $$Y^{L}_{ij} \equiv \frac{Y^{L}_{0,ij}}{L}\, C_{\ell}\, C_{e}\, e^{-(M_{\ell}+M_{e})L}. \label{eq:yukcpl}$$ In particular, we see that the effective 4D Yukawa couplings are down by a factor of $e^{-(M_{\ell}+M_{e})L}$ from the original (dimensionless) couplings $Y^{L}_{0} / L$ appearing in the 5D theory. Note that this is in contrast with the effective 4D FDM couplings that are unsuppressed. As mentioned earlier, this can explain the smallness of the SM $\tau$ Yukawa coupling, even with $Y^{L}_{0,\tau\tau} / L \sim \mathcal O(1)$, for $(M_\ell + M_e)L \sim \mathcal O(1)$.
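To get a feel for the numbers in equation \[eq:yukcpl\], the short sketch below evaluates the exponential suppression factor and the 5D Yukawa coupling needed to reproduce the $\tau$ mass at a benchmark-like point. The zero-mode normalization $C_{\Psi}^{2}=2M_{\Psi}L/(1-e^{-2M_{\Psi}L})$ used here is our simplifying assumption (it normalizes the bulk profile alone and neglects BLKT corrections), so the output is only indicative.

```python
import numpy as np

# A rough numerical illustration of eq. (yukcpl). The zero-mode normalization
# C_Psi^2 = 2 M_Psi L / (1 - exp(-2 M_Psi L)) assumed here follows from
# normalizing the bulk profile alone and neglects BLKT corrections.
def C2(ML):
    """Squared zero-mode normalization constant for bulk mass M (ML dimensionless)."""
    return 2.0 * ML / (1.0 - np.exp(-2.0 * ML))

MeL, MlL = 8.0, 1.0                  # benchmark-like bulk masses in units of 1/L
y_tau = np.sqrt(2) * 1.777 / 246.0   # SM tau Yukawa coupling

suppression = np.sqrt(C2(MlL) * C2(MeL)) * np.exp(-(MlL + MeL))
print(f"C_l C_e exp(-(M_l+M_e)L) = {suppression:.2e}")
print(f"required Y^L_0,tautau/L  = {y_tau / suppression:.1f}")
```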
Note that the BLKT coefficients $\alpha^{\ell,e}_{0,ij}$ on the Higgs brane, unlike the BLKT coefficients $\alpha^{\ell,e}_{0}$ on the FDM brane, are not proportional to the identity (or even diagonal) in flavor space. However, the coefficients in the effective 4D theory for the zero modes are suppressed: $$\alpha^{\ell}_{ij} \equiv \frac{\alpha^{\ell}_{0,ij}}{L}\, C_{\ell}^{2}\, e^{-2M_{\ell}L}, \qquad \alpha^{e}_{ij} \equiv \frac{\alpha^{e}_{0,ij}}{L}\, C_{e}^{2}\, e^{-2M_{e}L}. \label{eq:alphas}$$ As we described in the introduction, this will play a major role in the smallness of flavor violating processes, even though the $\alpha^{\ell}_{0,ij} / L$ and $\alpha^{e}_{0,ij} / L$ coefficients may be $\mathcal O(1)$ and have no special flavor structure.
[**Choice of basis:**]{} While we have now introduced the most general Lagrangian consistent with our 5D setup and the flavor symmetry $\mathcal{G}_{lepton}$, it is not straightforward in this description to calculate the rate of flavor violating processes such as $\mu\rightarrow e \gamma$, since both the kinetic terms and the Yukawa terms (and consequently the mass terms once the Higgs field is set to its vacuum expectation value) are non-diagonal in flavor space. The description of the physics is made much simpler by performing a number of field redefinitions and rotations such that both the kinetic terms and the mass terms for the fermions become diagonal. As described in the introduction, at the end of this process, all flavor non-diagonal effects can be encoded in the FDM coupling matrix.
Let us start our discussion in a basis where the Yukawa matrix $Y^{L}$ is diagonal. $C_{\ell}$ and $C_{e}$ were chosen such that the [*flavor-diagonal*]{} coefficients of the kinetic terms of the zero modes are one, however due to the presence of the BLKT coefficients $\alpha^{\ell ,e}_{ij}$, there are flavor off-diagonal contributions to the kinetic terms as well. Thus as a first step, we perform $SU(3)$ rotations $U^{\ell}$ and $U^{e}$ in order to diagonalize the kinetic terms. At this point, the kinetic terms are diagonal, but not canonically normalized, so we perform rescalings on the $\Psi_{\ell,i}$ and $\Psi_{e,i}$, implemented by the (diagonal) matrices $\Delta^{\ell}$ and $\Delta^{e}$. Generically, the $\Delta^{\ell,e}$ are *not* proportional to $\delta_{ij}$, due to the effects of the off-diagonal BLKT entries.
Now that the kinetic terms are diagonal in flavor space and canonically normalized, we perform another set of $SU(3)$ rotations given by the matrices $V^{\ell}$ and $V^{e}$ to bring the Yukawa interactions back into a diagonal form. At the end of this procedure only the FDM couplings are non-diagonal, and they encode all flavor-violating interactions. In going to the new basis $$\begin{aligned}
\Psi_{\ell,i} &\rightarrow& V^{\ell}_{ij} (\Delta^{\ell})^{-1}_{jk} U^{\ell}_{kl} \Psi_{\ell,l}\qquad {\rm and}\nonumber \\
\Psi_{e,i} &\rightarrow& V^{e}_{ij} (\Delta^{e})^{-1}_{jk} U^{e}_{kl} \Psi_{e,l},\end{aligned}$$ the original FDM coupling matrix $\lambda\delta_{ij}$ of equation \[eq:FDM5D\] transforms into (suppressing flavor indices) $$\lambda\, (U^{e})^{\dagger}\, (\Delta^{e})^{-1}\, (V^{e})^{\dagger}. \label{eq:FDMnondiagonal}$$ In section \[sec:constraints\] we will use this formula in order to estimate the size of flavor-violating processes. In particular, the size of such processes depends on off-diagonal entries of this matrix (which we will generically denote by $\delta \lambda$). In order to be consistent with constraints from lepton-flavor violating processes such as $\mu\rightarrow e \gamma$ [@TheMEG:2016wtm], it is sufficient if $\delta\lambda / \lambda \lesssim \mathcal O(10^{-3})$. As we are about to describe however, there are stronger constraints on this ratio from indirect detection constraints.
![Flavor non-universal contribution to the $\chi$ two-point function at the one-loop level.[]{data-label="fig:chimasssplitting"}](image/chi_mass_splitting){width="50.00000%"}
[**DM spectrum:**]{} Note that the three $\chi$ flavors start out having degenerate masses due to the $SU(3)$ flavor symmetry on the FDM brane. However, at loop level, the breaking of the flavor symmetry is communicated to the $\chi$ fields through the diagram shown in figure \[fig:chimasssplitting\]. The contribution from the zero modes of the leptons in the loop is by far the dominant contribution (even when the KK scale is taken as low as 10 TeV), and since the lepton zero modes are chiral, the diagram involves $\chi_{L}$ on both sides, making this a contribution to the kinetic term (as opposed to the mass term) for $\Psi_{\chi}$. Due to the difference between the lepton Yukawa couplings, one then needs to perform flavor-dependent rescalings on the $\Psi_{\chi}$ to bring them back to canonical normalization, which induces small mass splittings. This exact mechanism leading to a mass splitting between different DM flavors was studied in ref. [@Agrawal:2015tfa], with the result that $\chi_{e}$ is the lightest flavor, and the mass splittings between flavors $i$ and $j$ scale as $$\frac{\Delta m_{\chi,ij}}{m_{\chi}} \;\propto\; \frac{\lambda^{2}}{16\pi^{2}}\left(y_{i}^{2}-y_{j}^{2}\right), \label{eq:chisplit}$$ up to a loop function of $m_{\chi}$ and $m_{\phi}$ whose explicit form can be found in ref. [@Agrawal:2015tfa], where the $y_{i}$ are the SM lepton Yukawa couplings. For $m_{\chi}$ and $m_{\phi}$ in the TeV range, this leads to $m_{\chi,\mu}$ being larger than $m_{\chi,e}$ by $\sim \mathcal O(10)$ eV and to $m_{\chi,\tau}$ being larger by $\sim \mathcal O(1)$ keV. Being the lightest flavor, $\chi_e$ is stable. However, the heavier flavors can decay down to the lightest one through a dipole transition. This is shown in figure \[fig:chidecay\]. Due to the larger mass splitting and thus less phase space suppression, the $\chi_{\tau}$ lifetime is much shorter compared to the $\chi_{\mu}$ lifetime, and bounds on the production of keV-range X-rays [@Ng:2019gch; @Perez:2016tcq] in this decay place severe constraints on the model, $\delta\lambda / \lambda \lsim \mathcal O(10^{-6})$. Note that this is a significantly stronger constraint than the one imposed by $\mu\rightarrow e\gamma$, therefore once the X-ray indirect detection bounds are satisfied, flavor violating processes are automatically safe. We will now analyze these and other constraints on our model quantitatively.
![Leading decay mode for a heavier DM flavor to a lighter one. Note that the flavor-violating coupling $\delta \lambda$ can be on either of the vertices, depending on whether the lepton in the loop is $e_{i}$ or $e_{j}$, and the photon line can be emitted either from $\phi$ or from the lepton line.[]{data-label="fig:chidecay"}](image/chi_i_to_chi_j_xdim){width="50.00000%"}
Constraints {#sec:constraints}
===========
Let us now turn our attention to the parameter space of our model, and study the impact of various types of experimental constraints on this parameter space. There are three bulk parameters of interest in the lepton sector: the KK scale $\pi / L$, and the dimensionless quantities for the bulk lepton masses $M_{\ell}L$ and $M_{e}L$. As we will see in this section, the only constraint on the KK scale arises from KK resonance searches at the LHC, while all other constraints we discuss are imposed on the dimensionless parameters of the form $M L$. On the FDM brane, we have the two masses $m_{\chi}$ and $m_{\phi}$, as well as the coupling $\lambda$. As we will describe soon, $\lambda$ will always be chosen such that the correct DM relic abundance is obtained. On the Higgs brane, we have the Yukawa couplings $Y^{L}_{0,ij}/L$ and the BLKT coefficients $\alpha^{\ell,e}_{0,ij}/L$. In the basis where the Yukawa matrix is diagonal, for a given choice of bulk masses we will choose its eigenvalues such that the correct SM lepton masses are obtained. To study the effects of the $\alpha^{\ell,e}_{0,ij}/L$ parameters, we will perform Monte Carlo studies where each element is chosen randomly, subject to positivity conditions for the matrix.
Note that since the DM and the mediator $\phi$ are confined to the FDM brane, most dark matter constraints are insensitive to the details of the extra-dimensional model. That is, they only depend on $m_\chi$, $m_\phi$, and the effective 4D FDM coupling $\lambda$, which we will fix according to the relic abundance constraint. The 5D parameters are constrained only by the bounds imposed by flavor physics, as well as the decays of the heavier DM flavors, since both processes depend on the off diagonal couplings $\delta\lambda$, the size of which is set by how suppressed the lepton profiles are on the Higgs brane.
Dark Matter Related Constraints
-------------------------------
![Dominant annihilation channel in our model.[]{data-label="fig:fdmannih"}](image/fdm_annih){width="45.00000%"}
![Contours for the value of $\lambda$ necessary to satisfy the relic abundance constraint. In the shaded region, $m_{\chi}>m_{\phi}$, and the DM is unstable.[]{data-label="fig:fdmrelic"}](image/RC){width="70.00000%"}
[**Relic Abundance:**]{} Since the three DM flavors in our model are very nearly degenerate, the relic abundance calculation closely mirrors that considered in ref. [@Agrawal:2015tfa]. In particular, all three flavors freeze out at the same time, and for purposes of setting the relic abundance, they behave like a single Dirac fermion DM species. The leading annihilation process diagram is shown in figure \[fig:fdmannih\]. The DM abundance after freezeout has approximately equal parts $\chi_{e}$, $\chi_{\mu}$ and $\chi_{\tau}$. The annihilation cross section is given by $$\begin{aligned}
\langle \sigma v \rangle = \frac{\lambda^4 m_{\chi}^2}{32 \pi (m_{\chi}^2 + m_\phi^2)^2}.
\end{aligned}$$ Setting $ \langle \sigma v \rangle = 2 \times (2.2 \times 10^{-26} \,\text{cm}^3\, \text{s}^{-1})$ required to obtain the correct relic abundance for a Dirac fermion, we show in figure \[fig:fdmrelic\] the value of $\lambda$ that is required for a range of values of $m_{\chi}$ and $m_{\phi}$. While the value is $\mathcal O(1)$ for the parameter space of interest, it is not so large that perturbative control is lost. In the rest of the paper, for any value of $m_{\chi}$ and $m_{\phi}$, we will choose $\lambda$ to equal the value that gives the correct relic abundance.
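As a rough cross-check of the $\mathcal O(1)$ values of $\lambda$ shown in figure \[fig:fdmrelic\], the sketch below inverts the annihilation cross section for $\lambda$ at a few hypothetical mass points. The only external input beyond the formula above is the standard unit conversion $1~{\rm GeV}^{-2}\simeq 1.17\times10^{-17}~{\rm cm^{3}\,s^{-1}}$.

```python
import numpy as np

GEV2_TO_CM3_S = 1.17e-17        # hbar^2 c^3: converts a cross section times velocity
                                # from GeV^-2 to cm^3 s^-1
SIGMAV_TARGET = 2 * 2.2e-26     # cm^3 s^-1, the Dirac-fermion relic target used above

def lambda_relic(m_chi, m_phi):
    """Coupling giving <sigma v> = lambda^4 m_chi^2 / (32 pi (m_chi^2 + m_phi^2)^2)."""
    sv = SIGMAV_TARGET / GEV2_TO_CM3_S                     # target in GeV^-2
    lam4 = sv * 32 * np.pi * (m_chi**2 + m_phi**2)**2 / m_chi**2
    return lam4**0.25

for m_chi, m_phi in [(500, 1000), (1000, 1500), (1500, 2000)]:   # GeV, hypothetical points
    print(f"m_chi = {m_chi:4d} GeV, m_phi = {m_phi:4d} GeV  ->  "
          f"lambda = {lambda_relic(m_chi, m_phi):.2f}")
```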
[**Indirect Detection:**]{} There are two types of indirect detection signatures in our model: annihilation processes which produce leptons (as well as gamma rays from bremsstrahlung), and flavor-violating heavy $\chi_{\tau}$ decays which produce X-ray photons ($\chi_{\mu}$ decays have a much lower rate, and produce photons in the UV-range where backgrounds are much larger). Limits from the flavor-violating decays place strong bounds on the extra-dimensional parameters of this model, while the annihilation processes are insensitive to details of the extra dimension.
As mentioned in section \[sec:xdim\] (see figure \[fig:chimasssplitting\] and equation \[eq:chisplit\]), the mass splittings between the $\chi$’s are induced by loop processes. Specifically, one finds [@Agrawal:2015tfa] that the dominant decay mode is the one shown in figure \[fig:chidecay\], with one flavor-preserving coupling $\lambda$ and one flavor-violating coupling $\delta\lambda$. The width for this decay mode is parametrically given by [@Agrawal:2015tfa] $$\Gamma_{\chi_{i}\rightarrow\chi_{j}\gamma}\;\sim\;\frac{1}{\pi}\left(\frac{e\,\lambda\,\delta\lambda\, m_{\ell}}{16\pi^{2}\, m_{\phi}^{2}}\right)^{2}\left(\Delta m_{\chi,ij}\right)^{3}, \label{eq:chidecay}$$ with $m_{\ell}$ the mass of the lepton in the loop. Constraints from searches for X-rays in the keV range place bounds on the dark matter decay lifetime. These bounds are worked out in ref. [@Ng:2019gch] for much lighter DM particles decaying according to the mode $\chi_{\nu}\rightarrow\gamma\nu$, with the result $$\tau_{\nu}\;\gtrsim\;{\mathcal O}\!\left(10^{26-29}\right)\ {\rm seconds}.$$ Since the decay mode in our model is $\chi_{i}\rightarrow\chi_{j}\gamma$, and the $\chi$ have ${\mathcal O}({\rm TeV})$ (as opposed to ${\mathcal O}({\rm keV})$) masses, and therefore a much lower number density, the numbers above need to be modified. In particular, the bound is on the number of photons emitted per unit time, which according to an exponential decay law during a time interval $\Delta t$ is given by $\Delta n = - (n / \tau) e^{-t/\tau} \Delta t$, where $n$ is the DM number density at time $t$. Matching this rate between the model used in refs. [@Perez:2016tcq; @Ng:2019gch] (with DM mass $m_{\nu}$ and lifetime $\tau_{\nu}$) and our model, and taking into account that in our model the decaying $\chi_{\tau}$ only comprise $1/3$ of the DM number density, we can set up a correspondence between the bounds on the DM lifetime in the two models: $$\frac{n_{\chi}/3}{\tau_{\chi_{\tau}}}\, e^{-t_{0}/\tau_{\chi_{\tau}}}\;\lesssim\;\frac{n_{\nu}}{\tau_{\nu}}\, e^{-t_{0}/\tau_{\nu}},$$ with $n_{\chi}$ and $n_{\nu}$ the DM number densities in the two models, and $t_0 \simeq 4 \times 10^{17}$ seconds being the present age of the universe. This gives a bound on the $\chi_\tau$ lifetime of $\mathcal O(10^{17-20})$ seconds. Note that this is close to the age of the universe; in other words, if the parameters are chosen close to the bound, the $\chi_{\tau}$ particles in the universe would be just about to start decaying today in sizable numbers. For $m_\chi$ and $m_\phi$ at the TeV scale and $\lambda$ of $\mathcal O(1)$, the relevant off-diagonal coupling is constrained to be $\delta \lambda \lesssim 10^{-6}$. In section \[s.otherConst\] where we will perform a Monte Carlo study scanning over the BLKT coefficients, we will present distributions for the relevant off-diagonal $\lambda$ entries and we will discuss the impact on the allowed parameter space.
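The quoted order of magnitude for the $\chi_{\tau}$ lifetime bound follows from a simple rescaling of the reference bound by the number-density ratio. The sketch below performs this rescaling in the approximation $\tau\gg t_{0}$, where the exponential factors drop out (near the lower end of the range this approximation starts to degrade); the reference values of $(m_{\nu},\tau_{\nu})$ are illustrative picks from the quoted $10^{26-29}$ second range rather than numbers taken from refs. [@Perez:2016tcq; @Ng:2019gch].

```python
def tau_chi_lower_bound(m_nu_keV, tau_nu_s, m_chi_TeV=1.0):
    """Rescale a keV-DM lifetime bound to chi_tau by equating photon rates.
    Valid for lifetimes well above the age of the universe, where the
    exponential factors in the rate correspondence are close to one."""
    m_nu = m_nu_keV * 1e-6      # GeV
    m_chi = m_chi_TeV * 1e3     # GeV
    # equal photon rates: (n_chi/3)/tau_chi = n_nu/tau_nu, with n = rho_DM/m
    return tau_nu_s * (m_nu / m_chi) / 3.0

for m_nu_keV, tau_nu_s in [(2.0, 1e26), (10.0, 1e29)]:   # illustrative reference points
    print(f"tau_nu >= {tau_nu_s:.0e} s at m_nu = {m_nu_keV} keV"
          f"  ->  tau_chi_tau >= {tau_chi_lower_bound(m_nu_keV, tau_nu_s):.1e} s")
```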
![Indirect detection constraints on our model, with the yellow-shaded region being excluded. The most stringent bound comes from AMS-02 experiment [@Elor:2015bho].[]{data-label="fig:fdmindirect"}](image/ID){width="70.00000%"}
We now turn our attention to the annihilation process. The DM particles annihilate to lepton-antilepton pairs of all flavors via processes like that shown in figure \[fig:fdmannih\]. This leads to positron and gamma-ray signatures that experiments such as Fermi-LAT [@Fermi-LAT:2016uux], HESS [@Abdallah:2016ygi], AMS-02 [@Elor:2015bho] and the Planck CMB observations [@Aghanim:2018eyx] are sensitive to. High energy photons are produced mainly by $\pi^{0}\rightarrow\gamma\gamma$ coming from $\tau$’s in the final state. For the mass range of interest to us, the energy of these photons is typically not high enough for HESS to place significant constraints on our model. Positrons can be produced both directly in the annihilation, as well as from the decays of $\mu^{+}$ and $\tau^{+}$ that are produced in the annihilation, although these latter sources give rise to lower positron energies, and the constraints from AMS-02 constrain primarily the directly produced positrons. The same is also true for the Planck constraints. In figure \[fig:fdmindirect\], we show the effect of these indirect detection constraints (dominated by AMS-02) on our model. As we will see next, these are subdominant to direct detection constraints.
![Leading contributions to FDM-nucleon scattering for direct detection.[]{data-label="fig:fdmdirectfeyn"}](image/fdm_directfeyn){width="65.00000%"}
![Direct detection constraints from the Xenon1T experiment on our model, the blue-shaded region being excluded.[]{data-label="fig:fdmxenon"}](image/DD){width="70.00000%"}
[**Direct Detection:**]{} The scattering of one of the $\chi$ flavors from a nucleus proceeds by the loop process shown in figure \[fig:fdmdirectfeyn\]. The parton level cross section for flavor $i$ is given by (see [@Agrawal:2011ze; @Hamze:2014wca]) $$\begin{aligned}
\sigma_{i} = \frac{\mu^2 Z^2}{\pi} \left[ \frac{\lambda^2 e^2}{64 \pi^2 m_\phi^2} \left[ 1 + \frac{2}{3}\log \left( \frac{\Lambda_{i}^{2}}{m_\phi^2} \right) \right] \right]^2,
\end{aligned}$$ where $\mu$ is the reduced mass, and the scale $\Lambda_{i}$ cuts off the infrared divergence in the loop. For the muon and tau flavors, this scale is simply the corresponding lepton mass, $m_{\mu}$ and $m_{\tau}$ respectively. For the electron however, there is a physical scale larger than $m_{e}$ that cuts off the divergence, namely the characteristic momentum exchange in the collision. Since all three flavors of $\chi$ make up the local DM density, the effective cross section relevant for direct detection experiments is simply $(\sigma_{e}+\sigma_{\mu}+\sigma_{\tau})/3$. Using the bounds set by the Xenon1T experiment [@Aprile:2018dbl], we show in figure \[fig:fdmxenon\] (using the value of $\lambda$ at each point that gives the correct relic abundance) the exclusion region in terms of $m_{\chi}$ and $m_{\phi}$. Values of $m_{\chi}\gsim 300$ GeV are compatible with direct detection constraints. Note that the region excluded by indirect detection constraints is already fully excluded by direct detection constraints.
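The formula above is straightforward to evaluate numerically; the sketch below does so for a xenon-like target and a hypothetical parameter point, and illustrates the relative size of the three flavor contributions to the effective cross section. The choice of a $\sim 100$ MeV momentum-transfer cutoff for the electron flavor, as well as the masses and coupling used, are assumptions made for illustration only.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.036
GEV2_TO_CM2 = 0.3894e-27            # 1 GeV^-2 = 0.3894 mb

def sigma_flavor(m_chi, m_phi, lam, cutoff, Z=54, m_N=122.3):
    """Loop-induced chi-nucleus cross section (cm^2) for one lepton flavor,
    evaluating the formula quoted above for a xenon-like target."""
    mu = m_chi * m_N / (m_chi + m_N)                       # reduced mass in GeV
    e2 = 4 * np.pi * ALPHA_EM
    bracket = lam**2 * e2 / (64 * np.pi**2 * m_phi**2) * (
        1 + (2.0 / 3.0) * np.log(cutoff**2 / m_phi**2))
    return mu**2 * Z**2 / np.pi * bracket**2 * GEV2_TO_CM2

m_chi, m_phi, lam = 1000.0, 1500.0, 1.4            # GeV, GeV, hypothetical point
cutoffs = {"e": 0.1, "mu": 0.1057, "tau": 1.777}   # GeV; electron flavor cut off by a
                                                   # typical momentum transfer ~ 100 MeV
sigmas = {f: sigma_flavor(m_chi, m_phi, lam, c) for f, c in cutoffs.items()}
for f, s in sigmas.items():
    print(f"sigma_{f:<3s} = {s:.2e} cm^2")
print(f"effective (flavor-averaged) sigma = {np.mean(list(sigmas.values())):.2e} cm^2")
```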
![The $\phi$ pair-production cross section at the 13 TeV LHC as a function of $m_{\phi}$. The shaded red area denotes the values of $m_\phi$ excluded by the direct detection constraints.[]{data-label="fig:fdmlhc"}](image/phi_pair_prod){width="70.00000%"}
[**Dilepton + MET Searches at the LHC:**]{} Since $\phi$ carries electric charge, it can be pair-produced at colliders. $\phi$ then decays to one of the three $\chi$ flavors and the associated lepton, resulting in a dilepton + MET signature where the lepton flavors on the two sides of the event are uncorrelated. The cross section for this process is very small, both since the production occurs through the electromagnetic interaction, and because $\phi$ is heavy and a scalar (which means that pair production is suppressed near threshold). We show this cross section in figure \[fig:fdmlhc\], calculated with MadGraph [@Alwall:2014hca]. As can be seen by comparing to figure \[fig:fdmxenon\], in the region of parameter space that is not ruled out by direct detection, less than a single event is expected at existing dilepton+MET searches, and therefore even with sophisticated kinematic observables, LHC searches do not lead to any additional exclusion.
Additional constraints {#s.otherConst}
----------------------
[**Constraints from lepton flavor violation and DM decays:**]{} As we have seen, constraints from X-ray searches place severe bounds on off-diagonal entries of the $\lambda$ coupling matrix. We will now work out the constraints on these couplings from lepton flavor violation, and show them to be subdominant. In the process, we will also perform a Monte Carlo study by scanning over the BLKT coefficients, and we will show the impact of the relevant constraints on the parameter space of the model.
As we have shown in section \[sec:xdim\], we can work in a basis where flavor violating couplings are all encoded in the FDM coupling matrix, as in equation \[eq:FDMnondiagonal\]. Equations \[eq:yukcpl\] and \[eq:alphas\] imply that the effective BLKT coefficients are generally suppressed compared to the 4D Yukawa couplings. Using this fact, one can estimate the generic size of the entries of the FDM coupling matrix to the leading order in $\alpha^{\ell, e}_{ij}$ as
$$\begin{aligned}
\left[ \frac{\lambda}{\lambda_0} \right] &\approx&
\bordermatrix{
& (e) & (\mu) & (\tau) \cr
(\chi_e) & 1 & \frac{ \alpha^\ell_{12} Y^L_{ee} Y^L_{\mu\mu} + \alpha^{e}_{12} (Y^L_{\mu\mu})^2}{(Y^L_{ee})^2 - (Y^L_{\mu\mu})^2}
& \frac{ \alpha^\ell_{13} Y^L_{ee} Y^L_{\tau\tau} + \alpha^{e}_{13} (Y^L_{\tau\tau})^2}{(Y^L_{ee})^2 - (Y^L_{\tau\tau})^2} \cr (\chi_\mu) &
\frac{ \alpha^\ell_{21} Y^L_{\mu\mu} Y^L_{ee} + \alpha^{e}_{21} (Y^L_{ee})^2}{(Y^L_{\mu\mu})^2 - (Y^L_{ee})^2}
& 1
& \frac{ \alpha^\ell_{23} Y^L_{\mu\mu} Y^L_{\tau\tau} + \alpha^{e}_{23} (Y^L_{\tau\tau})^2}{(Y^L_{\mu\mu})^2 - (Y^L_{\tau\tau})^2} \cr (\chi_\tau) &
\frac{ \alpha^\ell_{31} Y^L_{\tau\tau} Y^L_{ee} + \alpha^{e}_{31} (Y^L_{ee})^2}{(Y^L_{\tau\tau})^2 - (Y^L_{ee})^2}
& \frac{ \alpha^\ell_{32} Y^L_{\tau\tau} Y^L_{\mu\mu} + \alpha^{e}_{32} (Y^L_{\mu\mu})^2}{(Y^L_{\tau\tau})^2 - (Y^L_{\mu\mu})^2}
& 1} \\
&\approx&
\left(
\begin{array}{ccc}
1 & -\mathcal O(10^{-3}) \alpha^\ell_{12} - \alpha^{e}_{12}
& -\mathcal O(10^{-4}) \alpha^\ell_{13} - \alpha^{e}_{13} \\
\mathcal O(10^{-3}) \alpha^\ell_{21} + \mathcal O(10^{-5}) \alpha^{e}_{21}
& 1
& -\mathcal O(10^{-2}) \alpha^\ell_{23} - \alpha^{e}_{23} \\
\mathcal O(10^{-4}) \alpha^\ell_{31} + \mathcal O(10^{-8}) \alpha^{e}_{31}
& \mathcal O(10^{-2}) \alpha^\ell_{32} + \mathcal O(10^{-3}) \alpha^{e}_{32}
& 1
\end{array}
\right).
\nonumber
\label{eq:fdmapprox}\end{aligned}$$
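The order-of-magnitude estimates in the second line of this expansion follow directly from ratios of the SM lepton Yukawa couplings; the short script below evaluates the coefficients multiplying $\alpha^{\ell}_{ij}$ and $\alpha^{e}_{ij}$ in the first line and reproduces those estimates.

```python
import numpy as np

# Coefficients multiplying alpha^l_ij and alpha^e_ij in the first line of
# eq. (fdmapprox), evaluated with the SM lepton Yukawa couplings.
v = 246.0
y = {"e": np.sqrt(2) * 0.000511 / v,
     "mu": np.sqrt(2) * 0.10566 / v,
     "tau": np.sqrt(2) * 1.77686 / v}

flavors = ["e", "mu", "tau"]
for i in flavors:                 # chi flavor (row of the matrix)
    for j in flavors:             # lepton flavor (column)
        if i == j:
            continue
        denom = y[i] ** 2 - y[j] ** 2
        c_l = y[i] * y[j] / denom     # coefficient of alpha^l_ij
        c_e = y[j] ** 2 / denom       # coefficient of alpha^e_ij
        print(f"(chi_{i:<3s}, {j:<3s}):  alpha^l coeff ~ {c_l:+.1e},  alpha^e coeff ~ {c_e:+.1e}")
```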
![The leading contribution to the process $\mu\rightarrow e\gamma$ in our model. The diagrams with $\chi_\mu$ and $\chi_e$ in the loop require only one off-diagonal coupling, whereas the diagram with $\chi_\tau$ in the loop requires two off-diagonal couplings and is therefore subdominant.[]{data-label="fig:mutoe"}](image/mu_to_e){width="50.00000%"}
We would like to remark on a nontrivial feature of the coupling matrix. In equation \[eq:FDMnondiagonal\], the matrix $V^{e}$ depends on both the $\alpha^{e}_{ij}$ as well as on $\alpha^{\ell}_{ij}$. On the other hand, the matrices $U^{e}$ and $\Delta^{e}$ depend only on $\alpha^{e}_{ij}$, but not on $\alpha^{\ell}_{ij}$, and of course they both become the identity matrix in the limit $\alpha^{e}_{ij}\to 0$. Therefore, the coupling matrix [*squared*]{} ($\lambda\lambda^{\dag}$ as well as $\lambda^{\dag}\lambda$) becomes identity in this limit as well, even for finite $\alpha^{\ell}_{ij}$, since $V^{e}$ is by definition a unitary matrix. It is straightforward to see from figure \[fig:chidecay\], and from figure \[fig:mutoe\] where we show the leading contribution to the process $\mu\to e\gamma$, that for both $\chi$ decays and for lepton flavor violating processes, the FDM couplings indeed appear in the combinations $\lambda\lambda^{\dag}$ and $\lambda^{\dag}\lambda$. Therefore, for these processes, the effects of $\alpha^{\ell}_{ij}$ are always more suppressed than those of $\alpha^{e}_{ij}$, even when $\alpha^{\ell}_{ij} \gg \alpha^{e}_{ij}$, and therefore to a good approximation in the above formula, we can simply drop the $\alpha^{\ell}_{ij}$ terms.
As one can see from the expansion above, the largest elements of the coupling matrix are the $(\chi_e,\mu)$, $(\chi_e,\tau)$ and $(\chi_\mu,\tau)$ entries. Since the sizes of these entries are comparable, and since the experimental bound on the process $\mu\to e\gamma$ ($BR(\mu \to e + \gamma) < 4.2 \times 10^{-13}$ [@TheMEG:2016wtm]) is the strongest among lepton flavor violating processes, if this bound is satisfied in our model, then all other lepton flavor violation bounds will also be satisfied. We calculate the leading contribution to $\Gamma_{\mu\rightarrow e\gamma}$ in our model (figure \[fig:mutoe\]). Expanding in $m_\chi / m_\phi$, we obtain $$\Gamma_{\mu\to e\gamma} = \frac{\alpha \lambda^2 (\delta \lambda)^2}{4096\pi^4}\frac{m_\mu^5}{m_\phi^4} \bigg[ \frac{1}{12}-\frac{1}{6}\frac{m_\chi^2}{m_\phi^2}+\mathcal{O}(\frac{m_\chi^4}{m_\phi^4}) \bigg]^2.
\label{eq:gammaapprox}$$ Here $\delta\lambda$ stands for the largest of the relevant off-diagonal couplings, which in this case is the $(\chi_e,\mu)$ element.
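For orientation, equation \[eq:gammaapprox\] can be converted into a branching ratio by dividing by the total muon width, $\Gamma_{\mu}=\hbar/\tau_{\mu}$. The sketch below does this at a hypothetical parameter point with an $\mathcal O(1)$ diagonal coupling and $\delta\lambda=10^{-6}$, the level suggested by the X-ray constraints, and illustrates how far below the MEG bound the predicted rate sits.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.036
HBAR_GEV_S = 6.582e-25            # GeV s
M_MU, TAU_MU = 0.10566, 2.197e-6  # muon mass (GeV) and lifetime (s)

def br_mu_to_e_gamma(lam, dlam, m_chi, m_phi):
    """Branching ratio from eq. (gammaapprox), keeping the first two terms of the expansion."""
    loop = (1.0 / 12.0 - (1.0 / 6.0) * m_chi**2 / m_phi**2) ** 2
    gamma = ALPHA_EM * lam**2 * dlam**2 / (4096 * np.pi**4) * M_MU**5 / m_phi**4 * loop
    return gamma / (HBAR_GEV_S / TAU_MU)      # divide by the total muon width

# hypothetical point: O(1) diagonal coupling, delta lambda at the X-ray-driven level
print(f"BR(mu -> e gamma) ~ {br_mu_to_e_gamma(1.5, 1e-6, 500.0, 2000.0):.1e}")
print("MEG bound: 4.2e-13")
```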
In order to study the impact of both the $\mu\to e\gamma$ bound as well as the X-ray constraints from $\chi_{\tau}$ decays on the parameter space of our model, we perform a numerical study as follows: we assign random values in the interval $(-1,1)$ (sampled uniformly) to all entries of $\alpha^{\ell}_{0,ij} / L$ and $\alpha^e_{0,ij} / L$ (subject to the constraint that the kinetic terms are symmetric and positive semi-definite such that there are no ghosts in the spectrum), and we also assign random values in the interval $(0,1)$ to the FDM brane BLKT coefficients $\alpha^{e,\ell}_{0} / L$.
For any given $M_e L$ and $M_\ell L$, we can then perform the basis change procedure described in section \[sec:xdim\]. The values of the 5D Yukawa couplings $Y^L_{0}$ are chosen such that the correct SM lepton masses are reproduced. For a range of values for $M_e L$ and $M_\ell L$, we run 100,000 such random trials each, and we calculate the resulting distributions for $Y^L_{0,\tau\tau}/L$ as well as all entries of the $\lambda$ matrix, and from these we calculate the $\mu\to e\gamma$ branching ratio as well as the $\chi_{\tau}$ lifetime. As a general trend, the X-ray constraints impose severe constraints on $\delta\lambda$, which as we argued above are dominated by $\alpha^{e}_{ij}$, which in turn scale as $e^{-M_{e} L}$ (see equation \[eq:alphas\]). Therefore the X-ray constraints favor larger values of $M_{e} L$, while they are fairly insensitive to $M_{\ell} L$.
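To make the procedure concrete, the sketch below runs a single random trial of the basis change for the lepton sector and extracts the induced flavor-violating couplings. It is a simplified stand-in for the scan described above: the seed and inputs are arbitrary, the $C_{\ell}$, $C_{e}$ normalization constants are lumped into the $\mathcal O(1)$ random BLKT matrices, and the 5D Yukawas are not re-fitted to the lepton masses.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_symmetric(scale=1.0):
    """Random symmetric O(1) matrix standing in for a brane BLKT coefficient matrix."""
    a = rng.uniform(-scale, scale, size=(3, 3))
    return 0.5 * (a + a.T)

def canonical_map(alpha):
    """Return T with T^T (1 + alpha) T = 1: a rotation followed by a rescaling."""
    w, U = np.linalg.eigh(np.eye(3) + alpha)
    if np.any(w <= 0):
        raise ValueError("kinetic term not positive definite; redraw the BLKTs")
    return U @ np.diag(1.0 / np.sqrt(w))

# SM lepton Yukawas, diagonal in the starting basis
v = 246.0
Y = np.diag([np.sqrt(2) * m / v for m in (0.000511, 0.10566, 1.77686)])

# effective zero-mode BLKTs, exponentially suppressed as in eq. (alphas);
# the C_l, C_e normalization constants are lumped into the O(1) random matrices
MeL, MlL = 8.0, 1.0
alpha_e = random_symmetric() * np.exp(-2.0 * MeL)
alpha_l = random_symmetric() * np.exp(-2.0 * MlL)
Te, Tl = canonical_map(alpha_e), canonical_map(alpha_l)

# Yukawa matrix with canonical kinetic terms, re-diagonalized by an SVD
Vl, y_new, Ve_h = np.linalg.svd(Tl.T @ Y @ Te)
order = np.argsort(y_new)                     # restore the (e, mu, tau) ordering
Vl, y_new, Ve_h = Vl[:, order], y_new[order], Ve_h[order, :]

# the FDM coupling starts as lambda_0 * identity and inherits the e_R transformation
lam0 = 1.0
lam = lam0 * Te @ Ve_h.T

off = lambda M: np.max(np.abs(M - np.diag(np.diag(M))))
print(f"largest off-diagonal of lambda           : {off(lam):.1e}")
print(f"largest off-diagonal of lambda lambda^T  : {off(lam @ lam.T):.1e}")
```

For $M_{e}L=8$ the off-diagonal entries of $\lambda\lambda^{T}$ come out around the $10^{-7}$ level, while those of $\lambda$ itself can be several orders of magnitude larger; only the former enter the $\chi$ decay and $\mu\to e\gamma$ rates, which is the point emphasized in section \[sec:xdim\].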
![For the parameter point $M_e L = 8$, $M_\ell L = 1$, and as a function of $m_\chi$ and $m_\phi$, we show the contours for the ratio of the X-ray flux from $\chi_{\tau}$ decays to the limit on this flux [@Perez:2016tcq; @Ng:2019gch], using the median value of $\chi_{\tau}$ lifetime obtained from 100,000 random trials. We also show in red the contour where exactly 95% of the trials are consistent with the bound. In other words, the region to the upper left of this contour is excluded by X-ray bounds. The blue dashed curve corresponds to $\lambda=1.5$ for obtaining the correct relic abundance.[]{data-label="fig:chilife"}](image/xrayflux){width="\textwidth"}
![ For the parameter point $M_{e} L = 8$, $M_\ell L = 1$, we plot the distributions of $|\lambda_{ij}/(\frac{1}{3}\mathrm{tr}[\lambda])|$ after 100,000 random trials with $\mathcal{O}(1)$ symmetric BLKT coefficients as inputs. The analytical approximations for each entry from equation \[eq:fdmapprox\] are shown as red vertical lines for comparison. The first index of $\lambda_{ij}$ refers to the $\chi$-flavor, and the second index to the lepton flavor.[]{data-label="fig:fdmstat"}](image/fdmstat){width="100.00000%"}
In order to gain further insight into the parameter dependence of the constraints, we show in figure \[fig:chilife\] how the X-ray flux from $\chi_{\tau}$ decays (where we use the median value for the $\chi_{\tau}$ lifetime from 100,000 trials) depends on $m_\chi$ and $m_\phi$, for the parameter point $M_e L = 8$, $M_\ell L = 1$. We also indicate where the exclusion contour lies, namely where exactly 95% of trials leads to an X-ray flux consistent with the bounds [@Perez:2016tcq; @Ng:2019gch]. When the parameter $M_{e}$ is increased, the contours in this figure will move further up and to the left, making the excluded region smaller, while varying the parameter $M_{\ell}$ will not have a significant effect on the contours. Another way of saying this is as we increase $M_{e}$, parameter regions with smaller and smaller values of $\lambda$ become consistent with X-ray bounds. The parameter point ($M_e L = 8$, $M_\ell L = 1$) we use in this plot is chosen such that there exist points with $\lambda\lsim 1.5$ that are consistent with the bounds. For the same parameter point, we also show in figure \[fig:fdmstat\] the distributions of the $\lambda$ matrix entries (normalized to the diagonal entry, or more precisely $1/3$ of the trace).
In figure \[fig:Y5L\], we illustrate how the Yukawa couplings $Y^L_{0,\tau\tau}/L$ needed to reproduce the $\tau$ mass depend on the bulk mass parameters. The colors indicate in what range the median values in the distribution of $Y^L_{0,\tau\tau}/L$ lie. In this plot, we also show how the X-ray constraints impact the parameter space. In particular, in the green shaded region, there are points in the $m_\chi$-$m_\phi$ parameter space with $\lambda\lsim 1.5$ where the X-ray constraints can be satisfied. The parameter point $M_e L = 8$, $M_\ell L = 1$ used above is just inside this region. Note also that in the green shaded region, larger values of $Y^L_{0,\tau\tau}/L$ are favored.
![Contours for the median value of $\log_{10}[{\rm BR}(\mu\to e\gamma)]$ for the parameter point $M_{e} L = 8$, $M_\ell L = 1$ as a function of $m_{\chi}$ and $m_{\phi}$. For comparison, the experimental bound is BR$(\mu \to e\gamma) < 4.2 \times 10^{-13}$ [@TheMEG:2016wtm].[]{data-label="fig:fdmflav"}](image/muegam){width="\textwidth"}
Finally, also using the parameter point $M_e L = 8$, $M_\ell L = 1$ and 100,000 trials, we show in figure \[fig:fdmflav\] the median value of the branching ratio of $\mu\to e\gamma$ in our model as a function of $m_{\chi}$ and $m_{\phi}$. Clearly, once the X-ray constraints are satisfied, the rate of lepton flavor violating processes are far below the excluded values.
[**LHC Bounds on Resonant Production of KK Modes:**]{}
In addition to the dilepton + MET final state from $\phi$ pair production and decay, another collider signature of our model is the resonant production of KK modes of the gauge bosons from $q$-$\bar{q}$ initial states (note that the triple vertex for QCD with two zero mode gluons and one KK gluon vanishes [@Lillie:2007yh]). The mass scale of the first KK modes is, to zeroth approximation, $\pi / L$, and their coupling to the SM fermion $\psi_i$ (after diagonalizing the kinetic terms) is given by an overlap of the corresponding profiles, of the form $$g_{\psi_i \psi_i V_{1}} = \frac{g\, C_{\psi_i}^{2}}{L} \left( \int^L_0 dy\, e^{-2M_{\psi} y}\, f_{V,1}(y) + \alpha^{\psi_i}_{0}\, f_{V,1}(0) + \alpha^{\psi_i}_{0,ii}\, e^{-2M_{\psi}L}\, f_{V,1}(L) \right). \label{eq:gppg}$$
![The median values of the $Y^U_{0,tt}/L$ and $Y^D_{0,bb}/L$ distributions chosen to reproduce the correct top and bottom mass respectively, for a range of choices of $M_u L$, $M_d L$ and $M_q L$, after 100,000 random trials at each point with $\mathcal{O}(1)$ symmetric BLKT coefficients as inputs. The sets of benchmark points A through D for several values of $M_{q} L$ are also shown.[]{data-label="fig:YQ"}](image/Y5T "fig:"){width=".45\textwidth"} ![The median values of the $Y^U_{0,tt}/L$ and $Y^D_{0,bb}/L$ distributions chosen to reproduce the correct top and bottom mass respectively, for a range of choices of $M_u L$, $M_d L$ and $M_q L$, after 100,000 random trials at each point with $\mathcal{O}(1)$ symmetric BLKT coefficients as inputs. The sets of benchmark points A through D for several values of $M_{q} L$ are also shown.[]{data-label="fig:YQ"}](image/Y5B "fig:"){width=".45\textwidth"}
In the same vein as the procedure for the leptons (equation \[eq:yukcpl\]), we can determine the Yukawa couplings for the quarks after diagonalizing and normalizing their kinetic terms: $$\begin{aligned}
Y^U_{ij} &\equiv \frac{Y^U_{0,ij}}{L} C_q C_{u} e^{-(M_q + M_{u})L}, \\
Y^D_{ij} &\equiv \frac{Y^D_{0,ij}}{L} C_q C_{d} e^{-(M_q + M_{d})L},
\end{aligned}
\label{eq:udyuks}$$ where for given values of bulk masses for the quark fields, the diagonal entries $Y^U_{0,ii}/L$ and $Y^D_{0,ii}/L$ are chosen such that the correct SM quark masses are obtained. Similar to figure \[fig:Y5L\], we show in figure \[fig:YQ\] the median values of the $Y^U_{0,tt}/L$ and $Y^D_{0,bb}/L$ distributions as a function of the bulk quark mass parameters $M_u L$, $M_d L$, and $M_q L$. Since the top mass is large, having $Y^U_{0,ii}/L$ be order one requires a negative bulk mass for one (or both) of $u$ or $q$. Thus, one (or both) of these quark profiles peaks at the Higgs brane, in which case the Higgs brane BLKTs are not suppressed. Unlike the leptons however, the quarks have no interactions on the FDM brane, and due to gauge invariance, the BLKT’s are diagonal in the same basis as the kinetic terms. Therefore there is no basis mismatch giving rise to quark flavor changing processes, in addition to those already present in the SM.
Benchmark point $M_e L$ $M_\ell L$ $M_u L$ $M_d L$
----------------- --------- ------------ --------- ---------
A 8 1 1 5
B 8 1 1 3
C 8 1 -1 5
D 9 -1 1 5
: Four benchmarks for bulk masses to be used in our study of KK resonance bounds. For each of these choices, we will also vary $M_q L$.[]{data-label="tab:ABCD"}
In order to demonstrate how the KK resonance bounds depend on the model parameters (in particular, the bulk masses), we choose four benchmarks, listed in table \[tab:ABCD\]. These points (for several values of $M_{q} L$) are also shown in figures \[fig:Y5L\] and \[fig:YQ\]. Benchmark A is chosen such that in addition to the model being consistent with X-ray constraints, the nominal values of $Y^D_{0,bb}/L$ and $Y^U_{0,tt}/L$ are close to 1. Benchmarks B, C and D represent variations around benchmark A, where in benchmark B a lower value of $M_{d} L$ is considered (resulting in smaller values of $Y^D_{0,bb}/L$ at fixed $M_q$), in benchmark C a lower value of $M_{u} L$ is considered (resulting in smaller values of $Y^U_{0,tt}/L$ at fixed $M_q$), and where benchmark D is chosen to be in even less tension with X-ray constraints than benchmark A, while keeping the value of $Y^L_{0,\tau\tau}$ roughly the same.
![Constraints on the mass of the KK-$Z$ (dashed lines) and the KK gluon (solid lines) for the benchmarks A through D and for several values of $M_q L$, based on dilepton [@Aad:2019fac] and dijet resonance searches [@CMS:2019oju], respectively.[]{data-label="fig:KKGZ"}](image/KK)
In figure \[fig:KKGZ\], we plot the constraints on the mass of the KK $Z$-boson and the KK gluon (first KK mode in each case), based respectively on dilepton resonance searches [@Aad:2019fac] and dijet resonance searches [@CMS:2019oju], as we vary $M_{q} L$ for each of the four benchmarks A through D. The limits are extracted by performing a Monte Carlo scan over the BLKT coefficients (quarks as well as leptons) as before, and the exclusion curve corresponds to 95% of the trials resulting in resonant production cross sections consistent with the experimental bounds (1000 trials per point). There are two main takeaway points from this figure, namely that the KK gluon bound is the dominant one, pushing the KK scale $\pi / L$ to values close to 10 TeV, and that the resonance bounds are fairly insensitive to the bulk mass parameters. Therefore in this paper we have used 10 TeV as a lower bound on the KK scale, but we remind the reader that the KK scale could in principle be much larger. Finally, with the KK scale at or above 10 TeV, no KK modes can be pair produced at the LHC, leading to no additional constraints.
Conclusions {#sec:conclusions}
===========
We have studied a UV completion of lepton FDM in a flat extra dimension, where the DM and mediator fields, and the Higgs field live on branes on opposite ends of the extra dimension. With this setup, lepton flavor violating processes only arise as a result of the misalignment of bases which diagonalize the interactions on the Higgs and the FDM branes, and their size can be controlled by the lepton profiles along the extra dimension, which is achieved by an appropriate choice of bulk masses.
Relic abundance, direct and indirect detection constraints can be satisfied as long as $m_{\chi}\gsim 300~$GeV, and $\lambda\sim \mathcal O(1)$. Due to the global flavor symmetry setup, the DM flavors are very nearly degenerate in mass, with $\chi_{e}$ lighter than $\chi_{\mu}$ by $\mathcal O$(10)eV and lighter than $\chi_{\tau}$ by $\mathcal O$(1)keV. The heavier flavors decay to $\chi_{e}$ via dipole transitions over very long lifetimes. The $\chi_{\tau}$ lifetime is constrained to be longer than the age of the universe, which puts an upper limit on the off-diagonal entries of the coupling matrix $\delta\lambda\lsim \mathcal O(10^{-6})$. In terms of model parameters, this constraint translates to $M_{e} L\gsim 8$. In this parameter range, lepton flavor violating processes such as $\mu\to e\gamma$ remain significantly below experimental constraints. Collider searches push the KK scale $\pi / L$ to 10 TeV or above, and give no additional constraints from the pair production of the mediator $\phi$ beyond the region that is excluded by direct detection.
Our study shows that in lepton-FDM models, a flavor structure that is consistent with constraints from flavor-changing processes can arise from an extra dimensional setup, with a broad range of parameters where all experimental constraints are satisfied. The mechanism in our setup that ameliorates flavor related constraints is fairly general, and a similar construction could help address flavor related issues in other extensions of the Standard Model as well. While we have taken the extra dimension in our study to be flat, it would also be interesting to study whether there may be qualitative changes in the phenomenology in a warped extra dimensional setup.
Acknowledgements {#acknowledgements .unnumbered}
================
We are grateful to Zackaria Chacko for invaluable ideas, insights and discussions. We also thank Prateek Agrawal, Cynthia Trendafilova and Christopher Verhaaren for helpful comments and discussions. The research of the authors is supported by the National Science Foundation Grant Numbers PHY-1620610 and PHY-1914679.
[^1]: For a comprehensive review of extra dimensional models, see for instance ref. [@Ponton:2012bi] and references therein.
|
---
address: |
Center for Advanced Methods in Biological Image Analysis\
Center for Data Driven Discovery\
California Institute of Technology, Pasadena, CA, USA
bibliography:
- 'main.bib'
title: Geometric Median Shapes
---
Shape Analysis, Geometric Median, Median Shape, Average Shape, Segmentation Fusion
|
---
abstract: 'Extending the notion of symmetry protected topological phases to insulating antiferromagnets (AFs) described in terms of opposite magnetic dipole moments associated with the magnetic N$\acute{{\rm{e}}} $el order, we establish a bosonic counterpart of topological insulators in semiconductors. Making use of the Aharonov-Casher effect, induced by electric field gradients, we propose a magnonic analog of the quantum spin Hall effect (magnonic QSHE) for edge states that carry helical magnons. We show that such up and down magnons form the same Landau levels and perform cyclotron motion with the same frequency but propagate in opposite direction. The insulating AF becomes characterized by a topological ${\mathbb{Z}}_{2}$ number consisting of the Chern integer associated with each helical magnon edge state. Focusing on the topological Hall phase for magnons, we study bulk magnon effects such as magnonic spin, thermal, Nernst, and Ettinghausen effects, as well as the thermomagnetic properties of helical magnon transport both in topologically trivial and nontrivial bulk AFs and establish the magnonic Wiedemann-Franz law. We show that our predictions are within experimental reach with current device and measurement techniques.'
author:
- 'Kouki Nakata,$^1$ Se Kwon Kim,$^2$ Jelena Klinovaja,$^1$ and Daniel Loss$^1$'
bibliography:
- 'PumpingRef.bib'
title: Magnonic topological insulators in antiferromagnets
---
Introduction {#sec:Intro}
============
Since the observation of quasiequilibrium Bose-Einstein condensation [@demokritov] of magnons in an insulating ferromagnet (FM) at room temperature, the last decade has seen remarkable and rapid development of a new branch of magnetism, dubbed magnonics [@MagnonSpintronics; @magnonics; @ReviewMagnon], aimed at utilizing magnons, the quantized version of spin-waves, as substitute for electrons with the advantage of low-dissipation. Magnons are chargeless bosonic quasi-particles with a magnetic dipole moment $g \mu_{\rm{B}} {\mathbf{e}}_z$ that can serve as a carrier of information in units of the Bohr magneton $\mu_{\rm{B}}$. In particular, insulating FMs [@spinwave; @onose; @WeesNatPhys; @MagnonHallEffectWees] that possess a macroscopic magnetization \[Fig. \[fig:HelicalAFChiralFM\] (a)\] have been playing an essential role in magnonics. Spin-wave spin current [@spinwave; @WeesNatPhys], thermal Hall effect of magnons [@onose], and Snell’s law [@Snell_Exp; @Snell2magnon] for spin-waves have been experimentally established and just this year the magnon planar Hall effect [@MagnonHallEffectWees] has been observed. A magnetic dipole moving in an electric field acquires a geometric phase by the Aharonov-Casher [@casher; @Mignani; @magnon2; @KKPD; @ACatom; @AC_Vignale; @AC_Vignale2] (AC) effect, which is analogous to the Aharonov-Bohm effect [@bohm; @LossPersistent; @LossPersistent2] of electrically charged particles in magnetic fields, and the AC effect in magnetic systems has also been experimentally confirmed [@ACspinwave].
![(Color online) Left: Schematic representation of spin excitations in (a) a FM with the uniform ground state magnetization and (b) an AF with classical magnetic N$\acute{{\rm{e}}} $el order. Right: Schematic representation of edge magnon states in a two-dimensional topological (a) FM and (b) AF. (a) Chiral edge magnon state where magnons with a magnetic dipole moment $ g\mu_{\rm{B}} {\bf e}_z$ propagate along the edge of a finite sample in a given direction. (b) Helical edge magnon state where up and down magnons ($\sigma =\pm 1$) with opposite magnetic dipole moments $\sigma g\mu_{\rm{B}} {\bf e}_z$ propagate along the edge in opposite directions. The AF thus forms a bosonic analog of a TI characterized by two edge modes with opposite chiralities and can be identified with two independent copies (with opposite magnetic moments) of single-layer FMs shown in (a). []{data-label="fig:HelicalAFChiralFM"}](HelicalAFChiralFM.eps){width="7.5cm"}
Under a strong magnetic field, two-dimensional electronic systems can exhibit the integer quantum Hall effect [@QHEcharge] (QHE), which is characterized by chiral edge modes. Thouless, Kohmoto, den Nijs, and Nightingale [@TKNN; @Kohmoto] (TKNN) described the QHE [@AFQHE; @HaldaneQHEnoB; @VolovikQHE; @JK_IFQAHE] in terms of a topological invariant, known as TKNN integer, associated with bulk wave functions [@HalperinEdge; @BulkEdgeHatsugai] in momentum space. This introduced the notion of topological phases of matter, which has been attracting much attention over the last decade. In particular, in 2005, Kane and Mele [@Z2topo; @QSHE2005] have shown that graphene in the absence of a magnetic field exhibits a quantum spin Hall effect (QSHE) [@QSHE2005; @QSHE2006; @TIreview; @TIreview2], which is characterized by a pair of gapless spin-polarized edge states. These helical edge states are protected from backscattering by time-reversal symmetry (TRS), forming in this sense topologically protected Kramers pairs. This can be seen to be the first example of a symmetry protected topological (SPT) phase [@SPTreviewXGWen; @SPTreviewSenthil; @SenthilLevin] and it is now classified as a topological insulator (TI) [@Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2], which is characterized by a ${\mathbb{Z}}_{2}$ number, as the TKNN integer [@TKNN; @Kohmoto] associated with each edge state.
In this paper we extend the notion of topological phases to insulating antiferromagnets (AFs) in the N$\acute{{\rm{e}}} $el ordered phases which do not possess a macroscopic magnetization, see Fig. \[fig:HelicalAFChiralFM\] (b). The component of the total spin along the N$\acute{{\rm{e}}} $el vector is assumed to be conserved, and it is this conservation law which plays the role of the TRS (which is broken in the ordered AF) that protects the topological phase and helical edge states against nonmagnetic impurities and the details [^1] of the surface. [@SurfaceMode] In particular, using magnons [@AndersonAF; @RKuboAF; @AFspintronicsReview2; @AFspintronicsReview] we thus establish a bosonic counterpart of the TI and propose a magnonic QSHE resulting from the N$\acute{{\rm{e}}} $el order in AFs.
In Ref. \[\], motivated by the above-mentioned remarkable progress in recent experiments, we [@magnon2; @magnonWF; @ReviewMagnon; @KevinHallEffect] have proposed a way to electromagnetically realize the ‘quantum’ Hall effect of magnons in FMs, in the sense that the magnon Hall conductances are characterized by a Chern number [@TKNN; @Kohmoto] in an almost flat magnon band, which hosts a chiral edge magnon state, see Fig. \[fig:HelicalAFChiralFM\] (a). [^2] By providing a topological description [@NiuBerry; @Kohmoto; @TKNN] of the classical magnon Hall effect induced by the AC effect, which was proposed in Ref. \[\], we developed it further into the magnonic ‘quantum’ Hall effect and appropriately defining the thermal conductance for bosons, we found that the magnon Hall conductances in such topological FMs obey a Wiedemann-Franz [@WFgermany] (WF) law for magnon transport [@magnonWF; @ReviewMagnon]. In this paper, motivated by the recent experimental [@SekiAF] demonstration of thermal generation of spin currents in AFs using the spin Seebeck effect [@uchidainsulator; @ishe; @ohnuma; @adachi; @adachiphonon; @xiao2010; @OnsagerExperiment; @Peltier] and by the report [@MagnonNernstAF; @MagnonNernstAF2; @MagnonNernstExp] of the magnonic spin Nernst effect in AFs, we develop Ref. \[\] further into the AF regime [@AFspintronicsReview2; @AFspintronicsReview; @RKuboAF; @AndersonAF; @Kevin2; @DLquantumAF] and propose a magnonic analog of the QSHE [@QSHE2005; @QSHE2006; @Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2] for edge states that carry helical edge magnons \[Fig. \[fig:HelicalAFChiralFM\] (b)\] due to the AC phase. Focusing on helical magnon transport both in topologically trivial and nontrivial bulk [@HalperinEdge; @BulkEdgeHatsugai] AFs, we also study thermomagnetic properties and discuss the universality of the magnonic WF law [@magnonWF; @KJD; @ReviewMagnon]. Using magnons in insulating AFs characterized by the N$\acute{{\rm{e}}} $el magnetic order, we thus establish the bosonic counterpart of TIs [@Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2].
At sufficiently low temperatures, effects of magnon-magnon and magnon-phonon interactions become [@magnonWF; @adachiphonon; @Tmagnonphonon] negligibly small. Indeed, Ref. \[\] reported measurements of magnets at low temperature $T \lesssim {\cal{O}}(1) $K, where the exponent of the temperature dependence of the phonon thermal conductance is larger than the one for magnons. This indicates that for thermal transport the contribution of phonons dies out more quickly than that of magnons with decreasing temperature. We then focus on noninteracting magnons at such low temperatures throughout this work [^3] and assume that the total spin along the $z$ direction is conserved and thus remains a good quantum number.
This paper is organized as follows. In Sec. \[sec:trivial\] we introduce the model system for magnons with a quadratic dispersion due to an easy-axis spin anisotropy in a topologically trivial bulk AF, find that the dynamics can be described as the combination of two independent copies of that in a FM, and thus derive the identical magnonic WF law for a topologically trivial FM and AF. In Sec. \[subsec:ACmagnon\], introducing the model system for magnons in the presence of an AC phase induced by an electric field gradient, we find the correspondence between the single magnon Hamiltonian and the one of an electrically charged particle moving in a magnetic vector potential, and see that the force acting on magnons is invariant under a gauge transformation. In Sec. \[subsec:MQSHE\], we see that each magnon with opposite magnetic dipole moment inherent to the N$\acute{{\rm{e}}} $el order forms the same Landau levels and performs cyclotron motion with the same frequency but in opposite directions, leading to the helical edge magnon state. We find that the AF in the topological ${\mathbb{Z}}_{2}$ phase is characterized by a Chern number associated with each edge state. In Sec. \[subsec:LatticeNumCal\], introducing a tight-binding representation (TBR) of the magnon Hamiltonian, we obtain the magnon energy spectrum and helical edge states numerically, for constant and periodic electric field gradients. In Sec. \[subsec:TopoHallBulk\], we study thermomagnetic properties of Hall transport of bulk magnons, with a focus on the helical edge magnon states, and analyze the differences between the topological and non-topological phases of the AF. In Sec. \[sec:experimentAF\] we give some concrete estimates for experimental candidate materials. Finally, we summarize and give some conclusions in Sec. \[sec:sum\], and comment on open issues in Sec. \[sec:discussion\]. Technical details are deferred to the Appendices.
Topologically trivial AF {#sec:trivial}
========================
In this section, we consider a topologically trivial AF on a three-dimensional ($d=3$) cubic lattice in the ordered phase with the N$\acute{{\rm{e}}} $el order parameter along the $z$ direction, see Fig. \[fig:HelicalAFChiralFM\] (b). Spins of length $S$ on each bipartite sublattice, denoted by A and B, satisfy [@altland; @MagnonNernstAF; @MagnonNernstAF2] $ {\mathbf{S}}_{\rm{A}} = - {\mathbf{S}}_{\rm{B}}= S {\mathbf{e}}_z $ in the ground state. The magnet is described by the following spin Hamiltonian, [@RKuboAF; @AndersonAF] $$\begin{aligned}
{\cal{H}} = J \sum_{\langle ij \rangle} {\mathbf{S}}_i \cdot {\mathbf{S}}_j -\frac{\cal{K}}{2} \sum_{i} (S_i^z)^2,
\label{eqn:H} \end{aligned}$$ where $J >0$ parametrizes the antiferromagnetic exchange interaction between the nearest-neighbor spins and ${\cal{K}} >0$ is the easy-axis anisotropy [@CrOexp; @CrOexp2] that ensures the magnetic N$\acute{{\rm{e}}} $el order along the $z$ direction. Since the Hamiltonian is invariant under global spin rotations about the $z$ axis, the $z$ component of the total spin is a good quantum number (i.e. conserved). Therefore, magnons, quanta of spin waves, have well-defined spin along the $z$ axis, as will be shown explicitly below. Using the sublattice-dependent Holstein-Primakoff [@HP; @altland; @MagnonNernstAF; @MagnonNernstAF2] transformation, $ S_{i{\rm{A}}}^+ = \sqrt{2S}[1-a_i^\dagger a_i / (2S)]^{1/2} a_i$, $S_{i{\rm{A}}}^z = S - a_i^\dagger a_i$, $ S_{j{\rm{B}}}^+ = \sqrt{2S}[1-b_j^\dagger b_j / (2S)]^{1/2} b_j^{\dagger }$, $S_{j{\rm{B}}}^z = - S + b_j^\dagger b_j$, the spin degrees of freedom in Eq. (\[eqn:H\]) can be recast in terms of bosonic operators that satisfy the commutation relations, $[a_{i}, a_{j}^{\dagger }] = \delta_{i,j}$ and $[b_{i}, b_{j}^{\dagger }] = \delta_{i,j}$ and all other commutators between the annihilation (creation) operators $a_{i}^{(\dagger)}$ and $b_{i}^{(\dagger)}$ vanish to the lowest order in $1/S$, assuming large spins $S \gg 1$. Further performing the well-known Bogoliubov transformation [@altland] (see Appendix \[sec:Mchirality\] for details), the system can be mapped onto a system of non-interacting spin-up and spin-down magnons, which carry opposite magnetic dipole moment [@altland] $\sigma g\mu_{\rm{B}} {\bf e}_z$ with $\sigma = \pm 1$. The transformed Hamiltonian assumes diagonal form [@RKuboAF; @AndersonAF], $ {\cal{H}} = \sum_{\mathbf{k}} \hbar \omega _{\mathbf{k}} ({\cal{A}}_{\mathbf{k}}^{\dagger } {\cal{A}}_{\mathbf{k}} + {\cal{B}}_{\mathbf{k}}^{\dagger } {\cal{B}}_{\mathbf{k}})$, in terms of Bogoliubov quasi-particle operators, ${\cal{A}}_{\mathbf{k}}$ and ${\cal{B}}_{\mathbf{k}}$, satisfying $ [{\cal{A}}_{\mathbf{k}}, {\cal{A}}_{{\mathbf{k^{\prime}}}}^{\dagger }]= \delta_{{\mathbf{k}}, {\mathbf{k^{\prime}}}} $ and $ [{\cal{B}}_{\mathbf{k}}, {\cal{B}}_{{\mathbf{k^{\prime}}}}^{\dagger }]= \delta_{{\mathbf{k}}, {\mathbf{k^{\prime}}}} $ with all the other commutators vanishing. Within the long wave-length approximation and assuming a spin anisotropy [@CrOexp; @CrOexp2] at low temperature, the dispersion [@RKuboAF] becomes gapped and parabolic [^4] in terms of $k = | {\mathbf{k}} | $, $\hbar \omega _{{\mathbf{k}}} = Dk^2 + \Delta$, where $ D = JSa^2/\sqrt{\kappa^2 + 2\kappa} $ parametrizes the inverse of the ‘magnon mass’ (see below), $ \Delta = 2dJS\sqrt{\kappa^2 + 2\kappa} $ is the magnon gap, $ \kappa = {\cal{K}}/(2dJ) $, $a$ denotes the lattice constant, and $d=3$ is the dimension of the cubic lattice ([*[e.g.]{}*]{}, $d=2$ for the square lattice). Note that the dispersion becomes linear $\hbar \omega _{{\mathbf{k}}} \propto k $ in the absence of the spin anisotropy [@AndersonAF], see Appendix \[sec:Mchirality\] for details.
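As a quick numerical illustration of this long-wavelength result, the minimal Python sketch below evaluates $D$, $\Delta$, and the gapped parabolic dispersion $\hbar \omega _{{\mathbf{k}}} = Dk^2 + \Delta$ in reduced units; the parameter values ($J$, $S$, ${\cal{K}}$, $a$, $d$) are illustrative placeholders rather than material data.

``` python
import numpy as np

# Reduced units: energies in units of J, lengths in units of the lattice constant a.
J, S, K, a, d = 1.0, 1.0, 0.01, 1.0, 3      # illustrative values, not material data

kappa = K / (2 * d * J)                                 # kappa = K/(2dJ)
D     = J * S * a**2 / np.sqrt(kappa**2 + 2 * kappa)    # inverse 'magnon mass' parameter
Delta = 2 * d * J * S * np.sqrt(kappa**2 + 2 * kappa)   # magnon gap from the anisotropy

k = np.linspace(0.0, 0.3 / a, 5)                        # long-wavelength regime, k*a << 1
print(f"Delta = {Delta:.3f} J,   D = {D:.2f} J a^2")
print("hbar*omega_k =", np.round(D * k**2 + Delta, 3))  # gapped, parabolic in k
```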
Since the $z$ component of the total spin is given by [@MagnonNernstAF] $ S^z = \sum_{i} (S_{i{\rm{A}}}^z + S_{i{\rm{B}}}^z)
= \sum_{\mathbf{k}} (-a_{\mathbf{k}}^{\dagger } a_{\mathbf{k}} + b_{\mathbf{k}}^{\dagger } b_{\mathbf{k}})
= \sum_{\mathbf{k}}(-{\cal{A}}_{\mathbf{k}}^{\dagger } {\cal{A}}_{\mathbf{k}} + {\cal{B}}_{\mathbf{k}}^{\dagger } {\cal{B}}_{\mathbf{k}}) $, the ${\cal{A}}$ (${\cal{B}}$) magnon carries $\sigma = -1$ ($+1$) spin angular momentum along the $z$ direction and can be identified with a down (up) magnon. Thus the low-energy magnetic excitation of the AF \[Eq. (\[eqn:H\])\] can be described as chargeless bosonic quasiparticles carrying a magnetic dipole moment $\sigma g\mu_{\rm{B}} {\bf e}_z$ with $\sigma = \pm 1$ \[Fig. \[fig:HelicalAFChiralFM\] (b)\], where $g$ is the $g$-factor of the constituent spins and $\mu_{\rm{B}}$ is the Bohr magneton. Throughout this paper, we work under the assumption that the total spin along the $z$ direction is conserved and remains a good quantum number.
In the presence of an external magnetic field $B\geq 0$ along the $z$ axis $ {\mathbf{B}} = B {\mathbf{e}}_z $, the degeneracy is lifted and the low-energy physics of the AF at sufficiently low temperatures where effects of magnon-magnon and magnon-phonon interactions become [@magnonWF; @adachiphonon; @Tmagnonphonon] negligibly small is described by the Hamiltonian $$\begin{aligned}
{\cal{H}} = \sum_{\sigma = \uparrow , \downarrow } \sum_{\mathbf{k}} \hbar \omega _{{\mathbf{k}}\sigma } a_{{\mathbf{k}}\sigma }^{\dagger } a_{{\mathbf{k}}\sigma },
\label{eqn:H2} \end{aligned}$$ where $\sigma = \uparrow $ and $\sigma = \downarrow$ denote the up magnon ($\sigma =1$) and the down magnon ($\sigma = -1$), respectively. Here, $\hbar \omega _{{\mathbf{k}}\sigma } = Dk^2 + \Delta_{\sigma } $ and $\Delta_{\sigma } = \Delta - \sigma g \mu _{\rm{B}} B $ are the energy and the gap of spin-$\sigma$ magnons; $a_{{\mathbf{k}} \sigma }^{\dagger } a_{{\mathbf{k}} \sigma }$ is the number operator of spin-$\sigma$ magnons. Throughout the paper, we adopt the aforementioned notations for simplicity. We consider a magnetic field that is much weaker than the anisotropy, i.e., $ g \mu _{\rm{B}} B \ll \Delta $, where the spin anisotropy prevents spin flop transition.
Onsager coefficients {#subsec:Onsager}
--------------------
The two magnon modes, up and down, are completely decoupled in the AF described by Eq. (\[eqn:H2\]). Therefore the dynamics of magnons in the AF can be described as the combination of two independent copies of the dynamics of magnons in a FM for each mode $\sigma =\pm 1$. For spin-$\sigma$ magnons, a magnetic field gradient $ \partial _x B $ along the $x$ axis works as a driving force ${\mathbf{F}}_{B} = F_{\sigma} {\mathbf{e}}_x$ with $F_{\sigma} = \sigma g \mu _{\rm{B}} \partial _x B $. Since the directions of the force are opposite for the two magnon modes $\sigma = \pm 1$, the magnetic field gradient generates [*[helical]{}*]{} magnon transport in the topologically trivial bulk AF, Eq. (\[eqn:H2\]), as will be shown explicitly below. Specifically, the magnetic field and temperature gradients generate magnonic spin and heat currents, $j_{x \sigma }$ and $ j_{x\sigma }^Q $, respectively, along the $x$ direction. Within the linear response regime, each Onsager coefficient $L_{ij \sigma }$ ($i, j = 1, 2$ ) is defined by $$\begin{aligned}
\begin{pmatrix}
\langle j_{x \sigma } \rangle \\ \langle j_{x\sigma }^Q \rangle
\end{pmatrix}
=
\begin{pmatrix}
L_{11 \sigma } & L_{12\sigma } \\ L_{21\sigma } & L_{22\sigma }
\end{pmatrix}
\begin{pmatrix}
\partial _x B \\ - \partial _x T/T
\end{pmatrix}.
\label{eqn:2by2af}\end{aligned}$$ A straightforward calculation using the Boltzmann equation [@mahan; @Basso; @Basso2; @Basso3] gives the following coefficients (see Appendix \[sec:Boltzmann\] for details),
$$\begin{aligned}
{{ L}}_{11\sigma } &= (g \mu _{\rm{B}})^2 {\cal{C}} \cdot {\rm{Li}}_{3/2}({\rm{e}}^{-b_{\sigma }}),
\label{eqn:11} \\
{{ L}}_{12\sigma } &= \sigma g \mu _{\rm{B}} k_{\rm{B}} T {\cal{C}}
\Big[\frac{5}{2} {\rm{Li}}_{5/2}({\rm{e}}^{-b_{\sigma }}) +b_{\sigma }{\rm{Li}}_{3/2}({\rm{e}}^{-b_{\sigma }}) \Big] \label{eqn:21} \\
&= {{ L}}_{21 \sigma },
\label{eqn:12} \\
{{L}}_{22\sigma } &= (k_{\rm{B}} T)^2 {\cal{C}}
\Big[\frac{35}{4} {\rm{Li}}_{7/2}({\rm{e}}^{-b_{\sigma }}) +5 b_{\sigma }{\rm{Li}}_{5/2}({\rm{e}}^{-b_{\sigma }}) \nonumber \\
&+ b_{\sigma }^2 {\rm{Li}}_{3/2}({\rm{e}}^{-b_{\sigma }}) \Big],
\label{eqn:22}
\end{aligned}$$
where $b_{\sigma } \equiv \Delta_{\sigma }/(k_{\rm{B}} T)$ represents the dimensionless inverse temperature, ${\rm{Li}}_{s}(z) = \sum_{n=1}^{\infty} z^n/n^s$ is the polylogarithm function, and ${\cal{C}} \equiv \tau (k_{\rm{B}} T)^{3/2}/(4 \pi^{3/2} \hbar ^2 \sqrt{D}) $ with a phenomenologically introduced lifetime $\tau $ of magnons, which can be generated by nonmagnetic impurity scatterings and is assumed to be constant at low temperature. The coefficients in Eq. (\[eqn:12\]) satisfy the Onsager relation. In the absence of a magnetic field, $B=0$, up and down magnons are degenerate and the degeneracy is robust against external perturbations due to the spin anisotropy [@CrOexp; @CrOexp2] and the resultant magnon energy gap. This gives $ {{ L}}_{i i \uparrow } = {{ L}}_{i i \downarrow } $, while $ {{ L}}_{i j \uparrow } = - {{ L}}_{i j \downarrow } $ for $i \not= j $ because of the opposite magnetic dipole moment $\sigma =\pm 1$. Note that the particle current for each magnon $ j_{x \sigma }^{\rm{P}} $ is given by $ j_{x \sigma }^{\rm{P}} = j_{x \sigma }/(\sigma g \mu _{\rm{B}})$, and Eqs. (\[eqn:11\]) and (\[eqn:21\]) show that the magnetic field gradient generates [*[helical]{}*]{} magnon transport in the bulk AF where magnons with opposite magnetic moments flow in opposite $x$ directions, while all magnons subjected to a thermal gradient flow in the same $x$ direction. Thermomagnetic properties of such magnon transport in topologically trivial bulk AFs are summarized in Table \[tab:table1\].
  Antiferromagnet           Topologically trivial bulk: Sec. \[sec:trivial\]      Topological bulk: Sec. \[subsec:TopoHallBulk\]
  ------------------------- ----------------------------------------------------- -----------------------------------------------------
  Magnetic field gradient   Helical magnon transport                              Magnon Hall transport
                            Spin: $ \sum_{\sigma } L_{11 \sigma }\not=0 $         Spin: $ \sum_{\sigma } L_{11 \sigma }^{yx} =0 $
                            Heat: $ \sum_{\sigma } L_{21 \sigma } =0 $            Heat: $ \sum_{\sigma } L_{21 \sigma }^{yx} \not=0 $
  Thermal gradient          Magnon transport                                      Helical magnon Hall transport
                            Spin: $ \sum_{\sigma } L_{12 \sigma } =0 $            Spin: $ \sum_{\sigma } L_{12 \sigma }^{yx} \not=0 $
                            Heat: $ \sum_{\sigma } L_{22 \sigma } \not=0 $        Heat: $ \sum_{\sigma } L_{22 \sigma }^{yx} =0 $
  ------------------------- ----------------------------------------------------- -----------------------------------------------------

  : Thermomagnetic transport of bulk magnons driven by magnetic field and temperature gradients in the topologically trivial (Sec. \[sec:trivial\]) and topological (Sec. \[subsec:TopoHallBulk\]) AF.[]{data-label="tab:table1"}
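For concreteness, the following Python sketch evaluates the bulk Onsager coefficients of Eqs. (\[eqn:11\])-(\[eqn:22\]) with the polylogarithms computed numerically (via mpmath), in reduced units $g\mu_{\rm{B}} = k_{\rm{B}} = \hbar = \tau = D = 1$; it merely illustrates the degeneracy at $B=0$ and the sign structure behind the helical transport summarized in Table \[tab:table1\], with all input values chosen for illustration only.

``` python
import numpy as np
from mpmath import polylog

# Reduced units: g*mu_B = k_B = hbar = tau = D = 1 (illustrative only).
def onsager(Delta, B, T, sigma):
    """Bulk Onsager coefficients L11, L12 (= L21), L22 of Eqs. (eqn:11)-(eqn:22)."""
    b = (Delta - sigma * B) / T                    # dimensionless inverse temperature b_sigma
    C = T**1.5 / (4 * np.pi**1.5)                  # prefactor C in reduced units
    Li = lambda s: float(polylog(s, np.exp(-b)))   # polylogarithm Li_s(e^{-b_sigma})
    L11 = C * Li(1.5)
    L12 = sigma * T * C * (2.5 * Li(2.5) + b * Li(1.5))
    L22 = T**2 * C * (8.75 * Li(3.5) + 5 * b * Li(2.5) + b**2 * Li(1.5))
    return L11, L12, L22

Delta, B, T = 1.0, 0.0, 0.2                        # gap, field, temperature (illustrative)
up, dn = onsager(Delta, B, T, +1), onsager(Delta, B, T, -1)
print("L11_up = L11_dn :", np.isclose(up[0], dn[0]))   # degenerate modes at B = 0
print("L12_up = -L12_dn:", np.isclose(up[1], -dn[1]))  # opposite dipole moments
print("sum_sigma L12   =", up[1] + dn[1])              # vanishes: helical spin transport
```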
Thermomagnetic relations {#subsec:relation}
------------------------
In analogy to charge transport in metals [@AMermin] and magnon transport [@magnonWF; @ReviewMagnon] in FMs, we refer to $ {\cal{S}}_{\sigma } \equiv L_{12\sigma }/(T L_{11\sigma }) $ as the antiferromagnetic magnon Seebeck coefficient and $ {\cal{P}}_{\sigma } \equiv L_{21\sigma }/L_{11\sigma } $ as the antiferromagnetic Peltier coefficient for up and down magnons. The Onsager relation provides the Thomson relation (also known as the Kelvin-Onsager relation [@spincal]) ${\cal{P}}_{\sigma } = T {\cal{S}}_{\sigma }$. In contrast to FMs, the total coefficients vanish for AFs, $ {\cal{S}}_{\rm{AF}} \equiv \sum_{\sigma } {\cal{S}}_{\sigma } =0 $ and $ {\cal{P}}_{\rm{AF}} \equiv \sum_{\sigma } {\cal{P}}_{\sigma } = 0 $, due to the opposite magnetic dipole moments, $ \sum_{\sigma } {{ L}}_{12\sigma } = \sum_{\sigma } {{ L}}_{21\sigma } =0 $. Still, focusing on each magnon mode separately, the coefficients ${\cal{S}}_{\sigma }$ and $ {\cal{P}}_{\sigma }$ show a universal behavior at low temperature, $b \equiv \Delta/(k_{\rm{B}} T) \gg 1$, in the sense that they do not depend on the antiferromagnetic exchange interaction $J$ specific to the material and reduce to qualitatively the same form as the ones for FMs, [@magnonWF; @ReviewMagnon] $$\begin{aligned}
{\cal{S}}_{\sigma } \stackrel{\rightarrow }{=} \frac{\sigma }{g \mu _{\rm{B}}} \frac{\Delta}{T}, \,\,\,\,\,\,\,\,\,\,\,\,\,
{\cal{P}}_{\sigma } \stackrel{\rightarrow }{=} \sigma \frac{\Delta}{g \mu _{\rm{B}}}.
\label{eqn:Peltier} \end{aligned}$$ This is another demonstration of the fact that the dynamics of the AF reduces to independent copies for each magnon $\sigma =\pm 1$ in FMs; up and down magnons are completely decoupled in an AF in leading magnon-approximation given by Eq. (\[eqn:H2\]). This implies that the up and down contributions, $K_{\sigma }$, to the thermal conductance of the AF $K_{\rm{AF}} = \sum_{\sigma } K_{\sigma } $ can be considered separately, and are given by [@magnonWF; @ReviewMagnon] $$\begin{aligned}
K_{\sigma } = \frac{1}{T} \Big({{ L}}_{22\sigma } - \frac{{{ L}}_{12\sigma }{{ L}}_{21\sigma }}{{{ L}}_{11\sigma }} \Big).
\label{eqn:Kaf}\end{aligned}$$ As we have seen in the study of FMs [@magnonWF; @KJD; @ReviewMagnon], $K_{\sigma }$ is expressed by off-diagonal elements [@GrunwaldHajdu; @KaravolasTriberis] ${{{ L}}_{12\sigma }{{ L}}_{21\sigma }}/{{{ L}}_{11\sigma }}$ as well as ${{ L}}_{22\sigma }$. This can be seen in the following way (see Refs. \[\] for details). The applied temperature gradient $ \partial _x T$ induces a magnonic spin current for each magnon ($\sigma = \uparrow , \downarrow $), $ \langle j_{x \sigma } \rangle = - L_{12\sigma }\partial _x T/T$, which leads to an accumulation of each magnon at the boundaries and thereby builds up a non-uniform magnetization since two magnon modes are decoupled and do not interfere with each other in the AF. This generates an intrinsic magnetization gradient [@SilsbeeMagnetization; @Basso; @Basso2; @Basso3; @MagnonChemicalWees; @YacobyChemical] $\partial _x B_{\sigma }^{\ast }$ acting separately on each magnon that produces a magnonic counter-current. Then, the system reaches a stationary state such that in- and out-flowing magnonic spin currents balance each other; $\langle j_{x \sigma } \rangle = 0$ in this new quasi-equilibrium state where $$\begin{aligned}
\partial _x B_{\sigma }^{\ast } = \frac{L_{12\sigma }}{L_{11\sigma }} \frac{ \partial _x T}{T}.
\label{eqn:eachB}\end{aligned}$$ Thus the total thermal conductance $K_{\rm{AF}}= \sum_{\sigma } K_{\sigma } $ defined by $ \langle j_{x\sigma }^Q \rangle = - K_{\sigma } \partial _x T$ is measured. Since the thermally-induced intrinsic magnetization gradient $\partial _x B_{\sigma }^{\ast }$ given in Eq. (\[eqn:eachB\]) acts individually on each magnon as an effective magnetic field gradient, inserting Eq. (\[eqn:eachB\]) into Eq. (\[eqn:2by2af\]), the contribution from each magnon $ K_{\sigma }$ to the total thermal conductance of the AF $K_{\rm{AF}} $ becomes Eq. (\[eqn:Kaf\]) in terms of Onsager coefficients where the off-diagonal elements [@GrunwaldHajdu; @KaravolasTriberis] arise from the magnetization gradient-induced counter-current. This is in analogy to thermal transport of electrons in metals [@AMermin] where, however, the off-diagonal contributions are strongly suppressed by the sharp Fermi surface of fermions at temperatures $k_BT$ much smaller than the Fermi energy.
The coefficient $L_{11\sigma }$ is identified with the magnonic spin conductance $ G_{\sigma }= L_{11\sigma } $ for each magnon and the total one of the AF is given by $ G_{\rm{AF}} = \sum_{\sigma } G_{\sigma } $. From these we obtain the thermomagnetic ratio $K_{\rm{AF}}/G_{\rm{AF}} $, characterizing magnonic spin and thermal transport in the AF. At low temperatures, $ b\gg 1 $, the ratio becomes linear in temperature, $$\begin{aligned}
\frac{K_{\rm{AF}}}{G_{\rm{AF}}} \stackrel{\rightarrow }{=}
{\cal{L}}_{\rm{AF}} T \, .
\label{eqn:WFaf}\end{aligned}$$ Here, ${\cal{L}}_{\rm{AF}}$ is the magnetic Lorenz number for AFs given by $$\begin{aligned}
{\cal{L}}_{\rm{AF}} = \frac{5}{2} \Big(\frac{k_{\rm{B}}}{g \mu _{\rm{B}}}\Big)^2,
\label{eqn:LorentzAF}\end{aligned}$$ which is independent of material parameters apart from the $g$-factor. Thus, at low temperatures, the ratio $ {K_{\rm{AF}}}/{G_{\rm{AF}}}$ satisfies the WF law in the sense that it becomes linear in temperature; the WF law holds in the same way for magnons both in AFs and FMs [@magnonWF; @ReviewMagnon; @KJD], which are bosonic excitations, as for electrons [@WFgermany; @AMermin] which are fermions. In this sense, it can be concluded that the linear-in-$T$ behavior of the thermomagnetic ratio is indeed universal. We remark that if one wrongly omits the off-diagonal coefficients [@GrunwaldHajdu; @KaravolasTriberis] in Eq. (\[eqn:Kaf\]) which can be as large as the diagonal ones, the ratio would not obey WF law, breaking the linearity in temperature.
Lastly we comment on the factor ‘$5/2$’ in Eq. (\[eqn:LorentzAF\]), which differs from the factor ‘1’ that we derived in our previous work on magnon transport in topologically trivial three-dimensional ferromagnetic junctions. [@magnonWF; @ReviewMagnon] The difference arises from the geometry of the system setup, single bulk or junction, rather than from FMs or AFs. Indeed, the factor ‘$5/2$’ arises also in a single bulk FM and the ratio reduces to the same form as Eq. (\[eqn:WFaf\]). This can be seen as follows: until now, considering a topologically trivial three-dimensional single bulk AF, we have seen that the up and down magnons of the AF are completely decoupled (in leading order) and the dynamics indeed reduces to independent copies of each magnon in single bulk FMs \[Eq. (\[eqn:H2\])\]. Therefore, focusing only on $\sigma$-magnons and using Eqs. (\[eqn:11\])-(\[eqn:22\]), the magnonic WF law for a single bulk FM can be derived, which becomes at low temperatures, $ b\gg 1 $, $$\begin{aligned}
\frac{K_{\sigma }}{G_{\sigma }} \stackrel{\rightarrow }{=} \frac{5}{2} \Big(\frac{k_{\rm{B}}}{g \mu _{\rm{B}}}\Big)^2 T.
\label{eqn:eachWFaf}\end{aligned}$$ Thus, we see that the factor ‘$5/2$’ arises also for the single bulk FM [^5], and we conclude that the factor ‘$5/2$’ is common to both ferro- and antiferromagnetic single bulk magnets. From this, we see that, in contrast to the universality of the linear-in-$T$ behavior, the magnetic Lorenz number is not universal; it may vary from system to system depending on, [*[e.g.]{}*]{}, the geometry of the setup, single bulk or junction [@magnonWF], and the system dimension [@KJD]. In addition, the Onsager coefficients for these systems depend on the details of the setup through different types of polylogarithm function ${\rm{Li}}_{s}$; both ferro- and antiferromagnetic single bulk magnets are described by ${\rm{Li}}_{l+3/2}$ ($l = 0, 1, 2$), while the three-dimensional ferromagnetic junction is described by ${\rm{Li}}_{l}$ with integer $l$. This difference in the polylogarithm functions gives rise to different prefactors depending on the system setup.
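As a consistency check of this limit, the short sketch below evaluates the ratio $K_{\sigma}/(G_{\sigma} T)$ in units of $(k_{\rm{B}}/g\mu_{\rm{B}})^2$ directly from the polylogarithm expressions of Eqs. (\[eqn:11\])-(\[eqn:22\]) and (\[eqn:Kaf\]); the common prefactor ${\cal{C}}$ cancels in the ratio, and the values of $b$ are chosen only to illustrate the approach to the factor $5/2$.

``` python
import numpy as np
from mpmath import polylog

def lorenz_ratio(b):
    """K_sigma/(G_sigma T) in units of (k_B/g mu_B)^2, from Eqs. (eqn:11)-(eqn:22), (eqn:Kaf)."""
    Li = lambda s: float(polylog(s, np.exp(-b)))
    l11 = Li(1.5)                                             # L11 / ((g mu_B)^2 C)
    l12 = 2.5 * Li(2.5) + b * Li(1.5)                         # L12 / (g mu_B k_B T C), sign dropped
    l22 = 8.75 * Li(3.5) + 5 * b * Li(2.5) + b**2 * Li(1.5)   # L22 / ((k_B T)^2 C)
    return (l22 - l12**2 / l11) / l11

for b in (2, 5, 10, 20):                                      # b = Delta/(k_B T)
    print(f"b = {b:4.1f}   K/(G*T) = {lorenz_ratio(b):.4f}   (low-T limit: 5/2)")
```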
Topological AF {#sec:nontrivial}
==============
In this section, using above results, we consider a clean AF on a two-dimensional square lattice ($d=2$), embedded in the $xy$-plane, with a focus on the effects of an electric field ${\mathbf{E}}$ that couples to the magnetic dipole moment $ \sigma g \mu _{\rm{B}}{\mathbf{e}}_z$ of up and down magnons through the AC effect [@casher].
Aharonov-Casher effect on magnons {#subsec:ACmagnon}
---------------------------------
In the last section (Sec. \[sec:trivial\]), starting from the spin Hamiltonian Eq. (\[eqn:H\]) in the absence of electric fields, we have shown that the low-energy dynamics of AFs in the long wave-length (continuum) limit is described by the completely decoupled up and down magnons, see Eq. (\[eqn:H2\]), with dispersion $\hbar \omega _{{\mathbf{k}}\sigma } = Dk^2 + \Delta_{\sigma }$. We can then introduce an effective Hamiltonian for such magnon modes, given by ${\cal{H}}_{\rm{m} \sigma } = {\hat{{\mathbf{p}}}}^2/2m + \Delta_{\sigma }$, where the effective mass of the magnons is defined by $(2m)^{-1} = D/\hbar ^2$ with $d=2$, and $\hat{{\mathbf{p}}} =(p_x, p_y, 0)$ is the momentum operator. Thus, the magnons behave like ordinary particles of mass $m$ with quadratic dispersion, moving in the $xy$-plane and carrying a magnetic dipole moment $ \sigma g \mu _{\rm{B}}{\mathbf{e}}_z$.
In the presence of an electric field ${\mathbf{E}}({\mathbf{r}})$, the magnetic dipole moment $ \sigma g \mu _{\rm{B}}{\mathbf{e}}_z$ of a moving magnon experiences a magnetic force in the rest frame of the magnon. This system is formally identical to the one studied by Aharonov and Casher [@casher], namely that of a neutral particle carrying a magnetic dipole moment, moving in an electric field. Thus, following their work [@casher] we account for the electric field by replacing the momentum operator $ \hat{{\mathbf{p}}} $ by $\hat{{\mathbf{p}}} + \sigma {g \mu _{\rm{B}}} {\mathbf{A}}_{\rm{m}}/{c}$, where $$\begin{aligned}
{\mathbf{A}}_{\rm{m}}({\mathbf{r}})
= \frac{1}{c} {\mathbf{E}}({\mathbf{r}})\times {\mathbf{e_z}}
\label{magnon_gauge}\end{aligned}$$ is the ‘electric’ vector potential ${\mathbf{A}}_{\rm{m}}$ acting on the magnons at position ${\mathbf{r}}=(x,y,0)$. The total Hamiltonian then becomes $ {\cal{H}}_{\rm{m}} = \sum_{\sigma=\pm } {\cal{H}}_{\rm{m} \sigma }$ with $$\begin{aligned}
{\cal{H}}_{\rm{m} \sigma } = \frac{1}{2m} \Big(\hat{{\mathbf{p}}} + \sigma \frac{g \mu _{\rm{B}}}{c}{\mathbf{A}}_{\rm{m}} \Big)^2 + \Delta_{\sigma }.
\label{HamiltonianLL} \end{aligned}$$ This expression describes the low-energy dynamics of the magnons moving in an electric field ${\mathbf{E}}$. It is valid at sufficiently low temperatures where the effects of magnon-magnon and magnon-phonon interactions become [@magnonWF; @adachiphonon; @Tmagnonphonon] negligibly small. The Hamiltonian in Eq. (\[HamiltonianLL\]) is formally identical to that of a charged particle moving in a magnetic vector potential, in which the coupling constant is given by $\sigma g \mu _{\rm{B}}$ instead of the electric charge $e$.
When an electric field has the special quadratic form ${\mathbf{E}}({\mathbf{r}}) = {\mathcal{E}}(-x/2, -y/2,0)$, with $ {\mathcal{E}}$ a constant field gradient, it gives rise to the ‘symmetric’ gauge potential ${\mathbf{A}}_{\rm{m}}({\mathbf{r}}) = ({\mathcal{E}}/c)(-y/2, +x/2,0)$. Since $$\begin{aligned}
{\mathbf{\nabla }} \times {\mathbf{A}}_{\rm{m}} = \frac{{\mathcal{E}}}{c} {\mathbf{e}}_z,
\label{eqn:correspondenceEB} \end{aligned}$$ the field gradient $ {\mathcal{E}}$ plays the role of the perpendicular magnetic field in two-dimensional electron gases [@Ezawa]. The considered electric field with the constant gradient $\cal{E}$ can be realized $\it{e.g.}$ by an STM tip [@GeimSTM; @STM_Egradient]. Similarly, the analog of the Landau gauge ${\mathbf{A}}_{\rm{m}}({\mathbf{r}}) = ({\mathcal{E}}/c)(0, x,0)$ is provided by ${\mathbf{E}}({\mathbf{r}})= {\mathcal{E}}(-x, 0,0)$. Within the quantum-mechanical treatment [@Ezawa], the resulting magnon dynamics \[Eqs. (\[LL\])-(\[CM\])\] is identical to the one of the symmetric gauge since both satisfy $ {\mathbf{\nabla }} \times {\mathbf{A}}_{\rm{m}} = ({{\mathcal{E}}}/{c}) {\mathbf{e}}_z$ and thus the two Hamiltonians with different ‘gauges’ can be transformed into each other by the unitary gauge transformation $U_\sigma \equiv {\rm{exp}}(i \sigma g \mu _{\rm{B}} {\cal{E}} xy/2 \hbar c^2)$. With the Hamiltonian ${\cal{H}}_{\rm{m} \sigma }$ given in Eq. (\[HamiltonianLL\]) we can adopt the topological formulations [@NiuBerry; @Kohmoto] of the conventional QHE in terms of Chern numbers. See Ref. \[\] for the developed formulation of AC phase-induced magnon Hall effects in FM \[see also Fig. \[fig:HelicalAFChiralFM\] (a)\], which corresponds to that for $ {\cal{H}}_{\rm{m} \uparrow }$.
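A small symbolic check of this gauge structure is sketched below (using sympy): both the ‘symmetric’ and the ‘Landau’ gauge potentials give the same effective field ${\mathcal{E}}/c$, and their difference is a pure gradient. The explicit gauge function $\chi$ written in the sketch is our illustrative choice, consistent up to conventions with the unitary transformation $U_\sigma$ quoted above.

``` python
import sympy as sp

x, y, E, c = sp.symbols('x y E c', real=True)       # E stands for the field gradient

def curl_z(Ax, Ay):
    """z-component of the curl of an in-plane vector potential (Ax, Ay)."""
    return sp.diff(Ay, x) - sp.diff(Ax, y)

A_sym = (-E * y / (2 * c), E * x / (2 * c))         # 'symmetric' gauge A_m
A_lan = (sp.Integer(0), E * x / c)                  # 'Landau' gauge A_m

print(curl_z(*A_sym), curl_z(*A_lan))               # both give E/c: same effective field

dA = (A_sym[0] - A_lan[0], A_sym[1] - A_lan[1])     # difference of the two gauges
chi = -E * x * y / (2 * c)                          # illustrative gauge function chi(x, y)
print(sp.simplify(sp.diff(chi, x) - dA[0]),         # 0: dA is the gradient of chi,
      sp.simplify(sp.diff(chi, y) - dA[1]))         # i.e., a pure gauge transformation
```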
Using canonical equations, $ \dot{\mathbf{r}}= {\mathbf{v}} = \partial {\cal{H}}_{\rm{m} \sigma }/\partial {\mathbf{p}} $ and $ \dot{\mathbf{p}} = - \partial {\cal{H}}_{\rm{m} \sigma }/\partial {\mathbf{r}}$, where $\dot{\mathbf{r}} $ denotes the time derivative of $\mathbf{r}$ and ${\mathbf{v}}$ is the velocity, the force ${\mathbf{F}}_{\rm{AC}}$ acting on magnons in electric fields is then given by [@magnon2] $$\begin{aligned}
{\mathbf{F}}_{\rm{AC}} = \sigma g \mu_{\rm{B}} \Big[{\mathbf{\nabla }}B - \frac{{\mathbf{v}}}{c} \times ({\mathbf{\nabla }} \times {\mathbf{A}}_{\rm{m}})\Big].
\label{ForceAC}\end{aligned}$$ The force ${\mathbf{F}}_{\rm{AC}} $ is invariant under the gauge transformation $ {\mathbf{A}}_{\rm{m}} \mapsto {\mathbf{A}}_{\rm{m}}'= {\mathbf{A}}_{\rm{m}} + {\mathbf{\nabla }}\chi $ accompanied by ${\mathbf{E}} \mapsto {\mathbf{E}}' = {\mathbf{E}} + c {\mathbf{e}}_z \times {\mathbf{\nabla }}\chi $ for arbitrary scalar function $\chi = \chi (x,y)$. Note that the gauge invariance in the present case is specific to electrically neutral particles only, such as magnons, since $ {\mathbf{E}}'$ and ${\mathbf{E}}$ give rise to different physical forces on charged particles. Inserting Eq. (\[eqn:correspondenceEB\]) into Eq. (\[ForceAC\]), the force becomes $$\begin{aligned}
{\mathbf{F}}_{\rm{AC}} = m \dot{\mathbf{v}} = \sigma g \mu_{\rm{B}} \Big({\mathbf{\nabla }}B - {\mathbf{v}} \times \frac{{\mathcal{E}} {\mathbf{e}}_z}{c^2} \Big),
\label{ForceAC2}\end{aligned}$$ which indicates that the role of electric field and magnetic field in electrically charged particles [@Ezawa] is played by the magnetic field gradient ${\mathbf{\nabla }}B$ and the electric field gradient ${\mathcal{E}}$, respectively, for ‘magnetically charged’ particles such as our magnons. Assuming that the velocity ${\mathbf{v}}$ consists of the cyclotron motion ${\mathbf{v}}_{\rm{c}}$ (see Sec. \[subsec:MQSHE\]) and the drift velocity ${\mathbf{v}}_{\rm{d}} $ with [@Ezawa] ${\mathbf{\dot{v}}} _{\rm{d}}=0$, Eq. (\[ForceAC2\]) gives $ {\mathbf{v}}_{\rm{d}} \times {\mathbf{e}}_z = (c^2/{\cal{E}}) {\mathbf{\nabla }}B $. Applying the magnetic field gradient along the $x$ axis $\partial _x B \not=0$ while $\partial _y B =0$, the drift velocity becomes ${\mathbf{v}}_{\rm{d}}=(0, c^2 \partial _x B/{\cal{E}}, 0) $, which is perpendicular to the applied magnetic field gradient and independent of $\sigma$. Thus, in the presence of both magnetic and electric field gradients, each magnon ($\sigma = \pm 1$) performs the drift motion along the same direction since both driving forces depend on $\sigma$ \[see Eq. (\[ForceAC2\])\] and eventually the $\sigma$-dependence cancels out as $ \sigma^2 =1 $. This is consistent with the results of ‘bulk’ Hall conductances given in Eq. (\[eqn:ParticleHall\]) where (Sec. \[subsec:TopoHallBulk\]) the magnetic field gradient is applied perturbatively. The matrix element in Eq. (\[eqn:2by22222\]) for magnonic spin Hall effects of bulk magnons \[Eq. (\[eqn:G\_magnonHall77\])\] indeed vanishes due to the relation between each topological integer. See Eqs. (\[eqn:GKzero\])-(\[eqn:ParticleHall\]) for details.
Note that the drift velocity vanishes in the absence of the magnetic field gradient, while each magnon ($\sigma = \pm 1$) still performs the cyclotron motion in opposite directions due to the electric field gradient \[Eqs. (\[ForceAC2\])-(\[CM\])\], leading to the helical edge magnon states, and we consider this situation henceforth. See Sec. \[subsec:MQSHE\] (also Appendix \[sec:CyclotronMotion\]) for details.
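The opposite cyclotron motion and the $\sigma$-independent drift can also be seen by directly integrating Eq. (\[ForceAC2\]); a minimal sketch in reduced units ($m = g\mu_{\rm{B}} = c = 1$, so $\omega_c = {\mathcal{E}}$ and $v_{\rm{d}} = \partial_x B/{\mathcal{E}}$) is given below, with the field-gradient values chosen purely for illustration.

``` python
import numpy as np
from scipy.integrate import solve_ivp

E_grad = 1.0     # electric field gradient (reduced units, illustrative)
dBdx   = 0.2     # magnetic field gradient along x (reduced units, illustrative)

def eom(t, state, sigma):
    """Eq. (ForceAC2) in reduced units: dv/dt = sigma*(grad B - v x E_grad e_z)."""
    x, y, vx, vy = state
    return [vx, vy, sigma * (dBdx - E_grad * vy), sigma * E_grad * vx]

T = 20 * 2 * np.pi / E_grad                          # an integer number of cyclotron periods
for sigma in (+1, -1):
    sol = solve_ivp(eom, (0, T), [0, 0, 0, 1], args=(sigma,), rtol=1e-9, atol=1e-9)
    vd = (sol.y[:2, -1] - sol.y[:2, 0]) / T          # net displacement / elapsed time
    print(f"sigma = {sigma:+d}: drift velocity ~ ({vd[0]:+.3f}, {vd[1]:+.3f})")
# Both species drift along +y with |v_d| ~ dBdx/E_grad = 0.2, while their fast cyclotron
# motion (frequency omega_c = E_grad) circulates in opposite senses for sigma = +1 and -1.
```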
A bosonic analog of QSHE by edge magnons {#subsec:MQSHE}
----------------------------------------
A straightforward calculation using Eq. (\[HamiltonianLL\]) with $B=0$ (see Appendix \[sec:CyclotronMotion\] for details) shows that the quantum dynamics [@Ezawa] of down and up magnons are identical except that the direction of their cyclotron motion is opposite (Fig. \[fig:HelicalAFChiralFM\]). Indeed, they form the same Landau levels [@KJD] with the principal quantum number $n_{\sigma }\in {\mathbb{N}}_0$, $$\begin{aligned}
E_{n_{\sigma }} = \hbar \omega _c \Big(n_{\sigma } + \frac{1}{2}\Big) + \Delta \ \ \ {\rm{for}} \ \ n_{\sigma } \in {\mathbb{N}}_0,
\label{LL} \end{aligned}$$ and the two magnons $ \sigma g \mu _{\rm{B}}{\mathbf{e}}_z$ perform cyclotron motions with the same frequency [@KJD] $$\begin{aligned}
\omega _c = \frac{g \mu _{\rm{B}}{\mathcal{E}}}{m c^2}
\label{omega_c} \end{aligned}$$ and same electric length [@KJD] $ l_{\rm{{\mathcal{E}}}}$, defined by $$\begin{aligned}
l_{\rm{{\mathcal{E}}}} \equiv \sqrt{{\hbar c^2}/{g \mu _{\rm{B}}{\mathcal{E}}}},
\label{l_E} \end{aligned}$$ but along opposite direction, cf. Fig. \[fig:HelicalAFChiralFM\] (b), $$\begin{aligned}
\frac{d}{dt}({\cal{R}}_{x \sigma } + i {\cal{R}}_{y \sigma })= i \sigma \omega _c ({\cal{R}}_{x \sigma } + i {\cal{R}}_{y \sigma }),
\label{CM} \end{aligned}$$ where ${\mathbf{R}}_{{\rm{{\mathcal{E}}}}\sigma } =({\cal{R}}_{x \sigma }, {\cal{R}}_{y \sigma }) $ is the relative coordinate [@SeeAppendix]. The factor $\sigma$ in Eq. (\[CM\]) is rooted in the magnetic dipole moment $ \sigma g \mu _{\rm{B}}{\mathbf{e}}_z$ of a magnon. The source of cyclotron motion is the electric field gradient $\cal{E}$ \[Eqs. (\[eqn:correspondenceEB\]) and (\[omega\_c\])\], which is common to the both modes.
![image](HelicalEdge.eps){width="18cm"}
Taking into account the opposite directions of the cyclotron motion, the quantum dynamics of the AF reduces to independent copies [@Z2topo; @TIreview; @TIreview2] for each magnon $\sigma =\pm 1$ in FMs (Fig. \[fig:HelicalAFChiralFM\]) $ {\cal{H}}_{\rm{m}} = \sum_{\sigma } {\cal{H}}_{\rm{m} \sigma }$. In Ref. \[\], we have shown that at low temperature $ k_{\rm{B}} T \ll \hbar \omega _c $, only the lowest energy mode $n_{\uparrow }=0$ in Eq. (\[LL\]) becomes relevant and the cyclotron motion of up magnons along one direction \[Fig. \[fig:HelicalAFChiralFM\] (a)\] leads to a [*[chiral]{}*]{} edge state giving [@HalperinEdge; @BulkEdgeHatsugai; @RShindou; @RShindou2; @RShindou3] the Chern number [^6] $ {\cal{N}}_{0 \uparrow } = +1$. Since the dynamics of down magnons is the same as that of up magnons except that the direction of cyclotron motion is opposite \[Fig. \[fig:HelicalAFChiralFM\] (b)\], the down magnon propagates also along the edge of the sample but in the opposite direction to that of the up magnon, which gives [@HalperinEdge; @BulkEdgeHatsugai; @RShindou; @RShindou2; @RShindou3] the Chern number ${\cal{N}}_{0 \downarrow } = -1$. Thus, at low temperatures, $ k_{\rm{B}} T \ll \hbar \omega _c $, the Chern number ${\cal{N}}_{0 \sigma }$ of up and down magnons in the lowest Landau level $n_{\sigma } = 0$ is summarized by $$\begin{aligned}
{\cal{N}}_{0 \sigma } = \sigma ,
\label{eqn:eachChernNumber} \end{aligned}$$ and the AF is characterized by the resulting [*[helical]{}*]{} edge magnon state (Fig. \[fig:HelicalEdge\]) where due to the opposite magnon spin $\sigma =\pm 1$, up and down magnons propagate along the edge of the sample but in opposite direction [@QSHE2016] \[Fig. \[fig:HelicalAFChiralFM\] (b)\]. This is a bosonic analog of the QSHE [@QSHE2005; @QSHE2006; @Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2] for electronic edge states, namely, the QSHE for edge magnons induced by the AC effect. Note that due to the opposite cyclotron motion, the total Chern number vanishes, $$\begin{aligned}
\sum_{\sigma } {\cal{N}}_{0 \sigma } = {\cal{N}}_{0 \uparrow }+ {\cal{N}}_{0 \downarrow } = 0,
\label{eqn:ChernPlus} \end{aligned}$$ while $$\begin{aligned}
{\cal{Z}}_{0} \equiv \frac{1}{2}({\cal{N}}_{0 \uparrow } - {\cal{N}}_{0 \downarrow } ) = 1 \ \ \ ({\rm{mod}} \,\, 2).
\label{eqn:Z2} \end{aligned}$$ Thus the QSHE of helical edge magnon states is characterized by a ${\mathbb{Z}}_{2}$ topological number[@Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2; @Z2Robust], explicitly given here by ${\cal{Z}}_{0} = 1$, and the AF with the AC effect may be identified [@ETI; @ETI2; @PhotonTopo; @PhotonTopo3D] with a bosonic version [@QSHE2016] of TIs. Such a magnonic analog of TIs can be understood as copies [@Z2topo; @TIreview; @TIreview2] of the ferromagnetic ‘quantum’ Hall system [@KJD] having opposite magnon polarization.
Energy spectrum and chiral edge states {#subsec:LatticeNumCal}
--------------------------------------
We calculate the magnon energy spectrum now for a finite geometry of strip shape to find the Landau levels and in particular the chiral edge modes of magnons, all in analogy to the QHE for electrons [@JKDL2013; @JKDL2014]. For the numerical evaluation we need to discretize the continuum Hamiltonian $ \sum_{\sigma}{\cal{H}}_{\rm{m} \sigma } $, given in Eq. (\[HamiltonianLL\]). This leads to the standard TBR of a continuum Hamiltonian in the presence of a gauge potential [@Textbook_TightBindingRep; @TBrepDatta; @TightBindingRep; @TBrepQHE], $$\begin{aligned}
{\cal{H}}_{\rm{AC}} = - \sum_{\sigma = \uparrow , \downarrow } \sum_{\langle i j \rangle} (t_{ij} {\rm{e}}^{ i \sigma \theta _{ij}}
a_{i, \sigma } a_{j, \sigma }^\dagger + {\rm{H.c.}}),
\label{magnon_hopping} \end{aligned}$$ where $a_{i \sigma}$ is the annihilation operator of spin-$\sigma$ magnons localized at the site $i$ satisfying the bosonic commutation relations, $[a_{i \sigma }, a_{j \sigma^{\prime} }^\dagger] = \delta _{i j} \delta _{\sigma \sigma^{\prime}} $ [*[etc.]{}*]{}, and where the Peierls phase $\theta _{ij} = (g \mu _{\rm{B}}/\hbar c) \int_{{\mathbf{r}}_i}^{{\mathbf{r}}_j} d {\mathbf{r}} \cdot {\mathbf{A}}_{\rm{m}} $ is the AC phase which the magnon with the magnetic moment $ \sigma g \mu _{\rm{B}}{\mathbf{e}}_z$ acquires during the hopping on the lattice, and $t_{ij} >0 $ is the hopping amplitude. Here, we suppressed the constant ${\Delta}$ \[Eq. (\[HamiltonianLL\])\], being irrelevant for the chiral edge states. If a magnon hops between site $i$ and $j$ along $x\, (y)$ direction, the amplitude is given by [@TBrepDatta] $t_{x(y)} = \hbar ^2/(2m a_{x(y)}^2) $, where $a_{x(y)}$ is the lattice constant along $x (y)$ direction in the TBR. For simplicity, we will consider the isotropic limit $t_x=t_y$ henceforth. In the continuum limit, $a_{x,y}\to 0$, Eq. (\[magnon\_hopping\]) reduces to the magnon Hamiltonian Eq. (\[HamiltonianLL\]).
We wish to emphasize that the tight-binding lattice is introduced just for calculational purposes and is not related to the original lattice of the spin system, Eq. (\[eqn:H\]), from which we started. In other words, there is no relation between the lattice constants $a_{x(y)}$ occurring in the TBR and the lattice constants occurring in Eq. (\[eqn:H\]). Also, since we are searching for edge states that are topological and thus independent of microscopic details, we can choose parameter values in the simulations that are most convenient from a numerical point of view.
Next, we use the analog of the Landau gauge such that the system is translation-invariant along the $y$ axis. Introducing the momentum $k_y$, we can perform a Fourier transformation of Eq. (\[magnon\_hopping\]) such that ${\cal{H}}_{\rm{AC}} =\sum_{k_y} H_{k_y}$ [@JKDL2013; @JKDL2014], with $$\begin{aligned}
H_{k_y}&=-t_x \sum_{n,\sigma} (a_{k_y,n+1,\sigma}^\dagger a_{k_y,n,\sigma} + {\rm{H.c.}} ) \label{TightBinding2} \\
&\hspace{11pt}- 2 t_y \sum_{n,\sigma} [ \cos (k_y a_y+\sigma \theta_n)] a_{k_y,n,\sigma}^\dagger a_{k_y,n,\sigma}, \nonumber \end{aligned}$$ where $a_{k_y,n,\sigma}$ annihilates a spin-$\sigma $ magnon with momentum $k_y$ in $y$ direction at site $n=x/a_x$ (along $x$ direction). The AC phase accumulated by the up magnon ($\sigma =1$) as it hops in $y$ direction by one lattice constant $a_y$ is given by $\theta _{n} =(g \mu _{\rm{B}}/\hbar c^2) \mathcal{E} n a_x a_y = n \theta _1 $, where $\theta_1 \equiv (g \mu _{\rm{B}}/\hbar c^2) \mathcal{E} a_x a_y$, while the down magnon ($\sigma =-1$) acquires the opposite sign $ - \theta _{n}$. For definiteness, we focus on the spectrum around the lowest Landau level.
Performing exact numerical diagonalization of the Hamiltonian (\[TightBinding2\]), we obtain the spectrum shown in Fig. \[fig:HelicalEdge\]. In the TI regime, the system hosts a pair of helical edge magnon states. We have checked numerically, that choosing different parameter values changes the spectrum quantitatively but the helical edge states remain, showing that they are indeed topologically stable.
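A minimal version of this diagonalization is sketched below in Python; the strip width, hopping, and AC phase per plaquette are arbitrary illustration values (cf. the remark above that the TBR parameters may be chosen for numerical convenience), and matplotlib is used only to save the band plot. Since $\cos(k_y a_y - \theta_n) = \cos(-k_y a_y + \theta_n)$, the down-magnon spectrum equals the up-magnon one at $-k_y$, so the in-gap edge branches of the two species disperse oppositely, forming the helical pairs.

``` python
import numpy as np
import matplotlib.pyplot as plt

# Strip of N sites along x (open boundaries), translation-invariant along y.
N, t, theta1 = 60, 1.0, 2 * np.pi / 12        # width, hopping, AC phase per plaquette (assumed)
ks = np.linspace(-np.pi, np.pi, 201)          # k_y * a_y

def bands(sigma):
    """Eigenvalues of H_{k_y} in Eq. (TightBinding2) for spin-sigma magnons."""
    n = np.arange(N)
    hop = -t * np.ones(N - 1)
    E = np.empty((len(ks), N))
    for i, k in enumerate(ks):
        H = np.diag(-2 * t * np.cos(k + sigma * theta1 * n))   # on-site term
        H += np.diag(hop, 1) + np.diag(hop, -1)                # nearest-neighbour hopping
        E[i] = np.linalg.eigvalsh(H)
    return E

plt.plot(ks, bands(+1), color='C0', lw=0.6, alpha=0.7)   # up magnons
plt.plot(ks, bands(-1), color='C3', lw=0.6, alpha=0.7)   # down magnons
plt.xlabel(r'$k_y a_y$'); plt.ylabel(r'$E / t$')
plt.title('Landau-like magnon bands with helical edge branches (sketch)')
plt.savefig('helical_edge_sketch.png', dpi=150)
```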
To avoid a breakdown of the sample due to the huge voltage drop resulting from an applied strong electric field, we also consider electric fields and vector potentials ${\mathbf{A}}_{\rm{m}}$ that are periodic in the $x$ direction and of saw-tooth shape [@KJD]. Using such periodically extended fields, which ramp only over a distance that can be much smaller than the sample dimensions or even the electric length $ l_{\rm{{\mathcal{E}}}}$, we [@KJD] have seen that the requirement of strong field gradients ${\mathcal{E}}$ needed for creating a quantum Hall effect of magnons in FMs ([*[e.g.]{}*]{}, Landau levels and the resultant level spacing) can be substantially softened, while still producing well-defined chiral edge magnon states [@KJD], since the magnitude of ${\mathcal{E}}$ within each period remains the same. Periodic fields may be realized by periodically arranging STM tips [@GeimSTM; @STM_Egradient; @KJD]. Such periodic potentials are easily implemented in our approach by assuming in Eq. (\[TightBinding2\]) $\theta_{n} = \theta_1 q \{n/q\}$, with integer period $q$, where $\{\cdot \}$ denotes the fractional part smaller than one. This implies that the periodic vector potential has the form $ {\mathbf{A}}_{{\rm{m}}q} = ({\mathcal{E}} R_q/c) (0, \{x/R_q \}, 0) $, where $R_q= q a_x$ is the period.
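The saw-tooth phase is a one-line modification of the band-structure sketch above: replace $\sigma\,\theta_1 n$ by $\sigma\,\theta_n$ with the periodic $\theta_n$ below (the values of $\theta_1$ and $q$ are again purely illustrative).

``` python
import numpy as np

def theta_sawtooth(n, theta1, q):
    """Saw-tooth AC phase theta_n = theta_1 * q * frac(n/q), integer period q (in units of a_x)."""
    return theta1 * q * np.mod(np.asarray(n) / q, 1.0)

print(theta_sawtooth(np.arange(12), theta1=0.3, q=4))
# The phase ramps linearly over q sites and then resets, so the local 'field' E within
# each period is unchanged while the overall potential drop stays bounded.
```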
From Fig. \[fig:HelicalEdge\] we see that for a large period $q$ there is a well-developed gap above an almost flat band, together with the corresponding edge states, see Fig. \[fig:HelicalEdge\] (a). If $q$ gets smaller than the electric length, the bulk gap is no longer uniform in momentum, see Figs. \[fig:HelicalEdge\] (b) and (c). As a result, edge states coexist with bulk modes at different momenta [@Li2016; @MagnonicWeylSemimetalAC], see Fig. \[fig:HelicalEdge\] (c). However, for fixed values of $k_y$, there is still a gap in the spectrum and, furthermore, well-defined edge states still exist. Thus, if disorder is weak, the edge modes will not couple to the bulk and the Hall conductance will still be dominated by these edge modes, similarly to Weyl semimetals.
Under the assumption that the spin along the $z$ direction remains a good quantum number [@Z2Robust; @JelenaYTwire], we have seen that the key to a nonzero Chern number $ {\cal{N}}_{0 \sigma } $ is the cyclotron motion of individual magnons. Indeed, the AC phase-induced cyclotron motion leads to edge magnon states (Fig. \[fig:HelicalEdge\]) each giving [@HalperinEdge; @BulkEdgeHatsugai; @RShindou; @RShindou2; @RShindou3] rise to a non-zero Chern number $ {\cal{N}}_{0 \sigma } = \sigma $. Therefore, as long as magnons can perform cyclotron motions, the edge magnon state is robust against external perturbation [@QSHE2005; @QSHE2006; @Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2] and the relation $ {\cal{N}}_{0 \uparrow }+ {\cal{N}}_{0 \downarrow } = 0 $ between each topological integer [@Z2topo; @TIreview; @TIreview2; @Z2Robust] remains valid. Indeed, it has been confirmed experimentally that magnons satisfy Snell’s law at interfaces [@Snell_Exp; @Snell2magnon], indicating specular (i.e., elastic) reflection at the boundary to vacuum, and thereby we can expect that magnons form skipping orbits along the boundary like electrons [@HalperinEdge], giving rise to edge states.
We note that there are still general differences [@KJD] to electrons due to the bosonic nature of the magnons. Due to the Bose-distribution function, even in the presence of topological edge states, the Hall transport coefficients of bulk magnons generally cannot be described in terms of a Chern integer. Only in almost flat bands [@KJD], the Hall coefficients become characterized by such a topological invariant that edge magnon states bring about, while the Hall coefficients are still characterized by the Bose-distribution function (see Sec. \[subsec:TopoHallBulk\]). This is in contrast to electronic systems.
Hall conductances of magnons {#subsec:TopoHallBulk}
----------------------------
In this section, we discuss Hall transport properties of bulk [@HalperinEdge; @BulkEdgeHatsugai; @RShindou; @RShindou2; @RShindou3] magnons in the AC effect-induced magnonic TIs characterized by a spin-dependent Chern number ${\cal{N}}_{0 \sigma } = \sigma $ \[Eq. (\[eqn:Z2\])\], by making use of the aforementioned mapping between the system and two independent copies [@Z2topo; @TIreview; @TIreview2] of a ferromagnetic ‘quantum’ Hall system [@KJD]. We consider the cases where again the total spin along the $z$ direction is a good quantum number. The crystal lattice creates a periodic potential for magnons [@AMermin; @Kohmoto; @NiuBerry] $ U({\mathbf{r}}) = U({\mathbf{r}}+ {\mathbf{R}}) $ with Bravais lattice vector ${\mathbf{R}}=(a_x,a_y)$, which gives rise to a band structure for magnons. In the absence of a magnetic field, $B=0$, the Hamiltonian for spin-$\sigma$ magnons is given by $ {\cal{H}}_{\sigma }({\mathbf{r}}) = {\cal{H}}_{{\rm{m}}\sigma } ({\mathbf{r}}) + U({\mathbf{r}})$. We then introduce the Bloch Hamiltonian with Bloch wavevector ${\mathbf{k}}=(k_x, k_y) $ following Refs. \[\], $ {\cal{H}}_{{\mathbf{k}}\sigma } \equiv {\rm{e}}^{- i {\mathbf{k}}\cdot{\mathbf{r}} } {\cal{H}}_{\sigma } {\rm{e}}^{ i {\mathbf{k}}\cdot{\mathbf{r}} } = [- i \hbar {\mathbf{\nabla}} + \hbar {\mathbf{k}} + \sigma g \mu _{\rm{B}}{\mathbf{A}}_{\rm{m}}({\mathbf{r}})/c]^2/2m + \Delta + U({\mathbf{r}})$, where ${\mathbf{A}}_{\rm{m}}$ is the periodically extended vector potential [@KJD]. The eigenfunction of the Schrödinger equation ${\cal{H}}_{{\mathbf{k}}\sigma } u_{n {\mathbf{k}}\sigma }({\mathbf{r}}) = E_{n {\mathbf{k}}\sigma } u_{n {\mathbf{k}}\sigma }({\mathbf{r}}) $ is given by [@KevinHallEffect; @Kohmoto; @NiuBerry] the magnonic Bloch wave function $ u_{n {\mathbf{k}}\sigma }({\mathbf{r}}) \equiv {\rm{e}}^{- i {\mathbf{k}}\cdot{\mathbf{r}} } \psi _{n {\mathbf{k}}\sigma } $, where ${\cal{H}}_{\sigma } \psi _{n {\mathbf{k}}\sigma } = E_{n {\mathbf{k}}\sigma } \psi _{n {\mathbf{k}}\sigma }$.
At sufficiently low temperature $k_{{\rm{B}}} T \ll \hbar \omega _c$, the lowest mode $ n=0$ dominates the dynamics (Fig. \[fig:HelicalEdge\]). In Ref. \[\], where we have studied the magnon bands of a FM in the ‘quantum’ Hall phases realized by the electric field gradient-induced AC effects, we have shown that the lowest magnon band is almost flat on the energy scale set by the temperature [@KevinHallEffect; @RSdisorder; @TopoMagBandKagome], i.e., the band width is much smaller than $ k_{{\rm{B}}} T$, and named it an almost flat band. Due to this flatness, the lowest band \[[*[e.g.]{}*]{}, Fig. \[fig:HelicalEdge\] (a)\] can be well characterized by its typical energy $E_{0\sigma }^{\ast }$ in the sense that the value of the Bose-distribution function $n_{\rm{B}} (E_{0 {\mathbf{k}}\sigma }) =({\rm{e}}^{\beta E_{0 {\mathbf{k}}\sigma }}-1)^{-1} $ with $\beta \equiv (k_{\rm{B}}T)^{-1} $ can be considered as approximately uniform in the Brillouin zone, $n_{\rm{B}} (E_{0 {\mathbf{k}}\sigma }) \simeq n_{\rm{B}} (E_{0 \sigma }^*)$, which we will adopt in the subsequent discussion.
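To make the flat-band approximation quantitative, the toy estimate below compares the spread of the Bose factor across a band of width $W$ with its mean; the band centre and width are assumed values for illustration only.

``` python
import numpy as np

kB_T = 1.0
E0, W = 5.0 * kB_T, 0.05 * kB_T                   # band centre and width (assumed values)
E = np.linspace(E0 - W / 2, E0 + W / 2, 1001)     # energies across the Brillouin zone
nB = 1.0 / (np.exp(E / kB_T) - 1.0)               # Bose-distribution function
print(f"relative spread of n_B across the band: {(nB.max() - nB.min()) / nB.mean():.2%}")
# For W << k_B T the Bose factor varies only at the per-cent level, which justifies
# replacing n_B(E_{0 k sigma}) by n_B(E_0*) throughout the almost flat band.
```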
Within the linear response regime, the spin and heat Hall current densities for each mode, $j_{y \sigma } $ and $ j_{y \sigma }^{Q} $, subjected to a magnetic field gradient [@Haldane2; @ZyuzinRequest] and a temperature one are described by the Onsager matrix $$\begin{aligned}
\begin{pmatrix}
\langle j_{y \sigma } \rangle \\ \langle j_{y \sigma }^{Q} \rangle
\end{pmatrix}
=
\begin{pmatrix}
L_{11 \sigma }^{yx} & L_{12 \sigma }^{yx} \\
L_{21 \sigma }^{yx} & L_{22 \sigma }^{yx}
\end{pmatrix}
\begin{pmatrix}
\partial _x B \\ - \partial _x T/T
\end{pmatrix}.
\label{eqn:2by2}\end{aligned}$$ Since the band is almost flat, the Hall transport coefficients [@Matsumoto; @Matsumoto2; @KJD] $L_{i j \sigma }^{yx}$ can be characterized by the Chern number ${\cal{N}}_{0 \sigma } = \sigma $, $$\begin{aligned}
L_{i j \sigma }^{yx} = (k_{\rm{B}}T)^{\eta} (\sigma g \mu _{\rm{B}})^{2-{\eta}}
{\cal{C}}_{\eta} \big(n_{\rm{B}} (E_{0\sigma }^{\ast })\big) \cdot {\cal{N}}_{0 \sigma }/h,
\label{eqn:HallLij22}\end{aligned}$$ where $ \eta = i + j -2$, ${\cal{C}}_0 \big(n_{\rm{B}} (E_{0\sigma }^{\ast })\big) = n_{\rm{B}} (E_{0\sigma }^{\ast }) $, ${\cal{C}}_1 \big(n_{\rm{B}} (E_{0\sigma }^{\ast })\big) = [1+ n_{\rm{B}} (E_{0\sigma }^{\ast })] {\rm{log}} [1+ n_{\rm{B}} (E_{0\sigma }^{\ast })]
- n_{\rm{B}} (E_{0\sigma }^{\ast }) {\rm{log}} [n_{\rm{B}} (E_{0\sigma }^{\ast })] $, and ${\cal{C}}_2 \big(n_{\rm{B}} (E_{0\sigma }^{\ast })\big) = [1+ n_{\rm{B}} (E_{0\sigma }^{\ast })] \big({\rm{log}} [1+ 1/n_{\rm{B}} (E_{0\sigma }^{\ast })]\big)^2 - \big({\rm{log}} [n_{\rm{B}} (E_{0\sigma }^{\ast })]\big)^2 - 2 {\rm{Li}}_2 \big(- n_{\rm{B}} (E_{0\sigma }^{\ast })\big)$. The Onsager reciprocity is satisfied by having $L_{12 \sigma }^{yx} = L_{21 \sigma }^{yx}$. The coefficient $ L_{11 \sigma }^{yx} $ is identified with the magnonic spin Hall conductance $ G_{\sigma}^{yx } $ arising from each magnon and the total one of the AF is given by $ G_{\rm{AF}}^{yx } = \sum_{\sigma } G_{\sigma}^{yx } $. The contribution of each magnon $K_{\sigma}^{yx } $ to the thermal Hall conductance of the AF $K_{\rm{AF}}^{yx} = \sum_{\sigma } K_{\sigma}^{yx } $ is expressed in terms of Onsager coefficients by [@KJD] $$\begin{aligned}
K_{\sigma}^{yx } = \Big( L_{22\sigma }^{yx} - \frac{L_{21\sigma }^{yx} L_{12\sigma }^{yx}}{L_{11\sigma }^{yx}} \Big)/T,
\label{eqn:K_magnonHall}\end{aligned}$$ where as we have seen in Sec. \[sec:trivial\], the off-diagonal elements [@GrunwaldHajdu; @KaravolasTriberis] similarly arise from the magnon counter-current by the thermally-induced magnetization gradient [@SilsbeeMagnetization; @Basso; @Basso2; @Basso3; @MagnonChemicalWees; @YacobyChemical] $ \partial _x B_{\sigma }^{\ast } = ({L_{12\sigma }^{yx}}/{L_{11\sigma }^{yx}}) ({\partial _x T}/{T})$. See Ref. \[\] for details of the thermal Hall conductance in the ‘quantum’ Hall regime and the Hall coefficient $L_{i j \uparrow }^{yx}$.
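The sign structure implied by Eq. (\[eqn:HallLij22\]) with ${\cal{N}}_{0 \sigma } = \sigma$ can be tabulated directly; the sketch below evaluates the weight functions ${\cal{C}}_{\eta}$ and the resulting $\sigma$-sums in reduced units ($g\mu_{\rm{B}} = k_{\rm{B}} = T = h = 1$), for an arbitrary illustrative value of the Bose factor $n_{\rm{B}}(E_{0}^{\ast})$.

``` python
import numpy as np
from mpmath import polylog

def c_eta(eta, nB):
    """Weight functions C_eta(n_B) entering Eq. (eqn:HallLij22)."""
    if eta == 0:
        return nB
    if eta == 1:
        return (1 + nB) * np.log(1 + nB) - nB * np.log(nB)
    return ((1 + nB) * np.log(1 + 1 / nB)**2 - np.log(nB)**2
            - 2 * float(polylog(2, -nB)))

nB = 0.05                                          # Bose factor of the flat band (illustrative)
# L_{ij sigma}^{yx} = sigma^{2-eta} * C_eta(n_B) * N_{0 sigma}, eta = i+j-2, N_{0 sigma} = sigma.
L = {(i, j, s): s**(2 - (i + j - 2)) * c_eta(i + j - 2, nB) * s
     for i in (1, 2) for j in (1, 2) for s in (+1, -1)}

print("sum_sigma L11^yx =", L[1, 1, +1] + L[1, 1, -1])   # 0: no net spin Hall response to dB/dx
print("sum_sigma L12^yx =", L[1, 2, +1] + L[1, 2, -1])   # != 0: magnonic spin Nernst effect
print("sum_sigma L22^yx =", L[2, 2, +1] + L[2, 2, -1])   # 0: no net thermal Hall response
```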
In the almost flat band $E_{0 \uparrow }^{\ast } \approx E_{0 \downarrow }^{\ast } \equiv E_{0 }^{\ast } $, the Hall transport coefficient Eq. (\[eqn:HallLij22\]) becomes $$\begin{aligned}
L_{i j \sigma }^{yx} = \sigma ^{2-{\eta}} {L^{\prime}}_{i j} {\cal{N}}_{0 \sigma },
\label{eqn:spectrum222}\end{aligned}$$ where we introduced $ {L^{\prime}}_{i j} = (k_{\rm{B}}T)^{\eta} (g \mu _{\rm{B}})^{2-{\eta}} {\cal{C}}_{\eta} \big(n_{\rm{B}} (E_{0}^{\ast })\big)/h$, which does not depend on the index $\sigma $; the superscript $yx$ is dropped for convenience. This gives $ G_{\sigma}^{yx } = L^{\prime}_{11}{\cal{N}}_{0 \sigma } $ and $ K_{\sigma}^{yx } = (1/T)(L_{22}^{\prime} - {L_{21}^{\prime} L_{12}^{\prime}}/{L_{11}^{\prime}}) {\cal{N}}_{0 \sigma } $. Consequently,
$$\begin{aligned}
G_{\rm{AF}}^{yx } &=& {L^{\prime}}_{11} ({\cal{N}}_{0 \uparrow }+ {\cal{N}}_{0 \downarrow }),
\label{eqn:G_magnonHall77} \\
K_{\rm{AF}}^{yx } &=& (1/T) \Big( L_{22}^{\prime} - \frac{L_{21}^{\prime} L_{12}^{\prime}}{L_{11}^{\prime}} \Big)
({\cal{N}}_{0 \uparrow }+ {\cal{N}}_{0 \downarrow }).
\label{eqn:K_magnonHall2}\end{aligned}$$
The vanishing of the total Chern number [@Z2topo; @TIreview; @TIreview2] \[Eq. (\[eqn:ChernPlus\])\], ${\cal{N}}_{0 \uparrow }+ {\cal{N}}_{0 \downarrow } =0$, results in $$\begin{aligned}
G_{\rm{AF}}^{yx } = 0, \ \ \ \ \ K_{\rm{AF}}^{yx }=0.
\label{eqn:GKzero}\end{aligned}$$ Thus, in contrast to the magnonic Hall system of FMs [@KJD], the thermomagnetic ratio of the AF $ K_{\rm{AF}}^{yx }/ G_{\rm{AF}}^{yx } $ becomes ill-defined in the sense that the total magnonic spin Hall conductance is zero, $G_{\rm{AF}}^{yx } = 0$; the WF law [@magnonWF; @ReviewMagnon] characterized by the linear-in-$T$ behavior becomes violated since the total magnonic thermal Hall conductance vanishes, i.e., $K_{\rm{AF}}^{yx } = 0$.
Defining the total magnonic spin and heat Hall current densities, $ {\cal{J}}_{y} \equiv \sum_{\sigma } j_{y \sigma } $ and $ {\cal{J}}_{y}^{Q} \equiv \sum_{\sigma } j_{y \sigma }^{Q} $, respectively, Eq. (\[eqn:2by2\]) is rewritten as $$\begin{aligned}
\begin{pmatrix}
\langle {\cal{J}}_{y} \rangle \\ \langle {\cal{J}}_{y}^{Q} \rangle
\end{pmatrix}
=
\begin{pmatrix}
L^{\prime}_{11}({\cal{N}}_{0 \uparrow }+ {\cal{N}}_{0 \downarrow }) & L^{\prime}_{12}({\cal{N}}_{0 \uparrow } - {\cal{N}}_{0 \downarrow }) \\
L^{\prime}_{21}({\cal{N}}_{0 \uparrow }- {\cal{N}}_{0 \downarrow }) & L^{\prime}_{22}({\cal{N}}_{0 \uparrow }+ {\cal{N}}_{0 \downarrow })
\end{pmatrix}
\begin{pmatrix}
\partial _x B \\ - \frac{\partial _x T}{T} \nonumber
\end{pmatrix}.
\\
\label{eqn:2by22222}\end{aligned}$$ The off-diagonal coefficient $L^{\prime}_{12}$ represents the magnonic spin Nernst effect in AFs [@MagnonNernstAF; @MagnonNernstAF2; @MagnonNernstExp] where thermal gradients generate [*[helical]{}*]{} magnon Hall transport and, consequently, the total magnonic spin Hall current becomes nonzero. This arises from the opposite magnetic dipole moments $\sigma = \pm 1$ inherent to the N$\acute{{\rm{e}}} $el magnetic order in the AF, and the effect is characterized or ensured by the ${\mathbb{Z}}_{2}$ topological invariant [@Z2topo; @Z2topoHaldane; @TIreview; @TIreview2; @Z2Robust] \[Eq. (\[eqn:Z2\])\], ${\cal{Z}}_{0} \equiv ({\cal{N}}_{0 \uparrow } - {\cal{N}}_{0 \downarrow })/2 = 1 $.
The same holds for the reciprocal phenomenon, the magnonic Nernst-Ettinghausen effects [@SMreviewMagnon] parametrized by $L^{\prime}_{21}$ [@QSHE2005; @QSHE2006; @Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2]. Note that here we refer to the phenomenon described by the $L^{\prime}_{11}$ term as ‘magnonic spin Hall effect’ in the bulk AF since it characterizes the magnonic spin Hall conductance $G_{\rm{AF}}^{yx }$ where all magnons subjected to a magnetic field gradient propagate in the same direction and, consequently, the total magnonic spin Hall current becomes zero. This can be qualitatively understood as follows: the particle Hall current density for each magnon $ j_{y \sigma }^{\rm{P}} $ is given by $ j_{y \sigma }^{\rm{P}} = j_{y \sigma }/(\sigma g \mu _{\rm{B}})$, and Eqs. (\[eqn:2by2\]) and (\[eqn:spectrum222\]) provide (see also Table \[tab:table1\]) $$\begin{aligned}
\langle j_{y \sigma }^{\rm{P}} \rangle = \sigma {\cal{N}}_{0 \sigma } \frac{{L^{\prime}}_{11}}{g \mu _{\rm{B}}} \partial _x B
- {\cal{N}}_{0 \sigma } \frac{{L^{\prime}}_{12}}{g \mu _{\rm{B}}} \frac{\partial _x T}{T}.
\label{eqn:ParticleHall}\end{aligned}$$ Since $ {\cal{N}}_{0 \sigma } = \sigma $, i.e., $ {\cal{N}}_{0 \uparrow } = -{\cal{N}}_{0 \downarrow } =1 $, it shows that thermal gradient generates [*[helical]{}*]{} magnon Hall transport in the topological bulk AF where up and down magnons flow in opposite $y$ direction, while all magnons subjected to the magnetic field gradient flow in the same $y$ direction because of the relation $ \sigma {\cal{N}}_{0 \sigma } = 1$. This is in contrast to the topologically trivial bulk AF (Sec. \[sec:trivial\]) where the magnetic field gradient working as a driving force $F_{\sigma} \propto \sigma g \mu _{\rm{B}} $ produces helical magnon currents. The difference arises from each topological integer [@QSHE2005; @QSHE2006; @Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2], i.e., the Chern number $ {\cal{N}}_{0 \sigma } = \sigma $ which leads to $ \sigma {\cal{N}}_{0 \sigma } =1 $. Note that each magnon by itself carries spin $G_{\sigma}^{yx } \not=0$ and heat $K_{\sigma}^{yx } \not=0$, and each mode respectively satisfies the same WF law [@KJD] $K_{\sigma}^{yx }/G_{\sigma}^{yx } = [k_{\rm{B}}/(g \mu _{\rm{B}})]^2 T$ as the ‘quantum’ Hall system of ferromagnetic magnons. However, due to the relation ${\cal{N}}_{0 \uparrow } +{\cal{N}}_{0 \downarrow }=0 $ between each topological integer [@QSHE2005; @QSHE2006; @Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2], magnonic spin and thermal Hall effects in the bulk represented by Eqs. (\[eqn:G\_magnonHall77\]) and (\[eqn:K\_magnonHall2\]), respectively, are prohibited in the topological bulk AF, while the magnonic Nernst-Ettinghausen effects [@SMreviewMagnon] shown by the off-diagonals in Eq. (\[eqn:2by22222\]), $L^{\prime}_{12} $ and $L^{\prime}_{21} $ terms, are characterized or ensured by the ${\mathbb{Z}}_{2}$ topological number ${\cal{Z}}_{0} $ defined in Eq. (\[eqn:Z2\]). Thermomagnetic properties of such magnon transport in the topological bulk [@MookPrivate] AF are summarized in Table \[tab:table1\].
Lastly, regarding the linear response to magnetic field gradient, [*[e.g.]{}*]{}, the total magnonic spin Hall conductance $G_{\rm{AF}}^{yx }$ or $ L^{\prime}_{11} $ term, we remark that in Eqs. (\[eqn:G\_magnonHall77\]) and (\[eqn:2by22222\]), we may still work under the assumption that the relation ${\cal{N}}_{0 \uparrow }+ {\cal{N}}_{0 \downarrow } = 0$ between each topological integer [@QSHE2005; @QSHE2006; @Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2] is valid since, just for a perturbative driving force, we assume a (negligibly) small magnetic field gradient that does not disturb the cyclotron motion of magnons; thanks to the spin anisotropy-induced energy gap, the energy spectrum is not affected at all and each edge magnon state remains unchanged thereby ensuring the relation ${\cal{N}}_{0 \uparrow }+ {\cal{N}}_{0 \downarrow } = 0$ between topological integer, i.e., ${\cal{N}}_{0 \sigma } = \sigma $. Recall that in Sec. \[subsec:MQSHE\], we have seen that the cyclotron motion induced by the AC effect leads to the helical edge magnon state characterized by the nonzero Chern number $ {\cal{N}}_{0 \sigma } = \sigma $. Therefore, as long as each magnon type performs a cyclotron motion, the relation remains unchanged. [^7]
Estimates for experiments {#sec:experimentAF}
=========================
Observation of spin-wave spin currents [@spinwave; @WeesNatPhys], the thermal Hall effect of magnons [@onose], the magnon planar Hall effect [@MagnonHallEffectWees], Snell’s law for spin waves [@Snell_Exp; @Snell2magnon], and the electrically-induced AC effect [@casher; @Mignani; @magnon2; @KKPD; @ACatom; @KJD] on a magnonic system [@ACspinwave] has been reported. Recently, measurement of the magnonic spin conductance [@MagnonHallEffectWees] has been reported in Ref. \[\] and thermal generation of spin currents in AFs has been established experimentally in Ref. \[\] using the spin Seebeck effect [@uchidainsulator; @ishe; @ohnuma; @adachi; @adachiphonon; @xiao2010; @OnsagerExperiment; @Peltier], with the subsequent report [@MagnonNernstExp; @MagnonNernstAF; @MagnonNernstAF2] of the magnonic spin Nernst effect in AFs. Moreover, on top of Brillouin light scattering spectroscopy [@demokritovReport; @demokritov; @MagnonPhonon], the real-time observation of spin-wave propagation is now possible using an infrared camera, and Ref. \[\] reported the observation of a magnon Hall-like effect [@MagnonHallEffectWees].
Therefore, we can expect that the observations of the magnonic WF law in the topologically trivial bulk AF and the magnonic QSHE (helical edge magnons) in the topological AF are now within experimental reach [@YacobyChemical; @KentBEC; @GeimSTM; @STM_Egradient; @ExpPSI; @NontopoSurfaceSpinwave] via measurement schemes proposed in Ref. \[\]. The considered electric field with the constant gradient can be realized by an electric skew-harmonic potential [@KJD] and, while being challenging, it may be realized by STM tips [@GeimSTM; @STM_Egradient]. The resulting magnetization gradient from the applied thermal gradient plays a role of an effective magnetic field gradient and works as a nonequilibrium magnonic spin chemical potential [@SilsbeeMagnetization; @Basso; @Basso2; @Basso3; @MagnonChemicalWees] that has been established experimentally in Ref. \[\].
For an estimate, we assume the following experimental parameter values [@CrOexp; @CrOexp2] for ${\rm{Cr}}_2{\rm{O}}_3$, $J=15 $meV, ${\cal{K}}=0.03 $meV, $S=3/2$, $g=2$, ${\mathcal{E}} = 1$V/nm$^2$, and $a = 0.5$nm. This provides the Landau gap $ \hbar \omega _c = 1 \mu $eV and $ l_{\rm{{\mathcal{E}}}}= 0.7 \mu $m \[Eqs. (\[omega\_c\]) and (\[l\_E\])\], with which the magnonic QSHE and helical edge magnons could be observed at $T \lesssim 10$mK. At these low temperatures, effects of magnon-magnon and magnon-phonon interactions can be expected to become negligible [@magnonWF; @adachiphonon; @Tmagnonphonon]. An alternative platform to look for topological magnon Hall effects would be skyrmion-like lattices of AFs [@AFskyrmion; @AFskyrmion2; @MookJrAFskyrmion] with Dzyaloshinskii-Moriya (DM) [@DM; @DM2; @DM3] interaction where the N$\acute{{\rm{e}}} $el order parameter varies slowly compared to the typical wavelength of magnons (i.e., spin-waves). In Ref. \[\] (see Appendix \[sec:ACvsDM\] for details), we have seen that the low-energy magnetic excitations in the skyrmion lattice are magnons and the DM interaction [@Mook2; @Lifa; @katsura2] intrinsically produces a vector potential analogous to ${\mathbf{A}}_{\rm{m}}$ which reduces to the same form as Eq. (\[HamiltonianLL\]). Assuming experimental parameter values [@SkyrmionReviewNagaosa; @SkyrmionExpTokura; @SkyrmionTheory] (see Ref. \[\] for details), Landau gaps on the order of a few meV could be reached. Since the Hamiltonian for an AF in a skyrmion-like lattice where the N$\acute{{\rm{e}}} $el order varies slowly (compared to the typical wavelength of spin-waves) also reduces to qualitatively the same form as Eq. (\[HamiltonianLL\]), we expect that the topological magnon Hall effects could be observed at $T \lesssim {\cal{O}}(10) $K in such AFs [@AFskyrmion; @AFskyrmion2; @MookJrAFskyrmion]. The temperature, however, should be low enough to make spin-phonon and magnon-magnon contributions negligible [@magnonWF; @adachiphonon; @Tmagnonphonon]. As to the magnonic WF law in the topologically trivial bulk AFs, the energy gap amounts to $\Delta = 4$meV and thus the magnonic WF law may be observed at $ T = 40$K ($k_{\rm{B}} T = \Delta $). However, again, the temperature should be low enough [@PhononWF] to make spin-phonon and magnon-magnon contributions negligible [@magnonWF; @adachiphonon; @Tmagnonphonon]. Therefore we expect that the effect becomes observable at low temperature $T \lesssim {\cal{O}}(1) $K. [^8]
Given these estimates, we conclude that the observations of the magnonic and topological phenomena in AFs as proposed in this work, while being challenging, seem within experimental reach [@KentPrivate].
Summary {#sec:sum}
=======
Under the assumption that the spin along the $z$ direction remains a good quantum number, we have studied thermomagnetic properties of helical transport of magnons with the opposite magnetic dipole moment inherent to the N$\acute{{\rm{e}}} $el order both in topologically trivial and nontrivial bulk AFs. Since the quantum-mechanical dynamics of magnons in the insulating AF is described as the combination of independent copies of that in FMs, we found that both topologically trivial magnets satisfy the same magnonic WF law, exhibiting a linear-in-$T$ behavior at sufficiently low temperatures, while the law becomes violated in the topological bulk AF due to the topological invariant that helical edge magnon states bring about. In the electric field gradient-induced AC effect, up and down magnons form the same Landau energy level and perform cyclotron motion with the same frequency but in opposite directions, giving rise to helical edge magnon states, i.e., a QSHE of edge magnons. The AF then becomes characterized by the ${\mathbb{Z}}_{2}$ topological number consisting of the Chern integers that the edge states bring about, and can be identified as a bosonic version of a TI. In the almost flat band inherent to the electrically-induced topological AF, the magnonic spin and thermal Hall effects of bulk magnons are prohibited by the topological integer, while the Nernst-Ettinghausen effects are ensured by the ${\mathbb{Z}}_{2}$ topological invariant. The relation between each topological integer is robust against external perturbations as long as magnons can perform cyclotron motion giving rise to the helical edge magnon states.
Finally, it would be interesting to test our predictions experimentally.
Discussion {#sec:discussion}
==========
To conclude, a few comments on our approach are in order. Instead of deriving the helical edge states directly from the spin Hamiltonian in the presence of electric fields as done previously for FMs [@KJD], here we first derive the magnon approximation of the spin Hamiltonian in the continuum limit and then introduce the AC phase. The resulting Hamiltonian with quadratic magnon dispersion is then analyzed numerically by introducing the corresponding TBR. Throughout this paper we have thus restricted our consideration to AFs where, within the long wave-length approximation, the dispersion becomes gapped and parabolic, and the dynamics of magnons in the AF can be described as the combination of two independent copies of the dynamics of magnons in a FM for each mode $\sigma =\pm 1$. The helical edge states we found in this approximation are topologically stable and thus their emergence does not depend on the microscopic details as long as the gap remains open. Still, a general treatment of AFs (beyond the parabolic dispersion regime, on different lattices, [*[e.g.]{}*]{}, on a triangular spin lattice in the presence of frustration [*[etc.]{}*]{}) remains an open issue and deserves further study.
Lastly we remark that due to the opposite magnetic dipole moments of up and down magnons associated with the magnetic N$\acute{{\rm{e}}} $el order in AFs, the $\sigma$-dependence is simply added to the TBR Eq. (\[magnon\_hopping\]). This $\sigma $-dependence, while being a small theoretical difference from the FMs [@KJD], produces qualitatively new phenomena in AFs such as helical edge magnon states and the violation of the magnonic WF law [@KJD]. We stress that this simplicity of the $\sigma$-dependence is specific to the time-independent case considered here. In contrast, when the electric field becomes time-dependent ([*[e.g.]{}*]{}, in the presence of laser pulses) the AC gauge potential becomes also time-dependent and the $\sigma$-dependence could be controlled or even vanish for some ac electric fields. This opens up a new control on the topological phase. It will thus be interesting to study time-dependent effects in these systems in more detail.
We (KN, JK, and DL) acknowledge support by the Swiss National Science Foundation and the NCCR QSIT. One of the authors (SKK) was supported by the Army Research Office under Contract No. W911NF-14-1-0016. We would like to thank A. Mook, V. Zyuzin, A. Zyuzin, H. Katsura, K. Totsuka, K. Usami, J. Shan, C. Schrade, and Y. Tserkovnyak for helpful discussions.
Magnons in AFs {#sec:Mchirality}
==============
In this Appendix, we provide some details of the straightforward treatment of AFs [@altland; @RKuboAF; @AndersonAF] showing that their N$\acute{{\rm{e}}} $el order provides up and down magnons, i.e., ‘magnetically charged’ bosonic quasiparticles carrying opposite magnetic dipole moments $\sigma g\mu_{\rm{B}} {\bf e}_z$. An external magnetic field $ {\mathbf{B}} = B {\mathbf{e}}_z $ couples with spins via the Zeeman interaction given by ${\cal{H}}_{B} = - g\mu_{\rm{B}} B \sum_{l} S_l^z $. Assuming spins in the AF form the N$\acute{{\rm{e}}} $el order along $z$ direction, and using the sublattice-dependent Holstein-Primakoff [@HP; @altland; @MagnonNernstAF; @MagnonNernstAF2] transformation, $S_{i{\rm{A}}}^z = S - a_i^\dagger a_i$, $S_{j{\rm{B}}}^z = - S + b_j^\dagger b_j$, with $[a_{i}, a_{j}^{\dagger }] = \delta_{i,j}$ and $[b_{i}, b_{j}^{\dagger }] = \delta_{i,j}$, we find for the $z$ component of the total spin [@MagnonNernstAF] $ S^z \equiv \sum_{l} S_l^z = \sum_{i} (S_{i{\rm{A}}}^z + S_{i{\rm{B}}}^z) = \sum_{i} (- a_i^\dagger a_i + b_i^\dagger b_i) $. After Fourier transformation $ S^z = \sum_{\mathbf{k}} (-a_{\mathbf{k}}^{\dagger } a_{\mathbf{k}} + b_{\mathbf{k}}^{\dagger } b_{\mathbf{k}}) $, the Hamiltonian becomes ${\cal{H}}_{B} = g\mu_{\rm{B}} B \sum_{\mathbf{k}} (a_{\mathbf{k}}^{\dagger } a_{\mathbf{k}} - b_{\mathbf{k}}^{\dagger } b_{\mathbf{k}}) $. Using a Bogoliubov transformation $$\begin{aligned}
\begin{pmatrix}
a_{\mathbf{k}}^{\dagger } \\ b_{\mathbf{k}}
\end{pmatrix}
=
\cal{M}
\begin{pmatrix}
{\cal{A}}_{\mathbf{k}}^{\dagger } \\ {\cal{B}}_{\mathbf{k}}
\end{pmatrix},
\begin{pmatrix}
a_{\mathbf{k}} \\ b_{\mathbf{k}}^{\dagger }
\end{pmatrix}
=
\cal{M}
\begin{pmatrix}
{\cal{A}}_{\mathbf{k}} \\ {\cal{B}}_{\mathbf{k}}^{\dagger }
\end{pmatrix}
\label{eqn:2by2AA}\end{aligned}$$ with the coefficient matrix $\cal{M}$ defined by $$\begin{aligned}
\cal{M}
=
\begin{pmatrix}
{\rm{cosh}}\vartheta _{\mathbf{k}} & - {\rm{sinh}}\vartheta _{\mathbf{k}} \\
- {\rm{sinh}}\vartheta _{\mathbf{k}} & {\rm{cosh}}\vartheta _{\mathbf{k}}
\end{pmatrix},
\label{eqn:2by2MMM}\end{aligned}$$ the Hamiltonian $\cal{H}$ in the main text becomes diagonal [@altland; @RKuboAF; @AndersonAF], $ {\cal{H}} = \sum_{\mathbf{k}} \hbar \omega _{\mathbf{k}} ({\cal{A}}_{\mathbf{k}}^{\dagger } {\cal{A}}_{\mathbf{k}} + {\cal{B}}_{\mathbf{k}}^{\dagger } {\cal{B}}_{\mathbf{k}})$, in terms of Bogoliubov quasiparticle operators, ${\cal{A}}_{\mathbf{k}}$ and ${\cal{B}}_{\mathbf{k}}$, satisfying bosonic commutation relations $ [{\cal{A}}_{\mathbf{k}}, {\cal{A}}_{{\mathbf{k^{\prime}}}}^{\dagger }]= \delta_{{\mathbf{k}}, {\mathbf{k^{\prime}}}} $ and $ [{\cal{B}}_{\mathbf{k}}, {\cal{B}}_{{\mathbf{k^{\prime}}}}^{\dagger }]= \delta_{{\mathbf{k}}, {\mathbf{k^{\prime}}}} $, where $ {\rm{tanh}} (2 \vartheta _{\mathbf{k}}) = \gamma _{\mathbf{k}}/(1+\kappa) $, $\gamma _{\mathbf{k}} = (1/\rho ) \sum_{m=1}^{\rho } {\rm{e}}^{- i {\mathbf{k}}\cdot {\boldsymbol{\delta }}_{m} } $, the coordination number $ \rho =2d$, and ${\boldsymbol{\delta }}_{m}$ the relative coordinate vector that connects the nearest neighboring sites. The $z$ component of the total spin is rewritten as $ S^z = \sum_{\mathbf{k}} (-a_{\mathbf{k}}^{\dagger } a_{\mathbf{k}} + b_{\mathbf{k}}^{\dagger } b_{\mathbf{k}}) = \sum_{\mathbf{k}}(-{\cal{A}}_{\mathbf{k}}^{\dagger } {\cal{A}}_{\mathbf{k}} + {\cal{B}}_{\mathbf{k}}^{\dagger } {\cal{B}}_{\mathbf{k}}) $ and thereby ${\cal{H}}_{B} = g\mu_{\rm{B}} B \sum_{\mathbf{k}} (a_{\mathbf{k}}^{\dagger } a_{\mathbf{k}} - b_{\mathbf{k}}^{\dagger } b_{\mathbf{k}}) = g\mu_{\rm{B}} B \sum_{\mathbf{k}}({\cal{A}}_{\mathbf{k}}^{\dagger } {\cal{A}}_{\mathbf{k}} - {\cal{B}}_{\mathbf{k}}^{\dagger } {\cal{B}}_{\mathbf{k}}) $. Therefore, it can be seen that the ${\cal{A}}$- (${\cal{B}}$-) magnon carries $\sigma = -1$ ($+1$) spin angular momentum along the $z$ direction and can be identified with down and up magnons, respectively.
In the absence of the magnetic field, $B=0$, these up and down magnons are degenerate and the energy dispersion [@RKuboAF; @AndersonAF] is given by $ \hbar \omega _{{\mathbf{k}}} = 2J dS \sqrt{(1+\kappa)^2 - \gamma _{\mathbf{k}}^2} $. Within the long wave-length approximation, it becomes $ \gamma _{\mathbf{k}}^2= 1- (ak)^2/d $ for $ \mid {\mathbf{k}} \mid = k $ and thereby assuming a spin anisotropy [@CrOexp; @CrOexp2] at low temperature, the dispersion becomes parabolic in terms of $k$ and reduces to the form $\hbar \omega _{{\mathbf{k}}} = Dk^2 + \Delta$ with $ D = JSa^2/\sqrt{\kappa^2 + 2\kappa} $ which is used in the main text. Note that the dispersion becomes linear in terms of $k$, $\hbar \omega _{{\mathbf{k}}} \propto k $, when there is no spin anisotropy $ {\cal{K}} =0$, i.e., $ \kappa =0$.
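The dispersion quoted above can be checked with a short numerical sketch. The snippet below assumes the standard two-sublattice Bogoliubov-de Gennes block in the $(a_{\mathbf{k}}, b_{-\mathbf{k}}^{\dagger })$ basis (an assumption about the main-text Hamiltonian, which is not reproduced in this appendix), para-diagonalizes it, and verifies both $ \hbar \omega _{{\mathbf{k}}} = 2J dS \sqrt{(1+\kappa)^2 - \gamma _{\mathbf{k}}^2} $ and the long wave-length coefficient $ D = JSa^2/\sqrt{\kappa^2 + 2\kappa} $; all parameter values are placeholders.

```python
import numpy as np

J, S, a, d, kappa = 1.0, 1.5, 1.0, 3, 0.02        # placeholder parameters

def gamma(k):
    # long wave-length form gamma_k^2 = 1 - (a k)^2 / d
    return np.sqrt(1.0 - (a * k) ** 2 / d)

def omega_bogoliubov(k):
    # assumed two-sublattice block in the (a_k, b_{-k}^dagger) basis;
    # para-diagonalization: eigenvalues of sigma_z . H_k are +/- hbar*omega_k
    Hk = 2 * J * d * S * np.array([[1 + kappa, gamma(k)],
                                   [gamma(k), 1 + kappa]])
    return np.max(np.linalg.eigvals(np.diag([1.0, -1.0]) @ Hk).real)

def omega_quoted(k):
    return 2 * J * d * S * np.sqrt((1 + kappa) ** 2 - gamma(k) ** 2)

ks = np.linspace(1e-4, 0.05, 200)
assert np.allclose([omega_bogoliubov(k) for k in ks], omega_quoted(ks))

# long wave-length (parabolic) regime: hbar*omega_k ~ Delta + D k^2
Delta = 2 * J * d * S * np.sqrt(kappa ** 2 + 2 * kappa)
D_quoted = J * S * a ** 2 / np.sqrt(kappa ** 2 + 2 * kappa)
D_fit = np.polyfit(ks ** 2, omega_quoted(ks) - Delta, 1)[0]
print("D (numerical fit) =", D_fit, "   D (quoted) =", D_quoted)   # agree for small a*k
```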
Lastly we remark that the $z$ component of spins is a good quantum number [@Z2Robust] of our system, which commutes with the original spin Hamiltonian \[Eq. (\[eqn:H\])\]. Therefore regardless of the analytical approach taken, [*[e.g.]{}*]{}, noninteracting magnon picture [@RKuboAF; @AndersonAF] using the Holstein-Primakoff transformation we adopted throughout this work, the excitations should have a well-defined spin $z$ component. The Hamiltonian and the spin $z$ component are simultaneously diagonalizable. Therefore it can be expected that, apart from any magnon picture, two well-defined opposite spin modes and the helical nature of the resultant edge spin modes should survive in any case at sufficiently low temperatures where phonons die out [@adachiphonon; @Tmagnonphonon].
Boltzmann equation for magnons {#sec:Boltzmann}
==============================
In this Appendix, we provide some details of the straightforward calculation for the Onsager coefficients $L_{i j \sigma }$ in the topologically trivial bulk AF. Assuming the system is slightly out of equilibrium and using the Boltzmann transport equation [@Basso; @Basso2; @Basso3] given in Ref. \[\], the Bose-distribution function of magnons $f_{{\mathbf{k}}\sigma } $ becomes $f_{{\mathbf{k}}\sigma } = f^0_{{\mathbf{k}}\sigma } + g_{{\mathbf{k}}\sigma } $ where $ f^0_{{\mathbf{k}}\sigma } = ({\rm{e}}^{\beta \epsilon _{{{\mathbf{k}}\sigma }}}-1)^{-1} $ with $ \epsilon _{{{\mathbf{k}}\sigma }} = \hbar \omega _{{\mathbf{k}}\sigma } $ is the equilibrium distribution while the deviation from equilibrium $ g_{{\mathbf{k}}\sigma } = f_{{\mathbf{k}}\sigma } - f^0_{{\mathbf{k}}\sigma } $ is given by $ g_{{\mathbf{k}}\sigma } = \tau {\mathbf{v}}_{{\mathbf{k}}} \cdot [-\sigma g \mu_{\rm{B}} {\mathbf{\nabla }}B + (\epsilon _{{{\mathbf{k}}\sigma }}/T){\mathbf{\nabla }}T] (\partial f^0_{{\mathbf{k}}\sigma }/\partial \epsilon _{{{\mathbf{k}}\sigma }}) $ within the linear response regime, where $ {\mathbf{v}}_{{\mathbf{k}}} = \partial \epsilon _{{\mathbf{k}}\sigma }/\partial \hbar {\mathbf{k}}$ is the velocity and $\tau$ is a phenomenologically introduced relaxation time of magnons, mainly due to nonmagnetic impurity scattering, which we may therefore assume to be constant at low temperature. The resulting particle, spin, and heat currents for each magnon mode, $ {\mathbf{j}}_{\sigma }^{\rm{P}} $, $ {\mathbf{j}}_{\sigma } $, $ {\mathbf{j}}_{\sigma }^{Q} $, respectively, are given by $ {\mathbf{j}}_{\sigma }^{\rm{P}} = \int [d^3{\mathbf{k}}/(2 \pi)^3] {\mathbf{v}}_{{\mathbf{k}}} g_{{\mathbf{k}}\sigma } $, $ {\mathbf{j}}_{\sigma } = \int [d^3{\mathbf{k}}/(2 \pi)^3] \sigma g \mu_{\rm{B}} {\mathbf{v}}_{{\mathbf{k}}} g_{{\mathbf{k}}\sigma } $, $ {\mathbf{j}}_{\sigma }^{Q} = \int [d^3{\mathbf{k}}/(2 \pi)^3] \epsilon _{{\mathbf{k}}\sigma } {\mathbf{v}}_{{\mathbf{k}}} g_{{\mathbf{k}}\sigma } $. Assuming spatial isotropy $ |k_x | = | k_y | = | k_z | $ for ${\mathbf{k}} = (k_x, k_y, k_z)$ and performing the Gaussian integrals, the Onsager coefficients $L_{i j \sigma }$ shown in the main text are obtained.
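A minimal numerical sketch of this relaxation-time calculation is given below for a single magnon mode with the gapped parabolic dispersion $\epsilon = \Delta + Dk^2$. Since the explicit expressions for $L_{ij\sigma}$ are given in the main text rather than here, the sketch simply evaluates the stated current integrals by quadrature, in placeholder units with $\hbar = k_{\rm{B}} = g\mu_{\rm{B}} = \tau = 1$ and an assumed force convention $(\partial_x B, -\partial_x T/T)$, and checks that $K/(GT)$ approaches a constant at low temperature, i.e., the linear-in-$T$ magnonic WF behavior discussed in the main text.

```python
import numpy as np

D, Delta = 1.0, 10.0            # placeholder units (hbar = k_B = g*mu_B = tau = 1)

def moments(T, n_k=4000):
    """I_n = int d^3k/(2 pi)^3 <v_x^2> eps^n (-df0/deps), for n = 0, 1, 2."""
    k = np.linspace(1e-6, np.sqrt(40.0 * T / D), n_k)   # integrand dies off beyond this
    eps = Delta + D * k ** 2
    w = (0.5 / np.sinh(0.5 * eps / T)) ** 2 / T         # -df0/deps, numerically stable form
    common = (4 * np.pi * k ** 2 / (2 * np.pi) ** 3) * (4 * D ** 2 * k ** 2 / 3) * w
    dk = k[1] - k[0]
    return [np.sum(common * eps ** n) * dk for n in range(3)]

for T in (2.0, 1.0, 0.5, 0.2, 0.1, 0.05):
    I0, I1, I2 = moments(T)
    G = I0                            # spin conductance (g*mu_B = 1)
    K = (I2 - I1 ** 2 / I0) / T       # thermal conductance
    print(f"T = {T:5.2f}   K / (G T) = {K / (G * T):.3f}")
# K/(G T) is essentially constant at low T, i.e. K/G is linear in T (magnonic WF law)
```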
Cyclotron motion of magnons {#sec:CyclotronMotion}
===========================
In this Appendix, we provide for completeness some details of the straightforward calculation showing that the dynamics of magnons with the opposite magnetic dipole moments $ \sigma g \mu _{\rm{B}}{\mathbf{e}}_z$ are identical except that the resulting chirality of the magnon propagation becomes opposite \[Fig. \[fig:HelicalAFChiralFM\] (b)\]. Using the correspondence explained in the main text, the calculation becomes analogous to the one for electrons [@mahan; @Ezawa], and especially it parallels the one for ferromagnetic magnons given in Ref. \[\] except that we have now two magnon modes with opposite magnetic dipole moment ($\sigma =\pm 1$). Introducing operators analogous to a covariant momentum $ {\hat{{\mathbf{\Pi}}} }_{\sigma } \equiv \hat{{\mathbf{p}} } + \sigma g \mu _{\rm{B}}{\mathbf{A}}_{\rm{m}} /c =(\Pi_{x \sigma }, \Pi_{y \sigma }) $, which satisfy $[\Pi_{x \sigma }, \Pi_{y \sigma }]= - i \sigma \hbar ^2/ l_{\rm{{\mathcal{E}}}}^2 $, and dropping the irrelevant constant, the Hamiltonian for each mode can be rewritten as $ {\cal{H}}_{{\rm{m}}\sigma } = (\Pi_{x\sigma }^2 + \Pi_{y\sigma }^2)/2m $. Next, introducing the operators $ A_{\sigma } \equiv l_{\rm{{\mathcal{E}}}} (\Pi_{x \sigma } - i \sigma \Pi_{y \sigma })/\sqrt{2} \hbar $ and $A_{\sigma }^{\dagger }\equiv l_{\rm{{\mathcal{E}}}} (\Pi_{x \sigma } + i \sigma \Pi_{y \sigma })/\sqrt{2} \hbar $, which satisfy bosonic commutation relations, $ [A_{\sigma }, A_{\sigma }^{\dagger }] = 1$ with the remaining commutators vanishing, the Hamiltonian becomes $ {\cal{H}}_{{\rm{m}}\sigma } = \hbar \omega _c (A_{\sigma }^{\dagger } A_{\sigma } + 1/2)$. Indeed, introducing [@Ezawa] the guiding-center coordinate by $ X_{\sigma } = x - \sigma l_{\rm{{\mathcal{E}}}}^2 \Pi_{y\sigma }/\hbar $ and $ Y_{\sigma } = y + \sigma l_{\rm{{\mathcal{E}}}}^2 \Pi_{x \sigma }/\hbar $, which satisfy $[X_{\sigma }, Y_{\sigma }]= i \sigma l_{\rm{{\mathcal{E}}}}^2$ with $ dX_{\sigma }/dt = dY_{\sigma }/dt=0 $ indicating that the drift velocity vanishes in the absence of the magnetic field gradient, the time-evolution of the relative coordinate ${\mathbf{R}}_{{\rm{{\mathcal{E}}}}\sigma } =({\cal{R}}_{x \sigma }, {\cal{R}}_{y \sigma }) \equiv (- l_{\rm{{\mathcal{E}}}}^2 \Pi_{y \sigma }/\hbar, l_{\rm{{\mathcal{E}}}}^2 \Pi_{x \sigma }/\hbar)$ becomes $ d({\cal{R}}_{x \sigma } + i {\cal{R}}_{y \sigma }) /dt = i \sigma \omega _c ({\cal{R}}_{x \sigma } + i {\cal{R}}_{y \sigma })$. Thus in the presence of an electric field gradient, two magnons form the same Landau level and perform cyclotron motion with the same frequency, but propagate in opposite directions due to the opposite magnetic dipole moment $ \sigma g \mu _{\rm{B}}{\mathbf{e}}_z$ \[Fig. \[fig:HelicalAFChiralFM\] (b)\].
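The bosonic commutation relation quoted above, $[A_{\sigma }, A_{\sigma }^{\dagger }] = 1$, follows from $[\Pi_{x \sigma }, \Pi_{y \sigma }]= - i \sigma \hbar ^2/ l_{\rm{{\mathcal{E}}}}^2 $ alone. The short symbolic check below (a sketch, not part of the original derivation) expands the commutator of the two linear combinations and substitutes $\sigma^2 = 1$.

```python
import sympy as sp

sigma, hbar, l = sp.symbols('sigma hbar l', real=True)

# coefficients of (Pi_x, Pi_y) in A_sigma and A_sigma^dagger
cA  = [l / (sp.sqrt(2) * hbar), -sp.I * sigma * l / (sp.sqrt(2) * hbar)]
cAd = [l / (sp.sqrt(2) * hbar),  sp.I * sigma * l / (sp.sqrt(2) * hbar)]

# [a_x Pi_x + a_y Pi_y, b_x Pi_x + b_y Pi_y] = (a_x b_y - a_y b_x) [Pi_x, Pi_y]
comm_PixPiy = -sp.I * sigma * hbar**2 / l**2
comm_AAd = (cA[0] * cAd[1] - cA[1] * cAd[0]) * comm_PixPiy

print(sp.simplify(comm_AAd))                        # sigma**2
print(sp.simplify(comm_AAd.subs(sigma**2, 1)))      # 1, since sigma = +/- 1
```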
Landau levels in topological magnets {#sec:ACvsDM}
====================================
In this Appendix, we provide some insight into the magnons in DM interaction-induced skyrmion-like structures where the DM [@DM; @DM2; @DM3] interaction provides [@katsura2] an effective AC phase. In Ref. \[\], we have seen that the low-energy magnetic excitations in the skyrmion lattice are magnons and the DM interaction produces a textured equilibrium magnetization that works intrinsically as a vector potential analogous to ${\mathbf{A}}_{\rm{m}}$. The Hamiltonian of magnons indeed reduces to the same form of Eq. (\[HamiltonianLL\]) with the analog of the Landau gauge that produces the Landau energy level \[Eq. (\[LL\])\]. Assuming the magnitude of the DM interaction [@SkyrmionReviewNagaosa; @SkyrmionExpTokura; @SkyrmionTheory; @DMposition; @DMposition2] $\Gamma _{\rm{DM}}$, the Landau energy level spacing is given by [@KevinHallEffect] $ (4JS/\sqrt{3} \pi) (\Gamma _{\rm{DM}}/J)^2 $; see Refs. \[\] for details. Using the correspondence with the Landau energy level spacing by electric field gradient-induced AC effect $\hbar \omega _{\rm{c}}$ \[Eqs. (\[LL\]) and (\[omega\_c\])\], it can be seen to be qualitatively identified with an effective inner electric field gradient [@QSHE2016] and the magnitude is estimated by $ {\cal{E}}_{\rm{DM}} = [2 /(\sqrt{3} \pi a^2)] (\hbar c^2/g \mu_{\rm{B}})(\Gamma _{\rm{DM}}/J)^2 \propto \Gamma _{\rm{DM}}^2 $. This indicates that the DM interaction produces a slowly-varying textured equilibrium magnetization that provides an effective AC phase and in such a skyrmion-like structure, it works as an effective, fictitious, and intrinsic electric field gradient ${\cal{E}}_{\rm{DM}} = {\cal{O}}(10^2)$V/nm$^2$ of very large magnitude [@KevinHallEffect; @katsura2]. Note that the key to edge magnon states is the vector potential $ {\mathbf{A}}_{\rm{m}}$ that globally satisfies the relation \[Eq. (\[eqn:correspondenceEB\])\] ${\mathbf{\nabla }} \times {\mathbf{A}}_{\rm{m}} = ({{\mathcal{E}}}/{c}) {\mathbf{e}}_z$ where magnons experience the vector potential macroscopically, leading to cyclotron motion.
[^1]: We assume a sample in the absence of magnetic disorder that breaks the global spin rotation symmetry about the $z$ axis such as the uncompensated surface magnetization.
[^2]: See also Refs. \[\] for topological aspects of magnons in FMs, Ref. \[\] for observation of a topological magnon band [@KJD; @KevinHallEffect; @RSdisorder], and Refs. \[\] for photonic TIs.
[^3]: Within the mean-field treatment, interactions between magnons work [@magnonWF] as an effective magnetic field and the results remain qualitatively identical. A certain class of QHE in systems with interacting bosons, an SPT phase [@SPTreviewXGWen; @SPTreviewSenthil], is implied in Refs. \[\] and \[\].
[^4]: As long as the temperature is lower than the magnon gap $\Delta$, thermomagnetic and topological properties remain qualitatively the same also for magnons with a linear dispersion [@KJD; @magnonWF].
[^5]: This agrees well with a recent calculation by A. Mook [*[et al.]{}*]{} [@MookPrivate] of the magnonic WF law for a single bulk ferromagnetic insulator.
[^6]: See Ref. \[\] for the definition of the Berry curvature which gives the Chern number.
[^7]: The expression, Eqs. (\[eqn:G\_magnonHall77\]) and (\[eqn:2by22222\]), itself is valid in any case.
[^8]: Ref. \[\] reported measurements in a magnet at low temperature $T \lesssim {\cal{O}}(1) $K which showed that the exponent of the phonon thermal conductance is larger than that of magnons in terms of temperature. This indicates that in terms of thermal conductance, the effects of phonons die out more quickly than magnons at decreasing temperature.
|
In the past few years, the statistical mechanics of disordered systems has been frequently applied to understand the macroscopic behavior of many technologically useful problems, such as optimization (e.g. graph partitioning and traveling salesman) [@mpv], learning in neural networks [@hkp], error correcting codes [@ecode] and the $K$-satisfiability problem [@mz]. Important phenomena studied by this approach are the phase transitions in such systems, e.g. the glassy transition in optimization when the noise temperature of the simulated annealing process is reduced, the storage capacity in neural networks and the entropic transition in the $K$-satisfiability problem. Understanding these transitions is relevant to the design and algorithmic issues in their applications. In turn, since their behavior may be distinct from that of conventional disordered systems, the perspectives of statistical mechanics are widened.
In this paper we consider the phase transitions in noise reduction (NR) techniques in signal processing. They have been used in a number of applications such as adaptive noise cancelation, echo cancelation, adaptive beamforming and more recently, blind separation of signals [@haykin; @blind]. While the formulation of the problem depends on the context, the following model is typical of the general problem. There are $N$ detectors picking up signals mixed with noises from $p$ noise sources. The input from detector $j$ is $x_j\!=\!a_jS\!+\!\sum_\mu\xi^\mu_j n_\mu$, where $S$ is the signal, $n_\mu$ for $\mu\!=\!1,..,p$ is the noise from the $\mu$th noise source, and $n_\mu\!\ll\!S$. $a_j$ and $\xi^\mu_j$ are the contributions of the signal and the $\mu$th noise source to detector $j$. NR involves finding a linear combination of the inputs so that the noises are minimized while the signal is kept detectable. Thus, we search for an $N$ dimensional vector $J_j$ such that the quantities $\sum_j
\xi^\mu_j J_j$ are minimized, while $\sum_j a_j J_j$ remains a nonzero constant. To consider solutions with comparable power, we add the constraint $\sum_j J_j^2 = N$. While there exist adaptive algorithms for this objective [@haykin], here we are interested in whether the noise can be intrinsically kept below a tolerance level after the steady state is reached, provided that a converging algorithm is available.
When both $p$ and $N$ are large, we use a formulation with normalized parameters. Let $h^\mu$ be the local fields for the $\mu$th source defined by $h^\mu \equiv \sum_j \xi^\mu_j J_j/\sqrt N$. Learning involves finding a vector $J_j$ such that the following conditions are fulfilled. (a) $|h^\mu|<k$ for all $\mu\!=\!1,..,p$, where $k$ is the tolerance bound. We assume that the vectors $\xi_j^\mu$ are randomly distributed, with $\left\langle\!\left\langle\xi^\mu_j\right\rangle\!\right\rangle\!=\!0$, and $\left\langle\!\left\langle\xi^\mu_i\xi^\nu_j\right\rangle\!\right\rangle
\!=\!\delta_{ij}\delta_{\mu\nu}$. Hence, they introduce symmetric constraints to the solution space. (b) The normalization condition $\sum_j J_j^2\!=\!N$. (c) $|\sum_j a_j J_j/\sqrt N|\!=\!1$; however, this condition is easily satisfied: if there exists a solution satisfying (a) and (b) but yields $|\sum_j a_j J_j/\sqrt N|$ different from 1, it is possible to make an adjustment of each component $J_j$ proportional to ${\rm sgn}a_j/\sqrt N$. Since the noise components $\xi_j^\mu$ are uncorrelated with $a_j$, the local fields make a corresponding adjustment of the order $1/\sqrt N$, which vanishes in the large $N$ limit. The space of the vectors $J_j$ satisfying the constraints (a) and (b) is referred to as the [*version space*]{}.
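For concreteness, the following numerical sketch sets up a small instance of the problem and looks for a vector $J_j$ satisfying (a) and (b), then applies the $O(1/\sqrt N)$ shift described above to enforce (c). The perceptron-like correction rule, the step size, and all parameter values are our own illustrative choices; they are not part of the model or of any algorithm discussed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha, k = 400, 0.5, 1.0          # illustrative sizes and tolerance
p = int(alpha * N)
xi = rng.standard_normal((p, N))     # noise directions xi^mu_j
a = rng.standard_normal(N)           # signal direction a_j

# start from a random point on the sphere sum_j J_j^2 = N
J = rng.standard_normal(N)
J *= np.sqrt(N) / np.linalg.norm(J)

for _ in range(5000):
    h = xi @ J / np.sqrt(N)          # local fields h^mu
    viol = np.abs(h) > k
    if not viol.any():
        break
    # push violated fields back toward the band [-k, k], then renormalize
    J -= 0.1 * (h[viol] - k * np.sign(h[viol])) @ xi[viol] / np.sqrt(N)
    J *= np.sqrt(N) / np.linalg.norm(J)

print("max |h^mu| =", np.abs(xi @ J / np.sqrt(N)).max())   # constraint (a)
print("norm^2 / N =", J @ J / N)                           # constraint (b)

# constraint (c): shift J along sgn(a_j)/sqrt(N); local fields move only by O(1/sqrt(N))
s = a @ J / np.sqrt(N)
J_c = J + (1.0 - s) / np.mean(np.abs(a)) * np.sign(a) / np.sqrt(N)
print("signal projection =", a @ J_c / np.sqrt(N))         # equal to 1 by construction
print("max |h^mu| after shift =", np.abs(xi @ J_c / np.sqrt(N)).max())
```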
This formulation of the problem is very similar to that of pattern storage in the perceptron with continuous couplings [@Ga]. However, in the perceptron the constraints (a) are $h^\mu\!>\!k$, while there is an extra inversion symmetry in the NR model: the version space is invariant under $\vec J\to -\vec J$. We can also consider the NR model as a simplified version of the perceptron with multi-state output, in which the values of the local fields for each pattern are bounded in one of a few possible intervals. In the present model, all local fields are bounded in the symmetric interval $[-k,k]$. This symmetry will lead to very different phase behavior, although the NR model shares with other perceptron models the common feature that the version space is not connected or not convex, e.g. models in which errors were allowed [@ET], couplings were discrete [@KM] or pruned [@KGE], or transfer functions were non-monotonic [@BE].
When the number of noise sources increases, the version space is reduced and undergoes a sequence of phase transitions, causing it to disappear eventually. These transitions are observed by monitoring the evolution of the overlap order parameter $q$, which is the typical overlap between two vectors in the version space. For few noise sources, the version space is extended and $q=0$. When the number of noise sources $p$ increases, the number of constraints increases and the version space shrinks.
One possible scenario is that each constraint reduces the volume of the version space, and there is a continuous transition to a phase of nonzero value of $q$. Alternatively, each constraint introduces a volume reduction resembling a percolation process, in which the version space remains extended until a sufficient number of constraints have been introduced, and the version space is suddenly reduced to a localized cluster. This may result in a discontinuous transition from zero to nonzero $q$. We expect that the transition takes place when $p$ is of the order $N$, and we define $\alpha\equiv p/N$ as the noise population. When $\alpha$ increases further, $q$ reaches its maximum value of 1 at $\alpha = \alpha_c$, which is called the critical population. The purpose of this paper is to study the nature and conditions of occurrence of these transitions.
We consider the entropy ${\cal S}$, which is the logarithm of the volume of the version space and is self-averaging. Using the replica method, ${\cal S}=\lim_{n\to0}(\left\langle \!\!\!\left\langle{\cal V}^n\right
\rangle\!\!\!\right\rangle\!-\!1)/n$, and we have to calculate $\left\langle \!\!\!\left\langle{\cal V}^n\right\rangle\!\!\!\right\rangle$ given by $$\left\langle \!\!\!\left\langle\prod_{a=1}^n \int\prod_{j=1}^N dJ^a_j
\delta(\sum_{j=1}^N J^{a2}_j\!-\! N)\prod_{\mu=1}^p \theta
(k^2\!-\!{h^\mu_a}^2)\right\rangle\!\!\!\right\rangle,
\label{pr:1}$$ with $h^\mu_a\!\equiv\!\sum_j J^a_j\xi^\mu_j/\sqrt N$. Averaging over the input patterns, and using the Gardner method [@Ga], we can rewrite (\[pr:1\]) as $\left\langle
\!\!\!\left\langle{\cal V}^n\right\rangle\!\!\!\right\rangle=\int
\prod_{a<b=1}^n dq_{ab}\exp(Ng)$. The overlaps between the coupling vectors of distinct replicas $a$ and $b$: $q_{ab}\equiv{\sum_{j=1}^N }J^a_jJ^b_j/N$, are determined from the stationarity conditions of $g$.
Due to the inversion symmetry of the constraints, it always has the all-zero solution ($q_{ab}\!=\!0,\forall a\!<\!b$), but it becomes locally unstable at a noise population $$\alpha_{\rm AT}(k)
={\pi\over2}{{\rm erf}({k\over\sqrt{2}})^2\over k^2
\exp(-k^2)}\ .
\label{pr:4}$$ For $\alpha\!>\!\alpha_{\rm AT}$, the simplest solution assumes $q_{ab}\!=\!q\!>\!0$. This replica symmetric solution (RS), however, is not stable against replica symmetry breaking (RSB) fluctuations for any $q\!>\!0$. Hence, (\[pr:4\]) is an Almeida-Thouless line [@mpv], and RSB solutions in the Parisi scheme [@mpv] have to be considered.
The transition of $q$ from zero to nonzero is absent in the problem of pattern storage in the perceptron, where $q$ increases smoothly from zero when the storage level $\alpha$ increases [@Ga]. Rather, the situation is reminiscent of the spin glass transition in the Sherrington-Kirkpatrick (SK) model, which does possess an inversion symmetry [@mpv].
The phase diagram is discussed in the following 3 schemes:
[*1. RS solution ([RS]{}, superscript $^{(0)}$):*]{} It provides a good approximation of the full picture, which will be described later. The RS solution is given by $q=q_{\rm EA}$, where $q_{\rm EA}$ is the Edwards-Anderson order parameter [@mpv]. Close to the AT line, $q\sim\!t$, where $t\!\equiv\!(\alpha\!-\!\alpha_{\rm AT})/\alpha_{\rm AT}\!\ll\!1$. The critical population, obtained in the limit $q\!\to\!1$, is $$\alpha^{(0)}_c(k)\!=\!\!\left(\!(1\!+\! k^2)(1\!-\!{\rm erf}
({k\over\sqrt{2}}))\!-\!\sqrt{{2\over\pi}}k\exp({-k^2\over2}\!)\!
\right)^{-1}\!\!\!\!\!\!\!.
\label{pr:5}$$ Two features are noted in the phase diagram in Fig. 1:
[*a) The critical population line crosses the AT line:*]{} When $k\!=\!0$, the version space is equivalent to the solution of $p$ homogeneous $N$-dimensional linear equations. Hence it vanishes at $p\!=\!N$, or $\alpha_c^{(0)}\!=\!1$. The small $k$ expansion of (\[pr:4\]) and (\[pr:5\]) gives $$\alpha_{\rm AT}(k)\simeq1\!+\!2k^2/3\ <\
\alpha^{(0)}_c(k)\simeq1\!+\!4k/\sqrt{2\pi}\ ,
\label{pr:6}$$ and $q$ increases smoothly from 0 at $\alpha_{\rm AT}$ to 1 at $\alpha_c^{(0)}$. However, in the large $k$ limit, the critical population grows exponentially with the square of the tolerance $k$, and the reverse is true: $$\alpha_{\rm AT}(k)\simeq{\pi\over2}{\exp(k^2)\over k^2}>
\alpha^{(0)}_c(k)\simeq\sqrt{\pi\over2}{k^3\over2}\exp({k^2\over2})
\ .
\label{pr:7}$$ [*b) The first order transition line takes over the AT line:*]{} The paradox in [*a)*]{} is resolved by noting that for a given $k$, multiple RS solutions of $q$ may coexist for a given $\alpha$. Indeed, $\alpha^{(0)}(k,q)$ is monotonic in $q$ for $k\!\le\!k^{(0)}_c\!=\!2.89$, and not so otherwise. Among the coexisting stable RS solutions for $k > k^{(0)}_c$, the one with the lowest entropy is relevant. Hence, there is a first order transition in $q$ at a value $\alpha^{(0)}_1(k)$, determined by the vanishing of the entropy difference between the coexisting stable solutions. The jump in $q$ across the transition widens from 0 at $k=k^{(0)}_c$, and when $k$ increases above $k^{(0)}_0=3.11$, $q$ jumps directly from zero to nonzero at the transition point. The line of $\alpha^{(0)}_1(k)$ starts from $k=k^{(0)}_c$; it crosses the line of $\alpha_{\rm AT}(k)$ at $k^{(0)}_0$, replacing it to become the physically observable phase transition line. In the limit of large $k$, the first order transition line is given by $\alpha^{(0)}_1(k)\!=\!0.94 \alpha^{(0)}_c(k)$, where $q$ jumps from 0 to a high value of $1\!-\!0.27k^{-2}$.
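The two RS lines discussed above are easily evaluated numerically. The sketch below implements Eqs. (\[pr:4\]) and (\[pr:5\]), compares them with the small-$k$ expansions (\[pr:6\]), and locates the tolerance at which the critical population line crosses the AT line (feature *a*); the bracketing interval passed to the root finder is an ad hoc choice.

```python
import numpy as np
from scipy.special import erf, erfc
from scipy.optimize import brentq

def alpha_AT(k):                     # Eq. (pr:4)
    return 0.5 * np.pi * erf(k / np.sqrt(2)) ** 2 / (k ** 2 * np.exp(-k ** 2))

def alpha_c0(k):                     # Eq. (pr:5)
    den = (1 + k ** 2) * erfc(k / np.sqrt(2)) - np.sqrt(2 / np.pi) * k * np.exp(-k ** 2 / 2)
    return 1.0 / den

for k in (0.05, 0.1):                # small-k behaviour, cf. Eq. (pr:6)
    print(f"k={k}: alpha_AT={alpha_AT(k):.4f} ~ {1 + 2 * k ** 2 / 3:.4f},"
          f"  alpha_c={alpha_c0(k):.4f} ~ {1 + 4 * k / np.sqrt(2 * np.pi):.4f}")

# tolerance at which the critical-population line crosses the AT line (feature a)
k_cross = brentq(lambda k: alpha_AT(k) - alpha_c0(k), 0.5, 5.0)
print("alpha_AT = alpha_c^(0) at k =", k_cross)
```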
The phase diagram illustrates the nature of the phase transitions. For low tolerance $k$, each constraint results in a significant reduction in the version space, and there is a continuous transition to a phase of nonzero value of $q$. For high tolerance $k$, each constraint introduces a volume reduction which is less significant, resembling a percolation process, in which the transition of $q$ from zero to nonzero is discontinuous. However, even in the large $k$ limit, the region of high $q$ spans a nonvanishing range of $\alpha$ below the critical population, and the picture of percolation transition has to be refined in the next approximation.
[*2. First step RSB approximation ([RSB]{}$_1$, superscript $^{(1)}$):*]{} Here the $n$ replicas are organized into clusters, each with $m$ replicas. $q_{ab}\!=\!q_1$ for replicas in the same cluster, and $q_{ab}\!=\!q_0$ otherwise, and in the limit $n\to 0$, $\ 0\!<\!m\!<\!1$ for analytic continuation.
Two $(q_1,q_0,m)$ solutions exist just above the AT line: ($q_{\rm EA},{1\over3}q_{\rm EA},{2\over3}\!m^{(\infty)}$), ($q_{\rm EA},0,{1\over2}m^{(\infty)}$), where (see later) $m^{(\infty)}$ is the position of the turning point in the Parisi function of the full RSB solution. Only the $(q_0\!>\!0)
$-solution is stable with respect to fluctuations of $q_0$.
The features in the phase diagram in Fig. 2 are:
[*a) First order transition:*]{} For $k\!>\!k^{(1)}_c\!=\!2.31$ multiple solutions exist, and there is a first order transition of $q$ at $\alpha^{(1)}_1(k)$. This line starts from $k=k^{(1)}_c$, and crosses the line of $\alpha_{\rm AT}(k)$ at $k^{(1)}_0=2.61$, to become the physically observable phase transition line. For large $k$, $q_1\!=\!1\!-\!1.23/(k\ln k)^2$ at the transition, and $\alpha^{(1)}_1(k)\!=\!(1\!-\!0.33/\ln k)\alpha^{(1)}_c(k)$. Hence, the gap between the lines of $\alpha^{(1)}_1(k)$ and $\alpha^{(1)}_c(k)$ vanishes, and the extended phase disappears near the critical population via a narrow range of localized phase, resembling a percolation transition. However, the resemblance is not generic, since the approach of the two lines is logarithmically slow.
[*b) Reduced critical population:*]{} The critical population is reduced compared with $\alpha^{(0)}_c(k)$. In the large $k$ limit, $$\alpha^{(1)}_c(k)\simeq\sqrt{\pi/2}\ k\ \ln k\ \exp(k^2/2).
\label{pr:8}$$ [*c) Transition of $q_0$:*]{} The regime of nonzero $q_1$ and $q_0$ spans the region of low $k$ below the critical population. At larger values of $k$, however, there is a line $\alpha^{(1)}_m$ where $q_0$ becomes zero. The line starts from $k^{(1)}_{c,m}=2.37$ on the line of critical population, and ends at $k=k^{(1)}_0=2.61$ on the line of first order transition. Beyond this line in the localized phase, only the solution with $q_0=0$ exists, reflecting that constraints with larger $k$ are more likely to reduce the size of local clusters, but less likely to change the extended distribution of the clusters.
The formation of localized clusters at the percolation transition is illustrated by the evolution of the overlap distribution. For $k>k^{(1)}_0$ at the first order transition line, the parameter $m$ decreases smoothly, with increasing $\alpha$, from 1 across the phase transition line. This is analogous to the RS-RSB$_1$ transition in the random energy model [@mpv; @fh] and the high temperature perceptron [@gr]. Since the overlap distribution is $P(q)\!=\!m\delta(q\!-\!q_0)\!+\!(1\!-\!m)\delta(q\!-\!q_1)$, the transition is continuous in the overlap distribution, although discontinuous in terms of $q_0$. At the transition, the statistical weight of the $q=q_0=0$ component decreases smoothly, while that of the $q=q_1$ component increases from zero. When two points in the version space are sampled, they have a high probability to share an overlap $q_0$, meaning that they belong to different clusters. Hence the version space consists of many small clusters. Furthermore, since $q_0=0$, the clusters are isotropically distributed. In contrast, for the transition at low $k$, the overlap has the same form as the SK model, i.e. a high probability being $q_1$, implying that it consists of few large clusters.
[*3. Infinite step RSB solution ([RSB]{}$_\infty$, superscript $^{(\infty)}$):*]{} This is introduced since the RSB$_1$ solution is found to be unstable against further breaking fluctuations. Here the $n$ replicas are organized into hierarchies of clusters. In the $i$th hierarchy, the clusters are of size $m_i$, and $q_{ab}\!=\!q_i$. In the limit $n\!\to\!0$, $0\!<\!\cdots\!<\!
m_i\!<\!\cdots\!<\!1$ for analytic continuation. In the Parisi scheme [@mpv], the overlap $q_{ab}$ is represented by the Parisi function $q(x)$, where $x$ is the cumulative frequency of $q$, or $P(q)\!=\!dx(q)/dq$. We have only obtained solutions for $\alpha$ just above $\alpha_{\rm AT}$: $$q(x)=t\ q_p\min(x/m^{(\infty)},1)\ .
\label{pr:10}$$ As shown in Fig. 3, the Parisi function $q(x)$ is very similar to that of the SK model without external field near the critical temperature [@mpv].
For the behavior of the RSB$_\infty$ solution far away from the AT line, we only give some qualitative features of the phase behavior. Below the AT line we have the extended phase characterized by $q(x)\!=\!0$. For small $k$ the AT line lies under the $\alpha_c^{(\infty)}(k)$ line, because of (\[pr:6\]) and $\alpha^{(0)}_c\!\!-\!\alpha_c^{(\infty)}\!\simeq\!{\cal O}(k^2)$. Hence for $\alpha$ increasing above $\alpha_{\rm AT}$, there is a continuous transition of $q(x)$ given by (\[pr:10\]), and $q_{\rm EA}\to1$ for $\alpha\!\to\!\alpha_c^{(\infty)}(k)$.
For large $k$ the AT line runs below the $\alpha_c^{(\infty)}(k)$ line, because $\alpha^{(\infty)}_c(k)\!\leq\!\alpha^{(1)}_c(k)\!<\!
\alpha^{(0)}_c(k)$. Hence the $\alpha_c^{(\infty)}(k)$ line must intersect the AT line, and there must exist a critical $k^{(\infty)}_c$ above which there is a first order transition from a solution with low (or zero) $q_{\rm EA}$ to one with high $q_{\rm EA}$ (denoted by $\alpha^{(\infty)}_1(k)$). This line intersects the $\alpha_{\rm AT}(k)$ line at a value $k^{(\infty)}_0$, replacing it to be the physically observable phase transition line. The critical population $\alpha^{(\infty)}_c(k)$ is obtained by taking the limit $q_{\rm EA}\to1$.
We expect that the picture of version space percolation for large $k$ continues to be valid in the RSB$_\infty$ ansatz. This means that the $\alpha_1^{(\infty)}(k)$ line will become arbitrarily close to the $\alpha_c^{(\infty)}(k)$ line. When $\alpha$ increases above $\alpha_1^{(\infty)}(k)$, $q(x)$ will deviate appreciably from the zero function only in a narrow range of $x$ near 1. This means that the overlap $q$ is zero with a probability almost equal to 1, and nonzero with a probability much less than 1. The corresponding picture is that the version space consists of hierarchies of localized clusters which are scattered in all directions of an $N$ dimensional hypersphere. Again, the transition is continuous in terms of the overlap distribution, though not so in terms of $q(x)$.
Fig. 3. The RSB$_\infty$ solution near the AT line (a). For comparison, the RSB$_1$ solution (b: nonzero $q_0$, c: zero $q_0$) and the RS solution (d) are also plotted.
The phase transitions observed here have implications for the NR problem. In the extended phase for low values of $\alpha$, an adaptive algorithm can find a solution easily in any direction of the $N$ dimensional parameter space. In the localized phase, the solution can only be found in certain directions. If the tolerance $k$ is sufficiently high, there is a jump in the overlap distribution, which means that the search direction is suddenly restricted on increasing $\alpha$. For very high tolerance $k$, the picture of a percolation transition applies, so that even though localized clusters of solutions are present in all directions, it is difficult to move continuously from one cluster to another without violating the constraints.
Our results explain the occurrence of RSB in perceptrons with [ *multi-state*]{} and [*analog*]{} outputs, when there are sufficiently many intermediate outputs [@BvM; @BKvM]. In these perceptrons, the patterns to be learned, can be separated into two classes: $p_{\rm int}$ have zero (intermediate) output, $p_{\rm ext}\!\equiv\!(p\!-\!p_{\rm int})$ have positive or negative (extremal) output. The version space formed by the intermediate states has the same geometry as the model studied here, and consists of disconnected clusters for sufficiently large $p$.
We thank R. Kühn for informative discussions, M. Bouten, H. Nishimori and P. Ruján for critical comments. This work is partially supported by the Research Grant Council of Hong Kong and by the Research Fund of the K.U.Leuven (grant OT/94/9).
M. Mézard, G. Parisi, and M. Virasoro, [*Spin Glass Theory and Beyond*]{} (World Scientific, Singapore, 1987).
J. Hertz, A. Krogh and R.G. Palmer, [*Introduction to the Theory of Neural Computation*]{}, Addison Wesley, Redwood City, 1991.
N. Sourlas, [*Nature*]{} [**339**]{}, 693 (1989).
R. Monasson and R. Zecchina, [*Phys. Rev. Lett.*]{} [**76**]{}, 3881 (1996).
S. Haykin, [*Adaptive Filter Theory*]{}, second edition, Prentice-Hall, Englewood Cliffs, NJ (1991).
C. Jutten and J. Herault, [*Signal Processing*]{} [**24**]{}, 1 (1991).
E. Gardner, [*J. Phys. A*]{} [**21**]{}, 257 (1988).
R. Erichsen Jr. and W.K. Theumann, [*J. Phys. A*]{} [**26**]{}, L61 (1993).
W. Krauth and M. Mézard, [*J. Phys. France*]{} [**50**]{}, 3057 (1987).
R. Garcés, P. Kuhlman, and H. Eissfeller, [*J. Phys. A*]{} [**25**]{}, L1335 (1992).
D. Bollé and R. Erichsen, [*J. Phys. A*]{} [**29**]{}, 2299 (1996).
K. H. Fischer and J. Hertz, [*Spin Glasses*]{} (Cambridge University Press, Cambridge, U.K., 1991).
G. Györgyi and P. Reimann, [*Phys. Rev. Lett.*]{} [**79**]{}, 2746 (1997).
D. Bollé and J. van Mourik, [*J. Phys. A*]{} [**27**]{}, 1151 (1994).
D. Bollé, R. Kühn, and J. van Mourik, [*J. Phys. A*]{} [**26**]{}, 3149 (1993).
|
---
abstract: 'In this article, we study optimal control problems of spiking neurons whose dynamics are described by a phase model. We design minimum-power current stimuli (controls) that lead to targeted spiking times of neurons, where the cases with unbounded and bounded control amplitude are considered. We show that theoretically the spiking period of a neuron, modeled by phase dynamics, can be arbitrarily altered by a smooth control. However, if the control amplitude is bounded, the range of possible spiking times is constrained and determined by the bound, and feasible spiking times are optimally achieved by piecewise continuous controls. We present analytic expressions of these minimum-power stimuli for spiking neurons and illustrate the optimal solutions with numerical simulations.'
author:
- Isuru Dasanayake
- 'Jr-Shin Li'
bibliography:
- 'SingleNeuron\_PRE.bib'
nocite: '[@*]'
title: 'Optimal Design of Minimum-Power Stimuli for Spiking Neurons'
---
Introduction {#sec:intro}
============
Control of neurons and hence the nervous system by external current stimuli (controls) has received increased scientific attention in recent years for its wide range of applications from deep brain stimulation to oscillatory neurocomputers [@uhlhaas06; @osipov07; @Izhikevich99]. Conventionally, neuron oscillators are represented by phase-reduced models, which form a standard nonlinear system [@Brown04; @Winfree01]. Intensive studies using phase models have been carried out, for example, on the investigation of the patterns of synchrony that result from the type and architecture of coupling [@Ashwin92; @Taylor98] and on the response of large groups of oscillators to external stimuli [@Moehlis06; @Tass89], where the inputs to the neuron systems were initially defined and the dynamics of neural populations were analyzed in detail.
Recently, control theoretic approaches have been employed to design external stimuli that drive neurons to behave in a desired way. For example, a multilinear feedback control technique has been used to control the individual phase relation between coupled oscillators [@kano10]; a nonlinear feedback approach has been employed to engineer complex dynamic structures and synthesize delicate synchronization features of nonlinear systems [@Kiss07]; and our recent work has illustrated controllability of a network of neurons with different natural oscillation frequencies adopting tools from geometric control theory [@Li_NOLCOS10].
There has been an increase in the demand for controlling not only the collective behavior of a network of oscillators but also the behavior of each individual oscillator. It is feasible to change the spiking periods of oscillators or tune the individual phase relationship between coupled oscillators by the use of electric stimuli [@Schiff94; @kano10]. Minimum-power stimuli that elicit spikes of a neuron at specified times close to the natural spiking time were analyzed [@Moehlis06]. Optimal waveforms for the entrainment of weakly forced oscillators that maximize the locking range have been calculated, where first and second harmonics were used to approximate the phase response curve [@kiss10]. These optimal controls were found mainly based on the calculus of variations, which restricts the optimal solutions to the class of smooth controls and the bound of the control amplitude was not taken into account.
In this paper, we apply the Pontryagin’s maximum principle [@Pontryagin62; @Stefanatos10] to derive minimum-power controls that spike a neuron at desired time instants. We consider both cases when the available control amplitude is unbounded and bounded. The latter is of practical importance due to physical limitations of experimental equipment and the safety margin for neurons, e.g., the requirement of a mild brain stimulations in neurological treatments for Parkinson’s disease and epilepsy.
This paper is organized as follows. In Section \[sec:phase\_model\], we introduce the phase model for spiking neurons and formulate the related optimal control problem. In Section \[sec:minpower\_control\], we derive minimum-power controls associated with specified spiking times in the absence and presence of control amplitude constraints, in which various phase models including sinusoidal PRC, SNIPER PRC, and theta neuron models are considered. In addition, we present examples and simulations to demonstrate the resulting optimal control strategies.
Optimal Control of Spiking Neurons {#sec:phase_model}
==================================
A periodically spiking or firing neuron can be considered as a periodic oscillator governed by the nonlinear dynamical equation of the form $$\label{eq:phasemodel}
\frac{d\theta}{dt}=f(\theta)+Z(\theta)I(t),$$ where $\theta$ is the phase of the oscillation, $f(\theta)$ and $Z(\theta)$ are real-valued functions giving the neuron’s baseline dynamics and its phase response, respectively, and $I(t)$ is an external current stimulus [@Brown04]. The nonlinear dynamical system described in is referred to as the phase model for the neuron. The assumption that $Z(\theta)$ vanishes only on isolated points and that $f\left(\theta\right)>0$ are made so that a full revolution of the phase is possible. By convention, neuron spikes occur when $\theta=2n\pi$, where $n\in\mathbb{N}$. In the absence of any input $I(t)$, the neuron spikes periodically at its natural frequency, while the periodicity can be altered in a desired manner by an appropriate choice of $I(t)$.
In this article, we study optimal design of neural inputs that lead to the spiking of neurons at a specified time $T$ after spiking at time $t=0$. In particular, we find the stimulus that fires a neuron with minimum power, which is formulated as the following optimal control problem, $$\begin{aligned}
\label{eq:opt_con_pro}
\min_{I(t)} \quad & \int_0^T I(t)^2\,dt\\
{\rm s.t.} \quad & \dot{\theta}=f(\theta)+Z(\theta)I(t), \nonumber\\
&\theta(0)=0, \quad \theta(T)=2\pi \nonumber\\
&|I(t)|\leq M, \ \ \forall\ t, \nonumber\end{aligned}$$ where $M>0$ is the amplitude bound of the current stimulus $I(t)$. Note that instantaneous or arbitrarily delayed spiking of a neuron is possible if $I(t)$ is unbounded, i.e., $M=\infty$; however, the range of feasible spiking periods of a neuron described as in is restricted with a finite $M$. We consider both unbounded and bounded cases.
Minimum-Power Stimulus for Specified Firing Time {#sec:minpower_control}
================================================
We consider the minimum-power optimal control problem of spiking neurons as formulated in for various phase models including sinusoidal PRC, SNIPER PRC, and theta neuron.
Sinusoidal PRC Phase Model {#sec:sine_prc}
--------------------------
Consider the sinusoidal PRC model, $$\label{eq:sin_model}
\dot{\theta}=\omega+z_d\sin\theta\cdot I(t),$$ where $\omega$ is the natural oscillation frequency of the neuron and $z_d$ is a model-dependent constant. The neuron described by this phase model spikes periodically with the period $T=2\pi/\omega$ in the absence of any external input, i.e., $I(t)=0$.
### Spiking Neurons with Unbounded Control {#sec:unbounded_control_sine}
The optimal current profile can be derived by Pontryagin’s Maximum Principle [@Pontryagin62]. Given the optimal control problem as in , we form the control Hamiltonian $$\label{eq:hamiltonian}
H=I^2+\lambda(\omega+z_d\sin\theta\cdot I),$$ where $\lambda$ is the Lagrange multiplier. The necessary optimality conditions according to the Maximum Principle give $$\begin{aligned}
\label{eq:lambda_dot}
\dot{\lambda}=-\frac{\partial H}{\partial \theta}=-\lambda z_d I\cos\theta,\end{aligned}$$ and $\frac{\partial H}{\partial I}=2I+\lambda z_d\sin\theta=0$. Hence, the optimal current $I$ satisfies $$\begin{aligned}
\label{eq:I}
I=-\frac{1}{2}\lambda z_d\sin\theta.\end{aligned}$$
Substituting into and , the optimal control problem is then transformed to a boundary value problem, which characterizes the optimal trajectories of $\theta(t)$ and $\lambda(t)$, $$\begin{aligned}
\label{eq:theta_p2}
\dot{\theta} &= \omega-\frac{z_d^2\lambda}{2} \sin^2 \theta,\\
\label{eq:lambda_p2}
\dot{\lambda} &= \frac{z_d^2\lambda^2}{2} \sin\theta\cos\theta,\end{aligned}$$ with boundary conditions $\theta(0)=0$ and $\theta(T)=2\pi$ while $\lambda(0)$ and $\lambda(T)$ are unspecified.
Additionally, since the Hamiltonian is not explicitly dependent on time, the optimal triple $(\lambda,\theta, I)$ satisfies $H(\lambda,\theta,I)=c$, $\forall\, 0\leq t\leq T$, where $c$ is a constant. Together with , this yields $$\label{eq:quadratic}
-\frac{z_d^2}{4}\sin^2\theta\lambda^2+\omega\lambda=c,$$ and the constant $c=\omega\lambda_0$ is obtained by the initial conditions $\theta(0)=0$ and $\lambda(0)=\lambda_0$, which is undetermined. Then, the optimal multiplier can be found by solving the above quadratic equation , which gives $$\label{eq:lambda}
\lambda=\frac{2\omega\pm2\sqrt{\omega^2-\omega\lambda_0z_d^2\sin^2\theta}}{z_d^2\sin^2\theta},$$ and the optimal trajectory of $\theta$ follows $$\label{eq:theta}
\dot{\theta}=\mp\sqrt{\omega^2-\omega\lambda_0 z_d^2\sin^2 \theta},$$ by plugging $\lambda$ in into . Integrating by separation of variables, we find the spiking time $T$ with respect to the initial condition $\lambda_0$, $$\label{eq:T} T=\frac{1}{\omega}F\left(\scriptstyle{2\pi,\sqrt{\frac{\lambda_0}{\omega}}z_d}\right)=\int_0^{2\pi}{\frac{1}{\scriptstyle{\sqrt{\omega^2-\omega\lambda_0 z_d^2\sin^2\theta}}}}d\theta,$$ where $F$ denotes the incomplete elliptic integral of the first kind. Note that we choose the positive sign in since a negative velocity would correspond to backward phase evolution. Therefore, given a desired spiking time $T$ of the neuron, the initial value, $\lambda_0$, corresponding to the optimal trajectory of the multiplier can be found via the one-to-one relation in . Consequently, the optimal trajectories of $\theta$ and $\lambda$ can be easily computed by evolving and forward in time. Plugging into , we obtain the optimal feedback law $$\label{eq:I*}
I^*= \frac{-\w+\sqrt{\w^2-\w\lambda_0 z_d^2\sin^2\t}}{z_d\sin\t},$$ which drives the neuron from $\theta(0)=0$ to $\theta(T)=2\pi$ with minimum power.
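As a purely numerical illustration of the procedure just described (assuming $\omega=z_d=1$ and using standard SciPy routines; the bracketing interval for the root search is our own choice), the sketch below inverts the relation between $T$ and $\lambda_0$ and evaluates the resulting feedback law $I^*$ at a sample phase.

```python
# Numerical sketch (assumed parameters w = zd = 1; SciPy's quad/brentq) of the
# unbounded-control solution: invert T(lambda_0) and evaluate the feedback law I*.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

w, zd = 1.0, 1.0

def spike_time(lam0):
    """T(lambda_0) = integral_0^{2pi} dtheta / sqrt(w^2 - w*lam0*zd^2*sin^2(theta))."""
    return quad(lambda th: 1.0 / np.sqrt(w**2 - w * lam0 * zd**2 * np.sin(th)**2),
                0.0, 2.0 * np.pi)[0]

def lambda0_for(T):
    """Invert the monotone map lambda_0 -> T; feasibility requires lambda_0 < w/zd^2."""
    return brentq(lambda lam: spike_time(lam) - T, -50.0, 0.999 * w / zd**2)

def I_star(theta, lam0):
    """Minimum-power feedback law; the singularity at sin(theta) = 0 is removable (I* = 0)."""
    s = np.sin(theta)
    if abs(s) < 1e-9:
        return 0.0
    return (-w + np.sqrt(w**2 - w * lam0 * zd**2 * s**2)) / (zd * s)

for T in (3.0, 5.0, 10.0, 12.0):
    lam0 = lambda0_for(T)
    print(f"T = {T:4.1f}  lambda_0 = {lam0:+.4f}  I*(pi/2) = {I_star(np.pi / 2, lam0):+.4f}")
```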
The feasibility of spiking the neuron at a desired time $T$ largely depends on the initial value of the multiplier, $\lambda_0$. It is clear from and that a complete $2\pi$ revolution is impossible when $\lambda_0>\omega/z_d^2$. This fact can also be seen from FIG. \[fig:vectorfield\], where the system evolution defined by and for $z_d=1$ and $\omega=1$ with respect to different $\lambda_0$ values ($\t=0$ axis) is illustrated. When $\lambda_0=0$, according to equation , the spiking period is equal to the natural spiking period, $2\pi/\omega$, and no external stimulus needs to be applied, i.e., $I^*(t)=0$, $\forall t\in[0,2\pi/\w]$. Since, from , $T$ is a monotonically increasing function of $\lambda_0$ for fixed $\w$ and $z_d$ and, from , the average phase velocity decreases when $\lambda_0$ increases, the spiking time $T>2\pi/\omega$ for $\lambda_0>0$ and $T<2\pi/\omega$ for $\lambda_0<0$. FIG. \[fig:T\_vs\_lambda0\_sine\] shows the variation of the spiking time $T$ with the $\lambda_0$ corresponding to the optimal trajectories for different $\omega$ values with $z_d=1$.
![Extremals of sinusoidal PRC system with $z_d=1$ and $\w=1$[]{data-label="fig:vectorfield"}](fig1_vectorfield.eps)
![Variation of the spiking time, $T$, with respect to the initial multiplier value, $\lambda_0$, leading to optimal trajectories, with different values of $\omega$ and $z_d=1$ for sinusoidal PRC model.[]{data-label="fig:T_vs_lambda0_sine"}](fig2_terminal_time_vs_lambda0_sine.eps)
\
The relation between the spiking time $T$ and the required minimum power $E=\min\int_0^{T}{{I}^2(t)}dt$ is evident via a simple sensitivity analysis [@Bryson75]. Since a small change in the initial condition, $d\theta$, and a small change in the initial time, $dt$, result in a small change in power according to $dE=\lambda(t)\, d\theta-H(t)\, dt$, it follows that [@Bryson75] $$\label{eq:energy}
-\frac{\partial E}{\partial t}=H=c=\w\lambda_0.$$ This implies that $E$ increases with initial time $t$ for $\lambda_0<0$ and decreases for $\lambda_0>0$. Since the increment of the initial time is equivalent to the decrement of spiking time $T$, $\partial E/\partial T=\omega\lambda_0$. Since $\lambda_0<0$ ($\lambda_0>0$) corresponds to $T<2\pi/\omega$ ($T>2\pi/\omega$), we see that the required minimum power increases if we move away from the natural spiking time.
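The sensitivity relation above can be cross-checked numerically. The sketch below (an illustration under the assumption $\omega=z_d=1$; function names are ours) computes $E(T)=\int_0^{2\pi} (I^*)^2/\dot{\theta}\,d\theta$ for nearby spiking times and compares the finite-difference estimate of $\partial E/\partial T$ with $\omega\lambda_0$.

```python
# Cross-check of dE/dT = w*lambda_0 (sinusoidal PRC, assumed w = zd = 1):
# E(T) = int_0^{2pi} (I*)^2 / theta_dot dtheta, with lambda_0 obtained from T.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

w, zd = 1.0, 1.0

theta_dot = lambda th, lam0: np.sqrt(w**2 - w * lam0 * zd**2 * np.sin(th)**2)
T_of = lambda lam0: quad(lambda th: 1.0 / theta_dot(th, lam0), 0.0, 2.0 * np.pi)[0]
lam0_of = lambda T: brentq(lambda l: T_of(l) - T, -50.0, 0.999 * w / zd**2)

def I_star(th, lam0):
    s = np.sin(th)
    return 0.0 if abs(s) < 1e-9 else (-w + theta_dot(th, lam0)) / (zd * s)

def E_of(T):                      # minimum power needed to spike at time T
    lam0 = lam0_of(T)
    return quad(lambda th: I_star(th, lam0)**2 / theta_dot(th, lam0), 0.0, 2.0 * np.pi)[0]

T, h = 5.0, 1e-3
print("finite-difference dE/dT :", (E_of(T + h) - E_of(T - h)) / (2 * h))
print("w * lambda_0            :", w * lam0_of(T))
```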
The minimum-power stimulus $I^*$ as in , plotted with respect to time and phase for various spiking times $T=3,5,10,12$ with $\w=1$ and $z_d=1$, is shown in FIG. \[fig:control\_vs\_time\_sine\] and \[fig:control\_vs\_theta\_sine\], respectively. The respective optimal trajectories of $\lambda(\t)$ and $\t(t)$ for these spiking times are presented in FIG. \[fig:lambda\_vs\_theta\_sine\] and \[fig:theta\_vs\_time\_sine\].
### Spiking Neurons with Bounded Control {#sec:bounded_control_sine}
In practice, the amplitude of stimuli in physical systems is limited, so we consider spiking the sinusoidal neuron with bounded control amplitude, namely, in the optimal control problem , $|I(t)|\leq M<\infty$ for all $t\in[0,T]$, where $T$ is the desired spiking period. In this case, there exists a range of feasible spiking periods depending on the value of $M$, in contrast to the previous case where any desired spiking time is feasible. We first observe that given this bound $M$, the minimum time it takes to spike a neuron can be achieved by choosing the control that keeps the phase velocity $\dot{\t}$ maximum over $t\in[0,T]$. Such a time-optimal control, for $z_d>0$, can be characterized by a switching, i.e., $$\label{sin_Tmin_control}
I^*_{Tmin}= \left\{\begin{array}{c} M \quad \mathrm{for} \quad 0\leq \theta <\pi\\ -M \quad \mathrm{for} \quad \pi \leq \theta <2\pi \end{array} \right..$$ Consequently, the spiking time with $I^*_{Tmin}$ can be computed using and , which yields $$T^{M}_{min}= \frac{2\pi-4\tan^{-1}\left\{z_dM/\sqrt{-z_d^2M^2+\omega^2}\right\}}{\sqrt{-z_d^2M^2+\omega^2}}. \label{eq:sin_min_T}$$ It follows that $I^*$, derived in , is the minimum-power stimulus that spikes the neuron at a desired spiking time $T$ if $|I^*|\leq M$ for all $t\in [0,T]$. However, there exists a shortest possible spiking time by $I^*$ given the bound $M$. Applying simple first- and second-order optimality conditions to , we find that the maximum value of $I^*$ occurs at $\theta=\pi/2$ for $\lambda_0<0$ and at $\theta=3\pi/2$ for $\lambda_0>0$. Therefore, the $\lambda_0$ for the shortest spiking time with control $I^*$ satisfying $|I^*(t)|\leq M$ can be calculated by substituting $I^*=M$ and $\theta=\pi/2$ into the equation , and then from we obtain this shortest spiking period $$\label{eq:min_time_UBC}
T^{I^*}_{min}=\int_0^{2\pi}\frac{1}{\sqrt{\omega^2+z_dM(z_dM+2\omega)\sin^2(\theta)}}\,d\theta.$$ Note that $T^{M}_{min}<T^{I^*}_{min}$. According to , when $M\geq\omega/z_d$, arbitrarily large spiking times can be achieved by making $\dot{\theta}$ arbitrarily close to zero. Therefore, we consider the two cases $M\geq\omega/z_d$ and $M<\omega/z_d$ separately.
![Variation of the maximum value of $I^*$ with spiking time $T$ for sinusoidal PRC model with $\w=1$ and $z_d=1$.[]{data-label="fig:max_I_vs_T"}](fig4_max_I_vs_T.eps)
*Case I: $M\geq\omega/z_d$*. Since $I^*$ takes the maximum value at $\t=3\pi/2$ for $\lambda_0>0$, we have $|I^*|\leq(\omega-\sqrt{\omega^2-\omega\lambda_0z_d^2})/z_d$, which leads to $|I^*|<\w/z_d\leq M$ for $\lambda_0>0$. This implies that $I^*$ is the minimum-power control for any desired spiking time $T>2\pi/\w$ when $M\geq\w/z_d$, and hence for any spiking time $T\geq T^{I^*}_{min}$. Variation of the maximum value of the control $I^*$ with spiking time $T$ for $\omega=1$ and $z_d=1$ is depicted in FIG. \[fig:max\_I\_vs\_T\]. Shorter spiking times $T\in[T^{M}_{min}, T^{I^*}_{min})$ are feasible but, due to the bound $M$, can not be achieved by $I^*$ since it requires a control with amplitude greater than $M$ for some $t\in[0,T]$. However, these spiking times can be optimally achieved by applying controls switching between $I^*$ and $I^*_{Tmin}$.
Let the desired spiking time $T\in[T^{M}_{min}, T^{I^*}_{min})$. Then, there exist two angles $\theta_1=\sin^{-1}[-2M\omega/(z_dM^2+z_d\omega \lambda_0)]$ and $\theta_2=\pi-\theta_1$ where $I^*$ meets the bound $M$. When $\theta\in(\theta_1,\theta_2)$, $I^*>M$ and we take $I(\theta)=M$ for $\theta\in[\theta_1,\theta_2]$. The Hamiltonian of the system when $\theta\in[\theta_1,\theta_2]$ is then, from , $H=M^2+\lambda(\omega+z_d\sin\theta\,M)$. If the triple $(\lambda,\t,M)$ is optimal, then $H$ is a constant, which gives $\lambda=(H-M^2)/(\omega+z_dM\sin\theta)$. This multiplier satisfies the adjoint equation , and therefore $I(\theta)=M$ is optimal for $\theta\in[\theta_1, \theta_2]$. Similarly, by symmetry, $I^*<-M$ when $\t\in[\t_3,\t_4]$, where $\theta_3=\pi+\theta_1$ and $\theta_4=2\pi-\theta_1$, if the desired spiking time $T\in[T^{M}_{min}, T^{I^*}_{min})$. It can be easily shown by the same fashion that $I(\t)=-M$ is optimal in the interval $\t\in[\t_3,\t_4]$.
Therefore, the minimum-power optimal control that spikes the neuron at $T\in[T^{M}_{min}, T^{I^*}_{min})$ can be characterized by four switchings between $I^*$ and $M$, i.e., $$\begin{aligned}
\label{eq:I1*}
I^*_1=\left\{\begin{array}{ll} I^* & \ 0\leq \theta < \theta_1 \\ M & \ \theta_1\leq\theta\leq\theta_2 \\ I^* & \ \theta_2 < \theta < \theta_3 \\ -M & \ \theta_3\leq\theta\leq\theta_4\\ I^* & \ \theta_4 < \theta \leq 2\pi.
\end{array}\right.\end{aligned}$$ The initial value of the multiplier, $\lambda_0$, resulting in the optimal trajectory, can then be found according to the desired spiking time $T\in[T^{M}_{min}, T^{I^*}_{min})$ through the relation $$T=\int_0^{\theta_1}{\frac{4}{\sqrt{\scriptstyle{\omega^2-\omega\lambda_0z_d^2\sin^2\theta}}}}d\theta +\int_{\theta_1}^{\frac{\pi}{2}}{\frac{4}{\omega+z_dM\sin\left(\theta\right)}}d\theta.$$
FIG. \[fig:T\_vs\_lambda0\_sine\_for\_I1\] shows the relation between $\lambda_0$ and $T$ by $I^*_1$ for $M=2.5,\ z_d=1,$ and $ \omega =1$. From , the minimum possible spiking time with this control bound $M=2.5$ is $T^{M}_{min}=2.735$ and, from , the minimum spiking time by $I^*$ is $T^{I^*}_{min}=3.056$. Thus, in this example, any desired spiking time $T>3.056$ can be optimally achieved by $I^*$ whereas any $T\in[2.735,3.056)$ can be optimally obtained by $I^*_1$ as in . FIG. \[fig:I1\_and\_I\_sine\] illustrates the bounded and unbounded optimal controls that fire the neuron at $T=2.8$, where $I^*$ is the minimum-power stimulus when the control amplitude is not limited and $I^*_1$ is the minimum-power stimulus when the bound is $M=2.5$. $I^*$ drives the neuron from $\t(0)=0$ to $\t(2.8)=2\pi$ with 13.54 units of power whereas $I^*_1$ requires 14.13 units.
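The thresholds that delimit these regimes can be evaluated numerically. The sketch below (assuming $\omega=z_d=1$ and $M=2.5$ as in the example) computes $T^{M}_{min}$ by integrating the phase equation under the bang-bang control $I^*_{Tmin}$ and $T^{I^*}_{min}$ from the integral above, and then reports which minimum-power control applies for a few desired spiking times.

```python
# Regime thresholds for the bounded sinusoidal case (assumed w = zd = 1, M = 2.5):
# T_M_min from the bang-bang control I*_{Tmin}, T_Istar_min from the integral above.
import numpy as np
from scipy.integrate import quad

w, zd, M = 1.0, 1.0, 2.5

T_M_min = quad(lambda th: 1.0 / (w + zd * M * np.sin(th)), 0.0, np.pi)[0] \
        + quad(lambda th: 1.0 / (w - zd * M * np.sin(th)), np.pi, 2.0 * np.pi)[0]
T_Istar_min = quad(lambda th: 1.0 / np.sqrt(w**2 + zd * M * (zd * M + 2 * w) * np.sin(th)**2),
                   0.0, 2.0 * np.pi)[0]
print(f"T_M_min ~ {T_M_min:.3f}, T_Istar_min ~ {T_Istar_min:.3f}")   # ~2.735 and ~3.056

def regime(T):
    """Which minimum-power control applies for a desired spiking time T (case M >= w/zd)."""
    if T < T_M_min:
        return "infeasible with |I| <= M"
    return "four-switch control I*_1" if T < T_Istar_min else "unconstrained optimum I*"

for T in (2.5, 2.8, 3.5):
    print(T, "->", regime(T))
```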
*Case II: $M<\omega/z_d$*. In contrast with Case I, achieving arbitrarily large spiking times is not feasible with a bound $M<\omega/z_d$. In this case, the longest possible spiking time is achieved by $$I^*_{Tmax}= \left\{\begin{array}{ll} -M & \quad \text{for} \ \ 0\leq \theta <\pi, \\ M & \quad \text{for} \ \ \pi\leq\theta<2\pi.\end{array}\right.$$ The spiking time of the neuron under this control is $$T^M_{max}= \frac{2\pi+4\tan^{-1}\Big[z_dM/\sqrt{-z_d^2M^2+\omega^2}\Big]}{\sqrt{-z_d^2M^2+\omega^2}}, \label{eq:max_T_for_M<w/s}$$ and the longest spiking time feasible with control $I^*$ is given by $$\label{eq:max_time_UBC}
T^{I^*}_{max}=\int_0^{2\pi}\frac{1}{\sqrt{\omega^2+z_dM(z_dM-2\omega)\sin^2(\theta)}}\,d\theta.$$ Then, by an analysis similar to that of Case I, any spiking time $T\in[T^M_{min},T^{I^*}_{min})$ for a given $M<\w/z_d$ can be achieved with the minimum-power control $I^*_1$ as given in , any $T\in[T^{I^*}_{min},T^{I^*}_{max}]$ can be achieved with minimum power by $I^*$ in , and moreover any $T\in(T^{I^*}_{max},T^M_{max}]$ can be obtained by switching between $I^*$ and $I^*_{Tmax}$. The corresponding switching angles are $\theta_5=\sin^{-1}[2M\omega/(z_dM^2+z_d\omega \lambda_0)],\theta_6=\pi-\theta_5,\theta_7=\pi+\theta_5$ and $\theta_8=2\pi-\theta_5$, and the minimum-power optimal control for $T\in(T^{I^*}_{max},T^M_{max}]$ is characterized by $$\begin{aligned}
\label{eq:I2*}
I^*_2=\left\{\begin{array}{ll} I^* & \ 0\leq \theta < \theta_5 \\ -M & \ \theta_5\leq\theta\leq\theta_6 \\ I^* & \ \theta_6 < \theta < \theta_7 \\ M & \ \theta_7\leq\theta\leq\theta_8\\ I^* & \ \theta_8 < \theta \leq 2\pi.
\end{array}\right.\end{aligned}$$ The $\lambda_0$ resulting in the optimal trajectory by $I_2^*$ can be calculated according to the given $T\in(T^{I^*}_{max},T^M_{max}]$ via the relation $$T=\int_0^{\theta_5}{\frac{4}{\sqrt{\scriptstyle{\omega^2-\omega \lambda_0z_d^2\sin^2\theta}}}}d\theta+\int_{\theta_5}^{\frac{\pi}{2}}{\frac{4}{\omega-z_dM\sin\theta}}d\theta.$$
FIG. \[fig:T\_vs\_lambda0\_sine\_for\_I2\] shows the relation between $\lambda_0$ and $T$ by $I^*_2$ for $M=0.55$, $z_d=1$, and $\omega=1$. From , the maximum possible spiking time with $M=0.55$ is $T^M_{max}=10.312$ and, from , the maximum spiking time feasible by $I^*$ is $T^{I^*}_{max}=9.006$. Therefore, in this example, any desired spiking time $T\in(9.006,10.312]$ can be obtained with minimum power by the use of $I^*_2$. FIG. \[fig:I2\_and\_I\_sine\] illustrates the bounded and unbounded optimal controls that spike the neuron at $T=10$, where $I^*$ is the minimum-power stimulus when the control amplitude is not limited and $I^*_2$ is the minimum-power stimulus when $M=0.55$. $I^*$ drives the neuron from $\t(0)=0$ to $\t(10)=2\pi$ with 2.193 units of power whereas $I^*_2$ requires 2.327 units.
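Analogously to Case I, the Case II thresholds can be checked numerically; the sketch below (with the example values $M=0.55$, $\omega=z_d=1$) evaluates $T^{M}_{max}$ and $T^{I^*}_{max}$ by direct quadrature.

```python
# Case II thresholds for the sinusoidal model (assumed w = zd = 1, M = 0.55):
# T_M_max under the slow bang-bang control I*_{Tmax}, T_Istar_max from the integral above.
import numpy as np
from scipy.integrate import quad

w, zd, M = 1.0, 1.0, 0.55

T_M_max = quad(lambda th: 1.0 / (w - zd * M * np.sin(th)), 0.0, np.pi)[0] \
        + quad(lambda th: 1.0 / (w + zd * M * np.sin(th)), np.pi, 2.0 * np.pi)[0]
T_Istar_max = quad(lambda th: 1.0 / np.sqrt(w**2 + zd * M * (zd * M - 2 * w) * np.sin(th)**2),
                   0.0, 2.0 * np.pi)[0]
print(f"T_M_max ~ {T_M_max:.3f}, T_Istar_max ~ {T_Istar_max:.3f}")   # ~10.312 and ~9.006
```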
A summary of the optimal (minimum-power) spiking scenarios for a prescribed spiking time of the neuron governed by the sinusoidal phase model is illustrated in FIG. \[fig:scenarios\].
\
SNIPER PRC and Theta Neuron Phase Models {#sec:sniper_prc}
----------------------------------------
We now consider the SNIPER PRC model in which $f(\theta)=\omega$ and $Z(\theta)=z_d(1-\cos\theta)$, where $z_d>0$ and $\omega>0$. That is, $$\label{eq:model_sniper}
\dot{\theta}=\omega+z_d(1-\cos\theta)I(t).$$ The minimum-power stimuli for spiking neurons modeled by this phase model can be easily derived with analogous analysis described previously in \[sec:unbounded\_control\_sine\] and \[sec:bounded\_control\_sine\] for the sinusoidal PRC phase model.
### Spiking Neurons with Unbounded Control {#sec:unbounded_control_sniper}
Employing the maximum principle as in \[sec:unbounded\_control\_sine\], the minimum-power stimulus that spikes the SNIPER neuron at a desired time $T$ can be derived and given by $$\label{eq:I*sniper}
I^*=\frac{-\omega+\sqrt{\omega^2-\omega \lambda_0 z_d^2(1-\cos\theta)^2}}{z_d(1-\cos\theta)},$$ where $\lambda_0$ corresponding to the optimal trajectory is determined through the integral relation with $T$, $$T=\int_0^{2\pi}{\frac{1}{\sqrt{\omega^2-\omega \lambda_0z_d^2(1-\cos \theta)^2}}}d\theta.$$
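A numerical sketch analogous to the sinusoidal case (our own illustration with $\omega=z_d=1$; the root-search bracket reflects the positivity requirement $\lambda_0<\omega/(4z_d^2)$) inverts this relation and evaluates $I^*$ for the SNIPER model.

```python
# SNIPER PRC analogue of the earlier sketch (assumed w = zd = 1): invert T(lambda_0)
# numerically; positivity of the radicand requires lambda_0 < w / (4*zd^2).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

w, zd = 1.0, 1.0

def T_of(lam0):
    return quad(lambda th: 1.0 / np.sqrt(w**2 - w * lam0 * zd**2 * (1 - np.cos(th))**2),
                0.0, 2.0 * np.pi)[0]

def lambda0_for(T):
    return brentq(lambda l: T_of(l) - T, -50.0, 0.999 * w / (4 * zd**2))

def I_star(th, lam0):
    d = zd * (1.0 - np.cos(th))
    if d < 1e-9:                 # removable singularity at theta = 0 (mod 2*pi)
        return 0.0
    return (-w + np.sqrt(w**2 - w * lam0 * zd**2 * (1 - np.cos(th))**2)) / d

for T in (3.0, 5.0, 10.0, 12.0):
    lam0 = lambda0_for(T)
    print(f"T = {T:4.1f}  lambda_0 = {lam0:+.4f}  I*(pi) = {I_star(np.pi, lam0):+.4f}")
```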
\
The minimum-power stimuli $I^*$ plotted with respect to time and phase for various spiking times $T=3,5,10,12$ with parameter values $z_d=1$ and $\omega=1$ are illustrated in FIG. \[fig:control\_vs\_time\_sniper\] and \[fig:control\_vs\_theta\_sniper\], respectively. The corresponding optimal trajectories of $\lambda(\t)$ and $\t(t)$ for these spiking times are displayed in FIG. \[fig:lambda\_vs\_theta\_sniper\] and \[fig:theta\_vs\_time\_sniper\].
### Spiking Neurons with Bounded Control {#sec:bounded_control_sniper}
When the amplitude of the available stimulus is limited, i.e., $|I(t)|\leq M$, the control that achieves the shortest spiking time for the SNIPER neuron modeled in is given by $I^*_{Tmin}=M>0$ for $0\leq\theta\leq 2\pi$, since $1-\cos\theta\geq 0$ for all $\theta\in [0,2\pi]$. As a result, the shortest possible spiking time with this control is $T^M_{min}=2\pi/\sqrt{\omega^2+2z_d\omega M}$. Also, the shortest spiking time achieved by the control $I^*$ in given the bound $M$ is given by $$\label{eq:min_time_UBC_sniper}
T^{I^*}_{min}=\int_0^{2\pi}\frac{1}{\sqrt{\omega^2+z_dM(z_dM+\omega)(1-\cos\theta)^2}}\,d\theta.$$ Similar to the sinusoidal PRC case, the longest possible spiking time of the neuron varies with the control bound $M$. If $M\geq\omega/(2z_d)$, an arbitrarily large spiking time is achievable; however, if $M<\omega/(2z_d)$ there exists a maximum spiking time.
*Case I: $M\geq\omega/(2z_d)$*. Any spiking time $T\in[T^{I^*}_{min},\infty)$ is possible with control $I^*$ but a shorter spiking time $T\in[T^M_{min},T^{I^*}_{min})$ requires switching between $I^*$ and $I^*_{Tmin}$, which is characterized by two switchings, $$\begin{aligned}
\label{eq:I1*sniper}
I^*_1=\left\{\begin{array}{ll} I^*,&\quad 0\leq \theta < \theta_1\\
M,&\quad \theta_1\leq\theta\leq 2\pi-\theta_1 \\
I^*,& \quad 2\pi-\theta_1 < \theta \leq 2\pi
\end{array}\right.\end{aligned}$$ where $\theta_1=\cos^{-1}\left[1+2\omega M/(z_dM^2+z_d\omega \lambda_0)\right]$. The initial value $\lambda_0$ that results in the optimal trajectory is determined from the desired spiking time $T$ via $$T=\int_0^{\theta_1}{\frac{2}{\sqrt{\scriptstyle{\omega^2-\omega \lambda_0z_d^2(1-\cos\theta)^2}}}}d\theta+\int_{\theta_1}^{\pi}{\frac{2}{\scriptstyle{\omega+z_dM(1-\cos\theta)}}}d\theta.$$ FIG. \[fig:T\_vs\_lambda0\_sniper\_for\_I1\] illustrates the relation between $\lambda_0$ and $T\in[T^M_{min},T^{I^*}_{min})$ by $I_1^*$ for $M=2$, $z_d=1$, and $\omega=1$. In this case, the shortest feasible spiking time is $T^{M}_{min}=2.09$ and the shortest with the control $I^*$ is $T^{I^*}_{min}=3.18$. Any spiking time in the interval $(2.09,3.18]$ is achievable by $I^*_1$ in with minimum power. FIG. \[fig:I1\_and\_I\_sniper\] illustrates the unbounded and bounded, with $M=2$, optimal stimuli that fire the neuron at $T=3$ with minimum power.
*Case II: $M<\omega/(2z_d)$*. In this case there exists a longest possible spiking time which is achieved by $I_{max}^*=-M$ for all $\theta\in [0,2\pi]$. The longest spiking time feasible with the control $I^*$ as in is given by $$T^{I^*}_{max}=\int_0^{2\pi}\frac{1}{\sqrt{\omega^2+z_dM(z_dM-\omega)(1-\cos\theta)^2}}\,d\theta.$$ Therefore, any spiking time $T\in[T^M_{min},T^{I^*}_{min})$ for a given $M<\w/(2z_d)$ can be achieved with the minimum-power control $I^*_1$ as given in , any $T\in[T^{I^*}_{min},T^{I^*}_{max}]$ can be achieved with minimum power by $I^*$ in , and moreover any $T\in(T^{I^*}_{max},T^M_{max}]$ can be obtained by switching between $I^*$ and $I^*_{max}$, that is, $$\begin{aligned}
I^*_2=\left\{\begin{array}{ll} I^*,& \quad 0\leq \theta < \theta_2 \\
-M,& \quad \theta_2\leq\theta\leq 2\pi-\theta_2 \\
I^*,&\quad 2\pi-\theta_2 < \theta < 2\pi
\end{array}\right.\end{aligned}$$ where $\theta_2=\cos^{-1}\left[1-2\omega M/(z_dM^2+z_d\omega \lambda_0)\right]$. The $\lambda_0$ associated with the optimal trajectory is determined via the relation with the desired spiking time $T$, $$T=\int_0^{\theta_2}{\frac{2}{\sqrt{\scriptstyle {\omega^2-\omega \lambda_0z_d^2(1-\cos \theta)^2}}}}d\theta+\int_{\theta_2}^{\pi}{\frac{2}{\scriptstyle {\omega-z_dM(1-\cos\theta)}}}d\theta.$$ FIG. \[fig:T\_vs\_lambda0\_sniper\_for\_I2\] illustrates the relation between $\lambda_0$ and $T\in(T^{I^*}_{max},T^M_{max}]$ by $I_2^*$ for $M=0.3$, $z_d=1$, and $\omega=1$. In this case, the longest feasible spiking time is $T^M_{max}=9.935$ and the longest with the control $I^*$ is $T^{I^*}_{max}=8.596$. The unbounded and bounded, with $M=0.3$, optimal stimuli that fire the neuron at $T=9.8$ with minimum power are illustrated in FIG. \[fig:I2\_and\_I\_sniper\].
A summary of the optimal (minimum-power) spiking scenarios for a prescribed spiking time of the neuron governed by the SNIPER PRC model in can be illustrated analogously to FIG. \[fig:spiking\_time\_line\_case1\] and \[fig:spiking\_time\_line\_case2\] for $M\geq \omega/(2z_d)$ and $M<\omega/(2z_d)$, respectively.
The theta neuron phase model is described by the dynamical equation $$\label{eq:model_theta}
\dot{\theta}=1+\cos\theta+z_d(1-\cos\theta)(I(t)+I_b),$$ where $I_b>0$ is the baseline current and the natural frequency $\omega$ of the theta neuron is given by $2\sqrt{I_b}$. In fact, by a coordinate transformation, $\theta(\phi)=2\tan^{-1}\left[\sqrt{I_b}\tan\left((\phi-\pi)/2\right)\right]+\pi$, the theta neuron model can be transformed to a SNIPER PRC with $z_d=\w/2$ in . Therefore, all of the results developed for the SNIPER PRC can be directly applied to the theta neuron phase model.
Conclusion and Future Work\[sec:conclusion\]
============================================
In this paper, we studied various phase-reduced models that describe the dynamics of a neuron system. We considered the design of minimum-power stimuli for spiking a neuron at a specified time instant and formulated this as an optimal control problem. We investigated both cases when the control amplitude is unbounded and bounded, for which we found analytic expressions of optimal feedback control laws. In particular for the bounded control case, we characterized the ranges of possible spiking periods in terms of the control bound. Moreover, minimum-power stimuli for steering any nonlinear oscillators of the form as in between desired states can be derived following the steps presented in this article. In addition, the charge-balanced constraint [@Nabi09] can be readily incorporated into this framework as well.
The optimal control of a single neuron system investigated in this work illustrates the fundamental limit of spiking a neuron with external stimuli and provides a benchmark structure that enables us to study optimal control of a spiking neural network with different individual oscillation frequencies. Our recent work [@Li_NOLCOS10] proved that simultaneous spiking of a network of neurons is possible; however, optimal control of such a spiking neural network has not been studied. We finally note that although one-dimensional phase models are reasonably accurate to describe the dynamics of neurons, studying higher dimensional models such as that of Hodgkin-Huxley is essential for more accurate computation of optimal neural inputs.
---
abstract: 'We consider network coding for a noiseless broadcast channel where each receiver demands a subset of messages available at the transmitter and is equipped with *noisy side information* in the form of an erroneous version of the message symbols it demands. We view the message symbols as elements from a finite field and assume that the number of symbol errors in the noisy side information is upper bounded by a known constant. This communication problem, which we refer to as *broadcasting with noisy side information (BNSI)*, has applications in the re-transmission phase of downlink networks. We derive a necessary and sufficient condition for a linear coding scheme to satisfy the demands of all the receivers in a given BNSI network, and show that syndrome decoding can be used at the receivers to decode the demanded messages from the received codeword and the available noisy side information. We represent BNSI problems as bipartite graphs, and using this representation, classify the family of problems where linear coding provides bandwidth savings compared to uncoded transmission. We provide a simple algorithm to determine if a given BNSI network belongs to this family of problems, i.e., to identify if linear coding provides an advantage over uncoded transmission for the given BNSI problem. We provide lower bounds and upper bounds on the optimal codelength and constructions of linear coding schemes based on linear error correcting codes. For any given BNSI problem, we construct an equivalent index coding problem. A linear code is a valid scheme for a BNSI problem if and only if it is valid for the constructed index coding problem.'
title: Linear Codes for Broadcasting with Noisy Side Information
---
Broadcast channel, index coding, linear error correcting codes, network coding, noisy side information, syndrome decoding
Introduction {#Intro}
============
We consider the problem of broadcasting $n$ message symbols $x_1,\dots,x_n$ from a finite field ${\mathds{F}}_q$ to a set of $m$ users $u_1,\dots,u_m$ through a noiseless broadcast channel. The $i^{\text{th}}$ receiver $u_i$ requests the message vector ${{\bf{x}}}_{{\mathcal{X}}_i} = (x_j, \, j \in {\mathcal{X}}_i)$ where ${\mathcal{X}}_i \subseteq [n]$ denotes the demands of $u_i$. We further assume that each receiver knows a noisy/erroneous version of *its own demanded message* as side information. In particular, we assume that the side information at $u_i$ is an ${\mathds{F}}_q$-vector ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$ such that the demanded message vector ${{\bf{x}}}_{{\mathcal{X}}_i}$ differs from the side information ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$ in at the most $\delta_s$ coordinates, where the integer $\delta_s$ determines the quality of side information. We assume that the transmitter does not know the exact realizations of the side information vectors available at the receivers. The objective of code design is to broadcast a codeword of as small a length as possible such that every receiver can retrieve its demanded message vector using the transmitted codeword and the available noisy side information. We refer to this communication problem as *broadcasting with noisy side information* (BNSI).
Wireless broadcasting in downlink communication channels has gained considerable attention and has several important applications, such as cellular and satellite communication, digital video broadcasting, and wireless sensor networks. The BNSI problem considered in this paper models the re-transmission phase of downlink communication channels at the network layer. Suppose during the initial broadcast phase each receiver of a downlink network decodes its demanded message packet erroneously (such as when the wireless channel experiences outage). Instead of discarding this decoded message packet, the erroneous symbols from this packet can be used as noisy side information for the re-transmission phase. If the number of symbol errors $\delta_s$ in the erroneously decoded packets is not large, we might be able to reduce the number of channel uses required for the re-transmission phase by intelligently coding the message symbols at the network layer.
Consider the example scenario shown in Fig. \[fig:subim1\]. The transmitter is required to broadcast 4 message symbols $x_1,x_2,x_3,x_4$ to 3 users. Each user requires a subset of the message symbols, for example, User 1, User 2 and User 3 demand $(x_1,x_2,x_3)$, $(x_2,x_3,x_4)$ and $(x_1,x_3,x_4)$, respectively. Suppose during the initial transmission the broadcast channel is in outage, as experienced during temporary weather conditions in satellite-to-terrestrial communications. As a result, at each user, one of the message symbols in the decoded packet is in error. Based on an error detection mechanism (such as cyclic redundancy check codes) all the users request a re-transmission. We assume that the users are not aware of the position of the symbol errors.
[0.5]{} ![An example of broadcast channel with noisy side information.[]{data-label="fig:image0"}](block_diagram "fig:"){width="3.25in"}
[0.5]{} ![An example of broadcast channel with noisy side information.[]{data-label="fig:image0"}](problem_statement "fig:"){width="3.25in"}
The transmitter attempts a retransmission when the channel conditions improve. Instead of retransmitting each message packet individually, which will require $4$ symbols to be transmitted, the transmitter will broadcast the coded sequence $(x_1+x_4,x_2+x_4,x_3+x_4)$ consisting of $3$ symbols, as shown in Fig. \[fig:subim2\]. Upon receiving this coded sequence, it can be shown that, using an appropriate decoding algorithm (Examples \[exmp2\] and \[exmp3\] in Sections \[sec3\] and \[sec4\], respectively), each user can correctly retrieve its own demanded message symbols using the erroneous version that it already has. By using a carefully designed code the transmitter is able to reduce the number of channel uses in the retransmission phase.
Related Work
------------
*Index coding* [@YBJK_IEEE_IT_11] is a related code design problem that is concerned with the transmission of a set of information symbols to finitely many receivers in a noiseless broadcast channel where each receiver demands a subset of information symbols from the transmitter and already knows a *different* subset of symbols as side information. The demand subset and the side information subset at each receiver in index coding are disjoint and the side information is assumed to be noiseless. Several results on index coding are available based on algebraic and graph theoretic formulations [@linear_prog_10; @MCJ_IT_14; @VaR_GC_16; @MBBS_2016; @MaK_ISIT_17; @tehrani2012bipartite; @SDL_ISIT_13; @LOCKF_2016; @MAZ_IEEE_IT_13; @agar_maz_2016]. The problem of index coding under noisy broadcast channel conditions has also been studied. Dau et al. [@Dau_IEEE_J_IT_13] analyzed linear index codes for error-prone broadcast channels. Several works, for example [@karat_rajan_2017; @samuel_rajan_2017], provide constructions of error correcting index codes. Kim and No [@Kim_No_2017] consider errors both during broadcast channel transmission as well as in receiver side information.
Index coding achieves bandwidth savings by requiring each receiver to know a subset of messages that it does not demand from the source. This side information might be gathered by overhearing previous transmissions from the source to other users in the network. In contrast, the coding scenario considered in this paper does not require a user to overhear and store data packets that it does not demand (which may incur additional storage and computational effort at the receivers), but achieves bandwidth savings by exploiting the erroneous symbols already available from prior failed transmissions to the same receiver. To the best of our knowledge, no code design criteria, analysis of code length or code constructions are available for the class of broadcast channels with noisy side information considered in this paper.
Contributions and Organization
------------------------------
We view broadcasting with noisy side information as a coding theoretic problem at the network layer. We introduce the system model and provide relevant definitions in Section \[sec2\]. We consider linear coding schemes for the BNSI problem and provide a necessary and sufficient condition for a linear code to meet the demands of all the receivers in the broadcast channel (Theorem \[thm1\] and Corollary \[corr1\], Section \[sec3\]). Given a linear coding scheme for a BNSI problem, we show how each receiver can decode its demanded message from the transmitted codeword and its noisy side information using the syndrome decoding technique (Section \[sec4\]). We then provide an exact characterization of the family of BNSI problems where the number of channel uses required with linear coding is strictly less than that required by uncoded transmission (Theorem \[thm2\], Section \[sec5b\]). We provide a simple algorithm to determine if a given BNSI network belongs to this family of problems using a representation of the problem in terms of a bipartite graph (Algorithm 2, Section \[sec5c\]). Next we provide lower bounds on the optimal codelength (Section \[lb\]). A simple construction of an encoder matrix based on linear error correcting codes is described (Section \[linecc\]). Based on this construction we then provide upper bounds on the optimal codelength (Section \[ub\]). Finally, we relate the BNSI problem to the index coding problem. We show that each BNSI problem is equivalent to an index coding problem (Section \[ic\_bnsi\]). We show that any linear code is a valid coding scheme for a BNSI problem if and only if it is valid for the equivalent index coding problem (Theorem \[BNSI\_IC\_L\], Section \[ic\_bnsi\]). A lower bound on the optimal codelength of a BNSI problem is also derived from the equivalent index coding problem (Section \[lb\_ic\]).
*Notation*: Matrices and row vectors are denoted by bold uppercase and lowercase letters, respectively. For any positive integer $n$, the symbol $[n]$ denotes the set $\{1,\dots,n\}$. The Hamming weight of a vector ${{\bf{x}}}$ is denoted as $wt({{\bf{x}}})$. The symbol ${\mathds{F}}_q$ denotes the finite field of size $q$, where $q$ is a prime power. The $n \times n$ identity matrix is denoted as ${\bf I}_n$. For any matrix ${\bf L} \in {\mathds{F}}_q^{n \times N}$, $rowspan\{{\bf L}\}$ denotes the subspace of ${\mathds{F}}_q^N$ spanned by the rows of ${\bf L}$, and ${{\bf{L}}}^T$ is the transpose of ${{\bf{L}}}$.
System Model and Definitions {#sec2}
============================
Suppose a transmitter intends to broadcast a vector of $n$ information symbols from a finite field ${\mathds{F}}_q$ denoted as ${{\bf{x}}}=(x_1,x_2,\dots,x_n)$ to $m$ users or receivers denoted as $u_1,u_2,\dots,u_m$. The demanded information symbol vector of the $i^{\text{th}}$ user $u_i$ is denoted as ${{\bf{x}}}_{{\mathcal{X}}_i}=(x_j, \, j \in {\mathcal{X}}_i)$ where ${\mathcal{X}}_i \subseteq [n]$ is the *demanded information symbol index set* of the $i^{\text{th}}$ user. The $m$-tuple ${\mathcal{X}}=({\mathcal{X}}_1,{\mathcal{X}}_2,\dots,{\mathcal{X}}_m)$ represents the demands of all the $m$ receivers in the broadcast channel. The erroneous version of the demanded information symbol vector available as side information at user $u_i$ is denoted as ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$. We will assume that the noisy side information ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$ differs from the actual demanded message vector ${{\bf{x}}}_{{\mathcal{X}}_i}$ in at the most $\delta_s$ coordinates, i.e., ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}={{\bf{x}}}_{{\mathcal{X}}_i}+{{\pmb{\epsilon}}_{i}}$, where the noise vector ${{\pmb{\epsilon}}_{i}}\in {\mathds{F}}_q^{|{\mathcal{X}}_i|}$ and $wt({{\pmb{\epsilon}}_{i}})\leq \delta_s$. We will further assume that the transmitter and all the receivers know the value of $\delta_s$ and ${\mathcal{X}}$, but are unaware of the exact realization of the noise vectors ${{\pmb{\epsilon}}}_1,\dots,{{\pmb{\epsilon}}}_m$.
The coding problem considered in this paper is to generate a transmit codeword ${{\bf{c}}}=(c_1,\dots,c_N) \in {\mathds{F}}_q^N$ of as small a length $N$ as possible to be broadcast from the transmitter such that each user $u_i$, $i \in [m]$, can correctly estimate its own demanded message ${{\bf{x}}}_{{\mathcal{X}}_i}$ using the codeword ${\bf{c}}$ and the noisy side information ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$. Note that the task of decoding ${{\bf{x}}}_{{\mathcal{X}}_i}$ from ${{\bf{c}}}$ and ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$ is equivalent to that of decoding the error vector ${{\pmb{\epsilon}}}_i = {{\bf{x}}}_{{\mathcal{X}}_i}^{e} - {{\bf{x}}}_{{\mathcal{X}}_i}$ at the user $u_i$.
The problem of designing a coding scheme for broadcasting $n$ information symbols to $m$ users with demands that are aided with noisy side information with at the most $\delta_s$ errors will be called the *$(m,n,{\mathcal{X}},\delta_s)$-BNSI (Broadcasting with Noisy Side Information) problem*.
A *valid encoding function* of codelength $N$ for the ($m,n,{\mathcal{X}},\delta_s$)-BNSI problem over the field ${\mathds{F}}_q$ is a function $$\mathfrak{E}:{\mathds{F}}_q^n\rightarrow {\mathds{F}}_q^N$$ such that for each user $u_i,$ there exists a decoding function $\mathfrak{D}_i:{\mathds{F}}_q^N \times {\mathds{F}}_q^{|{\mathcal{X}}_i|}\rightarrow {\mathds{F}}_q^{|{\mathcal{X}}_i|}$ satisfying the following property: $\mathfrak{D}_i(\mathfrak{E}({{\bf{x}}}),{{\bf{x}}}_{{\mathcal{X}}_i}+{{\pmb{\epsilon}}_{i}})={{\bf{x}}}_{{\mathcal{X}}_i}$ for every ${{\bf{x}}}\in {\mathds{F}}_q^n$ and all ${{\pmb{\epsilon}}_{i}}\in {\mathds{F}}_q^{|{\mathcal{X}}_i|}$ with $wt({{\pmb{\epsilon}}_{i}})\leq \delta_s$.
The aim of the code construction is to design a tuple $(\mathfrak{E},\mathfrak{D}_1,\mathfrak{D}_2,\dots, \mathfrak{D}_m)$ of encoding and decoding functions that minimizes the codelength $N$ and to calculate the *optimal codelength* for the given problem which is the minimum codelength among all valid BNSI coding schemes. In this paper we will consider only linear coding schemes for the BNSI problem. By imposing linearity, we are able to utilize the rich set of mathematical tools available from linear algebra and the theory of error correcting codes to analyze the BNSI network.
A coding scheme $(\mathfrak{E},\mathfrak{D}_1,\mathfrak{D}_2,\dots, \mathfrak{D}_m)$ is said to be *linear* if the encoding function is an ${\mathds{F}}_q$-linear transformation.
For a linear coding scheme, the codeword $\bf{c}=\mathfrak{E}({{\bf{x}}})={{\bf{x}}}{{\bf{L}}}$, where ${{\bf{x}}}\in {\mathds{F}}_q^n$ and ${{\bf{L}}}\in {\mathds{F}}_q^{n \times N}$. The matrix ${{\bf{L}}}$ is the *encoder matrix* of the linear coding scheme. The minimum codelength among all valid linear coding schemes for the $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem over the field ${\mathds{F}}_q$ will be denoted as either $N_{q,opt}(m,n,{\mathcal{X}},\delta_s)$ or simply $N_{q,opt}$ if there is no ambiguity.
Note that the trivial coding scheme that transmits the information symbols ${{\bf{x}}}$ ‘uncoded’, i.e., $\bf{c}=\mathfrak{E}({{\bf{x}}})={{\bf{x}}}$ is a valid coding scheme since each receiver $u_i$ can retrieve the demanded message ${{\bf{x}}}_{{\mathcal{X}}_i}$ directly from the received codeword. Further, this code is linear with ${\bf{L}}={\bf{I}}_{n}$. Thus, we have the following trivial upper bound on the optimum linear codelength $$\label{eq:trivial}
N_{q,opt}(m,n,{\mathcal{X}},\delta_s) \leq n.$$ We now introduce a representation of the BNSI problem as a bipartite graph.
The bipartite graph ${\mathcal{B}}=({\mathcal{U}},{\mathcal{P}},{\mathcal{E}})$ corresponding to the $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem consists of the node-sets ${\mathcal{U}}=\{u_1,\dots,u_m\}$ and ${\mathcal{P}}=\{x_1,\dots,x_n\}$ and the set of undirected edges ${\mathcal{E}}=\{\, \{u_i,x_j\} \, | \, i \in [m], \, j \in {\mathcal{X}}_i \,\}$.
The set $\mathcal{U}$ denotes the *user-set* and $\mathcal{P}$ denotes the set of packets or the *information symbol-set* and $\mathcal{E}$ represents the demands of each user in the broadcast channel. Note that the degree of the user node $u_i$ in ${\mathcal{B}}$ equals $|{\mathcal{X}}_i|$.
![.[]{data-label="fig:image1"}](bipartite_graph){width="1.5in"}
\[exmp1\] Consider the BNSI problem with $n=4$ information symbols, $m=3$ users, and user demand index sets ${\mathcal{X}}_1=\{1,2,3\}$, ${\mathcal{X}}_2=\{2,3,4\}$, ${\mathcal{X}}_3=\{1,3,4\}$. The bipartite graph in Fig. \[fig:image1\] describes this scenario where ${\mathcal{U}}=\{u_1,u_2,u_3\}$, ${\mathcal{P}}=\{x_1,x_2,x_3,x_4\}$, and $\mathcal{E}$= $\{\{u_1,x_1\}$,$\{u_1,x_2\}$,$\{u_1,x_3\}$,$\{u_2,x_2\}$,\
$\{u_2,x_3\}$,$\{u_2,x_4\}$,$\{u_3,x_1\}$,$\{u_3,x_3\}$,$\{u_3,x_4\}\}$
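For illustration, the bipartite graph of this example can be held in a few lines of Python (a representation of our own choosing, not prescribed by the paper); the assertion checks that the degree of each user node equals $|{\mathcal{X}}_i|$.

```python
# A lightweight set/dictionary representation of the bipartite graph B of Example 1
# (our own illustration); the degree of each user node equals |X_i|.
demands = {"u1": {"x1", "x2", "x3"},
           "u2": {"x2", "x3", "x4"},
           "u3": {"x1", "x3", "x4"}}

edges = {(u, x) for u, Xi in demands.items() for x in Xi}
packets = {x for _, x in edges}

for u, Xi in demands.items():
    assert sum(1 for e in edges if e[0] == u) == len(Xi)   # deg(u_i) = |X_i|
print(len(demands), "users,", len(packets), "packets,", len(edges), "edges")
```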
Design Criterion for the Encoder Matrix {#sec3}
=======================================
Here, we derive a necessary and sufficient condition for a matrix ${\bf L} \in {\mathds{F}}_q^{n \times N}$ to be a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem over ${\mathds{F}}_q$. We now define the set ${\mathcal{I}}(q,m,n,{\mathcal{X}},\delta_s)$ of vectors ${\bf z}$ of length $n$ such that $1 \leq wt({{\bf{z}}}_{{\mathcal{X}}_i}) \leq 2\delta_s$ for some choice of $i \in [m]$, i.e., $$\label{ical_def}
{\mathcal{I}}(q,m,n,{\mathcal{X}},\delta_s)=\bigcup\limits_{i=1}^{m} \left\{\,{{\bf{z}}}\in {\mathds{F}}_q^n~|~ 1 \leq wt({{\bf{z}}}_{{\mathcal{X}}_i}) \leq 2\delta_s \, \right\}.$$ When there is no ambiguity we will denote ${\mathcal{I}}(q,m,n,{\mathcal{X}},\delta_s)$ simply as ${\mathcal{I}}$.
\[thm1\] A matrix ${{\bf{L}}}\in{\mathds{F}}_q^{n \times N}$ is a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem if and only if $${\bf zL} \neq {\bf 0},\quad \forall {{\bf{z}}}\in {\mathcal{I}}(q,m,n,{\mathcal{X}},\delta_s).$$
The encoding function $\mathfrak{E}({{\bf{x}}})={\bf xL}$ is valid for the given BNSI problem if and only if for each $i \in [m]$, user $u_i$ can uniquely determine ${\bf x}_{{\mathcal{X}}_i}$ from the received codeword ${\bf xL}$ and the side information ${{\bf{x}}}_{{\mathcal{X}}_i} + {{\pmb{\epsilon}}}_i$. Hence, for two distinct values of the demanded message ${{\bf{x}}}_{{\mathcal{X}}_i}$ and ${{\bf{x}}}'_{{\mathcal{X}}_i}$, if the noise vectors ${{\pmb{\epsilon}}}_i$ and ${{\pmb{\epsilon}}}'_i$ are such that the noisy side information at $u_i$ is identical (i.e., ${{\bf{x}}}_{{\mathcal{X}}_i} + {{\pmb{\epsilon}}}_i = {{\bf{x}}}'_{{\mathcal{X}}_i} + {{\pmb{\epsilon}}}'_i$) then the corresponding transmit codewords ${\bf xL}$ and ${\bf x'L}$ must be distinct for $u_i$ to distinguish the message ${\bf x}_{{\mathcal{X}}_i}$ from ${{{\bf{x}}}'_{{\mathcal{X}}_i}}$. Equivalently, the condition ${\bf xL} \neq {\bf x'L}$ should hold for every pair ${{\bf{x}}},{{\bf{x}}}' \in {\mathds{F}}_q^n$ such that ${{\bf{x}}}_{{\mathcal{X}}_i} \neq {{\bf{x}}}'_{{\mathcal{X}}_i}$ and ${{\bf{x}}}_{{\mathcal{X}}_i}+{{\pmb{\epsilon}}_{i}}={{\bf{x}}}'_{{\mathcal{X}}_i}+{{\pmb{\epsilon}}_{i}}'$ for some choice of ${{\pmb{\epsilon}}_{i}},{{\pmb{\epsilon}}_{i}}' \in {\mathds{F}}_q^{|{\mathcal{X}}_i|}$ with $wt({{\pmb{\epsilon}}_{i}}), wt({{\pmb{\epsilon}}_{i}}') \leq \delta_s$. Therefore, ${{\bf{L}}}$ is a valid encoder matrix if and only if $${{\bf{x}}}{{\bf{L}}}\neq {{\bf{x}}}'{{\bf{L}}}\label{equ1}$$ for every ${{\bf{x}}},{{\bf{x}}}' \in {\mathds{F}}_q^n$ such that ${{\bf{x}}}_{{\mathcal{X}}_i} \neq {{\bf{x}}}'_{{\mathcal{X}}_i}$ and ${{\bf{x}}}_{{\mathcal{X}}_i}+{{\pmb{\epsilon}}_{i}}={{\bf{x}}}'_{{\mathcal{X}}_i}+{{\pmb{\epsilon}}_{i}}'$, $wt({{\pmb{\epsilon}}_{i}}), wt({{\pmb{\epsilon}}_{i}}') \leq \delta_s$, for some $i \in [m]$ and ${{\pmb{\epsilon}}_{i}},{{\pmb{\epsilon}}_{i}}' \in {\mathds{F}}_q^{|{\mathcal{X}}_i|}$. Denoting ${{\bf{z}}}={{\bf{x}}}-{{\bf{x}}}'$, the condition in (\[equ1\]) can be reformulated as ${{\bf{z}}}{{\bf{L}}}\neq {\bf 0}$ for all ${{\bf{z}}}\in {\mathds{F}}_q^n$ such that ${{\bf{z}}}_{{\mathcal{X}}_i} \neq {\bf 0}$ and ${{\bf{z}}}_{{\mathcal{X}}_i}={{\pmb{\epsilon}}_{i}}'-{{\pmb{\epsilon}}_{i}}$, $wt({{\pmb{\epsilon}}_{i}}) \leq \delta_s$, $wt({{\pmb{\epsilon}}_{i}}') \leq \delta_s$, for some choice of $i \in [m]$ and ${{\pmb{\epsilon}}_{i}},{{\pmb{\epsilon}}_{i}}' \in {\mathds{F}}_q^{|{\mathcal{X}}_i|}$. Equivalently, ${{\bf{z}}}{{\bf{L}}}\neq {\bf 0}$ when $1 \leq wt({{\bf{z}}}_{{\mathcal{X}}_i}) \leq 2\delta_s$ for some $i \in [m]$. The statement of the theorem then follows.
\[exmp2\] Consider the $(3,4,{\mathcal{X}},1)$-BNSI problem of Example \[exmp1\] with the field size $q=2$. It is straightforward to verify that $\mathcal{I}={\mathds{F}}_2^4\setminus\{\pmb{0},\pmb{1}\}$, where $\pmb{0}$ and $\pmb{1}$ denote the all zero and all one vector respectively in ${\mathds{F}}_2^4$. Let $$\label{eq:exmp2}
{{\bf{L}}}=
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
1 & 1 & 1
\end{bmatrix}.$$ It is easy to check that $\forall {{\bf{z}}}\in \mathcal{I}$, ${{\bf{z}}}{{\bf{L}}}\neq {\bf 0}$ because $wt({{\bf{z}}})$ is either $1,2$ or $3$ and any $3$ rows of ${{\bf{L}}}$ are linear independent. Hence, the matrix ${\bf L}$ in is a valid encoder matrix, and this coding scheme with codelength $N=3$ saves $1$ channel use with respect to uncoded transmission. It can be verified that no $4 \times 2$ binary matrix satisfies the criteria of Theorem \[thm1\] for this problem, and hence, $N_{2,opt}=3$.
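The validity check of Theorem \[thm1\] can be carried out by brute force for small instances. The following Python sketch (our own illustration; demand sets are written with 0-based indices) enumerates all ${{\bf{z}}}\in{\mathds{F}}_2^4$, builds the set ${\mathcal{I}}$, and verifies ${{\bf{z}}}{{\bf{L}}}\neq{\bf 0}$ for the encoder matrix of this example.

```python
# Brute-force check of Theorem 1 for Example 2 (q = 2, n = 4, delta_s = 1;
# demand sets written with 0-based indices; all names are ours).
import itertools
import numpy as np

X = [{0, 1, 2}, {1, 2, 3}, {0, 2, 3}]
delta_s = 1
L = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 1]], dtype=int)

def in_I(z):
    """z belongs to I iff 1 <= wt(z_{X_i}) <= 2*delta_s for some receiver i."""
    return any(1 <= sum(z[j] for j in Xi) <= 2 * delta_s for Xi in X)

valid = all(np.any((np.array(z) @ L) % 2)           # zL != 0 over F_2
            for z in itertools.product((0, 1), repeat=4) if in_I(z))
print("L is a valid encoder matrix:", valid)         # expected: True
```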
We now provide a restatement of Theorem \[thm1\] in terms of the span of the rows of submatrices of ${\bf L}$. Towards this we first introduce some notation. For each $i \in [m]$, let ${\mathcal{Y}}_i=[n]\setminus {\mathcal{X}}_i$. The set ${\mathcal{Y}}_i$ is the index set of messages that are not demanded by $u_i$. For any ${\mathcal{A}}\subseteq [n]$, ${\bf L}_{{\mathcal{A}}}$ is the matrix consisting of the rows of ${{\bf{L}}}$ with indices given in ${\mathcal{A}}$.
\[corr1\] ${{\bf{L}}}$ is a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem if and only if for every $i \in [m]$, any non-zero linear combination of any $2\delta_s$ or fewer rows of ${\bf{L}}_{\mathcal{X}_i}$ does not belong to $rowspan\{{{\bf{L}}}_{\mathcal{Y}_i}\}$.
From Theorem \[thm1\], ${\bf L}$ is a valid encoder matrix if and only if ${\bf zL} \neq 0$ whenever $1 \leq wt({\bf z}_{{\mathcal{X}}_i}) \leq 2\delta_s$ for some choice of $i \in [m]$. Since ${\bf zL} = {{\bf{z}}}_{{\mathcal{X}}_i}{{\bf{L}}}_{{\mathcal{X}}_i} + {{\bf{z}}}_{{\mathcal{Y}}_i}{{\bf{L}}}_{{\mathcal{Y}}_i}$, we have ${{\bf{z}}}_{{\mathcal{X}}_i}{{\bf{L}}}_{{\mathcal{X}}_i} \neq -{{\bf{z}}}_{{\mathcal{Y}}_i}{{\bf{L}}}_{{\mathcal{Y}}_i}$ for any ${{\bf{z}}}_{{\mathcal{X}}_i} \neq {\bf 0}$ with $wt({{\bf{z}}}_{{\mathcal{X}}_i}) \leq 2\delta_s$ and for any ${{\bf{z}}}_{{\mathcal{Y}}_i} \in {\mathds{F}}_q^{|{\mathcal{Y}}_i|}$. The corollary then immediately follows from this observation.
\[corr2\] If ${{\bf{L}}}$ is a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem then any $2\delta_s$ or fewer rows of ${{\bf{L}}}_{{\mathcal{X}}_i}$ are linearly independent for every $i \in [m]$.
From Corollary \[corr1\], if ${\bf L}$ is a valid encoder matrix any non-zero linear combination of $2\delta_s$ or fewer rows of ${\bf L}_{{\mathcal{X}}_i}$ is not in $rowspan\{{{\bf{L}}}_{{\mathcal{Y}}_i}\}$. Since ${\bf 0} \in rowspan\{{{\bf{L}}}_{{\mathcal{Y}}_i}\}$, ${{\bf{z}}}_{{\mathcal{X}}_i}{{\bf{L}}}_{{\mathcal{X}}_i} \neq {\bf 0}$ if $1 \leq wt({{\bf{z}}}_{{\mathcal{X}}_i}) \leq 2\delta_s$. Therefore, any $2\delta_s$ or fewer rows of ${{\bf{L}}}_{{\mathcal{X}}_i}$ must be linearly independent.
Syndrome Decoding {#sec4}
=================
We now propose a decoding procedure for linear coding schemes for an arbitrary $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem which uses a concept similar to *syndrome decoding* for linear error correcting codes. Consider a code for the $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem generated by a valid encoder matrix ${{\bf{L}}}\in {\mathds{F}}_q^{n \times N}$. The user $u_i$, $i \in [m]$, receives the codeword ${{\bf{x}}}{{\bf{L}}}={{\bf{x}}}_{{\mathcal{X}}_i}{{\bf{L}}}_{{\mathcal{X}}_i}+{{\bf{x}}}_{{\mathcal{Y}}_i}{{\bf{L}}}_{{\mathcal{Y}}_i}$ and also possesses the erroneous demanded information symbol vector ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}={{\bf{x}}}_{{\mathcal{X}}_i}+ {{\pmb{\epsilon}}_{i}}$, where ${{\pmb{\epsilon}}_{i}}\in {\mathds{F}}_q^{|{\mathcal{X}}_i|}$ and $wt({{\pmb{\epsilon}}_{i}})\leq \delta_s$.
Considering user $u_i$, suppose $\beta_i \subseteq {\mathcal{Y}}_i$ denotes the index set of the rows of ${{\bf{L}}}$ that form a basis for $rowspan\{{{\bf{L}}}_{{\mathcal{Y}}_i}\}$. Therefore $rowspan\{{{\bf{L}}}_{\beta_i}\}$ = $rowspan\{{{\bf{L}}}_{{\mathcal{Y}}_i}\}$, and ${{\bf{L}}}_{\beta_i}$ has linearly independent rows. So we can write ${{\bf{x}}}_{{\mathcal{Y}}_i}{{\bf{L}}}_{{\mathcal{Y}}_i}={{\pmb{\beta}}}{{\bf{L}}}_{\beta_i}$ for some ${{\pmb{\beta}}}\in {\mathds{F}}_q^{|\beta_i|}$. Hence, the received codeword is ${{\bf{c}}}= {{\bf{x}}}{{\bf{L}}}={{\bf{x}}}_{{\mathcal{X}}_i}{{\bf{L}}}_{{\mathcal{X}}_i} + {{\pmb{\beta}}}{{\bf{L}}}_{\beta_i}$. Note that ${{\pmb{\beta}}}{{\bf{L}}}_{\beta_i}$ is the interference at receiver $u_i$ due to the undesired messages ${{\bf{x}}}_{{\mathcal{Y}}_i}$. Regarding $rowspan\{{{\bf{L}}}_{\beta_i}\}$ as a linear code of length $N$ and dimension $|\beta_i|$ over ${\mathds{F}}_q$, let ${{\bf{H}}}_i \in {\mathds{F}}_q^{(N-|\beta_i|) \times N}$ be a parity check matrix of $rowspan\{{{\bf{L}}}_{\beta_i}\}$. Since ${{\bf{L}}}_{\beta_i}$ is a generator matrix of this code, we have ${{\bf{H}}}_i{{\bf{L}}}^T_{\beta_i}={\bf 0}$.
The syndrome decoder at $u_i$ functions as follows. Given the codeword ${{\bf{c}}}$ and the noisy side information ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$, the receiver first computes $$\begin{aligned}
{{\bf{y}}}' &= {{\bf{x}}}_{{\mathcal{X}}_i}^{e}{{\bf{L}}}_{{\mathcal{X}}_i} - {{\bf{c}}}= ({{\bf{x}}}_{{\mathcal{X}}_i}+ {{\pmb{\epsilon}}_{i}}){{\bf{L}}}_{{\mathcal{X}}_i} - ({{\bf{x}}}_{{\mathcal{X}}_i}{{\bf{L}}}_{{\mathcal{X}}_i}+ {{\pmb{\beta}}}{{\bf{L}}}_{\beta_i})\\
&= {{\pmb{\epsilon}}_{i}}{{\bf{L}}}_{{\mathcal{X}}_i} - {{\pmb{\beta}}}{{\bf{L}}}_{\beta_i}. \end{aligned}$$ In order to remove the interference from ${{\pmb{\beta}}}$, the receiver multiplies ${{{\bf{y}}}'}^T$ with ${{\bf{H}}}_i$ to obtain the *syndrome* $$\pmb{b}_i^T = {{\bf{H}}}_i {{{\bf{y}}}'}^T= {{\bf{H}}}_i {{{\bf{L}}}_{{\mathcal{X}}_i}}^T {{{\pmb{\epsilon}}_{i}}}^T - {{\bf{H}}}_i{{\bf{L}}}^T_{\beta_i} {{\pmb{\beta}}}^T = {{\bf{H}}}_i {{{\bf{L}}}_{{\mathcal{X}}_i}}^T {{{\pmb{\epsilon}}_{i}}}^T.$$ Defining ${{\textbf{A}}}_i={{\bf{H}}}_i {{{\bf{L}}}_{{\mathcal{X}}_i}}^T$, we have ${{\textbf{A}}}_i {{\pmb{\epsilon}}_{i}}^T=\pmb{b}^T_i$. Given the syndrome $\pmb{b}_i$ and the matrix ${{\textbf{A}}}_i$, the receiver must identify the error vector ${{\pmb{\epsilon}}_{i}}$. We now show that $\pmb{b}_i={{\textbf{A}}}_i {{\pmb{\epsilon}}_{i}}^T$ uniquely determines ${{\pmb{\epsilon}}_{i}}$ provided $wt({{\pmb{\epsilon}}_{i}}) \leq \delta_s$.
\[lmm11\] If ${{\bf{L}}}$ is a valid encoder matrix and ${{\pmb{\epsilon}}_{i}}$ and ${{\pmb{\epsilon}}_{i}}'$ are distinct vectors in ${\mathds{F}}_q^{|{\mathcal{X}}_i|}$ each with Hamming weight at the most $\delta_s$, then ${{\textbf{A}}}_i{{\pmb{\epsilon}}_{i}}^T \neq {{\textbf{A}}}_i{{\pmb{\epsilon}}_{i}}'^T$.
Proof by contradiction. Suppose $\exists$ ${{\pmb{\epsilon}}_{i}},{{\pmb{\epsilon}}_{i}}' \in {\mathds{F}}_q^{|{\mathcal{X}}_i|}$ such that ${{\pmb{\epsilon}}_{i}}\neq {{\pmb{\epsilon}}_{i}}'$ and $wt({{\pmb{\epsilon}}_{i}}), wt({{\pmb{\epsilon}}_{i}}')\leq \delta_s$, which satisfy ${{\textbf{A}}}_i{{\pmb{\epsilon}}_{i}}^T = {{\textbf{A}}}_i{{\pmb{\epsilon}}_{i}}'^T$. Then we have $$\begin{aligned}
&{{\bf{H}}}_i {{{\bf{L}}}_{{\mathcal{X}}_i}}^T {{\pmb{\epsilon}}_{i}}^T = {{\bf{H}}}_i {{{\bf{L}}}_{{\mathcal{X}}_i}}^T {{\pmb{\epsilon}}_{i}}'^T\\
\Rightarrow ~&{{\bf{H}}}_i (({{\pmb{\epsilon}}_{i}}-{{\pmb{\epsilon}}_{i}}'){{\bf{L}}}_{{\mathcal{X}}_i})^T = \pmb{0}\\
\Rightarrow ~&({{\pmb{\epsilon}}_{i}}-{{\pmb{\epsilon}}_{i}}'){{\bf{L}}}_{{\mathcal{X}}_i} \in rowspan\{{{\bf{L}}}_{\beta_i}\} = rowspan\{{{\bf{L}}}_{{\mathcal{Y}}_i}\}.
\end{aligned}$$ Assuming ${{\pmb{\epsilon}}_{i}}\neq {{\pmb{\epsilon}}_{i}}'$, we have ${{\pmb{\epsilon}}_{i}}-{{\pmb{\epsilon}}_{i}}' \neq {\bf 0}$ and $wt({{\pmb{\epsilon}}_{i}}-{{\pmb{\epsilon}}_{i}}') \leq 2\delta_s$. This implies that there exists a non-zero linear combination of $2\delta_s$ or fewer rows of ${{\bf{L}}}_{{\mathcal{X}}_i}$ that belongs to $rowspan\{{{\bf{L}}}_{{\mathcal{Y}}_i}\}$, which contradicts the necessary and sufficient criterion (Corollary \[corr1\]) for ${{\bf{L}}}$ to be a valid encoder matrix.
Now Lemma \[lmm11\] leads us to the following syndrome decoding procedure: given the received codeword ${{\bf{c}}}$ and side information ${{\bf{x}}}_{{\mathcal{X}}_{i}}^{e}$, the receiver first computes the syndrome $\pmb{b}_i^T = {{\bf{H}}}_i ({{\bf{x}}}_{{\mathcal{X}}_i}^e{{\bf{L}}}_{{\mathcal{X}}_i} -{{\bf{c}}})^T$, and then identifies, either by exhaustive search or by using a look up table, the unique vector ${\hat{\pmb{\epsilon}}}\in {\mathds{F}}_q^{|{\mathcal{X}}_i|}$ of weight at the most $\delta_s$ that satisfies ${{\textbf{A}}}_i {\hat{\pmb{\epsilon}}}^T = \pmb{b}_i^T$. If the Hamming weight of the noise ${{\pmb{\epsilon}}}_i$ is at the most $\delta_s$, then the estimate ${\hat{\pmb{\epsilon}}}$ equals ${{\pmb{\epsilon}}_{i}}$, and the receiver retrieves the demanded message through ${{\bf{x}}}_{{\mathcal{X}}_i} = {{\bf{x}}}_{{\mathcal{X}}_i}^{e} - {\hat{\pmb{\epsilon}}}$. The algorithm for *syndrome decoding* is given in Algorithm 1 which is valid for any $i \in [m]$. Similar to the syndrome decoding procedure of a general linear error correcting code, the proposed algorithm relies on an exhaustive search (or a look up table) to identify the unique solution of weight at the most $\delta_s$ to the linear equation ${{\textbf{A}}}_i {\hat{\pmb{\epsilon}}}= \pmb{b}_i^T$. We are yet to address the problem of designing coding schemes that admit efficient low-complexity implementations of syndrome decoding.
**Input: ${{\bf{c}}}$, ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$, ${{\bf{L}}}$, ${{\bf{H}}}_i$, ${{\textbf{A}}}_i$**\
**Output**: An estimate ${\hat{\pmb{\epsilon}}}$ of the error vector ${{\pmb{\epsilon}}_{i}}$\
**Procedure**
- Step 1: Compute ${{\bf{y}}}'= {{\bf{x}}}_{{\mathcal{X}}_i}^e{{\bf{L}}}_{{\mathcal{X}}_i} - {{\bf{c}}}$
- Step 2: Compute syndrome $\pmb{b}_i^T = {{\bf{H}}}_i {{\bf{y}}}'^T $
- Step 3: Calculate ${{\textbf{A}}}_i {{\pmb{\epsilon}}}^T, \forall {{\pmb{\epsilon}}}\in {\mathds{F}}_q^{|{\mathcal{X}}_i|}$ with $wt({{\pmb{\epsilon}}}) \leq \delta_s$. Among these vectors identify a vector ${\hat{\pmb{\epsilon}}}$ that satisfies ${{\textbf{A}}}_i {\hat{\pmb{\epsilon}}}^T=\pmb{b}_i^T$
\[exmp3\] We now consider syndrome decoding at user $u_1$ (i.e., $i=1$) for the BNSI problem of Example \[exmp1\] with the binary ($q=2$) encoder matrix ${{\bf{L}}}$ given in in Example \[exmp2\]. For $u_1$, we have ${\mathcal{X}}_1=\{1,2,3\}$, ${\mathcal{Y}}_1=\{4\}$, ${{\bf{L}}}_{{\mathcal{X}}_1}={\bf I}_3$ and ${{\bf{L}}}_{{\mathcal{Y}}_1}=(1~1~1)$. In this case, the rows indexed by $\beta_1={\mathcal{Y}}_1=\{4\}$ form a basis for $rowspan\{{{\bf{L}}}_{{\mathcal{Y}}_1}\}$. A parity check matrix for $rowspan\{{{\bf{L}}}_{\beta_1}\}$ is $${{\bf{H}}}_1=
\begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1
\end{bmatrix}.$$ The corresponding ${{\textbf{A}}}_1$ matrix is ${{\textbf{A}}}_1= {{\bf{H}}}_1 {{\bf{L}}}_{{\mathcal{X}}_1}^T = {{\bf{H}}}_1 \, {\bf I}_3 = {{\bf{H}}}_1$. The value of ${{\textbf{A}}}_1 {{\pmb{\epsilon}}}^T$ for all possible ${{\pmb{\epsilon}}}$ of weight at the most $\delta_s=1$ is given in the following look up table.
${{\pmb{\epsilon}}}$ (0 0 0) (0 0 1) (0 1 0) (1 0 0)
----------------------------------------- ----------- ----------- ----------- -----------
${{\textbf{A}}}_1 {{\pmb{\epsilon}}}^T$ (0 0)$^T$ (1 1)$^T$ (0 1)$^T$ (1 0)$^T$
Note that the syndrome ${{\textbf{A}}}_1{{\pmb{\epsilon}}}^T$ is distinct for each possible error vector ${{\pmb{\epsilon}}}$.
Suppose ${{\bf{x}}}= (1~0~0~1)$, i.e., the message vector demanded by $u_1$ is ${{\bf{x}}}_{{\mathcal{X}}_1}=(1~0~0)$. The transmitter will transmit the codeword ${{\bf{c}}}={{\bf{x}}}{{\bf{L}}}=$ (0 1 1). Suppose user $u_1$ has the erroneous demanded information symbol vector ${{\bf{x}}}_{{\mathcal{X}}_1}^{e}=(1~0~1)$, i.e., ${{\pmb{\epsilon}}}_1=(0~0~1)$. User $u_1$ will calculate the syndrome $\pmb{b}_1^T={{\bf{H}}}_1({{\bf{x}}}_{{\mathcal{X}}_1}^{e}{{\bf{L}}}_{{\mathcal{X}}_1} - {{\bf{c}}})^T = (1~1)^T$. Using the syndrome look up table, the decoder will output ${\hat{\pmb{\epsilon}}}=(0~0~1)$ as the estimated error vector. This is subtracted from ${{\bf{x}}}_{{\mathcal{X}}_1}^{e}=(1~0~1)$ to obtain the estimate $(1~0~0)$ of the demanded message ${{\bf{x}}}_{{\mathcal{X}}_1}$.
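The steps of Algorithm 1 for this example can be reproduced with a few lines of Python (an illustrative implementation over ${\mathds{F}}_2$; all variable names are ours).

```python
# Illustrative F_2 implementation of Algorithm 1 for user u_1, reproducing Example 3.
import itertools
import numpy as np

L   = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=int)
X1  = [0, 1, 2]                         # 0-based demand set of u_1
LX1 = L[X1, :]
H1  = np.array([[1, 0, 1], [0, 1, 1]], dtype=int)   # parity check matrix chosen above
A1  = (H1 @ LX1.T) % 2
delta_s = 1

def syndrome_decode(c, x_e):
    """Return the estimated error vector of weight at most delta_s (Steps 1-3)."""
    b = (H1 @ ((x_e @ LX1 - c) % 2)) % 2
    for eps in itertools.product((0, 1), repeat=len(X1)):
        eps = np.array(eps)
        if eps.sum() <= delta_s and np.array_equal((A1 @ eps) % 2, b):
            return eps
    raise ValueError("no admissible error vector found")

x   = np.array([1, 0, 0, 1])
c   = (x @ L) % 2                        # transmitted codeword (0 1 1)
x_e = np.array([1, 0, 1])                # noisy side information at u_1
eps_hat = syndrome_decode(c, x_e)
print("estimated error :", eps_hat)                  # (0 0 1)
print("decoded message :", (x_e - eps_hat) % 2)      # (1 0 0)
```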
Characterization of Networks with $N_{q,opt} < n$ {#sec5}
=================================================
We remarked in Section \[sec2\] that uncoded transmission ${{\bf{L}}}={\bf I}_n$ is a valid linear coding scheme where the number of channel uses $N$ is equal to the length $n$ of the message vector. It is important to identify the subset of BNSI problems for which this uncoded transmission is optimal (i.e., $N_{q,opt}=n$), or equivalently, characterize the family of networks where linear coding provides strict gains over uncoded transmission (i.e., $N_{q,opt}<n$). This will allow us to identify the key structural properties of BNSI problems that lead to performance gains through network coding and will be helpful in conceiving systematic constructions of explicit encoder matrices.
Preliminaries
-------------
We now derive a few results based on which we formulate a necessary and sufficient condition for a BNSI problem to have $N_{q,opt}=n$.
\[lemma1\] If ${{\bf{L}}}$ is a valid encoder matrix for an $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem and if $|{\mathcal{X}}_i| \leq 2\delta_s$, for some $i \in [m]$, then $rank({{\bf{L}}}_{{\mathcal{X}}_i})=|{\mathcal{X}}_i|$, $rowspan({{\bf{L}}}_{{\mathcal{X}}_i}) \cap rowspan({{\bf{L}}}_{{\mathcal{Y}}_i}) =\{{\bf 0}\}$, and $rank({{\bf{L}}}) = rank({{\bf{L}}}_{{\mathcal{X}}_i}) + rank({{\bf{L}}}_{{\mathcal{Y}}_i}) = |{\mathcal{X}}_i| + rank({{\bf{L}}}_{{\mathcal{Y}}_i})$.
Follows immediately from Corollaries \[corr1\] and \[corr2\] using the fact that the number of rows of ${{\bf{L}}}_{{\mathcal{X}}_i}$ is not more than $2\delta_s$, and using the observation that the rows of ${{\bf{L}}}_{{\mathcal{X}}_i}$ are linearly independent and their span intersects trivially with the span of the rows of ${{\bf{L}}}_{{\mathcal{Y}}_i}$.
From Lemma \[lemma1\], if $|{\mathcal{X}}_i| \leq 2\delta_s$, the rows of ${{\bf{L}}}$ corresponding to the message vectors ${{\bf{x}}}_{{\mathcal{X}}_i}$ and those corresponding to ${{\bf{x}}}_{{\mathcal{Y}}_i}$ are linearly independent. Hence, when encoded using ${{\bf{L}}}$ the message symbols ${{\bf{x}}}_{{\mathcal{Y}}_i}$ do not interfere with the detection of the symbols in ${{\bf{x}}}_{{\mathcal{X}}_i}$.
### Subproblems of a given BNSI problem {#sec5_subproblem}
Let $(m,n,{\mathcal{X}},\delta_s)$ be any given BNSI problem. Consider the $(m',n',{\mathcal{X}}',\delta_s)$-BNSI problem derived from $(m,n,{\mathcal{X}},\delta_s)$ by removing the symbols ${{\bf{x}}}_{{\mathcal{X}}_i}$, for some $i \in [m]$, from the demands of all the receivers. The derived problem has $m'=m-1$ users, one corresponding to each $j \in [m] \setminus \{i\}$. The demand of the $j^{\text{th}}$ user is ${\mathcal{X}}'_j = {\mathcal{X}}_j \setminus {\mathcal{X}}_i = {\mathcal{X}}_j \cap {\mathcal{Y}}_i$, and ${\mathcal{X}}'=({\mathcal{X}}'_j, j \in [m] \setminus \{i\})$. The vector of information symbols for the new problem is ${{\bf{x}}}_{[n]\setminus {\mathcal{X}}_i} = {{\bf{x}}}_{{\mathcal{Y}}_i}$, and the number of message symbols is $n'=n-|{\mathcal{X}}_i|$. The bipartite graph ${\mathcal{B}}'=({\mathcal{U}}',{\mathcal{P}}',{\mathcal{E}}')$ for the derived problem will consist of the user-set ${\mathcal{U}}'={\mathcal{U}}\setminus \{u_i\}$, information symbol-set ${\mathcal{P}}'=\{x_k | k \notin {\mathcal{X}}_i\}$, and edge set ${\mathcal{E}}' = \{ \, \{u_j,x_k\} \in {\mathcal{E}}\,| \, k \notin {\mathcal{X}}_i \,\}$. Note that ${\mathcal{B}}'$ is the subgraph of ${\mathcal{B}}$ induced by the nodes $\{x_k | k \notin {\mathcal{X}}_i\}$.
\[lemma\_subproblem\] If ${{\bf{L}}}$ is a valid encoder matrix for $(m,n,{\mathcal{X}},\delta_s)$, then ${{\bf{L}}}_{{\mathcal{Y}}_i}$ is a valid encoder matrix for $(m',n',{\mathcal{X}}',\delta_s)$.
For any $j \in [m] \setminus \{i\}$ we have ${\mathcal{Y}}'_j = {\mathcal{Y}}_i \cap {\mathcal{Y}}_j \subseteq {\mathcal{Y}}_j$ and ${\mathcal{X}}'_j = {\mathcal{X}}_j \cap {\mathcal{Y}}_i \subset {\mathcal{X}}_j$. From Corollary \[corr1\], any non-zero linear combination of $2\delta_s$ or fewer rows of ${{\bf{L}}}_{{\mathcal{X}}_j}$ does not belong to $rowspan\{{{\bf{L}}}_{{\mathcal{Y}}_j}\}$. Since $rowspan\{{{\bf{L}}}_{{\mathcal{Y}}'_j}\} \subset rowspan\{{{\bf{L}}}_{{\mathcal{Y}}_j}\}$ and ${{\bf{L}}}_{{\mathcal{X}}'_j}$ is a submatrix of ${{\bf{L}}}_{{\mathcal{X}}_j}$, we deduce that any non-zero linear combination of $2\delta_s$ or fewer rows of ${{\bf{L}}}_{{\mathcal{X}}'_j}$ is not in $rowspan\{{{\bf{L}}}_{{\mathcal{Y}}'_j}\}$. Lemma \[lemma\_subproblem\] then follows from Corollary \[corr1\].
### A simple coding scheme for a family of BNSI problems {#sec:simple_coding}
Consider any BNSI problem $(m,n,{\mathcal{X}},\delta_s)$ where $|{\mathcal{X}}_i| \geq 2\delta_s + 1$ for all $i \in [m]$, i.e., $|{\mathcal{Y}}_i| \leq n - 2\delta_s - 1$. We will now provide a simple coding scheme with $N=n-1$ for any such problem. Let ${{\bf{L}}}\in {\mathds{F}}_q^{n \times (n-1)}$ be such that its first $(n-1)$ rows form the identity matrix ${\bf I}_{n-1}$ and the last row is the all-one vector ${\bf 1}=(1~1~\cdots~1) \in {\mathds{F}}_q^{(n-1)}$. Observe that any $(n-1)$ rows of ${{\bf{L}}}$ are linearly independent. We now show that ${{\bf{L}}}$ satisfies the condition in Theorem \[thm1\]. For any ${{\bf{z}}}\in {\mathcal{I}}$, there exists an $i \in [m]$ such that $wt({{\bf{z}}}_{{\mathcal{X}}_i}) \leq 2\delta_s$. Using , $$\begin{aligned}
wt({{\bf{z}}}) &= wt({{\bf{z}}}_{{\mathcal{X}}_i}) + wt({{\bf{z}}}_{{\mathcal{Y}}_i})
\leq 2\delta_s + n - 2\delta_s - 1 = n-1.\end{aligned}$$ Since $1 \leq wt({{\bf{z}}}) \leq n-1$ and any $(n-1)$ rows of ${{\bf{L}}}$ are linearly independent, ${{\bf{z}}}{{\bf{L}}}\neq {\bf 0}$. This proves that ${{\bf{L}}}$ is a valid encoder matrix for this problem. We do not claim that this scheme is optimal; however, it is useful in proving the main result of this section. The linear code in Example \[exmp2\] is an instance of this coding scheme.
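To make the construction concrete, the following minimal sketch builds the above ${{\bf{L}}}$ over ${\mathds{F}}_2$ and brute-forces the criterion of Theorem \[thm1\] (${{\bf{z}}}{{\bf{L}}}\neq {\bf 0}$ for every ${{\bf{z}}}\in {\mathcal{I}}$). The instance $(n,{\mathcal{X}},\delta_s)$ at the bottom is hypothetical, chosen only so that $|{\mathcal{X}}_i| \geq 2\delta_s+1$ for all $i$; the code is an illustrative check, not part of the construction itself.

```python
# Minimal sketch over F_2 (assumed field); demand sets are 0-based Python sets.
import itertools
import numpy as np

def simple_encoder(n):
    """I_{n-1} stacked on an all-one row: the N = n-1 scheme described above."""
    return np.vstack([np.eye(n - 1, dtype=int), np.ones((1, n - 1), dtype=int)])

def is_valid(L, X, delta_s, n):
    """Brute-force the criterion z L != 0 for all z in I (feasible for small n only)."""
    for z in itertools.product([0, 1], repeat=n):
        z = np.array(z)
        # z belongs to I iff wt(z restricted to X_i) lies in {1, ..., 2*delta_s} for some i
        if any(1 <= z[sorted(Xi)].sum() <= 2 * delta_s for Xi in X):
            if not np.any((z @ L) % 2):          # z L == 0 over F_2 -> criterion violated
                return False
    return True

n, delta_s = 5, 1                                # hypothetical instance
X = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}]            # every |X_i| >= 2*delta_s + 1
L = simple_encoder(n)
print(L.shape, is_valid(L, X, delta_s, n))       # (5, 4) True
```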
Characterization of networks with $N_{q,opt} < n$ {#sec5b}
-------------------------------------------------
Suppose a bipartite graph ${\mathcal{B}}=({\mathcal{U}},{\mathcal{P}},{\mathcal{E}})$ represents an $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem. We now define a collection $\Phi({\mathcal{B}})$ of subsets of information symbol indices. A non-empty set ${\mathsf{C}}\subset [n]$ belongs to $\Phi({\mathcal{B}})$ if and only if the subgraph ${\mathcal{B}}'=({\mathcal{U}}',{\mathcal{P}}',{\mathcal{E}}')$ of ${\mathcal{B}}$ induced by the packet nodes ${\mathcal{P}}_{{\mathsf{C}}}=\{x_k \, | \, k \in {\mathsf{C}}\}$ has the following property: $deg(u)\geq 2\delta_s+1$ for all $u \in {\mathcal{U}}'$, where $deg(u)$ is the number of edges incident on the vertex $u$. Equivalently, $\Phi(\mathcal{B})$ is the collection of all non-empty ${\mathsf{C}}\subset [n]$ such that $$\text{for every } i \in [m],~|{\mathcal{X}}_i \cap {\mathsf{C}}| \notin [2\delta_s],~\text{i.e., either}~|{\mathcal{X}}_i \cap {\mathsf{C}}|=0~\text{or}~|{\mathcal{X}}_i \cap {\mathsf{C}}|\geq 2\delta_s+1.$$
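Since membership in $\Phi({\mathcal{B}})$ is used repeatedly in what follows, a small helper that tests a candidate index set via the equivalent condition above may be useful. The interface (demand sets as $0$-based Python sets) is our own convention and the instances are made up for illustration.

```python
def in_Phi(C, X, delta_s):
    """Is the non-empty index set C an element of Phi(B)?  Uses the condition
    |X_i & C| = 0 or |X_i & C| >= 2*delta_s + 1 for every user i."""
    return bool(C) and all(len(Xi & C) == 0 or len(Xi & C) >= 2 * delta_s + 1
                           for Xi in X)

# With delta_s = 1: every user must demand none or at least 3 of the symbols in C.
print(in_Phi({0, 1, 2}, [{0, 1, 2}, {3, 4}], delta_s=1))   # True
print(in_Phi({0, 1},    [{0, 1, 2}, {3, 4}], delta_s=1))   # False (|X_1 & C| = 2)
```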
\[lem:suff\] If $\Phi({\mathcal{B}})$ is empty, i.e., $\Phi({\mathcal{B}})=\phi$, then $N_{q,opt}=n$.
Let ${{\bf{L}}}$ be an optimal encoder matrix with $N_{q,opt}$ columns. Since $\Phi(\mathcal{B})=\phi$, there does not exist any non-empty ${\mathsf{C}}\subset [n]$ such that $|{\mathcal{X}}_i \cap {\mathsf{C}}| \notin [2\delta_s]$ for every $i \in [m]$. In particular, choosing ${\mathsf{C}}=[n]$ we deduce that there exists at least one user $u_{i_1}$, $i_1 \in [m]$, such that $|{\mathcal{X}}_{i_1}| \in [2\delta_s]$, i.e., $|{\mathcal{X}}_{i_1}| \leq 2\delta_s$. By Lemma \[lemma1\], $rank({{\bf{L}}})=rank({{\bf{L}}}_{{\mathcal{X}}_{i_1}}) + rank({{\bf{L}}}_{{\mathcal{Y}}_{i_1}}) = |{\mathcal{X}}_{i_1}| + rank({{\bf{L}}}_{{\mathcal{Y}}_{i_1}})$. Removing the information symbols ${{\bf{x}}}_{{\mathcal{X}}_{i_1}}$ from the problem $(m,n,{\mathcal{X}},\delta_s)$, we obtain a derived BNSI problem $(m^{(1)},n^{(1)},{\mathcal{X}}^{(1)},\delta_s)$ (see Lemma \[lemma\_subproblem\], Section \[sec5\_subproblem\]) where $m^{(1)}=m-1$, $n^{(1)}=n-|{\mathcal{X}}_{i_1}|$ and ${\mathcal{X}}^{(1)}=({\mathcal{X}}_j \cap {\mathcal{Y}}_{i_1}, j \neq i_1)$. From Lemma \[lemma\_subproblem\], the matrix ${{\bf{L}}}^{(1)}={{\bf{L}}}_{{\mathcal{Y}}_{i_1}}$ is a valid encoder for this problem. The bipartite graph ${\mathcal{B}}^{(1)}$ of the derived problem is a subgraph of ${\mathcal{B}}$. Since $\Phi({\mathcal{B}})$ is empty, it follows from the definition of $\Phi$ that $\Phi({\mathcal{B}}^{(1)})$ is empty as well. Also, $rank({{\bf{L}}}) = |{\mathcal{X}}_{i_1}| + rank({{\bf{L}}}^{(1)})$.
Since $\Phi({\mathcal{B}}^{(1)})$ is empty, the arguments used with the original problem ${\mathcal{B}}$ in the previous paragraph hold for the derived problem ${\mathcal{B}}^{(1)}$ as well. Hence, there exists an $i_2 \in [m] \setminus \{i_1\}$ such that $rank({{\bf{L}}}^{(1)}) = |{\mathcal{X}}_{i_2} \setminus {\mathcal{X}}_{i_1}| + rank({{\bf{L}}}_{{\mathcal{Y}}_{i_1} \cap {\mathcal{Y}}_{i_2}})$, and ${{\bf{L}}}^{(2)}={{\bf{L}}}_{{\mathcal{Y}}_{i_1} \cap {\mathcal{Y}}_{i_2}}$ is a valid encoder matrix for the problem $(m^{(2)},n^{(2)},{\mathcal{X}}^{(2)},\delta_s)$ derived from $(m^{(1)},n^{(1)},{\mathcal{X}}^{(1)},\delta_s)$ by removing ${{\bf{x}}}_{{\mathcal{X}}_{i_2} \setminus {\mathcal{X}}_{i_1}}$. The bipartite graph ${\mathcal{B}}^{(2)}$ for this problem is a subgraph of ${\mathcal{B}}^{(1)}$, and hence satisfies $\Phi({\mathcal{B}}^{(2)}) = \phi$. Note that $rank({{\bf{L}}}) = |{\mathcal{X}}_{i_1}| + rank({{\bf{L}}}^{(1)}) = |{\mathcal{X}}_{i_1}| + |{\mathcal{X}}_{i_2} \setminus {\mathcal{X}}_{i_1}| + rank({{\bf{L}}}^{(2)}) = |{\mathcal{X}}_{i_1} \cup {\mathcal{X}}_{i_2}| + rank({{\bf{L}}}^{(2)})$.
We continue this process until the number of remaining information symbols is at the most $2\delta_s$. Say this happens in the $t^{\text{th}}$ iteration. Then, the matrix ${{\bf{L}}}^{(t)}={{\bf{L}}}_{{\mathcal{Y}}_{i_1} \cap \cdots \cap {\mathcal{Y}}_{i_t}}$ is a valid encoder matrix for the $t^{\text{th}}$ derived BNSI problem $(m^{(t)},n^{(t)},{\mathcal{X}}^{(t)},\delta_s)$, and $rank({{\bf{L}}}) = |{\mathcal{X}}_{i_1} \cup \cdots \cup {\mathcal{X}}_{i_t}| + rank({{\bf{L}}}^{(t)})$. Since ${{\bf{L}}}^{(t)}$ has at the most $2\delta_s$ rows, from Corollary \[corr1\], all the rows of ${{\bf{L}}}^{(t)}$ are linearly independent, and hence, $rank({{\bf{L}}}^{(t)})=|{\mathcal{Y}}_{i_1} \cap \cdots \cap {\mathcal{Y}}_{i_t}| = n - |{\mathcal{X}}_{i_1} \cup \cdots \cup {\mathcal{X}}_{i_t}|$. It then follows that $rank({{\bf{L}}})=n$. Thus, the number of columns $N_{q,opt}$ of ${{\bf{L}}}$ satisfies $N_{q,opt} \geq rank({{\bf{L}}}) \geq n$. Since the uncoded scheme (${{\bf{L}}}={\bf I}_n$) is a valid encoder, we also have $N_{q,opt} \leq n$, thereby proving that $N_{q,opt}=n$.
We will now show that $N_{q,opt}=n$ only if $\Phi({\mathcal{B}}) = \phi$.
\[lem:necess\] If $\Phi({\mathcal{B}}) \neq \phi$, then $N_{q,opt} < n$.
Here we will provide a constructive proof where we will design a valid coding scheme with $N < n$. Since $\Phi({\mathcal{B}})$ is non-empty, there exists a non-empty ${\mathsf{C}}\subset [n]$ such that for each $i \in [m]$ either $|{\mathcal{X}}_i \cap {\mathsf{C}}| \geq 2\delta_s + 1$ or ${\mathcal{X}}_i \cap {\mathsf{C}}= \phi$. The proposed linear coding scheme partitions the transmit codeword ${{\bf{c}}}$ into two parts $({{\bf{c}}}_1~{{\bf{c}}}_2)$. The vector ${{\bf{c}}}_1$ carries the symbols ${{\bf{x}}}_{[n]\setminus {\mathsf{C}}}$ uncoded, i.e., ${{\bf{c}}}_1 = {{\bf{x}}}_{[n] \setminus {\mathsf{C}}}$. When ${{\bf{c}}}_2$ is broadcast, we will assume all the receivers know the value of ${{\bf{x}}}_{[n]\setminus {\mathsf{C}}}$. Thus, the problem of designing the second part of the code transmission, wherein the symbols ${{\bf{x}}}_{{\mathsf{C}}}$ must be delivered to the receivers, is identical to the BNSI problem ${\mathcal{B}}'=({\mathcal{U}}',{\mathcal{P}}',{\mathcal{E}}')$ with information symbol-set ${\mathcal{P}}'=\{ x_j \, | j \in {\mathsf{C}}\}$, user-set ${\mathcal{U}}'=\{ u_i \, | \, {\mathcal{X}}_i \cap {\mathsf{C}}\neq \phi\}$ and demands ${\mathcal{X}}' = ({\mathcal{X}}_i \cap {\mathsf{C}}, \forall u_i \in {\mathcal{U}}')$. Since ${\mathsf{C}}\in \Phi({\mathcal{B}})$, ${\mathcal{X}}_i \cap {\mathsf{C}}\neq \phi$ implies $|{\mathcal{X}}_i \cap {\mathsf{C}}| \geq 2\delta_s + 1$. Thus, the demand set of every receiver in the problem ${\mathcal{B}}'$ has cardinality at least $2\delta_s+1$. By using the coding scheme of Section \[sec:simple\_coding\] for the problem ${\mathcal{B}}'$, we require a code length of $|{\mathcal{P}}'|-1=|{\mathsf{C}}|-1$ for the vector ${{\bf{c}}}_2$. Hence, the codelength $N$ of the overall coding scheme is the sum of the lengths of ${{\bf{c}}}_1$ and ${{\bf{c}}}_2$, i.e., $N = n - |{\mathsf{C}}| + |{\mathsf{C}}| - 1 = n - 1$. We conclude that $N_{q,opt} < n$.
The main result of this section follows immediately from Lemmas \[lem:suff\] and \[lem:necess\].
\[thm2\] For an $(m,n,{\mathcal{X}},\delta_s)$-BNSI problem represented by the bipartite graph ${\mathcal{B}}$, $N_{q,opt} = n$ if and only if $\Phi({\mathcal{B}}) = \phi$.
An algorithm to determine if $\Phi({\mathcal{B}})$ is empty {#sec5c}
-----------------------------------------------------------
We now propose a simple iterative procedure, given in Algorithm 2, which determines whether $\Phi(\mathcal{B})=\phi$ for a given bipartite graph $\mathcal{B}=(\mathcal{U},\mathcal{P},\mathcal{E})$. The idea behind Algorithm 2 is to find a subset $\mathsf{P}_{{\mathsf{C}}} \subseteq \mathcal{P}$ for which each user-node in the subgraph induced by the information symbol-set $\mathsf{P}_{{\mathsf{C}}}$ has degree either $0$ or at least $2\delta_s+1$. The procedure in Algorithm 2 proceeds as follows:
- *Initialization*: Initialize $\mathsf{B}=(\mathsf{U},\mathsf{P},\mathsf{E})$, where $\mathsf{U}={\mathcal{U}}$, $\mathsf{P}={\mathcal{P}}$, $\mathsf{E}={\mathcal{E}}$.
- *Step 1*: Check whether every user-node in $\mathsf{U}$ has degree at least $2\delta_s+1$ (it cannot be $0$, because each user has a non-empty demanded information symbol index set and user nodes of degree zero are removed in Step 2). If true, then $\{j \, | \, x_j \in \mathsf{P} \,\} \in \Phi(\mathcal{B})$ and $\Phi({\mathcal{B}})$ is non-empty. If false, proceed to Step 2.
- *Step 2*: Find a user-node $u_i$ with $1 \leq deg(u_i) \leq 2\delta_s$. Modify the graph $\mathsf{B}$ by removing the packet nodes $\{x_j \, | \, j \in {\mathcal{X}}_i\}$ and all the edges incident on these packet nodes. Then, remove any user node with zero degree. If $|\mathsf{P}| \leq 2\delta_s$, declare $\Phi({\mathcal{B}})=\phi$; else go to Step 1.
**Input**: $\mathcal{B}=(\mathcal{U},\mathcal{P},\mathcal{E})$, $\delta_s$\
**Output**: [TRUE]{} if $\Phi(\mathcal{B})=\phi$; [FALSE]{} otherwise, together with one element $\mathsf{C} \in \Phi({\mathcal{B}})$\
[Initialization:]{} $\mathsf{U} \leftarrow \mathcal{U}$, $\mathsf{P} \leftarrow \mathcal{P}$, $\mathsf{E} \leftarrow \mathcal{E}$, bipartite graph $\mathsf{B}=(\mathsf{U},\mathsf{P},\mathsf{E})$\
[Iteration:]{}\
**while** there exists a user node $u_i \in \mathsf{U}$ with $1 \leq deg(u_i) \leq 2\delta_s$ **do**\
$\quad$ remove the packet nodes $\{x_j \, | \, j \in {\mathcal{X}}_i\}$ and all edges incident on them from $\mathsf{B}$; remove every user node of degree $0$\
$\quad$ **if** $|\mathsf{P}| \leq 2\delta_s$ **then** `output` TRUE; `return`; % $|\mathsf{P}| \leq 2\delta_s$, hence $\Phi(\mathcal{B}) =\phi$\
**end while**\
`output` FALSE and $\mathsf{C}=\{j \, | \, x_j \in \mathsf{P}\}$; `return`; % every remaining user node has degree at least $2\delta_s+1$
The correctness of the algorithm follows from the observation that the subgraph of $\mathsf{B}$ obtained in Step 2 by removing the packet nodes $\{x_j | j \in {\mathcal{X}}_i\}$ has non-empty $\Phi$ if and only if the set $\Phi(\mathsf{B})$ of the original graph $\mathsf{B}$ is itself non-empty. This is due to the fact that any member of $\Phi(\mathsf{B})$ will contain no elements from ${\mathcal{X}}_i$ since the degree of $u_i$ is at the most $2\delta_s$.
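The following Python sketch mirrors Steps 1 and 2 of Algorithm 2. The data representation (a set of surviving symbol indices and a list of demand sets) is our own convention, and the function is only an illustration of the procedure, not an optimized implementation.

```python
def phi_is_empty(symbols, X, delta_s):
    """Algorithm 2 on the subgraph induced by `symbols` (a set of 0-based indices).
    Returns (True, None) if Phi is empty, else (False, C) with C an element of Phi."""
    P = set(symbols)
    U = [set(Xi) & P for Xi in X]
    U = [Xi for Xi in U if Xi]                       # drop users of degree 0
    while True:
        # Step 1: look for a user node of degree 1, ..., 2*delta_s
        low = next((Xi for Xi in U if 1 <= len(Xi & P) <= 2 * delta_s), None)
        if low is None:
            return False, P                          # every user has degree >= 2*delta_s + 1
        # Step 2: remove the packets demanded by that user, then degree-0 users
        P -= low
        U = [Xi for Xi in U if Xi & P]
        if len(P) <= 2 * delta_s:
            return True, None                        # Phi(B) is empty

# delta_s = 1: the first hypothetical instance has Phi non-empty, the second has Phi empty.
print(phi_is_empty(range(5), [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}], 1))  # (False, {0, 1, 2, 3, 4})
print(phi_is_empty(range(4), [{0, 1}, {2, 3}, {1, 2}], 1))           # (True, None)
```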
Consider the scenario mentioned in Example \[exmp2\]. Applying Algorithm 2 we obtain , so . A valid encoding and decoding scheme over ${\mathds{F}}_2$ with codelength $3$ for this scenario is given in Example \[exmp3\]. Now consider the following scenario where , , , , , and . Applying Algorithm 2 again, we conclude that $\Phi(\mathcal{B})=\phi$ for this scenario, and therefore $N_{q,opt}=n=5$.
Bounds on $N_{q,opt}$ and some code constructions
=================================================
Until now we have not described any systematic construction of an encoder matrix ${{\bf{L}}}$ or any methodology for calculating the optimal codelength $N_{q,opt}$ for a general $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem. In this section we present some lower bounds on the optimal codelength $N_{q,opt}$ and constructions of encoder matrices ${{\bf{L}}}$ for general $(m,n,{\mathcal{X}},\delta_s)$ BNSI problems. These constructions also provide upper bounds on $N_{q,opt}$.
Lower Bounds on $N_{q,opt}$ {#lb}
---------------------------
Here we describe two lower bounds on $N_{q,opt}$: one is based on the size of the *demanded information symbol index set* of each user in a given BNSI problem, and the other is based on the set $\Phi$ defined on subgraphs of the bipartite graph representing the BNSI problem. First, we derive a result that will help to obtain the lower bounds on the optimal codelength described in the two subsequent subsections.
Consider a bipartite graph $\mathcal{B}=(\mathcal{U},\mathcal{P},\mathcal{E})$ that represents the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem. For any $\rho \subseteq [n]$, let $x_{\rho}=\{x_j~|~j \in \rho\}$. We derive a subgraph $\mathcal{B'}=(\mathcal{U'},\mathcal{P'},\mathcal{E'})$ from $\mathcal{B}$ induced by the *information set* $\mathcal{P'}=x_{\rho}$, where $\mathcal{U'}=\{u_i \in \mathcal{U}~|~{\mathcal{X}}_i \cap \rho \neq \phi\}$ and $\mathcal{E'}=\{\{u_i,x_j\}\in \mathcal{E}~|~x_j \in \mathcal{P'}, u_i \in \mathcal{U'}\}$. The bipartite graph $\mathcal{B'}=(\mathcal{U'},\mathcal{P'},\mathcal{E'})$ represents the $(m',n',{\mathcal{X}}',\delta_s)$ BNSI problem, where $m'=|\mathcal{U'}|$, $n'=|\mathcal{P'}|$ and ${\mathcal{X}}'$ is the tuple $({\mathcal{X}}'_i={\mathcal{X}}_i \cap \rho,~\forall u_i \in \mathcal{U'})$. In other words, the $(m',n',{\mathcal{X}}',\delta_s)$ BNSI subproblem is derived from the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem by deleting some information symbols from the *information symbol set* of the original BNSI problem.
\[subprob\] Let $N_{q,opt}(m',n',{\mathcal{X}}',\delta_s)$ be the optimal codelength over ${\mathds{F}}_q$ for the $(m',n',{\mathcal{X}}',\delta_s)$ BNSI problem. Then $N_{q,opt}(m',n',{\mathcal{X}}',\delta_s)$ satisfies the following property, $$N_{q,opt}(m',n',{\mathcal{X}}',\delta_s) \leq N_{q,opt}(m,n,{\mathcal{X}},\delta_s).$$
In the $(m',n',{\mathcal{X}}',\delta_s)$ BNSI subproblem, the size of the *demanded information symbol index set* of each user is reduced compared to the original BNSI problem. Consider a valid encoder matrix ${{\bf{L}}}$ with optimal codelength $N_{q,opt}(m,n,{\mathcal{X}},\delta_s)$ for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem. In the $(m',n',{\mathcal{X}}',\delta_s)$ BNSI subproblem represented by the subgraph $\mathcal{B'}=(\mathcal{U'},\mathcal{P'},\mathcal{E'})$ of $\mathcal{B}$, any user $u_i \in \mathcal{U'}$ has ${\mathcal{X}}'_i = {\mathcal{X}}_i \cap \rho \subseteq {\mathcal{X}}_i$ and ${\mathcal{Y}}'_i ={\mathcal{Y}}_i \cap \rho \subset {\mathcal{Y}}_i$. As *rowspan*$\{{{\bf{L}}}_{{\mathcal{Y}}'_i}\} \subset$ *rowspan*$\{{{\bf{L}}}_{{\mathcal{Y}}_i}\}$ and ${{\bf{L}}}_{{\mathcal{X}}'_i}$ is a submatrix of ${{\bf{L}}}_{{\mathcal{X}}_i}$, any non-zero linear combination of $2\delta_s$ or fewer rows of ${{\bf{L}}}_{{\mathcal{X}}'_i}$ is not in *rowspan*$\{{{\bf{L}}}_{{\mathcal{Y}}'_i}\}$. Therefore, using Corollary \[corr1\], we conclude that the submatrix ${{\bf{L}}}_{\rho}$ is a valid encoder matrix for the $(m',n',{\mathcal{X}}',\delta_s)$ BNSI problem with codelength $N_{q,opt}(m,n,{\mathcal{X}},\delta_s)$. Thus the optimal codelength $N_{q,opt}(m',n',{\mathcal{X}}',\delta_s)$ for the $(m',n',{\mathcal{X}}',\delta_s)$ BNSI problem does not exceed $N_{q,opt}(m,n,{\mathcal{X}},\delta_s)$.
### Lower bound based on size of the demanded information symbol index set of each user {#lb_size}
Consider an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem represented by the bipartite graph $\mathcal{B}$. Now we obtain the following lower bound.
\[lowerbound1\] Suppose $S=\{i \in [m]~\vert~|\mathcal{X}_i| \in [2\delta_s]\}$ and let ${\mathcal{X}}_S=\bigcup_{i\in S}{{\mathcal{X}}_i}$. Then the optimal codelength $N_{q,opt}$ over ${\mathds{F}}_q$ for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem satisfies $$N_{q,opt} \geq |{\mathcal{X}}_S|+ \min\{2\delta_s,n-|{\mathcal{X}}_S|\}.$$
To derive the lower bound, we will first show that for any subgraph $\mathcal{B'}$ of $\mathcal{B}$ induced by the information symbols indexed by ${\mathcal{X}}_S$ and any $\min\{2\delta_s,n-|{\mathcal{X}}_S|\}$ of the remaining information symbols, the set $\Phi(\mathcal{B'})=\phi$. Then from Theorem \[thm2\] the optimal codelength $N_{q,opt}(\mathcal{B'})$ over ${\mathds{F}}_q$ for the subgraph $\mathcal{B'}$ will be $|{\mathcal{X}}_S|+ \min\{2\delta_s,n-|{\mathcal{X}}_S|\}$, and then using Lemma \[subprob\], we have $N_{q,opt}(\mathcal{B}) \geq N_{q,opt}(\mathcal{B'})=|{\mathcal{X}}_S|+ \min\{2\delta_s,n-|{\mathcal{X}}_S|\}$.
Now, to show $\Phi(\mathcal{B'})=\phi$, we use Algorithm 2. Algorithm 2 first takes the bipartite graph $\mathcal{B'}$ as input and checks whether the size of its *information symbol set* is greater than $2\delta_s$ or not. There are two cases: *Case I.* $S=\phi$, and *Case II.* $S \neq \phi$.
*Case I:* If $S=\phi$, then $|{\mathcal{X}}_S|=0$. In this case the size of the *information symbol set* of $\mathcal{B'}$ is $\min\{2\delta_s,n\}$, which is at the most $2\delta_s$. As a result, for this case $\Phi(\mathcal{B'})=\phi$ from Algorithm 2.
*Case II:* If $S \neq \phi$, then $|{\mathcal{X}}_S|>0$. Hence, the size of the *information symbol set* of $\mathcal{B'}$ could be at least $2\delta_s+1$. If so, the bipartite graph $\mathcal{B'}$ will go through the iteration steps in the *while* loop in Algorithm 2. In each step, one *user node* with index from $S$ and its associated *demanded information symbols* will be removed from the bipartite graph $\mathcal{B'}$, since the degree of each of these user nodes is at the most $2\delta_s$. After removing all the information symbols indexed by ${\mathcal{X}}_S$, the number of packets remaining in the graph will be at the most $\min\{2\delta_s,n-|{\mathcal{X}}_S|\}$, which is at the most $2\delta_s$. Therefore the algorithm will conclude that $\Phi(\mathcal{B'})=\phi$.
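The lower bound of Theorem \[lowerbound1\] is straightforward to evaluate; a short sketch, using the same hypothetical set-based representation as before, is given below.

```python
def lower_bound_size(n, X, delta_s):
    """|X_S| + min(2*delta_s, n - |X_S|), with S = {i : |X_i| in [2*delta_s]}."""
    S = [Xi for Xi in X if 1 <= len(Xi) <= 2 * delta_s]
    X_S = set().union(*S) if S else set()
    return len(X_S) + min(2 * delta_s, n - len(X_S))

# Hypothetical instance: n = 6, delta_s = 1, one small demand set {0, 1}.
print(lower_bound_size(6, [{0, 1}, {1, 2, 3, 4}, {0, 2, 3, 4, 5}], 1))   # 2 + 2 = 4
```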
### Lower bound based on the set $\Phi({\mathcal{B}})$ {#lb_phi}
Using Theorem \[thm2\], we now provide another lower bound on $N_{q,opt}$ of a BNSI problem. We are interested in a subset $\mathsf{B} \subseteq [n]$ such that the subgraph induced by $x_{\mathsf{B}} \subseteq \mathcal{P}$ denoted by $\mathcal{B}_{x_{\mathsf{B}}}$ satisfies $\Phi(\mathcal{B}_{x_{\mathsf{B}}})=\phi$. Suppose $\mathsf{B}_{max}$ denotes such a $\mathsf{B}$ with largest size. Now the following lower bound holds.
\[bmax\] The optimal codelength $N_{q,opt}$ over ${\mathds{F}}_q$ of the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem satisfies, $$N_{q,opt}(m,n,{\mathcal{X}},\delta_s) \geq |\mathsf{B}_{max}|.$$
From Theorem \[thm2\], for any choice of $\mathsf{B}$ with $\Phi(\mathcal{B}_{x_{\mathsf{B}}})=\phi$, the optimal codelength for the BNSI problem represented by the subgraph induced by $x_{\mathsf{B}}$ is $|\mathsf{B}|$. As $\mathsf{B}_{max}$ denotes such a $\mathsf{B}$ with largest size, it holds that $|\mathsf{B}| \leq |\mathsf{B}_{max}|$ for all $\mathsf{B}$ such that $\Phi(\mathcal{B}_{x_{\mathsf{B}}})=\phi$. Now using Lemma \[subprob\], $N_{q,opt}(m,n,{\mathcal{X}},\delta_s) \geq |\mathsf{B}_{max}|$.
We now derive a lemma that provides a comparison between the two lower bounds given in Theorems \[lowerbound1\] and \[bmax\].
\[compare\] Let $\mathsf{B}_{max}$ be a largest subset of $[n]$ such that $\Phi(\mathcal{B}_{x_{\mathsf{B}_{max}}})=\phi$. Also let ${\mathcal{X}}_S=\bigcup_{i\in S}{{\mathcal{X}}_i}$, where $S=\{i \in [m]~\vert~|\mathcal{X}_i| \in [2\delta_s]\}$. Then, $|\mathsf{B}_{max}| \geq |{\mathcal{X}}_S|+ \min\{2\delta_s,n-|{\mathcal{X}}_S|\}$.
In the proof of Theorem \[lowerbound1\], we have already shown that for any subgraph $\mathcal{B'}$ of $\mathcal{B}$ induced by the information symbols indexed by ${\mathcal{X}}_S$ and any $\min\{2\delta_s,n-|{\mathcal{X}}_S|\}$ of the remaining information symbols, the set $\Phi(\mathcal{B'})=\phi$. Therefore these information symbols constitute a set $\mathsf{B}$ such that $\Phi(\mathcal{B}_{x_{\mathsf{B}}})=\phi$. Since $\mathsf{B}_{max}$ is a set of largest size among all $\mathsf{B}$ with the property $\Phi(\mathcal{B}_{x_{\mathsf{B}}})=\phi$, the inequality in Lemma \[compare\] holds.
From Lemma \[compare\], we can remark that, given an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem, the lower bound on the optimal codelength $N_{q,opt}$ found in Theorem \[bmax\] is at least as good as the lower bound found in Theorem \[lowerbound1\]. However, the lower bound in Theorem \[lowerbound1\] can be calculated easily, while we do not know of an efficient technique to compute $|\mathsf{B}_{max}|$.
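Although we know of no efficient way to compute $|\mathsf{B}_{max}|$, for very small instances it can be found by exhaustive search directly from the definition. The sketch below (exponential in $n$, illustration only, with a made-up instance) reuses the membership test given earlier.

```python
import itertools

def in_Phi(C, X, delta_s):                 # membership test, as sketched earlier
    return bool(C) and all(len(Xi & C) == 0 or len(Xi & C) >= 2 * delta_s + 1
                           for Xi in X)

def b_max_size(n, X, delta_s):
    """Largest |B| such that Phi of the subgraph induced by B is empty."""
    for r in range(n, 0, -1):
        for B in itertools.combinations(range(n), r):
            # Phi of the induced subgraph is empty iff no non-empty subset C of B is in Phi(B)
            if not any(in_Phi(set(C), X, delta_s)
                       for k in range(1, r + 1)
                       for C in itertools.combinations(B, k)):
                return r
    return 0

# Hypothetical instance with n = 4, delta_s = 1: |B_max| = 3.
print(b_max_size(4, [{0, 1, 2}, {1, 2, 3}, {0, 2, 3}], 1))   # 3
```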
\[lbexp\] Consider the BNSI problem scenario mentioned in Example \[exmp1\]. For this problem scenario $|{\mathcal{X}}_S|=0$, hence from Theorem \[lowerbound1\] we have $N_{q,opt} \geq \min\{2\delta_s,n\}=2$. Also we can check that any subset of $\{1,2,3,4\}$ of size $3$, i.e., $\{1,2,3\}$, $\{1,3,4\}$, $\{2,3,4\}$, serves as $\mathsf{B}_{max}$. So, from Theorem \[bmax\] $N_{q,opt} \geq |\mathsf{B}_{max}|=3$. A valid encoding and decoding scheme over ${\mathds{F}}_2$ is given in Example \[exmp3\] that meets this lower bound for this scenario. Further, this scheme can be easily generalized to any finite field ${\mathds{F}}_q$. Hence, $N_{q,opt}=3$ for this problem for any ${\mathds{F}}_q$.
Construction of encoder matrix ${{\bf{L}}}$ based on linear error correcting codes {#linecc}
----------------------------------------------------------------------------------
In this subsection, we describe a construction of a valid encoder matrix ${{\bf{L}}}$ for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem based on linear error correcting codes over ${\mathds{F}}_q$. Consider a parity check matrix ${{\bf{H}}}$ of an $[n',k']$ linear error correcting code over ${\mathds{F}}_q$, where $n'$ and $k'$ denote the blocklength and the dimension of the code, respectively. Let $d_{min}$ be the *minimum distance* of the code. Then any set of $(d_{min}-1)$ columns of ${{\bf{H}}}$ is linearly independent and at least one set of $d_{min}$ columns is linearly dependent [@huffman_2010]. Define $\eta =2\delta_s+ \max_{i \in [m]}{|{\mathcal{Y}}_i|}$, where ${\mathcal{Y}}_i$ is the index set of the messages that are not demanded by the $i^{th}$ user in the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem. Now if $d_{min} \geq \eta+1$, $n'=n$ and ${{\bf{L}}}={{\bf{H}}}^T$, the following lemma holds.
If ${{\bf{H}}}$ is a parity check matrix of an $[n,k',d_{min}]$ code over ${\mathds{F}}_q$ with $d_{min} \geq 2\delta_s+ \max_{i \in [m]}{|{\mathcal{Y}}_i|}+1$, then ${{\bf{L}}}={{\bf{H}}}^T$ is a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem.
From Corollary \[corr1\] we know that, for ${{\bf{L}}}$ to be a valid encoder matrix for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem, it is sufficient that any $|{\mathcal{Y}}_i|+2\delta_s$ rows of ${{\bf{L}}}$ are linearly independent for each $i \in [m]$. Since $(|{\mathcal{Y}}_i|+2\delta_s) \leq \eta$ for every $i$, and since ${{\bf{L}}}={{\bf{H}}}^T$ with $d_{min} \geq \eta+1$ implies that any set of $\eta$ rows of ${{\bf{L}}}$ is linearly independent, this sufficient condition is met. In particular, for each $i \in [m]$, any $2\delta_s$ or fewer rows of ${{\bf{L}}}_{{\mathcal{X}}_i}$ and all the rows of ${{\bf{L}}}_{{\mathcal{Y}}_i}$ together form a linearly independent set. Therefore, ${{\bf{L}}}$ is a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem.
We can utilize a linear error correcting code having blocklength $n$ and $d_{min} \geq \eta+1$ over ${\mathds{F}}_q$ with maximum possible dimension $k'$ such that $n-k'$ is minimized. Then ${{\bf{L}}}$ will be the transpose of a parity check matrix of the error correcting code with codelength $N=(n-k')$.
Suppose $m=4$, $n=6$, ${\mathcal{X}}_1=\{1,2,3,4\}$, ${\mathcal{X}}_2=\{2,3,4,5\}$, ${\mathcal{X}}_3=\{1,3,4,5,6\}$, ${\mathcal{X}}_4=\{2,3,4,5,6\}$ and $\delta_s=1$. Therefore $\eta=2\delta_s+ \max_{i \in [m]}{|{\mathcal{Y}}_i|}=2+2=4$. We now use a $[6,k']$ linear error correcting code over ${\mathds{F}}_q$ with maximum possible $k'$ having $d_{min} \geq 5$. From [@codetables], we can find that such codes over ${\mathds{F}}_2$ are $[6,1,6]$ and $[6,1,5]$ and the resulting codelength $N$ for both the cases will be $5$. Over ${\mathds{F}}_5$ such a linear error correcting code is $[6,2,5]$ and the resulting codelength $N=4$.
Among all the linear error correcting codes over ${\mathds{F}}_q$ having blocklength $n$ and $d_{min}= \eta+1$, the dimension $k'$ will be maximum for *Maximum Distance Separable* (MDS) codes if such an MDS code exists over ${\mathds{F}}_q$. Suppose ${{\bf{L}}}$ is a valid encoder matrix for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem constructed based on the transpose of a parity check matrix ${{\bf{H}}}$ of an MDS code over ${\mathds{F}}_q$($q \geq n$) having blocklength $n'= n$ and $d_{min}= \eta+1$. Then the dimension of the code $k'= n'-d_{min}+1 = (n-\eta)^+ = (n-2\delta_s - \max_{i \in [m]}{|{\mathcal{Y}}_i|})^+=(n-2\delta_s - \max_{i \in [m]}{(n-|{\mathcal{X}}_i|)})^+=\min_{i \in [m]}{(|\mathcal{X}_i|-2\delta_s)^+}$, where $x^+=x~\text{for}~x \geq 0~\text{and}~x^+=0~\text{for}~x<0$.
\[exmp4\] Consider the BNSI problem scenario where $m=4$, $n=10$, $\delta_s=1$, ${\mathcal{X}}_1=\{1,3,5,7,9\}$, ${\mathcal{X}}_2=\{2,4,6,8,10\}$, ${\mathcal{X}}_3=\{1,2,4,6,8,10\}$, ${\mathcal{X}}_4=\{3,4,5,6,7,9\}$. For this example, $k'=\min_{i \in [m]}{(|\mathcal{X}_i|-2\delta_s)^+}=3$. Over ${\mathds{F}}_{16}$ there exists a $[10,3]$ linear error correcting code with $d_{min}=8$, which is an MDS code. The transpose of the parity-check matrix of this code is a valid encoder matrix for the BNSI problem. Note that using this MDS code we save $3$ transmissions compared to the uncoded scheme.
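For reference, the MDS-based codelength $N=n-k'$ with $k'=\min_{i \in [m]}{(|\mathcal{X}_i|-2\delta_s)^+}$ can be computed directly. The snippet below is illustrative only and evaluates it on the instance of Example \[exmp4\], rewritten with $0$-based indices.

```python
def mds_codelength(n, X, delta_s):
    """Codelength n - k' of the MDS-based construction (requires q >= n)."""
    k = min(max(len(Xi) - 2 * delta_s, 0) for Xi in X)
    return n - k

# Example exmp4 with 0-based indices: k' = 3, so N = 10 - 3 = 7.
X = [{0, 2, 4, 6, 8}, {1, 3, 5, 7, 9}, {0, 1, 3, 5, 7, 9}, {2, 3, 4, 5, 6, 8}]
print(mds_codelength(10, X, 1))   # 7
```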
Upper Bounds on $N_{q,opt}$ {#ub}
----------------------------
Here, we describe three upper bounds on $N_{q,opt}$ for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem. The first is based on the code construction from linear error correcting codes given in Section \[linecc\], the second is based on *disjoint elements* of the set $\Phi({\mathcal{B}})$ defined over the bipartite graph ${\mathcal{B}}$ representing the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem, and the last is based on partitioning the set of information symbols.
### Upper bound based on linear error correcting codes {#ub_ecc}
From Section \[linecc\], we have a valid encoder matrix of an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem with codelength $N=n'-k'$ derived from an $[n',k']$ linear error correcting code having blocklength $n'=n$ and $d_{min} \geq \eta+1$ with maximum possible dimension $k'$. Let $k(q,n,d_{min})$ be the largest possible dimension among all linear error correcting codes over ${\mathds{F}}_q$ with blocklength $n$ and minimum distance at least $d_{min}$. Then we have $N_{q,opt} \leq n-k(q,n,d_{min})$. From this inequality condition, we now obtain an upper bound on the optimal codelength $N_{q,opt}$.
\[thm12\] The optimal codelength $N_{q,opt}$ over ${\mathds{F}}_q~(q \geq n)$ for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem satisfies $$N_{q,opt} \leq n-\min\limits_{i \in [m]}{(|\mathcal{X}_i|-2\delta_s)^+}.$$
The codelength of a valid coding scheme for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem based on linear error correcting codes, as given in Section \[linecc\], is minimized when the encoder matrix ${{\bf{L}}}$ is derived from an $[n',k']$ linear MDS code with blocklength $n'=n$, dimension $k'$ and $d_{min} = \eta+1$, provided such an MDS code exists. The dimension of such an MDS code is $k'=\min_{i \in [m]}{(|\mathcal{X}_i|-2\delta_s)^+}$. If $q \geq n$ then such an MDS code exists over ${\mathds{F}}_q$. Hence, the upper bound in Theorem \[thm12\] holds.
### Upper bound based on disjoint elements of $\Phi(\mathcal{B})$ {#ub_min}
Now we provide an upper bound on the optimal codelength $N_{q,opt}$ for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem represented by the bipartite graph $\mathcal{B}=(\mathcal{U},\mathcal{P},\mathcal{E})$. This upper bound is motivated by the *Cycle-Covering scheme* for Index Coding [@MAZ_IEEE_IT_13; @CASL_ISIT_11]. For each element ${\mathsf{C}}\in \Phi(\mathcal{B})$, the subgraph induced by $x_{{\mathsf{C}}}$, denoted as $\mathcal{B}_{{\mathsf{C}}}=(\mathcal{U}_{{\mathsf{C}}},x_{{\mathsf{C}}},\mathcal{E}_{{\mathsf{C}}})$, represents the $(m_{{\mathsf{C}}},n_{{\mathsf{C}}},{\mathcal{X}}_{{\mathsf{C}}},\delta_s)$ BNSI problem, where $m_{{\mathsf{C}}}=|\mathcal{U}_{{\mathsf{C}}}|=|\{u_i \in \mathcal{U}~|{\mathsf{C}}\cap {\mathcal{X}}_i \neq \phi\}|$, $n_{{\mathsf{C}}}=|{\mathsf{C}}|$, ${\mathcal{X}}_{{\mathsf{C}}}=\{{\mathcal{X}}_{{\mathsf{C}},i}~|{\mathcal{X}}_{{\mathsf{C}},i}={\mathcal{X}}_i \cap {\mathsf{C}}, \forall u_i \in \mathcal{U}_{{\mathsf{C}}}\}$ and $\mathcal{E}_{{\mathsf{C}}}=\{\{u_i,x_j\} \in \mathcal{E}~|u_i \in \mathcal{U}_{{\mathsf{C}}},~j \in {\mathsf{C}}\}$. Now, since ${\mathsf{C}}\in \Phi({\mathcal{B}})$, it can be noticed that for every $u_i \in \mathcal{U}_{{\mathsf{C}}}$ the degree of $u_i$ in ${\mathcal{B}}_{{\mathsf{C}}}$ satisfies $deg(u_i) \geq 2\delta_s+1$. Therefore we can use the simple coding scheme described in Section \[sec:simple\_coding\] on the $(m_{{\mathsf{C}}},n_{{\mathsf{C}}},{\mathcal{X}}_{{\mathsf{C}}},\delta_s)$ BNSI problem to save one transmission compared to uncoded transmission. Therefore the length of the code used to transmit all the information symbols indexed by ${\mathsf{C}}\subseteq [n]$ over ${\mathds{F}}_q$ is $N_{{\mathsf{C}}}=|{\mathsf{C}}|-1$. For some integer $K$, let ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$ and $R=[n] \setminus ({\mathsf{C}}_1 \cup {\mathsf{C}}_2 \cup \dots \cup {\mathsf{C}}_K)$. Given such a collection of elements of $\Phi({\mathcal{B}})$, we design a valid coding scheme as follows. We apply the coding scheme described in Section \[sec:simple\_coding\] on each element ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K$ and transmit the information symbols indexed by the set $R$ uncoded. The codelength for this scheme is $$\begin{aligned}
N &= \sum_{i=1}^{K}{(|{\mathsf{C}}_i|-1)}+|R|
= \sum_{i=1}^{K}{|{\mathsf{C}}_i|}-K+|R|.\end{aligned}$$
\[disjoint\] Let $N$ be the codelength of the linear coding scheme based on the set ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$. Then there exist disjoint ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_{K'} \in \Phi({\mathcal{B}})$ such that $K' \leq K$ and the codelength $N'$ of the linear coding scheme based on ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_{K'}$ is at the most $N$.
From the sets ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$, we construct $K$ sets ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_K$ as follows: ${\mathsf{C}}'_1={\mathsf{C}}_1$, ${\mathsf{C}}'_2={\mathsf{C}}_2 \setminus {\mathsf{C}}_1$, ${\mathsf{C}}'_3={\mathsf{C}}_3 \setminus ({\mathsf{C}}_1 \cup {\mathsf{C}}_2)$, $\dots$, ${\mathsf{C}}'_K={\mathsf{C}}_K \setminus ({\mathsf{C}}_1 \cup {\mathsf{C}}_2 \cup \dots \cup {\mathsf{C}}_{K-1})$. Note that ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots, {\mathsf{C}}'_K$ are disjoint and $|{\mathsf{C}}'_1| \leq |{\mathsf{C}}_1|$, $|{\mathsf{C}}'_2| \leq |{\mathsf{C}}_2|$, $\dots$, $|{\mathsf{C}}'_K| \leq |{\mathsf{C}}_K|$. We now categorize ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots, {\mathsf{C}}'_K$ into two groups ${\mathsf{C}}'$ and $R'$ as follows: if ${\mathsf{C}}'_i \in \Phi({\mathcal{B}})$, where $i \in [K]$, we place the set ${\mathsf{C}}'_i$ in the collection ${\mathsf{C}}'$; otherwise we place it in $R'$. Without loss of generality we assume that the first $K'$ sets, $K' \leq K$, among ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots, {\mathsf{C}}'_K$ belong to ${\mathsf{C}}'$. Then ${\mathsf{C}}'=\{{\mathsf{C}}'_1, {\mathsf{C}}'_2, \dots, {\mathsf{C}}'_{K'}\}$ and $R'={\mathsf{C}}'_{K'+1} \cup {\mathsf{C}}'_{K'+2} \cup \dots \cup {\mathsf{C}}'_{K}$. Let $R_{mod}=R \cup R'$. Note that $R$ and $R'$ are disjoint. We now design a valid coding scheme as follows: we apply the coding scheme described in Section \[sec:simple\_coding\] on each element of ${\mathsf{C}}'$ and send the information symbols indexed by the set $R_{mod}$ uncoded. Therefore the codelength for this scheme is $$\begin{aligned}
N' &= \sum_{i=1}^{K'}{(|{\mathsf{C}}'_i|-1)} + |R_{mod}|\\
&= \sum_{i=1}^{K'}{(|{\mathsf{C}}'_i|-1)} + |R| + |R'|\\
&= \sum_{i=1}^{K'}{(|{\mathsf{C}}'_i|-1)} + |R| + \sum_{i=K'+1}^{K}{|{\mathsf{C}}'_i|} \\
&= \sum_{i=1}^{K}{|{\mathsf{C}}'_i|}-K' + |R|.\end{aligned}$$ For any $K' < i \leq K$, the set ${\mathsf{C}}'_i$ is not an element of $\Phi({\mathcal{B}})$ while ${\mathsf{C}}_i$ is, so ${\mathsf{C}}'_i \subsetneq {\mathsf{C}}_i$ and hence $|{\mathsf{C}}_i|-|{\mathsf{C}}'_i| \geq 1$. Since $|{\mathsf{C}}_i| \geq |{\mathsf{C}}'_i|$ for every $i \in [K]$, we have $$\begin{aligned}
\sum_{i=K'+1}^{K}{\left(|{\mathsf{C}}_i|-|{\mathsf{C}}'_i|\right)} &\geq (K-K'), \text{ and thus}\\
\sum_{i=1}^{K}{\left(|{\mathsf{C}}_i|-|{\mathsf{C}}'_i|\right)} &\geq (K-K'),~\text{i.e.,}\\
\sum_{i=1}^{K}{|{\mathsf{C}}'_i|}-K' &\leq \sum_{i=1}^{K}{|{\mathsf{C}}_i|}-K.\end{aligned}$$ Therefore $N'= \sum_{i=1}^{K}{|{\mathsf{C}}'_i|}-K' + |R| \leq \sum_{i=1}^{K}{|{\mathsf{C}}_i|}-K +|R|=N$. Hence the lemma holds.
Now applying our designed coding scheme on disjoint elements of $\Phi({\mathcal{B}})$ we have the following upper bound on the optimal codelength $N_{q,opt}$.
\[thm13\] Let $\mathfrak{C}$ be a largest collection of disjoint elements of $\Phi({\mathcal{B}})$ for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem and $\mathfrak{R}=[n] \setminus \bigcup_{{\mathsf{C}}\in \mathfrak{C}}{\mathsf{C}}$. The optimal codelength $N_{q,opt}$ for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem over ${\mathds{F}}_q$ satisfies $$N_{q,opt} \leq n-|\mathfrak{C}|.$$
Applying the coding scheme mentioned in Section \[sec:simple\_coding\] on each element of $\mathfrak{C}$, we save one transmission per element compared to the uncoded scheme, while the information symbols indexed by $\mathfrak{R}$ are transmitted uncoded. Thereby we save $|\mathfrak{C}|$ transmissions in total, giving a codelength of $n-|\mathfrak{C}|$.
\[equality\] Let $\mathfrak{C}=\{{\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K\}$ and let $i_1 \in {\mathsf{C}}_1, i_2 \in {\mathsf{C}}_2,\dots, i_K \in {\mathsf{C}}_K$ be such that the subgraph ${\mathcal{B}}'$ of the bipartite graph ${\mathcal{B}}$ induced by ${\mathcal{P}}'={\mathcal{P}}\setminus \{x_{i_1},x_{i_2},\dots,x_{i_K}\}$ satisfies $\Phi({\mathcal{B}}')=\phi$. Then the optimal codelength $N_{q,opt}$ over ${\mathds{F}}_q$ satisfies $$N_{q,opt} = n-|\mathfrak{C}|.$$
From Theorem \[thm13\] we have the upper bound on $N_{q,opt}$. It remains to show that $N_{q,opt} \geq n-|\mathfrak{C}|$. The number of information symbols in ${\mathcal{B}}'$ is $n-|\mathfrak{C}|$. As $\Phi({\mathcal{B}}')=\phi$, from Theorem \[thm2\] we have $N_{q,opt}({\mathcal{B}}')=n-|\mathfrak{C}|$. Now using Lemma \[subprob\] we have $N_{q,opt}({\mathcal{B}}) \geq N_{q,opt}({\mathcal{B}}')=n-|\mathfrak{C}|$.
If we apply the coding scheme derived from a linear MDS code on each element of $\mathfrak{C}$ and transmit the information symbols indexed by $\mathfrak{R}$ uncoded, then we obtain the following upper bound on $N_{q,opt}$.
\[thm14\] Suppose the subgraph of $\mathcal{B}$ induced by the information symbols indexed by a set ${\mathsf{C}}\in \mathfrak{C}$ is denoted by $\mathcal{B}_{{\mathsf{C}}}=(\mathcal{U}_{{\mathsf{C}}},x_{{\mathsf{C}}},\mathcal{E}_{{\mathsf{C}}})$, and define $d_{{\mathsf{C}}}=\min_{u_i \in \mathcal{U}_{{\mathsf{C}}}}{(|{\mathcal{X}}_i \cap {\mathsf{C}}|-2\delta_s)^+}$. Then the optimal codelength $N_{q,opt}$ over ${\mathds{F}}_q$, where $q \geq \max_{{\mathsf{C}}\in \mathfrak{C}}{|{\mathsf{C}}|}$, for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem represented by the bipartite graph $\mathcal{B}$ satisfies $$N_{q,opt} \leq n- \sum\limits_{{\mathsf{C}}\in \mathfrak{C}}{d_{{\mathsf{C}}}}.$$
To transmit the information symbols indexed by a set ${\mathsf{C}}\in \mathfrak{C}$, if we use an encoder matrix derived from an $[n',k']$ linear MDS code with blocklength $n'=|{\mathsf{C}}|$ and dimension $k'=d_{{\mathsf{C}}}$, then from Theorem \[thm12\] we can save $d_{{\mathsf{C}}}$ transmissions compared to the uncoded scheme. Such an MDS code exists over ${\mathds{F}}_q$ if $q \geq \max_{{\mathsf{C}}\in \mathfrak{C}}{|{\mathsf{C}}|}$. As the elements of $\mathfrak{C}$ are disjoint, the total number of transmissions that can be saved is $\sum_{{\mathsf{C}}\in \mathfrak{C}}{d_{{\mathsf{C}}}}$. Therefore the upper bound in Theorem \[thm14\] holds.
For a given $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem, the upper bound on the optimal codelength $N_{q,opt}$ found in Theorem \[thm14\] is in general at least as good as the upper bound found in Theorem \[thm13\], because for each ${\mathsf{C}}\in \mathfrak{C}$ we have $d_{{\mathsf{C}}} \geq 1$, and therefore $\sum_{{\mathsf{C}}\in \mathfrak{C}}{d_{{\mathsf{C}}}} \geq |\mathfrak{C}|$. In other words, if we apply the coding scheme mentioned in Section \[sec:simple\_coding\] on each ${\mathsf{C}}\in \mathfrak{C}$ we save exactly one transmission per element compared to the uncoded scheme, whereas if we apply the coding scheme based on a linear MDS code we save *at least* one transmission per element. However, for the upper bound given in Theorem \[thm14\] we need the finite field size $q$ to be large enough, while Theorem \[thm13\] holds for any $q$.
Consider a BNSI problem scenario where $m=3$, $n=10$, $\delta_s=1$, ${\mathcal{X}}_1=\{1,2,3,9\}$, ${\mathcal{X}}_2=\{4,5,6,10\}$, ${\mathcal{X}}_3=\{7,8\}$. A possible choice is $\mathfrak{C}=\{\{1,2,3,9\},\{4,5,6,10\}\}$. If the finite field size $q=2$, then using Theorem \[thm12\] we obtain $N_{q,opt} \leq n=10$, whereas using Theorem \[thm13\] we obtain $N_{q,opt} \leq n-|\mathfrak{C}|=8$. However, if $q \geq 4$ then using Theorem \[thm14\] we obtain $N_{q,opt} \leq n-4=6$.
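Once a collection of disjoint elements of $\Phi({\mathcal{B}})$ is fixed by hand, the two bounds can be evaluated mechanically. The sketch below is illustrative only (0-based indices) and reproduces the numbers of the example above.

```python
def ub_disjoint(n, X, delta_s, disjoint_Cs):
    """Upper bounds n - |collection| (Theorem thm13) and n - sum_C d_C
    (Theorem thm14, needs q >= max |C|) for disjoint elements of Phi(B)."""
    saved_simple = len(disjoint_Cs)
    saved_mds = sum(min((max(len(Xi & C) - 2 * delta_s, 0) for Xi in X if Xi & C),
                        default=0)
                    for C in disjoint_Cs)
    return n - saved_simple, n - saved_mds

# The example above with 0-based indices: bounds 8 (any q) and 6 (q >= 4).
X = [{0, 1, 2, 8}, {3, 4, 5, 9}, {6, 7}]
print(ub_disjoint(10, X, 1, [{0, 1, 2, 8}, {3, 4, 5, 9}]))   # (8, 6)
```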
Consider a BNSI problem scenario where $m=4$, $n=7$, $\delta_s=1$, ${\mathcal{X}}_1=\{1,3,5\}$, ${\mathcal{X}}_2=\{2,4,6\}$, ${\mathcal{X}}_3=\{3,6,7\}$, ${\mathcal{X}}_4=\{4,5,6\}$. A possible choice is $\mathfrak{C}=\{\{1,3,5\},\{2,4,6\}\}$. For the finite field size $q=2$, using Theorem \[thm13\] we obtain $N_{q,opt} \leq n-|\mathfrak{C}|=5$. Now delete any one index from each element of $\mathfrak{C}$, and let ${\mathcal{B}}'$ be the subgraph of ${\mathcal{B}}$ induced by the information symbols indexed by the remaining indices. We can check that $\Phi({\mathcal{B}}') = \phi$. Hence, applying Lemma \[equality\], we have $N_{q,opt}=5$.
### Upper bound based on partitioning the maximum element of $\Phi(\mathcal{B})$ {#ub_par}
We now provide another upper bound on the optimal codelength $N_{q,opt}$ for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem represented by the bipartite graph ${\mathcal{B}}=({\mathcal{U}},{\mathcal{P}},{\mathcal{E}})$, based on partitioning the maximum element of $\Phi({\mathcal{B}})$. This upper bound is motivated by the *partition multicast* scheme for Index Coding as described in [@tehrani2012bipartite; @iscod_1998]. We will first show that the set ${\mathsf{C}}$ output by Algorithm 2 is a maximal element of $\Phi({\mathcal{B}})$ and then show that ${\mathsf{C}}$ is the unique maximal element in $\Phi({\mathcal{B}})$. Hence ${\mathsf{C}}$ is the maximum element in $\Phi({\mathcal{B}})$.
\[maximal\_element\] If $\Phi(\mathcal{B}) \neq \phi$, the index set of the information symbols $\mathsf{C}$ output by Algorithm 2 is a maximal element of $\Phi(\mathcal{B})$.
To show that the set $\mathsf{C}$ is a maximal element, we show that if we add any further set of information symbols to $\mathsf{C}$, the resulting index set is not an element of $\Phi(\mathcal{B})$. Algorithm 2 keeps deleting a user $u_i$ and its corresponding demand set ${\mathcal{X}}_i$ iteratively until a $\mathsf{C} \in \Phi({\mathcal{B}})$ is found. Suppose the set $\mathsf{C}$ is found after deleting $t$ users from the *user-set* $\mathcal{U}$, and let $\mathcal{U}_{del}=\{u_1,u_2,\dots,u_t\}$ denote the set of deleted users, where without loss of generality we assume that $u_1$ is the first deleted user and $u_2,u_3,\dots,u_t$ are deleted consecutively afterwards. The set of deleted information symbol indices is ${\mathcal{X}}_{del}= {\mathcal{X}}_1 \cup {\mathcal{X}}_2 \cup \dots \cup {\mathcal{X}}_t$. Suppose we add a non-empty set of information symbols indexed by ${\mathcal{X}}_A \subseteq {\mathcal{X}}_{del}$ to the set $\mathsf{C}$, and let $i$ be the smallest integer such that ${\mathcal{X}}_i \cap {\mathcal{X}}_A \neq \phi$. In the subgraph $\mathcal{B}_{x_{(\mathsf{C}~\cup~{\mathcal{X}}_A)}}$ induced by the information symbols indexed by $\mathsf{C} \cup {\mathcal{X}}_A$, we have $deg(u_i) = |{\mathcal{X}}_i \cap (\mathsf{C} \cup {\mathcal{X}}_A)|$. Since $i$ is the smallest index with ${\mathcal{X}}_i \cap {\mathcal{X}}_A \neq \phi$, the set ${\mathcal{X}}_A$ is disjoint from ${\mathcal{X}}_1 \cup \dots \cup {\mathcal{X}}_{i-1}$, and hence both ${\mathcal{X}}_A$ and $\mathsf{C}$ are contained in the packet index set of the bipartite graph from which Algorithm 2 deletes $u_i$. Therefore $deg(u_i)$ is at the most the degree of $u_i$ in that graph, which is at the most $2\delta_s$ since Algorithm 2 deletes $u_i$ precisely because its degree there is at the most $2\delta_s$. Moreover, $deg(u_i) \geq 1$ since ${\mathcal{X}}_i \cap {\mathcal{X}}_A \neq \phi$, so $deg(u_i) \in [2\delta_s]$. So, the index set $(\mathsf{C} \cup {\mathcal{X}}_A)$ is not an element of $\Phi(\mathcal{B})$, which shows that the set $\mathsf{C}$ is a maximal element of $\Phi(\mathcal{B})$.
\[unique\] $\Phi(\mathcal{B})$ contains a unique maximal element.
We will use proof by contradiction. Suppose $\mathsf{C}$ and $\mathsf{C'}$ are two maximal elements of $\Phi(\mathcal{B})$ such that $\mathsf{C} \neq \mathsf{C'}$. Recall that for any $i \in [m]$, $|{\mathcal{X}}_i \cap \mathsf{C}| \notin [2\delta_s]$ and $|{\mathcal{X}}_i \cap \mathsf{C'}| \notin [2\delta_s]$. Consider the set $\mathsf{C} \cup \mathsf{C'}$ which is a subset of $[n]$. Now for any $i^{th}$ user, $i \in [m]$, ${\mathcal{X}}_i$ will satisfy one of the four following possibilities, (i) ${\mathcal{X}}_i \cap \mathsf{C}= \phi$ and ${\mathcal{X}}_i \cap \mathsf{C'}= \phi$, (ii) ${\mathcal{X}}_i \cap \mathsf{C}= \phi$ and ${\mathcal{X}}_i \cap \mathsf{C'} \neq \phi$, (iii) ${\mathcal{X}}_i \cap \mathsf{C} \neq \phi$ and ${\mathcal{X}}_i \cap \mathsf{C'}= \phi$, (iv) ${\mathcal{X}}_i \cap \mathsf{C} \neq \phi$ and ${\mathcal{X}}_i \cap \mathsf{C'} \neq \phi$. From the knowledge that $|{\mathcal{X}}_i \cap \mathsf{C}|$ and $|{\mathcal{X}}_i \cap \mathsf{C'}|$ is either $0$ or at least $2\delta_s+1$, we can conclude that $|{\mathcal{X}}_i \cap (\mathsf{C} \cup \mathsf{C'})|$ is either $0$ or at least $2\delta_s+1$. Therefore $\mathsf{C} \cup \mathsf{C'}$ is an element of $\Phi(\mathcal{B})$ and $|(\mathsf{C} \cup \mathsf{C'})| > |\mathsf{C}|,|\mathsf{C'}|$ which contradicts the maximality of both $\mathsf{C}$ and $\mathsf{C'}$. Hence the lemma holds.
From now onward we denote the maximum (i.e., the unique maximal) element of $\Phi(\mathcal{B})$ by $\mathsf{C}_{max}$. We now give a result concerning the information symbols that do not belong to the set $\mathsf{C}_{max}$.
\[acyclic\] Suppose an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem is represented by the bipartite graph $\mathcal{B}=(\mathcal{U},\mathcal{P},\mathcal{E})$ and $\mathsf{C}_{max}$ denotes the maximum element of $\Phi(\mathcal{B})$. The subgraph $\mathcal{B}'$ of $\mathcal{B}$ induced by the set $\mathcal{P} \setminus x_{\mathsf{C}_{max}}$ satisfies $\Phi(\mathcal{B'})=\phi$.
We will use proof by contradiction. Suppose $\Phi({\mathcal{B}}') \neq \phi$ and a set $\mathsf{C'} \in \Phi(\mathcal{B'})$. Consider the set $\mathsf{C}_{max} \cup \mathsf{C'} \subseteq [n]$. Using the same argument used to prove Lemma \[unique\], we can conclude that the set $\mathsf{C}_{max} \cup \mathsf{C'}$ is an element of $\Phi(\mathcal{B})$ which contradicts the maximality of $\mathsf{C}_{max}$. Hence, $\Phi(\mathcal{B'})=\phi$.
From Theorem \[thm12\] we deduce that for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem, if we denote $d=\min_{i \in [m]}{(|{\mathcal{X}}_i|-2\delta_s)^+}$, we can save $d$ transmissions compared to uncoded transmission by using an encoder matrix derived from an MDS code over ${\mathds{F}}_q$ ($q \geq n$) with blocklength $n$ and dimension $d$. We now partition the maximum element $\mathsf{C}_{max}$ of $\Phi(\mathcal{B})$ into $K$ disjoint subsets $\mathsf{S}_1, \mathsf{S}_2, \dots, \mathsf{S}_K \subseteq \mathsf{C}_{max}$, i.e., for any $a,a' \in [K]$, $a \neq a'$, $\mathsf{S}_{a} \cap \mathsf{S}_{a'}=\phi$ and $\bigcup_{a=1}^{K}{\mathsf{S}_{a}}=\mathsf{C}_{max}$. Note that for each $a \in [K]$, the subgraph $\mathcal{B}_{a}=(\mathcal{U}_{a},x_{\mathsf{S}_{a}},\mathcal{E}_{a})$ induced by $x_{\mathsf{S}_{a}}$ represents the $(m_{a},n_{a},{\mathcal{X}}_{a},\delta_s)$ BNSI problem where $m_{a}=|\mathcal{U}_{a}|=|\{u_i \in \mathcal{U}~|\mathsf{S}_{a} \cap {\mathcal{X}}_i \neq \phi\}|$, $n_{a}=|\mathsf{S}_{a}|$, ${\mathcal{X}}_{a}=({\mathcal{X}}_i \cap \mathsf{S}_{a},~\forall u_i \in \mathcal{U}_{a})$, and $\mathcal{E}_{a}=\{\{u_i,x_j\} \in \mathcal{E}~|u_i \in \mathcal{U}_{a},~j \in \mathsf{S}_{a}\}$. Let $d_{a}=\min_{u_{i'} \in \mathcal{U}_{a}}{(|{\mathcal{X}}_{i'} \cap \mathsf{S}_{a}|-2\delta_s)^+}$. While transmitting the information symbols indexed by $\mathsf{S}_{a}$, we can save $d_{a}$ transmissions compared to the uncoded scheme by using an encoder matrix derived from an MDS code over ${\mathds{F}}_q$ ($q \geq |\mathsf{S}_{a}|$) with blocklength $|\mathsf{S}_{a}|$ and dimension $d_{a}$. We encode the symbols in each $\mathsf{S}_{a},~a \in [K]$, independently using this coding scheme. The symbols whose indices are not in $\mathsf{C}_{max}$ are transmitted uncoded. Therefore the total number of transmissions we can save through partitioning is $d_{sum}=\sum_{a=1}^{K}{d_{a}}$. To save the maximum number of transmissions we need to partition the set $\mathsf{C}_{max}$ so as to maximize $d_{sum}$. Therefore the *optimal partitioning* is the solution of the following optimization problem.
\[opt1\] $$\begin{aligned}
& \text{maximize}~d_{sum}=\sum_{a=1}^{K}{d_{a}},~~\text{where}~d_{a}=\min_{u_{i'} \in \mathcal{U}_{a}}{(|{\mathcal{X}}_{i'} \cap \mathsf{S}_{a}|-2\delta_s)^+}\\
& \text{subject to}~ 1 \leq K \leq n\\
&\mathsf{S}_1, \mathsf{S}_2, \dots, \mathsf{S}_K \subseteq \mathsf{C}_{max}~\text{such that}\\
& \text{for any}~a,a' \in [K],a \neq a',~~\mathsf{S}_{a} \cap \mathsf{S}_{a'}=\phi\\
& \text{and}~ \bigcup_{a=1}^{K}{\mathsf{S}_{a}}=\mathsf{C}_{max}.\end{aligned}$$
\[subopt\] For any $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem represented by the bipartite graph $\mathcal{B}=(\mathcal{U},\mathcal{P},\mathcal{E})$, if $\mathsf{C}_{max}$ is the only element of $\Phi(\mathcal{B})$, i.e., $|\Phi(\mathcal{B})|=1$, then partitioning $\mathsf{C}_{max}$ into two or more subsets is not optimal. Indeed, if we partition $\mathsf{C}_{max}$, none of the resulting subsets is an element of $\Phi(\mathcal{B})$. Therefore we cannot save any transmission from any of the subsets, whereas using the full set $\mathsf{C}_{max}$ we can save at least one transmission.
The following upper bound on the optimal codelength $N_{q,opt}$ is a direct result of the optimal partitioning of $\mathsf{C}_{max}$.
\[thm:ub\_par\] Let $D_{sum}$ be the optimal value of ***Optimization 1***. Then the optimal codelength $N_{q,opt}$ over ${\mathds{F}}_q$ satisfies $$N_{q,opt} \leq n-D_{sum}.$$
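For very small $\mathsf{C}_{max}$, ***Optimization 1*** can be solved by enumerating all set partitions. The sketch below does exactly that (Bell-number complexity, illustration only); the set-based interface is our own convention.

```python
def all_partitions(items):
    """Yield every partition of the list `items` into non-empty blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in all_partitions(rest):
        yield [[first]] + part                                  # `first` as its own block
        for i in range(len(part)):                              # or merged into a block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]

def d_of(S, X, delta_s):
    """d_a = min over users intersecting S of (|X_i & S| - 2*delta_s)^+ (0 if no user)."""
    users = [Xi for Xi in X if Xi & S]
    return min(max(len(Xi & S) - 2 * delta_s, 0) for Xi in users) if users else 0

def best_partition(C_max, X, delta_s):
    """Partition of C_max maximizing d_sum; that maximum value is D_sum."""
    return max(all_partitions(sorted(C_max)),
               key=lambda part: sum(d_of(set(S), X, delta_s) for S in part))

# Usage: part = best_partition(C_max, X, delta_s)
#        D_sum = sum(d_of(set(S), X, delta_s) for S in part)
```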
BNSI problem and Index Coding {#BNSI_IC}
=============================
In this section we show that every BNSI problem is equivalent to an *Index Coding* problem [@YBJK_IEEE_IT_11], and using the equivalent Index Coding problem we find a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem. We also obtain a lower bound on $N_{q,opt}$ for an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem based on this equivalence.
Index Coding with side information
----------------------------------
*Index Coding* [@YBJK_IEEE_IT_11] deals with the problem of code design for the transmission of a vector of $n_{IC}$ information symbols or messages, denoted as ${{\bf{x}}}'_{IC}=(x'_1,x'_2,\dots, x'_{n_{IC}}) \in {\mathds{F}}_q^{n_{IC}}$, to $m_{IC}$ users denoted as $u'_1,u'_2,\dots,u'_{m_{IC}}$ over a noiseless broadcast channel. It is assumed that the $i^{th}$ user $u'_i,~\forall i \in [m_{IC}]$, already knows a part of the transmitted message vector as *side information*, denoted as ${{\bf{x}}}'_{{\mathcal{X}}_{i,IC}},~{\mathcal{X}}_{i,IC} \subseteq [n_{IC}]$, and demands the message $x'_{f(i)},~f(i) \in [n_{IC}]$, where $f(i) \notin {\mathcal{X}}_{i,IC}$. The set ${\mathcal{X}}_{i,IC}$ is the *side information index set* and $f(i)$ is the *demanded message index*. Upon denoting ${\mathcal{X}}_{IC}=({\mathcal{X}}_{1,IC}, {\mathcal{X}}_{2,IC}, \dots, {\mathcal{X}}_{{m_{IC}},IC})$, we describe this Index Coding problem as the $(m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$ Index Coding problem. As described in [@YBJK_IEEE_IT_11], a valid *encoding function* over ${\mathds{F}}_q$ for an $(m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$ Index Coding problem is defined by $$\mathfrak{E}_{IC}:{\mathds{F}}_q^{n_{IC}}\rightarrow {\mathds{F}}_q^{N_{IC}}$$ such that for each user $u'_i$ there exists a *decoding function* $\mathfrak{D}_{i,IC}:{\mathds{F}}_q^{N_{IC}} \times {\mathds{F}}_q^{|{\mathcal{X}}_{i,IC}|} \rightarrow {\mathds{F}}_q$ satisfying the following property: $\mathfrak{D}_{i,IC}(\mathfrak{E}_{IC}({{\bf{x}}}'_{IC}),{{\bf{x}}}'_{{\mathcal{X}}_{i,IC}})=x'_{f(i)}$ for every ${{\bf{x}}}'_{IC} \in {\mathds{F}}_q^{n_{IC}}$.
The design objective is to find a tuple $(\mathfrak{E}_{IC},\mathfrak{D}_{1,IC},\mathfrak{D}_{2,IC},\dots, \mathfrak{D}_{m_{IC},IC})$ of encoding and decoding functions that minimizes the codelength $N_{IC}$; the *optimal codelength* of the given Index Coding problem is the minimum codelength among all valid Index Coding schemes.
A *scalar linear Index Code* for an $(m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$ Index Coding problem is a coding scheme whose encoding function is a linear transformation over ${\mathds{F}}_q$, described as $\mathfrak{E}_{IC}({{\bf{x}}}'_{IC})={{\bf{x}}}'_{IC}{{\bf{L}}}_{IC}$, $\forall {{\bf{x}}}'_{IC} \in {\mathds{F}}_q^{n_{IC}}$, where ${{\bf{L}}}_{IC} \in {\mathds{F}}_q^{n_{IC} \times N_{IC}}$ is the *encoder matrix for the scalar linear Index Code*. The minimum codelength among all valid linear coding schemes for the $(m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$ Index Coding problem over the field ${\mathds{F}}_q$ is denoted as $N_{q,opt,IC}(m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$.
From [@Dau_IEEE_J_IT_13] we have a design criterion for a matrix ${{\bf{L}}}_{IC}$ to be a valid encoder matrix for the $(m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$ scalar linear Index Coding problem. Following the results in [@Dau_IEEE_J_IT_13], we define the set ${\mathcal{I}}_{IC}(q,m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$, or equivalently ${\mathcal{I}}_{IC}$, of vectors ${{\bf{z}}}$ of length $n_{IC}$ such that ${{\bf{z}}}_{{\mathcal{X}}_{i,IC}}={\bf{0}} \in {\mathds{F}}_q^{|{\mathcal{X}}_{i,IC}|}$ and $z_{f(i)} \neq 0$ for some choice of $i \in [m_{IC}]$, i.e., $$\label{i_ic}
{\mathcal{I}}_{IC}(q,m_{IC},n_{IC},{\mathcal{X}}_{IC},f)=\bigcup\limits_{i=1}^{m_{IC}}\{{{\bf{z}}}\in {\mathds{F}}_q^{n_{IC}}~| {{\bf{z}}}_{{\mathcal{X}}_{i,IC}}={\bf{0}}~\text{and}~z_{f(i)} \neq 0\}.$$ Now from Corollary 3.10 in [@Dau_IEEE_J_IT_13] it follows that ${{\bf{L}}}_{IC}$ is a valid encoder matrix for an $(m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$ scalar linear Index Coding problem if and only if $$\label{L_IC}
{{\bf{z}}}{{\bf{L}}}_{IC} \neq {\bf{0}},\quad \forall {{\bf{z}}}\in {\mathcal{I}}_{IC}.$$ It is possible to represent an $(m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$ Index Coding problem by means of a directed bipartite graph, as described in [@tehrani2012bipartite], as follows. The directed bipartite graph corresponding to the $(m_{IC},n_{IC},{\mathcal{X}}_{IC},f)$ Index Coding problem consists of the node-sets $\mathcal{U}_{IC}=\{u'_1,u'_2,\dots,u'_{m_{IC}}\}$ and $\mathcal{P}_{IC}=\{x'_1,x'_2,\dots,x'_{n_{IC}}\}$, the set of directed edges $\{(u'_i,x'_j)~|~i \in [m_{IC}],~j \in {\mathcal{X}}_{i,IC}\}$ and the set of directed edges $\{(x'_{f(i)},u'_i)~|~i \in [m_{IC}]\}$; together these two edge sets form $\mathcal{E}_{IC}$. The set $\mathcal{U}_{IC}$ denotes the *user-set* and $\mathcal{P}_{IC}$ denotes the set of packets or the *information symbol-set*. The directed edges from the *user-set* to the *information symbol-set* in $\mathcal{E}_{IC}$ denote the users’ side information, and the directed edges from the *information symbol-set* to the *user-set* denote the users’ demanded messages.
![Directed bipartite graph representation of the Index Coding problem in Example \[exmp\_IC\].[]{data-label="fig:image_IC"}](bipartite_graph_IC){width="2in"}
\[exmp\_IC\] Consider the Index Coding problem with $n_{IC}=3$ information symbols, $m_{IC}=3$ users and user side information index sets ${\mathcal{X}}_{1,IC}=\{2,3\}$, ${\mathcal{X}}_{2,IC}=\{1,3\}$, ${\mathcal{X}}_{3,IC}=\{1,2\}$, where user $u'_i$ demands the message $x'_{f(i)}=x'_i$ for $i \in [3]$. The directed bipartite graph in Fig. \[fig:image\_IC\] describes this scenario, where $\mathcal{U}_{IC}=\{u'_1,u'_2,u'_3\}$, $\mathcal{P}_{IC}=\{x'_1,x'_2,x'_3\}$ and $\mathcal{E}_{IC}=\{(x'_1,u'_1),(u'_1,x'_2),(u'_1,x'_3),(x'_2,u'_2),(u'_2,x'_1),(u'_2,x'_3),(x'_3,u'_3),(u'_3,x'_1),(u'_3,x'_2)\}$.
Construction of an Index Coding Problem from a given BNSI problem {#ic_bnsi}
-----------------------------------------------------------------
From the definition of $\mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)$ given in (\[ical\_def\]) and Theorem \[thm1\], we now construct an $(\hat{m},n,\hat{{\mathcal{X}}},f)$ Index Coding problem from an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem, where $$\label{hat_m}
\hat{m}=\sum_{i=1}^{m}{\hat{m}_i}, \qquad
\hat{m}_i=
\left\{
\begin{array}{lc}
{{}^{|{\mathcal{X}}_i|}C_{1}} \times {{}^{|{\mathcal{X}}_i|-1}C_{2\delta_s-1}} & \mbox{if}~|{\mathcal{X}}_i| \geq 2\delta_s, \\ {{}^{|{\mathcal{X}}_i|}C_{1}} & \mbox{otherwise,}
\end{array}
\right.$$ and $\hat{{\mathcal{X}}}$ and $f$ are obtained from the construction of an Index Coding problem as described in Algorithm 3.
**Input**: $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem\
**Output**: $(\hat{m},n,\hat{{\mathcal{X}}},f)$ Index Coding problem\
[Initialization:]{} $j=0$\
[Iteration:]{}\
**for** each $i \in [m]$, each $p \in {\mathcal{X}}_i$ and each $Q \subseteq {\mathcal{X}}_i \setminus \{p\}$ with $|Q|=\min\{|{\mathcal{X}}_i|-1,2\delta_s-1\}$ **do**\
$\quad$ $j \leftarrow j+1$; define the user $u'_j$ with ${\mathcal{X}}_{j,IC}={\mathcal{X}}_i \setminus (Q \cup \{p\})$ and $f(j)=p$\
**end for**\
$\hat{{\mathcal{X}}}=({\mathcal{X}}_{1,IC},{\mathcal{X}}_{2,IC},\dots,{\mathcal{X}}_{\hat{m},IC})$\
`output` $(\hat{m},n,\hat{{\mathcal{X}}},f)$ Index Coding problem; `return`;
Algorithm 3 considers each user $u_i,~i \in [m]$, of the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem; for every possible choice of an element $p \in {\mathcal{X}}_i$ and a set $Q \subseteq {\mathcal{X}}_i \setminus \{p\}$ such that $|Q|=\min\{|{\mathcal{X}}_i|-1,2\delta_s-1\}$, it defines a new user $u'_j$ with $f(j)=p$ and ${\mathcal{X}}_{j,IC}={\mathcal{X}}_i \setminus (Q \cup \{p\})$. In the newly constructed Index Coding problem, the total number of users is $m_{IC}=\hat{m}$, the number of information symbols is $n_{IC}=n$, the tuple of side information index sets ${\mathcal{X}}_{IC}$ is given by $\hat{{\mathcal{X}}}$, and the demanded message $f(j)$ of each user $u'_j,~j \in [\hat{m}]$, is given by the mapping $f$. Hence we obtain an $(\hat{m},n,\hat{{\mathcal{X}}},f)$ Index Coding problem. We now relate the set $\mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)$ defined for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem and the set ${\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f)$ for the $(\hat{m},n,\hat{{\mathcal{X}}},f)$ Index Coding problem.
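The construction is easy to mechanize. The following sketch lists the *(side information, demanded message)* pairs generated by Algorithm 3; the demand sets at the bottom are hypothetical (three users, each demanding three of four symbols, i.e., the same sizes as in Example \[exmp1\]), and with $\delta_s=1$ they happen to yield $18$ generated users of which $12$ are distinct, the same counts as reported for Example \[exmp1\] in Example \[BNSI\_to\_IC\] below.

```python
import itertools

def bnsi_to_index_coding(X, delta_s):
    """Algorithm 3: list of (side-information index set, demanded index) pairs."""
    users = []
    for Xi in X:
        q_size = min(len(Xi) - 1, 2 * delta_s - 1)
        for p in Xi:
            for Q in itertools.combinations(Xi - {p}, q_size):
                users.append((frozenset(Xi - set(Q) - {p}), p))
    return users

# Hypothetical 0-based demand sets of size 3, delta_s = 1.
users = bnsi_to_index_coding([{0, 1, 3}, {1, 2, 3}, {0, 2, 3}], 1)
print(len(users), len(set(users)))   # 18 12
```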
\[i\_bnsi\_i\_ic\] For an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem, let the set $\mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)$ be defined by (\[ical\_def\]), and for the equivalent $(\hat{m},n,\hat{{\mathcal{X}}},f)$ Index Coding problem constructed from the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem, let the set ${\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f)$ be defined by (\[i\_ic\]). Then $\mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)={\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f)$.
To show that $\mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)={\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f)$, we will show that $\mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)\subseteq {\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f)$ and ${\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f) \subseteq \mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)$.
*Proof for* $\mathcal{I}(q,m,n,{\mathcal{X}},\delta_s) \subseteq {\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f)$: Suppose a vector ${{\bf{z}}}\in \mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)$. From (\[ical\_def\]), there exists at least one $i \in [m]$ such that $wt({{\bf{z}}}_{{\mathcal{X}}_i}) \in [2\delta_s]$. Therefore ${{\bf{z}}}_{{\mathcal{X}}_i} \neq \mathbf{0}$, and hence there exists a $p \in {\mathcal{X}}_i$ such that $z_p \neq 0$. Since $wt({{\bf{z}}}_{{\mathcal{X}}_i}) \leq 2\delta_s$, we have $wt({{\bf{z}}}_{{\mathcal{X}}_i \setminus \{p\}}) \leq 2\delta_s-1$, and so there exists $Q \subseteq {\mathcal{X}}_i \setminus \{p\}$ such that $|Q|=\min\{|{\mathcal{X}}_i|-1,2\delta_s-1\}$ and ${{\bf{z}}}_{{\mathcal{X}}_i \setminus (Q \cup \{p\})}= \mathbf{0}$. Now, using the construction procedure described in Algorithm 3, there exists a $j$ such that ${\mathcal{X}}_{j,IC}={\mathcal{X}}_i \setminus (Q \cup \{p\})$ satisfies ${{\bf{z}}}_{{\mathcal{X}}_{j,IC}}=\mathbf{0}$ and $f(j)=p$ satisfies $z_{f(j)} \neq 0$. Hence ${{\bf{z}}}\in {\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f)$.
*Proof for* ${\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f) \subseteq \mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)$: Suppose a vector ${{\bf{z}}}\in {\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f)$. Then there exists at least one user $j \in [\hat{m}]$ such that $z_{f(j)} \neq 0$ and ${{\bf{z}}}_{{\mathcal{X}}_{j,IC}}=\mathbf{0}$. Let $p=f(j)$ and let $Q$ be the set chosen by Algorithm 3 when the user $u'_j$ was created, so that ${\mathcal{X}}_i = \{p\} \cup Q \cup {\mathcal{X}}_{j,IC}$ for some $i \in [m]$, where $|Q|=\min\{|{\mathcal{X}}_i|-1,2\delta_s-1\} \leq 2\delta_s-1$. Since $z_p \neq 0$ and ${{\bf{z}}}_{{\mathcal{X}}_{j,IC}}=\mathbf{0}$, we have $1 \leq wt({{\bf{z}}}_{{\mathcal{X}}_i}) \leq 1 + wt({{\bf{z}}}_{Q}) \leq 1+|Q| \leq 2\delta_s$, i.e., $wt({{\bf{z}}}_{{\mathcal{X}}_i}) \in [2\delta_s]$. Hence ${{\bf{z}}}\in \mathcal{I}(q,m,n,{\mathcal{X}},\delta_s)$.
Hence the theorem holds.
\[BNSI\_to\_IC\] Here we consider the BNSI problem scenario given in Example \[exmp1\]. The total number of users in the corresponding Index Coding problem is $3 \times {{}^{3}C_{1}} \times {{}^{2}C_{1}}=18$, but among them only $12$ users have distinct *(side information, demanded message)* pairs. The number of information symbols is the same for the Index Coding and BNSI problems. Table \[table:1\] shows all the distinct users of the Index Coding problem with their *demanded message* and *side information* symbols, and Fig. \[fig:image2\] shows the corresponding bipartite graph for the Index Coding problem.
Using the $(\hat{m},n,\hat{{\mathcal{X}}},\hat{f})$ Index Coding problem corresponding to an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem, we now relate the construction of ${{\bf{L}}}$ to the problem of designing a scalar linear Index Coding scheme.
\[BNSI\_IC\_L\] ${{\bf{L}}}$ is a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem if and only if ${{\bf{L}}}$ is a valid encoder matrix for the $(\hat{m},n,\hat{{\mathcal{X}}},\hat{f})$ scalar linear Index Coding problem.
From Theorem \[thm1\] we know that ${{\bf{L}}}$ is a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem if and only if it satisfies $${{\bf{z}}}{{\bf{L}}}\neq {\bf{0}},\quad \forall {{\bf{z}}}\in \mathcal{I}(q,m,n,{\mathcal{X}},\delta_s).$$ Now from Theorem \[i\_bnsi\_i\_ic\] we have $\mathcal{I}(q,m,n,{\mathcal{X}},\delta_s) = {\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},f)$. So, ${{\bf{L}}}$ also satisfies, $${{\bf{z}}}{{\bf{L}}}\neq {\bf{0}},\quad \forall {{\bf{z}}}\in {\mathcal{I}}_{IC}(q,\hat{m},n,\hat{{\mathcal{X}}},\hat{f}).$$ Therefore using (\[L\_IC\]) we can conclude that ${{\bf{L}}}$ is a valid encoder matrix for the $(\hat{m},n,\hat{{\mathcal{X}}},f)$ scalar linear Index Coding problem if and only if ${{\bf{L}}}$ is a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem.
Theorem \[BNSI\_IC\_L\] shows that constructing an encoder matrix ${{\bf{L}}}$ for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem is equivalent to constructing an encoder matrix for the $(\hat{m},n,\hat{{\mathcal{X}}},f)$ scalar linear Index Coding problem. From [@YBJK_IEEE_IT_11; @Dau_IEEE_J_IT_13], we know that an encoder matrix for a scalar linear Index Coding problem can be found by finding a matrix that fits its *side information hypergraph*, and that the optimal length of a scalar linear Index Code equals the *min-rank* of its side information hypergraph.
Here we consider the BNSI problem scenario given in Example \[exmp1\]. The users in the corresponding Index Coding problem are listed in Table \[table:1\] and the graphical representation is given in Fig. \[fig:image2\]. In the bipartite graph, we can notice that the edge sets $\{(x_1,u'_3),(u'_3,x_4),(x_4,u'_{10}),(u'_{10},x_1)\}$, $\{(x_2,u'_6),(u'_6,x_4),(x_4,u'_{11}),(u'_{11},x_2)\}$ and $\{(x_3,u'_9),(u'_9,x_4),(x_4,u'_{12}),(u'_{12},x_3)\}$ constitute $3$ cycles involving the information symbol sets $\{x_1,x_4\}$, $\{x_2,x_4\}$ and $\{x_3,x_4\}$ respectively. Using the *Cyclic Code Actions* described in [@MAZ_IEEE_IT_13] on these cycles, we can save one transmission: we encode the information symbols corresponding to the $1^{st}$, $2^{nd}$ and $3^{rd}$ cycles as $x_1+x_4$, $x_2+x_4$ and $x_3+x_4$ respectively, so that the codeword $(x_1+x_4,x_2+x_4,x_3+x_4)$ saves one transmission compared with uncoded transmission. Hence, $N_{q,opt,IC} \leq 3$.
We can also notice that users $u'_3,u'_6,u'_9$ have $x_4$ as side information and demand the distinct messages $x_1,x_2,x_3$, respectively. Therefore the encoder needs to encode $x_1,x_2,x_3$ such that, with appropriate decoding functions, $u'_3,u'_6,u'_9$ can decode $x_1,x_2,x_3$, respectively, using the common side information $x_4$; since these three users share the same side information, at least three independent coded symbols are required. Hence, $N_{q,opt,IC} \geq 3$, and so $N_{q,opt,IC}=3$. The encoder matrix that generates the codeword $(x_1+x_4,x_2+x_4,x_3+x_4)$ is
$${{\bf{L}}}=
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
1 & 1 & 1
\end{bmatrix}.$$
Note that this is the ${{\bf{L}}}$ we took in Example \[exmp2\] to validate the design criterion of a valid encoder matrix for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem given in Example \[exmp1\], and we have also used this ${{\bf{L}}}$ to describe the *Syndrome Decoding* for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem in Example \[exmp3\]. Therefore the matrix ${{\bf{L}}}$ serves as a valid encoder matrix both for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem given in Example \[exmp1\] and for the corresponding $(\hat{m},n,\hat{{\mathcal{X}}},f)$ Index Coding problem given in Example \[BNSI\_to\_IC\].
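As a sanity check of the design criterion $\mathbf{z}{{\bf{L}}}\neq \mathbf{0}$ for all interfering $\mathbf{z}$, the validity of a candidate encoder matrix can be verified by brute force over a small field. The sketch below is our own illustration: the function implements the criterion generically, while the side-information sets `X` passed to it are placeholders that must be replaced by the actual sets of Example \[exmp1\].

```python
import itertools
import numpy as np

def is_valid_encoder(L, X, delta_s, q=2):
    """Brute-force check of z L != 0 over F_q for every interfering z, i.e. every z
    whose restriction to some X_i has weight in {1, ..., 2*delta_s}."""
    n = L.shape[0]
    for z in itertools.product(range(q), repeat=n):
        z = np.array(z)
        interfering = any(1 <= np.count_nonzero(z[list(Xi)]) <= 2 * delta_s for Xi in X)
        if interfering and not np.any((z @ L) % q):
            return False, z            # an interfering z lies in the left null space of L
    return True, None

# Encoder matrix generating (x1 + x4, x2 + x4, x3 + x4) over F_2.
L = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 1]])

# Hypothetical 0-indexed side-information sets; substitute those of Example [exmp1].
X = [{0, 1, 3}, {1, 2, 3}, {0, 2, 3}]
print(is_valid_encoder(L, X, delta_s=1))
```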
Lower Bound on $N_{q,opt}$ based on Index Coding {#lb_ic}
------------------------------------------------
A lower bound on the optimal codelength $N_{q,opt}$ of an $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem can be derived based on its equivalent $(\hat{m},n,\hat{{\mathcal{X}}},f)$ scalar linear Index Coding problem. Suppose a directed bipartite graph $\mathcal{B}_{IC}=(\mathcal{U}_{IC},\mathcal{P}_{IC},\mathcal{E}_{IC})$ represents the $(\hat{m},n,\hat{{\mathcal{X}}},f)$ scalar linear Index Coding problem. Theorem 1 of [@MAZ_IEEE_IT_13] shows that if a directed bipartite graph $\mathcal{G}$ representing a scalar linear Index-Coding problem with $P$ information symbols is acyclic, then its optimal codelength is $N_{opt}(q,\mathcal{G})=P$. Now consider the bipartite graph $\mathcal{B}_{IC}$ and perform the *pruning operations* given in Section II-A of [@MAZ_IEEE_IT_13] to construct an acyclic subgraph $\mathcal{B}_{IC}^s=(\mathcal{U}_{IC}^s,\mathcal{P}_{IC}^s,\mathcal{E}_{IC}^s)$ with information-set $\mathcal{P}_{IC}^s$. This leads to a lower bound on the optimal codelength of our BNSI problem.
Let $\mathcal{B}_{IC}^s=(\mathcal{U}_{IC}^s,\mathcal{P}_{IC}^s,\mathcal{E}_{IC}^s)$ be any acyclic subgraph of ${\mathcal{B}}_{IC}$ induced by information-set $\mathcal{P}_{IC}^s$. Then the optimal codelength over ${\mathds{F}}_q$ for the $(m,n,{\mathcal{X}},\delta_s)$ BNSI problem satisfies, $$N_{q,opt} \geq |\mathcal{P}_{IC}^s|.$$
From the equivalence of the BNSI problem and the scalar linear Index Coding problem described in Theorem \[BNSI\_IC\_L\], we have $N_{q,opt} = N_{q,opt,IC}(\mathcal{B}_{IC})$. As $\mathcal{B}_{IC}^s$ is a subgraph of ${\mathcal{B}}_{IC}$, using Lemma 1 in [@MAZ_IEEE_IT_13] we have $N_{q,opt,IC}(\mathcal{B}_{IC}) \geq N_{q,opt,IC}(\mathcal{B}_{IC}^s)$. The directed bipartite graph $\mathcal{B}_{IC}^s$ is an acyclic subgraph with information-set $\mathcal{P}_{IC}^s$. Therefore, using Theorem 1 of [@MAZ_IEEE_IT_13], we have $N_{q,opt,IC}(\mathcal{B}_{IC}^s)=|\mathcal{P}_{IC}^s|$. Hence, $N_{q,opt} \geq |\mathcal{P}_{IC}^s|$.
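In practice, the best such bound is obtained from the largest information-set whose induced subgraph is acyclic. The sketch below is a crude exhaustive search of ours (it does not implement the pruning operations of [@MAZ_IEEE_IT_13]), run on a toy bipartite digraph whose edge list is purely illustrative and is not the graph of Fig. \[fig:image2\].

```python
import itertools
import networkx as nx

def acyclic_lower_bound(B, info_nodes):
    """Size of the largest subset of information symbols whose induced subgraph
    of the bipartite demand/side-information digraph B is acyclic."""
    for r in range(len(info_nodes), 0, -1):
        for subset in itertools.combinations(info_nodes, r):
            kept = [v for v in B if v in subset or v not in info_nodes]
            if nx.is_directed_acyclic_graph(B.subgraph(kept)):
                return r
    return 0

# Toy graph: edge x -> u means user u demands x; edge u -> x means x is side information of u.
B = nx.DiGraph([('x1', 'u1'), ('u1', 'x2'), ('x2', 'u2'), ('u2', 'x1'), ('x3', 'u3')])
print(acyclic_lower_bound(B, info_nodes={'x1', 'x2', 'x3'}))   # prints 2
```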
Conclusions and discussions
===========================
We derived a design criterion for linear coding schemes for BNSI problems, and identified the subset of problems where linear coding provides gains over uncoded transmission. The reduction in codelength is achieved by jointly coding the information symbols to simultaneously meet the demands of all the receivers. We have derived lower bounds on the optimal codelength. We have shown that a valid encoder matrix can be constructed from the transpose of a parity-check matrix of a linear error-correcting code. Based on the construction of a valid encoder matrix derived from an MDS code, we found upper bounds on the optimal codelength. The codelength can be further reduced by partitioning a BNSI problem into several BNSI subproblems. We have also shown that each BNSI problem is equivalent to an Index Coding problem. The presented results bring to light several questions regarding BNSI networks, such as the evaluation of the optimal codelength $N_{q,opt}$, designing linear coding schemes that achieve this optimal length, designing schemes that admit low-complexity decoding at the receivers, efficient algorithms to compute the presented lower and upper bounds, and designing schemes for broadcasting in the presence of channel noise.
Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol, “Index coding with side information,” *[IEEE]{} Trans. Inf. Theory*, vol. 57, no. 3, pp. 1479–1494, Mar. 2011.
A. Blasiak, R. D. Kleinberg, and E. Lubetzky, “Index coding via linear programming,” *CoRR*, vol. abs/1004.1379, 2010. \[Online\]. Available: <http://arxiv.org/abs/1004.1379>
H. Maleki, V. R. Cadambe, and S. A. Jafar, “Index coding – an interference alignment perspective,” *[IEEE]{} Trans. Inf. Theory*, vol. 60, no. 9, pp. 5402–5432, Sept 2014.
M. B. Vaddi and B. S. Rajan, “Optimal scalar linear index codes for one-sided neighboring side-information problems,” in *2016 IEEE Globecom Workshops (GC Wkshps)*, Dec 2016, pp. 1–6.
——, “Optimal vector linear index codes for some symmetric side information problems,” in *2016 IEEE International Symposium on Information Theory (ISIT)*, July 2016, pp. 125–129.
V. K. Mareedu and P. Krishnan, “Uniprior index coding,” in *Proc. 2017 IEEE Int. Symp. Inf. Theory*, Jun. 2017 (to appear).
A. S. Tehrani, A. G. Dimakis, and M. J. Neely, “Bipartite index coding,” in *Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on*.1em plus 0.5em minus 0.4emIEEE, 2012, pp. 2246–2250.
K. Shanmugam, A. G. Dimakis, and M. Langberg, “Local graph coloring and index coding,” in *Proc. 2013 IEEE Int. Symp. Inf. Theory*, July 2013, pp. 1152–1156.
L. Ong, C. K. Ho, and F. Lim, “The single-uniprior index-coding problem: The single-sender case and the multi-sender extension,” *IEEE Transactions on Information Theory*, vol. 62, no. 6, pp. 3165–3182, June 2016.
M. J. Neely, A. S. Tehrani, and Z. Zhang, “Dynamic index coding for wireless broadcast networks,” *[IEEE]{} Trans. Inf. Theory*, vol. 59, no. 11, pp. 7525–7540, Nov 2013.
A. Agarwal and A. Mazumdar, “Local partial clique and cycle covers for index coding,” in *2016 IEEE Globecom Workshops (GC Wkshps)*, Dec 2016, pp. 1–6.
S. H. Dau, V. Skachek, and Y. M. Chee, “Error correction for index coding with side information,” *[IEEE]{} Trans. Inf. Theory*, vol. 59, no. 3, pp. 1517–1531, March 2013.
N. S. Karat and B. S. Rajan, “Optimal linear error correcting index codes for some index coding problems,” in *2017 IEEE Wireless Communications and Networking Conference (WCNC)*, March 2017, pp. 1–6.
S. Samuel and B. S. Rajan, “Optimal linear error-correcting index codes for single-prior index-coding with side information,” in *2017 IEEE Wireless Communications and Networking Conference (WCNC)*, March 2017, pp. 1–6.
J. W. Kim and J. S. No, “Index coding with erroneous side information,” *[IEEE]{} Trans. Inf. Theory*, vol. 63, no. 12, pp. 7687–7697, Dec 2017.
W. C. Huffman and V. Pless, *Fundamentals of error-correcting codes*.1em plus 0.5em minus 0.4emCambridge university press, 2010.
M. Grassl, “[Bounds on the minimum distance of linear codes and quantum codes]{},” Online available at <http://www.codetables.de>, 2007, accessed on 2017-11-29.
M. A. R. Chaudhry, Z. Asad, A. Sprintson, and M. Langberg, “On the complementary index coding problem,” in *2011 IEEE International Symposium on Information Theory Proceedings*, July 2011, pp. 244–248.
Y. Birk and T. Kol, “Informed-source coding-on-demand (iscod) over broadcast channels,” in *INFOCOM ’98. Seventeenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE*, vol. 3, Mar 1998, pp. 1257–1264 vol.3.
---
abstract: 'Hybrid-kinetic numerical simulations of firehose and mirror instabilities in a collisionless plasma are performed in which pressure anisotropy is driven as the magnetic field is changed by a persistent linear shear $S$. For a decreasing field, it is found that mostly oblique firehose fluctuations grow at ion Larmor scales and saturate with energies $\propto$$S^{1/2}$; the pressure anisotropy is pinned at the stability threshold by particle scattering off microscale fluctuations. In contrast, nonlinear mirror fluctuations are large compared to the ion Larmor scale and grow secularly in time; marginality is maintained by an increasing population of resonant particles trapped in magnetic mirrors. After one shear time, saturated order-unity magnetic mirrors are formed and particles scatter off their sharp edges. Both instabilities drive sub-ion-Larmor–scale fluctuations, which appear to be kinetic-Alfvén-wave turbulence. Our results impact theories of momentum and heat transport in astrophysical and space plasmas, in which the stretching of a magnetic field by shear is a generic process.'
author:
- 'Matthew W. Kunz'
- 'Alexander A. Schekochihin'
- 'James M. Stone'
bibliography:
- 'KSS14.bib'
title: Firehose and Mirror Instabilities in a Collisionless Shearing Plasma
---
#### Introduction.
Describing the large-scale behavior of weakly collisional magnetized plasmas, such as the solar wind, hot accretion flows, or the intracluster medium (ICM) of galaxy clusters, necessitates a detailed understanding of the kinetic-scale physics governing the dynamics of magnetic fields and the transport of momentum and heat. This physics is complicated by the fact that such plasmas are expected to exhibit particle distribution functions with unequal thermal pressures in the directions parallel ($||$) and perpendicular ($\perp$) to the local magnetic field [@msrmpn82; @sc06; @shqs06]. This pressure anisotropy can trigger fast micro-scale instabilities [@rosenbluth56; @ckw58; @parker58; @vs58; @barnes66; @hasegawa69], whose growth and saturation impact the structure of the magnetic field and the effective viscosity of the plasma. While solar-wind observations suggest that these instabilities are effective at regulating the pressure anisotropy to marginally stable levels [@gsss01; @klg02; @htkl06; @matteini07; @bkhqss09; @mhglvn13], it is not known how this is achieved.
We address this question with nonlinear numerical simulations of the firehose and mirror instabilities. We leverage the universal physics at play in turbulent $\beta \gg 1$ astrophysical plasmas such as the ICM [@sckhs05; @kscbs11] and Galactic accretion flows [@qdh02; @rqss12]—magnetic field being changed by velocity shear, coupled with adiabatic invariance—to drive self-consistently a pressure anisotropy beyond the instability thresholds. Our setup represents a local patch of a turbulent velocity field, in which the magnetic field is sheared and its strength changed on a timescale much longer than that on which the unstable fluctuations grow. This approach is complementary to expanding-box models of the $\beta \sim 1$ solar wind [@gv96] used to drive firehose [@mlhv06; @ht08] and mirror/ion-cyclotron [@ht05] instabilities.
#### Hybrid-kinetic equations in the shearing sheet.
A non-relativistic, quasi-neutral, collisionless plasma of electrons (mass $m_{\rm e}$, charge $-e$) and ions (mass $m_{\rm i}$, charge $Ze$) is embedded in a linear shear flow, ${\mbox{\boldmath{$u$}}}_0 = - S x {\hat{{\mbox{\boldmath{$y$}}}}}$, in $(x,y,z)$ Cartesian coordinates. In a frame co-moving with the shear flow, the equations governing the evolution of the ion distribution function $f_{\rm i} (t, {\mbox{\boldmath{$r$}}}, {\mbox{\boldmath{$v$}}})$ and the magnetic field ${\mbox{\boldmath{$B$}}}$ are, respectively, the Vlasov equation $$\label{eqn:vlasov}
{\frac{{\rm d} f_{\rm i}}{{\rm d} t}} + {\mbox{\boldmath{$v$}}} {{\mbox{\boldmath{$\cdot$}}}}{{\mbox{\boldmath{$\nabla$}}}}f_{\rm i} + \left[ \frac{Ze}{m_{\rm i}} \left( {\mbox{\boldmath{$E$}}}' + \frac{{\mbox{\boldmath{$v$}}}}{c} {{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}} \right) + S v_x {\hat{{\mbox{\boldmath{$y$}}}}}\right] \! {{\mbox{\boldmath{$\cdot$}}}}{\frac{\partial f_{\rm i}}{\partial {\mbox{\boldmath{$v$}}}}} = 0$$ and Faraday’s law $$\label{eqn:induction}
{\frac{{\rm d} {\mbox{\boldmath{$B$}}}}{{\rm d} t}} = - c {{\mbox{\boldmath{$\nabla$}}}}{{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$E$}}}' - S B_x {\hat{{\mbox{\boldmath{$y$}}}}},$$ where ${\rm d} / {\rm d} t \equiv \partial / \partial t - S x \, \partial / \partial y$. The electric field, $$\label{eqn:efield}
{\mbox{\boldmath{$E$}}}' = - \frac{{\mbox{\boldmath{$u$}}}_{\rm i} {{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}}}{c} + \frac{( {{\mbox{\boldmath{$\nabla$}}}}{{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}} ) {{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}}}{4\pi Z e n_{\rm i}} - \frac{T_{\rm e} {{\mbox{\boldmath{$\nabla$}}}}n_{\rm i}}{e n_{\rm i}} ,$$ is obtained by expanding the electron momentum equation in $( m_{\rm e} / m_{\rm i} )^{1/2}$, enforcing quasi-neutrality $$\label{eqn:quasineutrality}
n_{\rm e} = Z n_{\rm i} \equiv Z \! \int {\rm d}^3 {\mbox{\boldmath{$v$}}} \, f_{\rm i} ,$$ assuming isothermal electrons, and using Ampère’s law to solve for the mean velocity of the electrons $$\label{eqn:ampere}
{\mbox{\boldmath{$u$}}}_{\rm e} = {\mbox{\boldmath{$u$}}}_{\rm i} - \frac{{\mbox{\boldmath{$j$}}}}{Z e n_{\rm i}} \equiv \frac{1}{n_{\rm i}} \int {\rm d}^3 {\mbox{\boldmath{$v$}}} \, {\mbox{\boldmath{$v$}}} f_{\rm i} - \frac{ c {{\mbox{\boldmath{$\nabla$}}}}{{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}} }{4 \pi Z e n_{\rm i}}$$ in terms of the mean velocity of the ions ${\mbox{\boldmath{$u$}}}_{\rm i}$ and the current density ${\mbox{\boldmath{$j$}}}$ [@rsrc11; @rsc14]. This constitutes the “hybrid” description of kinetic ions and fluid electrons [@bcch78; @hn78].
#### Adiabatic invariance and pressure anisotropy.
The final terms in Eqs. (\[eqn:vlasov\]) and (\[eqn:induction\]) represent the stretching of the phase-space density and the magnetic field in the $y$-direction by the shear flow. Conservation of the first adiabatic invariant $\mu \equiv m_{\rm i} v^2_\perp / 2B$ then renders $f_{\rm i}$ anisotropic with respect to the magnetic field. If ${\mbox{\boldmath{$E$}}}' = 0$, the ratio of the perpendicular and parallel pressures is $$\label{eqn:paniso}
\frac{p_\perp}{p_{||}} \equiv \frac{ \int {\rm d}^3 {\mbox{\boldmath{$v$}}} \, \mu B \, f_{\rm i}}{\int {\rm d}^3 {\mbox{\boldmath{$v$}}} \, m_{\rm i} v^2_{||} \, f_{\rm i} } = \left[ 1 - 2 \frac{B_x B_{y0}}{B^2_0} S t + \frac{B^2_x}{B^2_0} ( S t )^2 \right]^{3/2} ,$$ where the subscript ‘$0$’ denotes initial values [@cgl56].
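For orientation, Eq. (\[eqn:paniso\]) can be evaluated directly to see how fast the shear drives the plasma towards the instability thresholds quoted below. The following script is a purely illustrative estimate of ours (it is not part of the [Pegasus]{} runs): it uses Eq. (\[eqn:paniso\]) together with the standard double-adiabatic scalings $p_\perp \propto B$ and $p_{||} \propto 1/B^2$ at constant density to track the firehose and mirror stability parameters.

```python
import numpy as np

beta0 = 200.0                                    # initial (isotropic) plasma beta

def stability_parameters(bx, by0, S, t):
    """Double-adiabatic estimate: p_perp/p_par = (B/B0)^3 (Eq. paniso), with
    beta_par ~ 1/B^4 and beta_perp ~ 1/B at constant density (B0 = 1)."""
    b2 = bx**2 + (by0 - S * t * bx)**2           # |B(t)|^2 / B0^2 in the shearing frame
    ratio = b2**1.5                              # p_perp / p_par
    firehose = 1.0 - ratio - 2.0 * b2**2 / beta0 # Lambda_f, unstable when > 0
    mirror = ratio - 1.0 - np.sqrt(b2) / beta0   # Lambda_m, unstable when > 0
    return firehose, mirror

S = 3e-4
t = np.linspace(0.0, 0.5 / S, 6)                 # up to St = 1/2
print(stability_parameters(2 / np.sqrt(13), 3 / np.sqrt(13), S, t))   # decreasing B: firehose drive
print(stability_parameters(2 / np.sqrt(5), -1 / np.sqrt(5), S, t))    # increasing B: mirror drive
```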
#### Method of solution.
We solve Eqns. (\[eqn:vlasov\])–(\[eqn:ampere\]) using the second-order–accurate particle-in-cell code [Pegasus]{} [@ksb14]. We normalize magnetic field to $B_0$, velocity to the initial Alfvén speed $v_{\rm A0} \equiv B_0 / \sqrt{4\pi m_{\rm i} n_{\rm i0}}$, time to the inverse of the initial ion gyrofrequency $\Omega_{\rm i0} \equiv Z e B_0 / m_{\rm i} c$, and distance to the initial ion skin depth $d_{\rm i0} \equiv v_{\rm A0} / \Omega_{\rm i0}$. The ion Larmor radius $\rho_{\rm i} = \beta^{1/2}$, where $\beta \equiv 8 \pi n_{\rm i} T_{\rm i} / B^2$. $N_{\rm p}$ particles are drawn from a Maxwell distribution with $\beta_0 = 200$ and placed on a 2D grid $N_x$$\times$$N_y = 1152^2$ cells spanning $L_x$$\times$$L_y = 1152^2$. The electrons are Maxwellian and gyrotropic with $T_{\rm i} / Z T_{\rm e} = 1$. A $\delta f$ method reduces the impact of discrete-particle noise on the moments of $f_{\rm i}$ [@pl93; @hk94]. Orbital advection updates the particle positions and magnetic field due to the background shear [@sg10]. The boundary conditions are shearing-periodic: $f(x,y) = f(x \pm L_x , y \mp S L_x t)$. We scan $S = (1,3,10,30) \times 10^{-4}$. These parameters guarantee a healthy scale separation between the grid scale, the ion Larmor radius, the wavelengths of the instabilities, and the box size. In what follows, $\langle \cdot \rangle$ denotes a spatial average over all cells.
#### Firehose instability.
We choose $N_{\rm p} = 1024 N_x N_y$ and set ${\mbox{\boldmath{$B$}}}_0 = (2 {\hat{{\mbox{\boldmath{$x$}}}}}+ 3 {\hat{{\mbox{\boldmath{$y$}}}}}) / \sqrt{13}$, so that $\langle B_{y} \rangle = \langle B_{x} \rangle$ at $St = 1/2$. As $B$ decreases, adiabatic invariance drives $p_\perp / p_{||} < 1$ (Eq. \[eqn:paniso\]), with plasma becoming firehose unstable when $\Lambda_{\rm f} \equiv 1 - p_\perp / p_{||} - 2 / \beta_{||} > 0$. Exponentially growing, Alfvénically polarized ($|\delta {\mbox{\boldmath{$B$}}}_\perp| \gg \delta B_{||}$), oblique modes with growth rate $\gamma \simeq k_{||} \rho_{\rm i} ( \Lambda_{\rm f} / 2 )^{1/2}$ and $k_{||} \rho_{\rm i} \approx k_\perp \rho_{\rm i} \approx 0.4$ then appear (Fig. \[fig:fhs-boxavg\]a; cf. [@ywa93; @hm00]). Fig. \[fig:fhs\] shows their spatial structure. $\Lambda_{\rm f}$ continues to grow, driven by shear ($\Lambda_{\rm f} \sim St$; Fig. \[fig:fhs-boxavg\]b), until the perturbations become large enough to reduce the pressure anisotropy to its marginally stable value ($\Lambda_{\rm f} \rightarrow 0$).
It has been proposed [@sckrh08; @rsrc11] that they do this by canceling the rate of change of the mean field: $(1/2) \, {\rm d} \langle | \delta {\mbox{\boldmath{$B$}}}_\perp |^2 \rangle / {\rm d} t \approx - {\rm d} \ln | \langle {\mbox{\boldmath{$B$}}} \rangle | / {\rm d} t \sim S$, giving rise to secular evolution, $\langle | \delta {\mbox{\boldmath{$B$}}}_\perp |^2 \rangle \sim S t$. Matching $\gamma \sim \Lambda^{1/2}_{\rm f} \sim (St)^{1/2}$ with the rate of growth in the secular phase ($\gamma \sim 1/t$), we find $\langle | \delta {\mbox{\boldmath{$B$}}}_\perp |^2 \rangle \sim St \sim \Lambda_{\rm f} \sim S^{2/3}$ at the transition from linear to nonlinear evolution (cf. [@ss64; @qs96]; “quasi-linear saturation"). This scenario is indeed what we observe: the evolution of $\langle | \delta {\mbox{\boldmath{$B$}}}_\perp |^2 \rangle$ and $\Lambda_{\rm f}$ is shown in Fig. \[fig:fhs-boxavg\]; note $\langle \Lambda_{\rm f} \rangle_{\rm max} \propto S^{2/3}$ (inset in Fig. \[fig:fhs-boxavg\]b). To test the idea [@sckrh08; @rsrc11] that, during the secular phase, the average $B$ seen by particles streaming along the field is constant, we plot in Fig. \[fig:fhs-scattering\] a representative particle’s $\mu$ and $B$ (evaluated at the particle’s position) for $S = 3 \times 10^{-4}$. During the secular phase, the particle nearly conserves $\mu$ and $B \simeq {\rm const}$ along its trajectory, as expected.
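Explicitly, the transition time and the maximum anisotropy quoted above follow from matching the linear growth rate to the secular rate: $$\gamma \sim \Lambda_{\rm f}^{1/2} \sim (St)^{1/2} \sim \frac{1}{t}
\quad\Longrightarrow\quad
t_{\rm transition} \sim S^{-1/3}, \qquad
\langle | \delta {\mbox{\boldmath{$B$}}}_\perp |^2 \rangle \sim \Lambda_{\rm f} \sim S\,t_{\rm transition} \sim S^{2/3}.$$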
![Evolution of firehose instability. (a) Energy in perpendicular magnetic fluctuations, $\langle | \delta {\mbox{\boldmath{$B$}}}_\perp |^2 \rangle$, whose saturated value $\propto$$S^{1/2}$ (inset). (b) Firehose stability parameter, $\langle \Lambda_{\rm f} \rangle$, whose maximum value $\propto$$S^{2/3}$ (inset; see text for explanation).[]{data-label="fig:fhs-boxavg"}](fig1a.eps "fig:"){width="8.5cm"} ![Evolution of firehose instability. (a) Energy in perpendicular magnetic fluctuations, $\langle | \delta {\mbox{\boldmath{$B$}}}_\perp |^2 \rangle$, whose saturated value $\propto$$S^{1/2}$ (inset). (b) Firehose stability parameter, $\langle \Lambda_{\rm f} \rangle$, whose maximum value $\propto$$S^{2/3}$ (inset; see text for explanation).[]{data-label="fig:fhs-boxavg"}](fig1b.eps "fig:"){width="8.5cm"}
![Spatial structure of the firehose instability with $S = 3\times 10^{-4}$. $\delta B_z / B_0$ (color) and magnetic-field lines are shown in the linear ([*left*]{}) and saturated ([*right*]{}) regimes.[]{data-label="fig:fhs"}](fig2.eps){width="8.5cm"}
However, this secular growth is not sustainable: the magnetic fluctuation energy saturates at a low level $\propto$$S^{1/2}$ (inset of Fig. \[fig:fhs-boxavg\]a) in a state of firehose turbulence. During this saturated state, particles scatter off fluctuations with $k_{||} \rho_{\rm i} \sim 1$, $\mu$ conservation is broken, and $B$ decreases at a rate approaching $-{\rm d} \ln | \langle {\mbox{\boldmath{$B$}}} \rangle | / {\rm d} t \sim S$ (Fig. \[fig:fhs-scattering\]). The production of pressure anisotropy is no longer adiabatically tied to the rate of change of the magnetic field and marginality ($\Lambda_{\rm f} \simeq 0$) is maintained independently of $S$ via anomalous particle scattering. We calculate the mean scattering rate $\nu_{\rm scatt}$ by tracking 4096 randomly selected particles, constructing a distribution of times taken by each to change its $\mu$ by a factor of ${\rm e}$, and taking the width of the resulting exponential function to be $\nu^{-1}_{\rm scatt}$. In a collisional, incompressible plasma without heat flows, the pressure anisotropy would be $p_\perp / p_{||} - 1 = (3 / \nu) ( {\rm d} \ln | \langle {\mbox{\boldmath{$B$}}} \rangle | / {\rm d} t )$, where $\nu$ is collision rate [@braginskii65; @rsrc11]. The effective scattering rate needed to maintain $\Lambda_{\rm f} = 0$ at saturation would then be $\nu_{\rm f} \equiv -3 (\beta_{||,{\rm sat}} /2) ( {\rm d} \ln | \langle {\mbox{\boldmath{$B$}}} \rangle | / {\rm d}t )_{\rm sat} \sim S \beta$. Remarkably, we find $\nu_{\rm scatt} \simeq \nu_{\rm f}$ in the saturated state (Fig. \[fig:nuscat\]).
![Evolution of $\mu$ and $B$ for a representative particle in the firehose simulation with $S = 3 \times 10^{-4}$.[]{data-label="fig:fhs-scattering"}](fig3.eps){width="8.5cm"}
![Mean scattering rate $\nu_{\rm scatt}$ for ([*left*]{}) firehose and ([*right*]{}) mirror in the secular (crosses) and saturated (plus signs) phases versus $S \beta_0$. The collision rates required to maintain marginal stability in the saturated phase, $\nu_{\rm f}$ and $\nu_{\rm m}$ respectively, are shown for comparison. See text for definitions.[]{data-label="fig:nuscat"}](fig4.eps){width="8.5cm"}
#### Mirror instability.
We choose $N_{\rm p} = 625 N_x N_y$ and set ${\mbox{\boldmath{$B$}}}_0 = (2 {\hat{{\mbox{\boldmath{$x$}}}}}- {\hat{{\mbox{\boldmath{$y$}}}}}) / \sqrt{5}$, so that $\langle B_{y} \rangle = - \langle B_{x} \rangle$ at $St = 1/2$. As $B$ increases, adiabatic invariance drives $p_\perp / p_{||} > 1$ (Eq. \[eqn:paniso\]), with plasma becoming mirror unstable when $\Lambda_{\rm m} \equiv p_\perp / p_{||} - 1 - 1/\beta_\perp > 0$ [^1]. Near threshold, linearly growing perturbations have $\gamma \sim \Lambda^2_{\rm m}$, $k_{||} \rho_{\rm i} \sim \Lambda_{\rm m}$, and $k_\perp \rho_{\rm i} \sim \Lambda_{\rm m}^{1/2}$ [@hellinger07]—they grow slower than the firehose, are more elongated in the magnetic-field direction, and have $\delta B_{||} \gg | \delta {\mbox{\boldmath{$B$}}}_\perp |$. Fig. \[fig:mrs\] shows their spatial structure.
The saturation scenario is analogous to the firehose: $\Lambda_{\rm m}$ continues growing (Fig. \[fig:mrs-boxavg\]b) until the mirror perturbations are large enough to drive $\Lambda_{\rm m} \rightarrow 0$, at which point the perturbations’ exponential growth gives way to secular evolution with $\langle \delta B^2_{||} \rangle \propto t^{4/3}$ (Fig. \[fig:mrs-boxavg\]a, discussed below). As $\Lambda_{\rm m} \rightarrow 0$, the dominant modes shift to longer wavelengths ($k_{||} \rho_{\rm i} \ll 1$) and become more elongated in the mean-field direction. Excepting the (non-asymptotic) $S = 10^{-3}$ case, this secular phase appears to be universal, lasting until $\delta B / B_0 \sim 1$ at $S t \gtrsim 1$, independently of $S$. The final saturation is caused by particle scattering off sharp ($\delta B / B_0 \sim 1$, $k_{||} \rho_{\rm i} \sim 1$) bends in the magnetic field, which occur at the boundaries of the magnetic mirrors.
As foreseen by [@sk93; @ks96; @pantellini98], trapped particles play a crucial role in the nonlinear evolution. Following [@sckrh08; @rsc14], we expect the pressure anisotropy to be pinned at marginal by an increasing fraction ($\sim$$|\delta B_{||}|^{1/2}$) of particles becoming trapped in magnetic mirrors, thereby sampling regions where the increase of the mean field is compensated by the decrease in the perturbed field, viz. $ - {\rm d} \overline{\delta B_{||}} / {\rm d} t \sim {\rm d} \langle | \delta B_{||} |^{3/2} \rangle / {\rm d} t \sim {\rm d} \ln | \langle {\mbox{\boldmath{$B$}}} \rangle | / {\rm d} t \sim S$, where the overbar denotes averaging along particle trajectories (i.e. bounce-averaging for trapped particles). It follows that $\langle \delta B^2_{||} \rangle \sim ( S t )^{4/3}$, as is indeed seen in Fig. \[fig:mrs-boxavg\]a.
![Evolution of mirror instability versus $S$. (a) Energy in parallel fluctuations of the magnetic field, $\langle \delta B^2_{||} \rangle$. (b) Mirror stability parameter, $\langle \Lambda_{\rm m} \rangle$, whose maximum value $\propto$$S^{1/2}$.[]{data-label="fig:mrs-boxavg"}](fig5a.eps "fig:"){width="8.5cm"} ![Evolution of mirror instability versus $S$. (a) Energy in parallel fluctuations of the magnetic field, $\langle \delta B^2_{||} \rangle$. (b) Mirror stability parameter, $\langle \Lambda_{\rm m} \rangle$, whose maximum value $\propto$$S^{1/2}$.[]{data-label="fig:mrs-boxavg"}](fig5b.eps "fig:"){width="8.5cm"}
![Spatial structure of the mirror instability with $S = 3\times 10^{-4}$. $\delta B_{||} / B_0$ and (last panel) re-scaled $\delta n_{\rm i} / n_{\rm i0}$ are shown (color) with magnetic-field lines in the shearing plane.[]{data-label="fig:mrs"}](fig6a.eps "fig:"){width="8.5cm"} ![Spatial structure of the mirror instability with $S = 3\times 10^{-4}$. $\delta B_{||} / B_0$ and (last panel) re-scaled $\delta n_{\rm i} / n_{\rm i0}$ are shown (color) with magnetic-field lines in the shearing plane.[]{data-label="fig:mrs"}](fig6b.eps "fig:"){width="8.5cm"}
Fig. \[fig:mrs-scattering\] displays $\mu$, $B$, and $v_{||}$ for representative passing and trapped particles in the simulation with $S = 3 \times 10^{-4}$. In the linear phase, both particles conserve $\mu$ very well. During the secular phase ($St \simeq 0.2$–$1.4$), one of the particles becomes trapped and bounces while nearly conserving $\mu$; $B \simeq {\rm const}$ along its path, despite the growing mean field. The other remains passing, with $\overline{\delta B_{||}} \approx 0$. At the end of the secular phase, the trapped particle scatters out of the mirror and becomes passing.
The mean scattering rates $\nu_{\rm scatt}$ are different for the trapped and passing populations. During the secular phase, the trapped particles ($\sim$$70\%$ towards the end of the secular phase [^2]) have $\nu_{\rm scatt} \approx 0.002$ (Fig. \[fig:nuscat\]), while the passing particles have $\nu_{\rm scatt} \approx 0.03$. Excepting the $S = 10^{-3}$ case, these values are independent of $S$, indicating that particle scattering is irrelevant for $St \lesssim 1$ and $S \ll 1$. At saturation ($St \gtrsim 1$), the percentage of trapped particles drops to $\sim$$30\%$ (with $\nu_{\rm scatt} \approx 0.004$) and the total $\nu_{\rm scatt} \simeq \nu_{\rm m}$, where $\nu_{\rm m} \equiv 3 \beta_{\perp,\rm sat} ( {\rm d} \ln | \langle {\mbox{\boldmath{$B$}}} \rangle | / {\rm d} t )_{\rm sat}$ is the collisionality required to maintain $\Lambda_{\rm m} = 0$ at saturation (by the same argument as in the firehose discussion).
![Evolution of $\mu$, $B$, and $v_{||}$ (evaluated at particle position) for representative passing (red) and trapped (blue) particles in the mirror simulation with $S = 3 \times 10^{-4}$.[]{data-label="fig:mrs-scattering"}](fig7.eps){width="8.5cm"}
#### Firehose- and mirror-driven turbulence.
The saturated state of both instabilities is characterized by super-Larmor-scale driving and sub-Larmor-scale fluctuations. Fig. \[fig:spectra\] shows 1D magnetic fluctuation spectra for firehose and mirror at saturation versus $k_{||}$ and $k_\perp$ for $S = 3 \times 10^{-4}$. Energy is injected at successively larger scales as marginality is approached [cf. @qs96; @mlhv06] and several power laws are established. Firehose modes with $k \rho_{\rm i} < 1$ satisfy $| \delta B_{z,k} |^2 \propto k^{-3}$, a spectrum reminiscent of that predicted for parallel-firehose turbulence [@rsrc11]. Mirror modes with $k \rho_{\rm i} < 1$ satisfy $| \delta B_{||,k_{||}} |^2 \propto k^{-11/3}_{||}$. This scaling is obtained by an argument analogous to that proposed in [@rsrc11]: seek a power-law spectrum, $| \delta B_{||,k_{||}} |^2 \sim k^{-\alpha}_{||}$; estimate $\gamma_{\rm peak} \sim \Lambda^2_{\rm m} \sim 1/t$ and $k_{||,{\rm peak}} \sim \Lambda_{\rm m} \sim 1/t^{1/2}$ for the energy-containing mode in the secular phase; recall $\sum_{k_{||}} | \delta B_{||,k_{||}} |^2 \sim (S t)^{4/3}$; and demand that this be consistent with $\sum_{k_{||}} | \delta B_{||,k_{||}}|^2 \sim k_{||,{\rm peak}}^{1-\alpha} \sim t^{-(1-\alpha)/2}$. This procedure yields $\alpha = 11/3$. Finally, the $k$-shell-averaged density fluctuation spectra (Fig. \[fig:spectra\]c) follows $|\delta B_{||}|^2$, as expected for pressure-balanced mirrors.
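The last step of this counting argument can be written out explicitly: $$\sum_{k_{||}} | \delta B_{||,k_{||}} |^2 \sim k_{||,{\rm peak}}^{1-\alpha} \sim \left(t^{-1/2}\right)^{1-\alpha} = t^{(\alpha-1)/2}
\;\sim\; (St)^{4/3}
\quad\Longrightarrow\quad \frac{\alpha-1}{2}=\frac{4}{3},\quad \alpha=\frac{11}{3}.$$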
Both spectra indicate that energy is removed at sub-Larmor scales by what appears to be a turbulent cascade, whose spectral slope and fluctuation polarization ($\delta n_{\rm i} \sim \beta^{-1}\, \delta B_{||}$ [@scdhhqt09; @bhxp13]) approximately match observations of KAW turbulence in gyrokinetic simulations [@htdqsnt11] and the solar wind [@sgbcr10; @aslmmsr09; @cbxp13], as well as of “mirror turbulence" in the magnetosheath [@sbrcpb06]. This marks the first time in a simulation of mirror or firehose turbulence that a KAW cascade has been observed. Nevertheless, we caution that our simulations were performed in 2D; a proper study of this cascade requires 3D geometry [@scdhhqt09; @htdqsnt11; @bp12].
![1D magnetic fluctuation spectra for (a) firehose and (b) mirror versus $k_{||}$ and $k_\perp$, and (c) $k$-shell-averaged density fluctuation spectra for firehose and mirror versus $k$, all in the saturated state ($St = 1$) of the $S = 3 \times 10^{-4}$ simulations.[]{data-label="fig:spectra"}](fig8.eps){width="8.5cm"}
#### Summary.
We have presented numerical simulations of firehose and mirror instabilities driven by a changing magnetic field in a local shear flow. Both instabilities start in the linear regime with exponential growth, a process that is well understood analytically. The theoretical expectation, that after linear saturation the growth becomes secular as the pressure anisotropy is persistently driven [@sckrh08; @rsrc11; @rsc14], is borne out by our simulations. For the firehose, the marginal state is initially achieved via $\mu$-conserving changes in the magnetic field, but is subsequently maintained (independent of $S$) by particle scattering off $k_{||} \rho_{\rm i} \sim 1$ fluctuations. For the mirror, marginal stability is achieved and maintained during the secular phase by particle trapping in magnetic mirrors. Saturation occurs once $\delta B / B_0 \sim 1$ at $St \gtrsim 1$ via particle scattering off the sharp ends of the mirrors. For both instabilities, the mean scattering rate at saturation adjusts to maintain marginal stability, effectively reducing the viscosity to $v^2_{\rm th} / \nu_{\rm scatt} \sim v^2_{\rm A,sat} / S$.
Support for MWK was provided by NASA through Einstein Postdoctoral Fellowship Award Number PF1-120084, issued by the [*Chandra*]{} X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. The Texas Advanced Computer Center at The University of Texas at Austin provided HPC resources under grant numbers TG-AST090105 and TG-AST130002, as did the PICSciE-OIT TIGRESS High Performance Computing Center and Visualization Laboratory at Princeton University. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by NSF grant OCI-1053575. MWK and AAS thank Merton College, Oxford and the Max-Planck/Princeton Center for Plasma Physics for travel support. This work benefitted from conversations with Ian Abel, Chris Chen, Geoffroy Lesur, Greg Hammett, Peter Porazik, Eliot Quataert, Francois Rincon, Prateek Sharma, and especially Steve Cowley.
[^1]: Technically, this instability parameter is for cold electrons [@hellinger07], but it is close enough to reality for simplicity to outweigh precision in our treatment.
[^2]: This is consistent with the fraction of trapped particles being $f_{\rm T} = ( 1 - B_{\rm min} / B_{\rm max} )^{1/2}$. For $B_{\rm max} / B_{\rm min} \simeq 1.8$ near the end of the secular phase (see Fig. \[fig:mrs\]), $f_{\rm T} \simeq 0.66$.
[*Department of Physics, National Technical University of Athens, Zografou Campus, 15780 Athens, Greece*]{}\
kfarakos@central.ntua.gr, metaxas@central.ntua.gr
**Abstract**
We consider the one-loop effective potential at zero and finite temperature in scalar field theories with anisotropic space-time scaling. For $z=2$, there is a symmetry breaking term induced at one loop at zero temperature, and we find symmetry restoration through a first-order phase transition at high temperature. For $z=3$, we first consider the case with a positive mass term at tree level and find no symmetry breaking effects induced at one loop; we then study the case with a negative mass term at tree level, where we cannot draw conclusions about symmetry restoration at high temperature because of the imaginary parts that appear in the effective potential for small values of the scalar field.
Introduction
============
Non-relativistic field theories in the Lifshitz context, with anisotropic scaling between temporal and spatial directions, measured by the dynamical critical exponent, $z$, $$t\rightarrow b^z t,\,\,\,x_i\rightarrow b x_i,$$ have been considered recently since they have an improved ultraviolet behavior and their renormalizability properties are quite different than conventional Lorentz symmetric theories [@visser]–[@iengo]. Various field theoretical models and extensions of gauge field theories at the Lifshitz point have already been considered [@hor3].
When extended in curved space-time, these considerations may provide a renormalizable candidate theory of gravity [@hor1] and applications of these concepts in the gravitational and cosmological context have also been widely investigated [@kk].
We will consider here the case of a single scalar field in flat space-time. The weighted in the units of spatial momenta scaling dimensions are $[t]=-z$ and $[x_i]=-1$, with $z$ the anisotropic scaling, and $i=1,...,D$ the spatial index (here we consider $D=3$). The action with a single scalar field is $$S=\int dt d^Dx \left[ \frac{1}{2} \dot{\phi}^2 -\frac{1}{2}\phi(-\Delta)^z \phi-U_0(\phi)\right],
\label{gen}$$ with $\Delta=\partial_i^2$ and $[\phi]=\frac{D-z}{2}$.
In order to investigate the various implications of a field theory, in particle physics and cosmology, it is particularly important to examine its symmetry structure, both at the classical and the quantum level, at zero and finite temperature, via the effective action and effective potential [@col]– [@dolan]. We should note that, in order to get information on possible instabilities of the theory, we study the one-loop, perturbative effective potential, given by the one-particle irreducible diagrams of the theory, and not the full, non-perturbative, convex effective potential given by the so-called Maxwell construction [@wett2].
In a recent work [@kim1], the effective potential for a scalar theory was considered for the case of $z=2$ and it was shown that, at one loop order, there is a symmetry breaking term induced quantum mechanically; also the finite temperature effective potential was studied at one loop, and it was argued that there is no symmetry restoration at high temperature.
We study the theory with $z=2$ in Sec. 2 and find at zero temperature a symmetry breaking term at one loop that agrees with the results of [@kim1]. However, we have also studied the finite temperature effective potential both analytically and numerically, and have found the interesting result of symmetry restoration at high temperature through a first-order phase transition. In view of the importance of symmetry breaking phenomena throughout field theory and cosmology, we have also studied the situation for the case of $z=3$ in Sec. 3: in the case of a positive or zero mass term in the tree level we found no symmetry breaking terms induced at one loop. In the case of a negative mass term at the tree level we calculated the full effective potential at high temperature and found no symmetry restoration effects induced at one loop because of the imaginary parts that appear in the effective potential for small values of the scalar field.
Effective potential for $z=2$ at zero and finite temperature
============================================================
We consider the action (\[gen\]) with $z=2$, $$S=\int dt d^3x \left[ \frac{1}{2} \dot{\phi}^2 -\frac{1}{2}(\partial_i^2\phi)^2-U_0(\phi)\right].$$ Here we have $[\phi]=1/2$ and $U_0(\phi)$, the tree-level potential, is a polynomial up to the weighted marginal power of $\phi$ (here the tenth). The one-loop contribution to the effective potential, $$U_1=\frac{1}{2}\int\frac{d^4k}{(2\pi)^4}\ln (k_0^2+k_i^4+U_0'')
=\frac{1}{4\pi^2}\int k^2 dk \sqrt{k^4+U_0''}$$ (where, in the last equation, $k^2=k_i^2$) can be evaluated with a cutoff $\Lambda$ in the spatial momentum via differentiation with respect to $y=U_0''$ (primes denote differentiation with respect to $\phi$). We get $$\frac{d^2 U_1}{dy^2}=-\frac{1}{16\pi^2}\frac{1}{y^{3/4}}\int_0^{\infty}dx\frac{x^2}{(x^4+1)^{3/2}}$$ and, using the boundary conditions $$\frac{d U_1}{dy}(y=0)=\frac{\Lambda}{8\pi^2},\,\,\,U_1(y=0)=\frac{\Lambda^5}{20\pi^2},$$ we get $$U_1(\phi)=\frac{1}{8\pi^2}U_0'' \,\Lambda \, - \, c (U_0'')^{5/4},$$ where $c= \frac{1}{5\pi^2} \int_0^{\infty}dx\frac{x^2}{(x^4+1)^{3/2}} =\Gamma(3/4)^2/10\pi^{5/2}$. The first term, which is linearly divergent, can be renormalized with appropriate counterterms in the potential, and the second term, which is generally negative, can lead to a non-zero minimum, even if the original potential had a unique minimum at $\phi=0$.
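The constant $c$ is easy to check numerically. The snippet below is a verification sketch of ours (not part of the original calculation); it evaluates the integral with `scipy` and compares it with the closed forms quoted above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# I = int_0^inf x^2 / (x^4 + 1)^(3/2) dx = Gamma(3/4)^2 / (2 sqrt(pi)) ~ 0.4236
I, _ = quad(lambda x: x**2 / (x**4 + 1)**1.5, 0.0, np.inf)
print(I, gamma(0.75)**2 / (2.0 * np.sqrt(np.pi)))

# c = I / (5 pi^2) = Gamma(3/4)^2 / (10 pi^(5/2)) ~ 0.0086
print(I / (5.0 * np.pi**2), gamma(0.75)**2 / (10.0 * np.pi**2.5))
```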
We consider here the case of a massless theory, with a single relevant operator, $U_0(\phi)=\frac{\lambda}{4!}\phi^4$, and add the counterterms $\frac{1}{2}A\phi^2+\frac{1}{4!}B\phi^4$. The condition $U''(0)=0$ eliminates the quadratic terms and, because of the infrared divergence, the condition at a non-zero $\phi=\alpha$, $U''''(\alpha)=\lambda$, has been imposed. Since $[\lambda]=3$ and $[\phi]=1/2$, we write $\alpha^2 =\mu$ and $\lambda=\tilde{\lambda}\mu^3$, in terms of an overall mass scale $\mu$.
The full effective potential after renormalization is $$U(\phi)=\frac{\lambda}{4!}\left(1-\frac{15c\tilde{\lambda}^{1/4}}{2^5\cdot2^{1/4}}\right)\phi^4 \,-\,c \left(\frac{\lambda}{2}\phi^2\right)^{5/4}.
\label{res1}$$
The full effective potential now has a non-zero minimum, and a mass term will be generated after expansion around this minimum, but it should be noted that the situation is not entirely analogous to the usual Coleman-Weinberg mechanism, since the tree-level potential has a dimensionful parameter already. The situation is similar if other relevant operators with dimensionful couplings are considered ($\phi^6$ and $\phi^8$) but not if only the marginal $\phi^{10}$ operator, with a dimensionless coupling is considered in the tree-level potential. These results agree with the corresponding conclusions from [@kim1]. We now proceed to the calculation of the finite temperature effects and show that when the appropriate corrections to the effective potential are taken into account, there appear to exist symmetry restoration effects at high temperature, and indeed with a first-order phase transition.
The one-loop effective potential at finite temperature is [@dolan] $$U_{1T}=\frac{1}{2\beta}\sum_n \int\frac{d^3k}{(2\pi)^3}\ln
\left(\frac{4\pi^2n^2}{\beta^2}+E^2\right),$$ where $\beta = 1/T$ is the inverse temperature, $E^2=k^4+U_0''$ and the sum is over non-negative integers, $n$. Using $$\sum_n\ln\left(\frac{4\pi^2n^2}{\beta^2}+E^2\right)=
2\beta\left[\frac{E}{2}+\frac{1}{\beta}\ln(1-e^{-\beta E})\right],$$ the total effective potential can be written as $U_{1T}=U_1 + U_T$, where $U_1$ is the zero temperature contribution analyzed before and the temperature-dependent part is $$U_T(\phi)=T \int\frac{d^3 k}{(2\pi)^3} \ln \left( 1- e^{-\beta\sqrt{k^4+U_0''}} \right)$$ and is manifestly real for all values of $\phi$. The total effective potential can be plotted for various temperatures (at fixed $\lambda$) and the results indicate symmetry restoration at high temperature with a first-order phase transition.
As an example, the full potential is plotted in Fig. 1 for various temperatures. All quantities are rescaled in terms of appropriate powers of the dimensionful constant $\mu$ introduced before: the potential in units of $\mu^5$, the temperature in units of $\mu^2$ and $\phi$ in units of $\mu^{1/2}$. The results are shown for temperatures $\tilde{T}=0.008, 0.01, 0.012$ where $T=\tilde{T}\mu^2$ (we took $\tilde{\lambda}=0.1$). Also a constant, temperature-dependent term discussed below is not shown.
![The exact expression for the effective potential as a function of $\phi$ for the theory with $z=2$, at one loop at zero and finite temperature. The potential is plotted in units of $\mu^5$ and $\phi$ in units of $\mu^{1/2}$. The temperature is in units of $\mu^2$; the lowest curve is the potential at zero temperature and the other three curves are at increasing temperatures $\tilde{T}=0.008, 0.01, 0.012$, where $T=\tilde{T}\mu^2$ (we took $\tilde{\lambda}=0.1$).](f1.eps)
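Figure 1 can be reproduced qualitatively with a few lines of numerical integration. The script below is our own illustrative sketch (all quantities in units of the appropriate powers of $\mu$, as in the figure): it evaluates Eq. (\[res1\]) plus the exact thermal integral $U_T$ and reports the location of the minimum of the total potential.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

c = gamma(0.75)**2 / (10.0 * np.pi**2.5)
lam = 0.1                                       # lambda-tilde, with mu = 1

def U_zero_T(phi):
    """Renormalized one-loop potential of Eq. (res1) for the massless phi^4 theory, z = 2."""
    return (lam / 24.0) * (1.0 - 15.0 * c * lam**0.25 / (32.0 * 2**0.25)) * phi**4 \
           - c * (0.5 * lam * phi**2)**1.25

def U_thermal(phi, T):
    """Exact thermal integral T/(2 pi^2) int k^2 ln(1 - exp(-sqrt(k^4 + U0'')/T)) dk;
    its phi-independent (black-body) part only shifts each curve vertically."""
    U0pp = 0.5 * lam * phi**2                   # U_0''(phi)
    integrand = lambda k: k**2 * np.log1p(-np.exp(-np.sqrt(k**4 + U0pp) / T))
    val, _ = quad(integrand, 0.0, 50.0 * np.sqrt(T))   # integrand negligible beyond k ~ sqrt(T)
    return T * val / (2.0 * np.pi**2)

phi = np.linspace(1e-3, 0.6, 120)
for T in (0.0, 0.008, 0.01, 0.012):
    U = [U_zero_T(p) + (U_thermal(p, T) if T > 0 else 0.0) for p in phi]
    print(T, phi[int(np.argmin(U))])            # field value at the minimum of the potential
```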
In [@kim1] it was argued that no such phase transition takes place. However, the conclusions of [@kim1] were based on the examination of the second derivative of the potential at the origin. This is not an appropriate test in this case because of the non-analytical terms that appear in the potential in the finite-temperature limit. One can see which non-analytical terms may arise by considering an analytical approximation, with an infrared momentum cutoff, for the effective potential at finite temperature: in order to obtain this approximation, we can expand the logarithm in the previous equation and write $$U_T(\phi)=-\frac{T^{5/2}}{2\pi^2}\int dx\,x^2 \sum_n \frac{1}{n} e^{-n\sqrt{x^4+\beta^2 U_0''}}.
\label{app1}$$ Now, in the exponent in the previous expression, it turns out to be a good approximation to use $(x^4 +a^2)^{1/2}\approx x^2 + \frac{a^2}{2 x^2}$ (where $a^2 =\beta^2 U_0''$). The integrand in (\[app1\]) is maximum for values of $x\approx 1$ (for small values of $n$) so our approximation to consider the main contribution for $x^2>a$ is valid since we are interested in the high-temperature regime $a<<1$. It is definitely a good approximation for large $x$, and, since for small $x$ the exponential goes to zero, it is equivalent to an effective infrared momentum cutoff. Then the sum can be done with the help of the elementary integral $\int dx e^{-c_1 x^2- \frac{c_2}{x^2}}=\frac{1}{2}\sqrt{\frac{\pi}{c_1}}e^{-2\sqrt{c_1 c_2}} $. We get $$U_T=-\frac{T^{5/2}}{8\pi^{3/2}}\sum_n\left(\frac{1}{n^{5/2}}
+\frac{\sqrt{2}a}{n^{3/2}}\right)
e^{-\sqrt{2} an}.$$ The sums can be expressed in terms of the polylog function $P_\nu(w)=\sum_n \frac{1}{n^\nu} w^n$, with $w=e^{-\sqrt{2} a}$. In the high-temperature regime, where $a<<1$, one can expand around $w=1$ using the Taylor expansions $$P_{5/2}(w)=\zeta(5/2) + \zeta(3/2)(w-1) +\frac{4}{3}i\sqrt{\pi}(w-1)^{3/2}+ O(w-1)^2,$$ $$P_{3/2}(w)=\zeta(3/2) +2i\sqrt{\pi}(w-1)^{1/2}+ \zeta(1/2)(w-1) + O(w-1)^{3/2},$$ The leading terms of the final result at high temperature are $$U_T(\phi)= -\frac{\zeta(5/2)}{8 \pi^{3/2}}\, T^{5/2} + \frac{2^{3/4}}{12\pi} T\,(U_0'')^{3/4},
\label{res2}$$ which shows, besides a constant term, a second positive term, linear in the temperature, which will dominate, in the high-temperature limit, the previous, symmetry-breaking, zero-temperature terms. The first, constant term is exact and is not shown in the figures. It corresponds to the black-body radiation term that is proportional to $T^4$ in the usual Lorentz-invariant case with $z=1$. The second, non-analytical term was not found in [@kim1], where the condition with the second derivative of the potential at the origin was used in order to determine the critical temperature. It is clear from the above form of the potential that, since the extra term is non-analytical, and of the form $\phi^{3/2}$ for the interaction $\lambda \phi^4$, this condition cannot be used reliably in this case.
In Fig. 2 we show the full effective potential in the analytical approximation with the infrared cutoff, with the same parameters as in Fig. 1, and we see a flattening of the potential compared with the exact evaluation, the general features of the phase transition, however, remain the same. In fact, even the full quantitative features of the phase transition may be more appropriately investigated using a coarse-grained potential with an infrared cutoff of the form used here [@wett]. It is clear from these results that one has the interesting phenomenon of symmetry restoration at high temperature, with a potential term that indicates a first-order phase transition.
![Same as Fig. 1, with the same parameters, using the analytical approximation with the infrared cutoff of Eq. (\[res2\])](f2.eps)
Effective potential for $z=3$
=============================
In view of the previous results it would be interesting to investigate similar effects in the case with anisotropic scaling $z=3$; unfortunately we find no indication of symmetry-breaking terms in one-loop order. The action (\[gen\]) with $z=3$ is $$S=\int dt d^3x \left[ \frac{1}{2} \dot{\phi}^2 -\frac{1}{2}(\partial_i \nabla^2\phi)^2-U_0(\phi)\right]$$ where, now, $[\phi]=0$ and the potential, $U_0$, can be an arbitrary function of $\phi$. Taking, for definitiveness, $$U_0(\phi) =\frac{1}{2} m^2 \phi^2 + \frac{1}{4!}\lambda\phi^4,$$ with $[\lambda]=[m^2]=6$, one gets for the one-loop contribution to the effective potential, with the momentum cut-off, $\Lambda$, $$U_1(\phi) = \frac{1}{12 \pi^2} \left[ \frac{\Lambda\sqrt{\Lambda^2+ U_0''}}{2}
+\frac{U_0''}{2}\ln (\Lambda +\sqrt{\Lambda^2+U_0''})
-\frac{U_0''}{2}\ln (\sqrt{U_0''}) \right],$$ which, for large $\Lambda$, becomes $$U_1(\phi)=\frac{1}{24\pi^2}\left[ U_0'' \ln(2\Lambda)-\frac{U_0''}{2}\ln U_0''\right].$$
In the case of $m^2>0$, where there is no symmetry breaking at tree level, we add the counterterms $\frac{1}{2}A\phi^2+\frac{1}{4!}B\phi^4$ and impose the renormalization conditions $U''(0)=m^2$ and $U''''(0)=\lambda$, to get $$U(\phi) = \frac{1}{2} m^2\left(1+\frac{1}{48\pi^2}\frac{\lambda}{m^2}\right) \phi^2 +
\frac{\lambda}{4!}(1+\frac{3}{48\pi^2}\frac{\lambda}{m^2}) \phi^4
-\frac{1}{48\pi^2} U_0''(\phi)
\ln\frac{U_0''(\phi)}{m^2}.
\label{res3}$$ One may take the $m^2\rightarrow 0$ limit in this expression, remembering that the dimensionless coupling constant, which is $\lambda / m^2$, is to be kept fixed and small. Doing that, one can easily see that the resulting potential is everywhere positive, without signs of any symmetry-breaking effects (the same conclusion holds for all positive values of $m^2$).
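This statement is easy to verify by tabulating Eq. (\[res3\]) numerically. The snippet below is an illustrative check of ours, in arbitrary units, for a few values of the dimensionless coupling $\lambda/m^2$.

```python
import numpy as np

def U_z3(phi, m2, lam):
    """One-loop potential of Eq. (res3) for z = 3 with a positive tree-level mass term."""
    U0pp = m2 + 0.5 * lam * phi**2
    return 0.5 * m2 * (1.0 + lam / (48.0 * np.pi**2 * m2)) * phi**2 \
         + (lam / 24.0) * (1.0 + 3.0 * lam / (48.0 * np.pi**2 * m2)) * phi**4 \
         - U0pp * np.log(U0pp / m2) / (48.0 * np.pi**2)

phi = np.linspace(0.0, 5.0, 501)
for coupling in (0.1, 1.0, 10.0):                  # lambda / m^2
    U = U_z3(phi, m2=1.0, lam=coupling)
    print(coupling, bool(U.min() >= 0.0))          # True: no symmetry-breaking minimum develops
```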
In the case of $m^2<0$, where there is symmetry breaking at tree level, after adding the same counterterms, we can take renormalization conditions at the minimum $\phi=\sigma$, with $\sigma^2=-6m^2/\lambda$. Imposing the conditions $U'(\sigma)=0$, $U''(\sigma)=-2m^2$, which preserve the tree level mass and minimum, we get $$U(\phi) = \frac{1}{2} m^2\left(1-\frac{1}{96\pi^2}\frac{\lambda}{m^2}\right) \phi^2 +
\frac{\lambda}{4!}(1-\frac{1}{32\pi^2}\frac{\lambda}{m^2}) \phi^4
-\frac{1}{48\pi^2} U_0''(\phi)
\ln\frac{U_0''(\phi)}{U_0''(\sigma)}.
\label{res4}$$
Now the finite temperature effective potential, $$U_T(\phi)=T \int\frac{d^3 k}{(2\pi)^3} \ln \left( 1- e^{-\beta\sqrt{k^6+U_0''}} \right),$$ can be calculated via $$\frac{\partial U_T}{\partial a^2} =\frac{T^2}{12 \pi^2}\int dx
\frac{1}{\sqrt{x^2+a^2}}\frac{1}{e^{\sqrt{x^2+a^2}}-1},$$ where $a^2=\beta^2 U_0''$, and expanded in the high-temperature limit using formulas from [@dolan]; the temperature-dependent part of the potential is $$U_T(\phi)=-\frac{T^2}{36}+\frac{T}{12\pi} \sqrt{U_0''(\phi)} +
\frac{U_0''(\phi)}{48\pi^2}\ln \frac{U_0''(\phi)}{c_B T^2}
-\frac{\zeta(3)}{384\pi^4}\frac{U_0''(\phi)^2}{T^2}+\cdots,$$ where the first term is the $\phi$-independent black-body radiation term, $\ln c_B =1-2\gamma+2\ln 4\pi$, $\gamma=0.577...$, and subsequent terms are of higher order in $U_0''(\phi)/T^2$. Only the second term in this expansion can lead to symmetry restoration at high temperature; we see, however, that, for negative $m^2$, one cannot draw any such conclusion because of the imaginary parts that appear in the expression for the effective potential for values of $\phi$ near zero.
Comments
========
In this work we studied the symmetry breaking effects of the one-loop effective potential at zero temperature for theories of the Lifshitz type with a single scalar field with anisotropic scaling $z=2$ and $z=3$, and the possible symmetry restoration effects at high temperature.
In the case of $z=2$ we found symmetry breaking terms induced at one loop at zero temperature (in agreement with a previous work [@kim1]) and we studied the effective potential at finite temperature at one loop, both numerically and analytically through an approximation that is equivalent to imposing an infrared cutoff, and may be useful for future studies of applications of these theories in other field theoretical or cosmological contexts. We found the interesting effects of symmetry restoration at high temperature, through an apparently first-order phase transition.
Because of the importance of symmetry breaking and restoration phenomena in quantum field theory and cosmology, we also studied the case of scalar field theory with $z=3$, but found no similar effects: in the case of a positive or zero mass term in the potential at tree level, we calculated the one-loop contribution to the effective potential and found no symmetry breaking terms induced at this level. In the case of a negative mass term in the potential (with symmetry breaking at tree level), we calculated the full effective potential at high temperature and found that no conclusion about symmetry restoration at high temperature can be drawn, because of the imaginary parts that appear in the expression for the effective potential for values of the field near the origin.
In view of the above results it would be interesting to include gauge fields and consider the symmetry breaking and restoration effects including gauge field interactions in a future investigation.
**Acknowledgements**
We would like to thank Jean Alexandre for several discussions and for bringing into our attention the results of [@kim1].
[99]{} M. Visser, [*Phys. Rev.*]{} [**D80**]{}, 025011 (2009). B. Chen and Q. G. Huang, [*Phys. Lett.*]{} [**B683**]{}, 108 (2010). D. Anselmi and M. Halat, [*Phys. Rev.*]{} [**D76**]{}, 125011 (2007).
D. Anselmi, [*Annals Phys.*]{} [**324**]{}, 874 (2009). R. Iengo, J. G. Russo and M. Serone, [*JHEP*]{} [**0911**]{}, 020 (2009).
P. Horava, [*Phys. Lett.*]{} [**B694**]{}, 172 (2010).
R. Dijkgraaf, D. Orlando and S. Reffert, [*Nucl. Phys.*]{} [**B824**]{}, 365 (2010).
J. Alexandre and A. Vergou, [*Phys. Rev.*]{} [**D83**]{}, 125008 (2011).
J. Alexandre and N. E. Mavromatos, [*Phys. Rev.*]{} [**D83**]{}, 127703 (2011).
A. Dhar, G. Mandal and S. R. Wadia, [*Phys. Rev.*]{} [**D80**]{}, 105018 (2009).
J. Alexandre, K. Farakos, P. Pasipoularides and A. Tsapalis, [*Phys. Rev.*]{} [**D81**]{}, 045002 (2010).
J. Alexandre, K. Farakos and A. Tsapalis, [*Phys. Rev.*]{} [**D81**]{}, 105029 (2010).
J. E. Thompson and R. R. Volkas, [*Phys. Rev.*]{} [**D82**]{}, 116007 (2010).
K. Anagnostopoulos, K. Farakos, P. Pasipoularides and A. Tsapalis, arXiv:1007.0355 \[hep-th\].
P. Horava, [*JHEP*]{} [**0903**]{}, 020 (2009).
P. Horava, [*Phys. Rev.*]{} [**D79**]{}, 084008 (2009).
P. Horava and C. M. Melby-Thompson, [*Phys. Rev.*]{} [**D82**]{}, 064027 (2010).
E. Kiritsis and G. Kofinas, [*Nucl. Phys.*]{} [**B821**]{}, 467 (2009).
S. Mukohyama, [*JCAP*]{}, [**0906**]{}, 001 (2009).
R. Brandenberger, [*Phys. Rev.*]{} [**D80**]{}, 043516 (2009).
R. G. Cai, L. M. Cao and N. Ohta, [*Phys. Rev.*]{} [**D80**]{}, 024003 (2009).
S. Mukohyama, K. Nakayama, F. Takahashi and S. Yokoyama, [*Phys. Lett.*]{} [**B679**]{}, 6 (2009).
A. Kehagias and K. Sfetsos, [*Phys. Lett.*]{} [**B678**]{}, 123 (2009).
C. Charmousis, G. Niz, A. Padilla and P. M. Saffin, [*JHEP*]{} [**0908**]{}, 070 (2009).
G. Koutsoumbas and P. Pasipoularides, [*Phys. Rev.*]{} [**D82**]{}, 044046 (2010).
M. Eune and W. Kim, [*Mod. Phys. Lett.*]{} [**A25**]{}, 2923 (2010).
D. Orlando and S. Reffert, [*Class. Quant. Grav.*]{} [**26**]{}, 155021 (2009).
M. Jamil, E. N. Saridakis and M. R. Setare, [*JCAP*]{} [**1011**]{}, 032 (2010).
G. Koutsoumbas, E. Papantonopoulos, P. Pasipoularides and M. Tsoukalas, [*Phys. Rev.*]{} [**D81**]{}, 124014 (2010).
C. Soo, J. Yang and H. L. Yu, [*Phys. Lett.*]{} [**B701**]{}, 275 (2011).
J. Alexandre and P. Pasipoularides, [*Phys. Rev.*]{} [**D83**]{}, 084030 (2011).
S. R. Coleman and E. J. Weinberg, [*Phys. Rev.*]{} [**D7**]{}, 1888 (1973). R. H. Brandenberger, [*Rev. Mod. Phys.*]{} [**57**]{}, 1 (1985). L. Dolan and R. Jackiw, [*Phys. Rev.*]{} [**D9**]{}, 3320 (1974). C. Wetterich, [*Nucl. Phys.*]{} [**B352**]{}, 529 (1991).
J. Alexandre, arXiv:0909.0934 \[hep-ph\].
M. Eune, W. Kim and E. J. Son, [*Phy. Lett.*]{} [**B703**]{}, 100 (2011). J. Berges, N. Tetradis and C. Wetterich, [*Phys. Rep.*]{} [**363**]{}, 223 (2002).
---
abstract: 'In this paper, we use the approximation of shallow water waves (Margaritondo 2005 [*Eur. J. Phys.*]{} [**26**]{} 401) to understand the behavior of a tsunami in a variable depth. We deduce the shallow water wave equation and the continuity equation that must be satisfied when a wave encounters a discontinuity in the sea depth. A short explanation about how the tsunami hit the west coast of India is given based on the refraction phenomenon. Our procedure also includes a simple numerical calculation suitable for undergraduate students in physics and engineering.'
address:
- 'Instituto de Física da Universidade de São Paulo, C.P. 66318, CEP 05315-970, São Paulo, Brazil'
- 'Universidade Estadual Paulista, CEP 18409-010, Itapeva/SP, Brazil'
author:
- O Helene
- M T Yamashita
title: Understanding the tsunami with a simple model
---
Introduction
============
Tsunamis are water waves with long wavelengths that can be triggered by submarine earthquakes, landslides, volcanic eruptions and large asteroid impacts. These non-dispersive waves can travel for thousands of kilometers from the disturbance area where they were created, with a minimum loss of energy. Like any wave, tsunamis can be reflected, transmitted, refracted and diffracted.
The physics of a tsunami can be very complex, especially if we consider its creation and behavior next to the beach, where it can break. However, since tsunamis are composed of waves with very large wavelengths, sometimes greater than 100 km, they can be considered as shallow waves, even in oceans with depths of a few kilometers.
The shallow water approximation simplifies considerably the problem and still allows us to understand a lot of the physics of a tsunami. Using such approximation, Margaritondo [@MaEJP05] deduced the dispersion relation of tsunami waves extending a model developed by Behroozi and Podolefsky [@BeEJP01]. Since energy losses due to viscosity and friction at the bottom [@CrAJP87] can be neglected in the case of shallow waves, Margaritondo, considering energy conservation, explained the increase of the wave height when a tsunami approaches the coast, where the depth of the sea and the wave velocity are both reduced.
In this paper we use one of the results of Ref. [@MaEJP05] in order to deduce the wave equation and to include the variation of the seabed. Thus, we are able to explain the increase of the wave amplitude when the wave passes from deeper to shallower water. We also discuss the refraction of tsunami waves. This phenomenon allowed the tsunami of December 26, 2004, created at the Bay of Bengal, to hit the west coast of India (a detailed description is given in [@LaSc05]). Both of these ingredients - the seabed topography and the wave refraction - were pointed out by Chu [@chu] as necessary to understand some other phenomena observed in tsunamis.
This paper is organized as follows. The wave equation and the water flux conservation are used in section 2 in order to explain how, and by how much, a shallow-water wave grows when passing from deeper to shallower water. In section 3, we extend the results obtained in section 2 to study how a wave packet propagates in a water tank where the depth varies; in this section we use some numerical procedures that can be extended to the study of any wave propagating in a non-homogeneous medium. The refraction of the 2004 tsunami in the south of India is also discussed in section 3. The shallow-water wave and continuity equations are deduced in appendix A.
Reflection and transmission of waves in one dimension
=====================================================
Consider a perturbation on the water surface in a rectangular tank with a constant depth. In the limit of large wavelengths and a small amplitude compared with the depth of the tank, the wave equation can be simplified to (see Appendix A) $$\frac{\partial^2y}{\partial t^2}=gh\frac{\partial^2y}{\partial x^2},
\label{wave}$$ where $y(x,t)$ is the vertical displacement of the water surface at a time $t$, propagating in the $x$ direction, $g$ is the gravity acceleration and $h$ is the water depth.
Equation (\[wave\]) is the most common one-dimensional wave equation. It is a second-order linear partial differential equation and, since $g$ and $h$ are constants, any function $y=f(x\pm vt)$ is a solution ($v=\sqrt{gh}$ is the wave velocity). An interesting aspect of eq. (\[wave\]) is that a propagating pulse does not disperse, because all wavelengths travel with the same velocity, and thus it preserves its shape. Light in vacuum (and, to a very good approximation, in air) is non-dispersive. Sound waves in air are also nearly non-dispersive. (If dispersion were important in the propagation of sound in air, a sound would be heard differently at different positions, i.e., music and conversation would be impossible.)
However, the velocity of a shallow-water wave varies with the depth. Thus, shallow-water waves are dispersive when the sea depth is not uniform.
In order to study the evolution of a tsunami in a rectangular box with variable depth, which will be detailed in the next section, we approximate the irregular depth by successive steps. So, in the next paragraphs we will explain the treatment used when a wave encounters a discontinuity.
Every time the tsunami encounters a step, part of it is transmitted and part is reflected. Consider, then, a wave of amplitude $y=\cos(kx-\omega t)$, where $k$ and $\omega$ are, respectively, the wave number and the frequency, incident on a region where the depth of the water, and hence the wave velocity, has a discontinuity, as represented in Fig. \[fig1\].
On the left-side of the discontinuity the perturbation is given by $$y_1(x,t)=\cos(kx-\omega t)+R\cos(kx+\omega t+\varphi_1),
\label{left}$$ where $R\cos(kx+\omega t+\varphi_1)$ corresponds to the reflected wave and $\varphi_1$ is a phase to be determined by the boundary conditions. On the right-side of the discontinuity the wave amplitude is given by $$y_2(x,t)=T\cos(k^\prime x-\omega t+\varphi_2),
\label{right}$$ corresponding to the transmitted wave part. The wave numbers for $x<0$ and $x>0$ are, respectively, $$k=\frac{\omega}{v}
\label{k}$$ and $$k^\prime=\frac{\omega}{v^\prime},
\label{kprime}$$ where $v$ and $v^\prime$ are the velocities of the wavepacket at the left and right sides of the discontinuity.
In order to determine $R$ and $T$ we must impose the boundary conditions at $x=0$. For any instant, the wave should be continuous at $x=0$: $\cos\omega t+R\cos(\omega t+\varphi_1)=T\cos(-\omega t+\varphi_2)$. The same should happen with the flux, $f(x,t)$, given by (see eq. (\[flux1\])) $$f(x,t)=h\frac{\partial z(x,t)}{\partial t},$$ where $z(x,t)$ is the horizontal displacement of a transversal section of water (see equation (\[mcons\]) for the relation between $z$ and $y$).
Imposing the boundary conditions $y_1(0,t)=y_2(0,t)$ and $f_1(0,t)=f_2(0,t)$ we can deduce $\sin\varphi_1=\sin\varphi_2=0$. Then choosing $\varphi_1=\varphi_2=0$ we obtain $$R=\frac{k^\prime-k}{k+k^\prime}=\frac{v-v^\prime}{v+v^\prime}
\label{R}$$ and $$T=\frac{2k^\prime}{k+k^\prime}=\frac{2v}{v+v^\prime}.
\label{T}$$
It is worthwhile to mention here that other choices of $\varphi_1$ and $\varphi_2$ will change the signs of $R$ and $T$. However, in that case, the phases of the reflected and transmitted waves also change. The two modifications compensate each other, and the shape of the wave remains unchanged relative to the choice $\varphi_1=\varphi_2=0$.
Reflection and transmission are very important effects in wave propagation: every time a traveling wave (light, water waves, pulses in strings, etc.) encounters a discontinuity in the medium where it propagates, reflection and transmission occur. Since there are no energy losses, the energy flux is conserved. The energy of a wave is proportional to the square of its amplitude [@MaEJP05]. Thus, the energy flux is proportional to the squared amplitude times the wave velocity. The energy flux of the incident wave at $x=0$ is given by $$\phi_{inc}=v$$ (the amplitude of the incident wave was chosen as 1).
The reflected and transmitted energy flux are given by $$\phi_{refl}=R^2v$$ and $$\phi_{trans}=T^2v^\prime,$$ respectively.
Using eqs. (\[R\]) and (\[T\]) it is immediate to show that $$\phi_{inc}=\phi_{refl}+\phi_{trans}.
\label{angrel}$$ It is worthwhile to mention here that eqs. (\[R\]), (\[T\]) and (\[angrel\]) are classical textbook results on waves at interfaces.
Fig. \[fig1\] shows the evolution of a wavepacket in three distinct situations: before reaching the step, while passing the step and after passing the step. On the left side of $x=0$ the depth is 2000 m and on the right side it is 10 m. We can note a growth of the wavepacket amplitude after it passes the step, due to the velocity variation: in this case $k^\prime/k=14$ and, therefore, $T=1.9$.
Using energy conservation, Margaritondo deduced that the wave amplitude, when going from a sea depth $h_1$ to $h_2$, increases by the factor $(h_1/h_2)^{1/4}$ [@MaEJP05]. According to this result, the amplitude in our example should grow by a factor of 3.8. However, the growth observed was 1.9. The difference is due to the fact that when a wave packet encounters a discrete velocity step, part of the energy is reflected (the reader can verify that eq. (\[angrel\]) is satisfied). As will be shown in the next section, when the sea depth varies smoothly, the reflected wave can be neglected, and our result becomes equal to Margaritondo’s result.
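As a quick numerical check of these statements (the short script below is our own illustration, not part of the original calculation), one can evaluate eqs. (\[R\]), (\[T\]) and (\[angrel\]) for the step of Fig. \[fig1\], with depths of 2000 m and 10 m:

```python
# Check R, T and energy-flux conservation for a single depth step (2000 m -> 10 m).
from math import sqrt

g = 9.8
h1, h2 = 2000.0, 10.0                     # depths on the left and right of the step (m)
v1, v2 = sqrt(g * h1), sqrt(g * h2)       # shallow-water velocities v = sqrt(g h)

R = (v1 - v2) / (v1 + v2)                 # reflected amplitude, eq. (R)
T = 2 * v1 / (v1 + v2)                    # transmitted amplitude, eq. (T)

phi_inc, phi_refl, phi_trans = v1, R**2 * v1, T**2 * v2
print(round(T, 2))                                         # ~1.87, the growth quoted above
print(round(phi_inc, 1), round(phi_refl + phi_trans, 1))   # equal fluxes, eq. (angrel)
print(round((h1 / h2) ** 0.25, 2))                         # ~3.76, the smooth-depth factor
```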
Waves in a variable depth
=========================
In order to study the evolution of a tsunami when it propagates in a rectangular box where the depth varies, we initially set up a wavepacket propagating in the direction of increasing $x$. The variable depth was approximated by a succession of steps taken as narrow as we wish.
The evolution of the wave packet was calculated as follows:
- At time $t$, the wave packet amplitude $y(x,t)$ was divided into $n$ small discrete transversal sections of length $\Delta x$ (in our case $\Delta x$ = 50000 m).
- Every small part of the wave packet $y(x,t)$ was investigated: (i) if in a time interval $\Delta t$ it stays in the same velocity step, then the wave packet at $t+\Delta t$ was simply increased by $y(x,t)$ at the position $x+v(x)\Delta t$; (ii) if in a time interval $\Delta t$ (we chose $\Delta t$ = 30 s) it encounters a velocity step, part is reflected and part is transmitted. The reflected and transmitted parts were calculated from eqs. (\[R\]) and (\[T\]) and added to the wave packet at $t+\Delta t$ propagating to the left or to the right, respectively. The step width and the time interval $\Delta t$ were chosen such that the reflected and transmitted parts never encounter a second step within the same time interval. A minimal computational sketch of this update rule is given after this list.
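The sketch below is our own illustrative implementation of the scheme just described (it is not the code used to produce the figures); the grid spacing, depth profile and initial wavepacket are placeholders.

```python
# Sketch of the scheme above: the right- and left-moving parts of y(x,t) live on a grid;
# at each time step every cell is advected with the local velocity and split into
# reflected and transmitted parts whenever it crosses a velocity step.
import numpy as np

g, dt, dx = 9.8, 30.0, 5.0e3                   # 30 s time step; 5 km grid (placeholders)
x = np.arange(0.0, 2.0e6, dx)
depth = np.linspace(4000.0, 100.0, x.size)     # depth decreasing toward the coast (placeholder)
vel = np.sqrt(g * np.round(depth, -2))         # piecewise-constant velocity "steps"

y_right = np.exp(-((x - 3.0e5) / 5.0e4) ** 2)  # initial right-moving Gaussian packet
y_left = np.zeros_like(x)

def advance(y_right, y_left):
    """One time step of the propagation scheme."""
    new_r, new_l = np.zeros_like(y_right), np.zeros_like(y_left)
    for i in range(x.size):
        for amp, d, same, opp in ((y_right[i], +1, new_r, new_l),
                                  (y_left[i], -1, new_l, new_r)):
            if amp == 0.0:
                continue
            j = min(max(int(round(i + d * vel[i] * dt / dx)), 0), x.size - 1)
            if vel[j] == vel[i]:               # (i) stays inside the same velocity step
                same[j] += amp
            else:                              # (ii) crosses a step: split into R and T parts
                R = (vel[i] - vel[j]) / (vel[i] + vel[j])
                T = 2.0 * vel[i] / (vel[i] + vel[j])
                same[j] += T * amp
                opp[i] += R * amp
    return new_r, new_l

for _ in range(2000):                          # evolve the packet toward shallow water
    y_right, y_left = advance(y_right, y_left)
```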
Fig. \[fig2\] shows three positions of the right-propagating wavepacket (the left-propagating one was omitted). The initial wavepacket, fig. 2A, has its center at a depth of about 3930 m ($v\sim200$ m/s); the center of the wavepacket then reaches a position where the depth is about 165 m ($v\sim40$ m/s), fig. 2C. The growth of the amplitude is about 1.7. The difference between our result and the one expected from Ref. [@MaEJP05] ($(3930/165)^{0.25}=2.2$) is due to the fact that we approximate the continuous depth variation by discrete steps and, as a consequence, the left-propagating (reflected) wavepacket is not negligible.
In the last paragraphs of this section we present a short discussion of the refraction phenomenon.
Consider, for instance, a water wave propagating in a medium where the sea depth varies with $q$, as shown in fig. \[seadepth\] (for instance, $q$ can be the distance from the coast). The wave crest at $q_1$ has a velocity $v_1$ and at $q_2$ a velocity $v_2$. It is a matter of geometry to show that the wavefront changes orientation, bending along a curve with radius $$R=\frac{v}{\left|\frac{dv}{dq}\right|}.$$
For instance, consider what happened in the south of India. The sea depth varies from about 2000 m at $q_1=500$ km from the coast to about 100 m near the coast, $q_2\sim0$. Thus, $\left|dv/dq\right|\approx 2.2\times10^{-4}$ s$^{-1}$. As a consequence, the radius of curvature of a wave crest varies from about 640 km far from the coast to about 140 km near the coast. This is about what we can see in Fig. \[refraction\]. As sketched in fig. \[seadepth\], a tsunami wavefront propagating partly in deep water and partly in shallow water near the coast will refract and, in consequence, change orientation. Fig. \[refraction\] shows a refraction map for the December 26, 2004 tsunami. The dashed curves are the wavefronts at 100, 150, 200 and 300 minutes after the earthquake [@LaSc05].
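The radii quoted above can be reproduced with the short script below (our own illustrative check, using the rough depths and distances given in the text).

```python
# Rough check of the refraction radii quoted above.
from math import sqrt

g = 9.8
h1, h2 = 2000.0, 100.0        # sea depth 500 km offshore and near the coast (m)
dq = 500.0e3                  # distance over which the depth changes (m)

v1, v2 = sqrt(g * h1), sqrt(g * h2)
dv_dq = (v1 - v2) / dq                                    # ~2.2e-4 s^-1
print(round(v1 / dv_dq / 1e3), round(v2 / dv_dq / 1e3))   # ~640 km and ~140 km
```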
Discussion
==========
The tsunami of 26 December 2004, obviously, did not propagate in a rectangular box but, approximately, in a circular tank. As the tsunami propagates, its extension increases and, in consequence, its amplitude diminishes. However, when it approaches shallow waters, near the coast, its amplitude increases again, as shown by the simplified model developed in this paper.
The model developed here depends on two approximations: the wave amplitude is small and the length of the wavepacket is large when compared with the sea depth. Since the first approximation is not valid near the beach, we stopped the evolution when the front part of the tsunami wavepacket reached a depth of about 50 m, as shown in Fig. \[fig2\]C.
In summary, with the model developed in this paper we showed how to apply the approximation of shallow water waves to a variable depth. This simple model allows us to understand what occurs when a tsunami goes from deeper to shallower waters: the velocity of the rear part of the wavepacket is larger than the velocity of its front part, causing the water to pile up. Also, refraction effects, which are not present in a sea of constant depth, can be observed near the coast.
MTY thanks the Brazilian agency FAPESP (Fundação de Amparo a Pesquisa do Estado de São Paulo) for financial support.
Deduction of the wave equation for waves with wavelengths much greater than the water depth
===========================================================================================
We will deduce the wave equation in a simplified situation. We make the following approximations (all of them can be applied to tsunamis located far from the beach): the part of the restoring force that depends on the surface tension can be neglected in the case of waves with large wavelengths; the wavelength, or the extension of the wavepacket, is considered much longer than the depth of the water (in the case of tsunamis the wavelengths and the ocean depth are, approximately, hundreds of km and a few km, respectively); and the wave amplitude is considered much smaller than the ocean depth. Another simplification concerns the tank where the wave propagates: we consider a wave propagating in a rectangular box with vertical walls and constant depth. In this approximation of shallow water waves, all the droplets in the same transversal section have the same oscillatory horizontal motion along the $x$ direction. Finally, friction at the bottom is neglected [@CrAJP87].
Fig. \[fig1A\] illustrates the situation considered. The wave direction of propagation is $x$; $h_0$ is the unperturbed height of the water; $L$ is the box width.
Fig. \[fig1B\] shows the same box of Fig. \[fig1A\] in a side view, with a perturbation in the water. A lamellar slice with width $\Delta x$ in $x$ and height $h$ at a time $t$ will have a height $h+y$ and a width $\Delta x+\Delta z$ when it occupies the position $x+z$ at an instant $t+\Delta t$. $z=z(x,t)$ is the horizontal displacement - along the $x$ direction - of a vertical lamellar slice with an equilibrium position at $x$. When the wave propagates, this part of the water oscillates to the left and right. Equating the volume of water in $\Delta x$ and $\Delta x+\Delta z$, we have: $$\begin{aligned}
\nonumber
Lh\Delta x&=&L(h+y)(\Delta x+\Delta z)\\
&=&L(h\Delta x+h\Delta z+y\Delta x+y\Delta z).
\label{flux}\end{aligned}$$
If we consider $y<<h$ and $\Delta z<<\Delta x$, then eq. (\[flux\]) becomes $$h\Delta z+y\Delta x=0,$$ or $$y=-h\frac{\partial z}{\partial x}.
\label{mcons}$$ This last equation is the mass conservation equation of the fluid and relates the vertical displacement of the water surface, $y$, with the horizontal displacement of a vertical slice of water.
To apply Newton’s second law to a small portion of the lamellar slice, $\Delta h$, of water (see Fig. \[fig1B\]), we should calculate the total force, $\Delta F$, acting on it. This force depends on the pressure difference between the opposite sides of the slice: $$\Delta F=\Delta hL\left(P(x)-P(x+\Delta x)\right)\simeq-\Delta hL\frac{\partial P}{\partial x}\Delta x.$$
Then, $F=ma$ leads to $$-\Delta hL\frac{\partial P}{\partial x}\Delta x=\rho L\Delta h\Delta x\frac{\partial^2z}
{\partial t^2},
\label{force}$$ where $m$ is the mass of the water slice, $a$ is the acceleration of the transversal section of water given by the second partial derivative of $z$ with respect to $t$, and $\rho$ is the water density.
Since $$\frac{\partial P}{\partial x}=\rho g\frac{\partial y}{\partial x},
\label{dpress}$$ where $g$ is the gravity acceleration, we can write eq. (\[force\]) as $$\frac{\partial^2z}{\partial t^2}=-g\frac{\partial y}{\partial x}.
\label{d2z}$$
Differentiating both sides of eq. (\[mcons\]) with respect to $x$, we have $$\frac{\partial y}{\partial x}=-h\frac{\partial^2z}{\partial x^2}.
\label{dy}$$
Finally, using eq. (\[dy\]) in eq. (\[d2z\]) we obtain the wave equation [@CrAJP87; @alonso] $$\frac{\partial^2z}{\partial t^2}=gh\frac{\partial^2z}{\partial x^2}.
\label{waveeq}$$
Using eq. (\[mcons\]) we can show that the vertical displacement of the water surface, $y$, obeys an equivalent wave equation: $$\frac{\partial^2y}{\partial t^2}=gh\frac{\partial^2y}{\partial x^2}.
\label{waveeqy}$$
The solutions of eqs. (\[waveeq\]) and (\[waveeqy\]) are any function of $x-vt$ or $x+vt$, where the wave velocity is given by $$v=\sqrt{gh}.
\label{veloc}$$
Eq. (\[veloc\]) is a particular case of the general expression for the dispersion relation for waves in water surface [@alonso],
$$v=\sqrt{\left(\frac{g\lambda}{2\pi}+\frac{2\pi\sigma}{\rho\lambda}\right)\tanh\frac{2\pi h}{\lambda}},
\label{veloc2}$$
where $\lambda$ is the wavelength, $\rho$ the water density and $\sigma$ the water surface tension. In the case of long wavelengths ($h\ll\lambda$, so that $\tanh(2\pi h/\lambda)\approx 2\pi h/\lambda$) and neglecting surface tension, eq. (\[veloc2\]) reduces to eq. (\[veloc\]).
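As a rough numerical illustration of this limit (our own check, with typical tsunami values of $\lambda=100$ km and $h=4$ km, and surface tension neglected), the full dispersion relation and the shallow-water value differ by only about 1%:

```python
# Compare the full dispersion relation (veloc2), without surface tension,
# with the shallow-water limit (veloc) for typical tsunami parameters.
from math import pi, sqrt, tanh

g, lam, h = 9.8, 100.0e3, 4.0e3      # wavelength of 100 km, ocean depth of 4 km
v_full = sqrt((g * lam / (2 * pi)) * tanh(2 * pi * h / lam))
v_shallow = sqrt(g * h)
print(round(v_full, 1), round(v_shallow, 1))   # ~195.9 m/s vs ~198.0 m/s
```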
Eq. (\[veloc\]) leads to two useful conclusions for waves with the same characteristics as a tsunami: the wave velocity does not depend on the wavelength, and the wavepacket does not disperse when propagating in a region of constant depth.
Since $z=z(x,t)$ is the horizontal displacement of a lamellar slice of water, then the water flux, $f$, is given by $$f=Lh\frac{\partial z}{\partial t}.
\label{flux1}$$
References {#references .unnumbered}
==========
Margaritondo G 2005 [*Eur. J. Phys.*]{} [**26**]{} 401
Behroozi F and Podolefsky N 2001 [*Eur. J. Phys.*]{} [**23**]{} 225
Crawford F S 1987 [*Am. J. Phys.*]{} [**55**]{} 171; Crawford F S 1968 [*Waves: Berkeley Physics Course Volume 3*]{} (McGraw-Hill)
Lay T et al 2005 The great Sumatra-Andaman earthquake of 26 December 2004 [*Science*]{} [**308**]{} 1127–1133
Chu A K H 2005 [*Eur. J. Phys.*]{} [**26**]{} L19
Alonso M and Finn E J 1992 [*Physics*]{} (Addison Wesley)
|
---
author:
- |
Amir Rosenfeld, John K. Tsotsos\
Department of Electrical Engineering and Computer Science\
York University, Toronto, ON, Canada\
`amir@eecs.yorku.ca,tsotsos@cse.yorku.ca`\
bibliography:
- 'cognitivePrograms1.bib'
title: Bridging Cognitive Programs and Machine Learning
---
abstract
========
While great advances have been made in pattern recognition and machine learning, the successes of these fields remain restricted to narrow applications and seem to break down when training data is scarce, when a shift in domain occurs, or when intelligent reasoning is required for rapid adaptation to new environments. In this work, we list several of the shortcomings of modern machine-learning solutions, specifically in the contexts of computer vision and reinforcement learning, and suggest directions to explore in order to try to ameliorate these weaknesses.
Introduction
============
The Selective Tuning Attentive Reference (STAR) model of attention is a theoretical computational model designed to reproduce and predict the characteristics of the human visual system when observing an image or video, possibly with some task at hand. It is based on psycho-physical observations and constraints on the amount and nature of computations that can be carried out in the human brain. The model contains multiple sub-modules, such as the Visual Hierarchy (VH), visual working memory (vWM), fixation controller (FC), and others. The model describes the flow of data between different components and how they affect each other. As the model is given various tasks, an executive controller orchestrates the action of the different modules. This is viewed as a general purpose processor which is able to reason about the task at hand and formulate what are called Cognitive Programs (CP). Cognitive Programs are made up of a language describing the set of steps required to control the visual system, obtain the required information and track the sequence of observations so that the desired goal is achieved. In recent years, methods of pattern recognition have taken a large step forward in terms of performance. Visual recognition of thousands of object classes, as well as detection and segmentation, have been made much more reliable than in the past. In the related field of artificial intelligence, progress has been made by the marriage of reinforcement learning and deep learning, allowing agents to successfully play a multitude of games and solve complex environments without the need for manually crafting feature spaces or adding prior knowledge specific to the task. There is much progress still to be made in all of the above-mentioned models, namely
\(1) a computational model of the human visual system, (2) purely computational object recognition systems (e.g., computer vision) and (3) intelligent agents. The purpose of this work is to bridge the gap between the worlds of machine learning and the modeling of the way human beings solve visual tasks; specifically, to provide a general enough solution to the problem of coming up with Cognitive Programs which will enable solving visual tasks given some specification.
We make two main predictions:
1. Many components of the STAR model can benefit greatly from modern machine learning tools and practices.
2. Constraining the machine learning methods used to solve tasks, using what is known about biological vision, will benefit these models and, if done right, improve their performance and perhaps allow us to gain further insights.
The next sections will attempt to briefly overview the STAR model as well as the recent trends in machine learning. In the remainder of this report, we shall show how the best of both worlds of STAR and Machine Learning can be brought together to create a working model of an agent which is able to perform various visual tasks.
Selective Tuning & Cognitive Programs
=====================================
The Selective Tuning (ST) [@tsotsos1993inhibitory; @tsotsos1995modeling; @culhane1992attentional; @books/daglib/0026815] is a theoretical model set out to explain and predict the behavior of the human visual system when performing a task on some visual input. Specifically, it focuses on the phenomena of visual attention, which include overt attention (moving the eyes to fixate on a new location), covert attention (internally attending to a location inside the field of view without moving the eyes) and the neural modulation and feedback that facilitate these processes. The model is derived from first principles which involve analysis of the computational complexity of general vision tasks, as well as biological constraints known from experimental observation on human subjects. Following these constraints, it aims to be biologically plausible while ensuring a runtime which is practical (in terms of complexity) for solving various vision tasks. In [@tsotsos2014cognitive], ST has been extended to the STAR (Selective Tuning Attentive Reference) model to include the capacity for cognitive programs.
We will now describe the main components of STAR. This description is here to draw a high-level picture and is by no means complete. For a reader interested in delving into further details, please refer to [@books/daglib/0026815] for theoretical justifications and a broad discussion and read [@tsotsos2014cognitive] for further description of these components. The ST model described here is extended with a concept of Cognitive Programs (CP) which allows a controller to break down visual tasks into a sequence of actions designed to solve them.
![image](figures/figure5){width="90.00000%"}
Fig. \[fig:High-level-view-of\] describes the flow of information in the STAR architecture at a high level. Central to this architecture is the Visual Hierarchy. The VH is meant to represent the ventral and dorsal streams of processing in the brain and is implemented as a neural network with feedforward and recurrent connections. The structure of the VH is designed to allow recurrent localization of input stimuli, as well as discrimination, categorization and identification. While a single feed-forward pass may suffice for some of the tasks, for others, such as visual search, multiple forward-backward passes (and possibly changing the focus of attention) may be required. Tuning of the VH is allowed so that it will perform better on specific tasks. The recurrent tracing of neuron activation along the hierarchy is performed using a $\Theta$-WTA decision process. This induces an Attentional Sample (AS) which represents the set of neurons whose response matches the currently attended stimulus.
The Fixation Control mechanism has two main components. The Peripheral Priority Map (PPM) represents the saliency of the peripheral visual field. The History Biased Priority Map (HBPM) combines the focus of attention derived from the central visual field (cFOA) and the foci of attention derived from the peripheral visual field (pFOA). Together, these produce a map based on the previous fixations (and possibly the current task), setting the priority for the next gaze.
Cognitive Programs
------------------
To perform some task, the Visual Hierarchy and the Fixation Controller need to be controlled by a process which receives a task and breaks it down into a sequence of *methods*, which are basic procedures commonly used across the wide range of visual tasks. Each method may be applied with some degree of tuning to match it to the specific task at hand, whereupon it becomes an executable *script*. A set of functional sub-modules is required for the execution of CPs.
The controller orchestrating the execution of tasks is called the Visual Task Executive (vTE). Given a task (from some external source), the vTE selects appropriate methods, tunes them into scripts and controls the execution of these scripts by using several sub-modules. Each script initiates an attentive cycle and sends the elements of the task required for attentive tuning to the Visual Attention Executive (vAE). The vAE primes the Visual Hierarchy (VH) with top-down signals reflecting the expectations of the stimulus or instructions and sets the required parameters. Meanwhile, the current attention is disengaged and any feature surround suppression imposed for previous stimuli is lifted. Once this is completed, a feed-forward signal enters the tuned VH. After the feed-forward pass is completed, the $\Theta$-WTA process makes a decision as to what to attend and passes this choice on to the next stage. The vTE, monitoring the execution of the scripts, can decide based on this information whether the task is completed or not.
The selection of the basic methods to execute a task is done by using the Long Term Memory for Method (mLTM). This is an associative memory which allows for fast retrieval of methods.
The Visual Working Memory (vWM) contains two representations: the Fixation History Map stores the last several fixation locations, each decaying over time. This allows for location-based Inhibition of Return (IOR). The second representation is the Blackboard (BB), which stores the current Attentional Sample (AS).
Task Working Memory (tWM) includes the Active Script NotePad which itself might have several compartments. One such compartment would store the active scripts with pointers to indicate progress along the sequence. Another might store information relevant to script progress including the sequence of attentional samples and fixation changes as they occur during the process of fulfilling a task. Another might store relevant world knowledge that might be used in executing the CP. The Active Script NotePad would provide the vTE with any information required to monitor task progress or take any corrective actions if task progress is unsatisfactory.
Finally, the Visual Attention Executive contains a Cycle Controller, which is responsible for starting and terminating each stage of the ST process. The vAE also initiates and monitors the recurrent localization process in the VH [@rothenstein2014attentional]. A detailed view of the entire architecture can be seen in Fig \[fig:Detailed-view-of\].
![image](figures/figure6){width="90.00000%"}
The Selective Tuning with Cognitive Programs framework allows a very rich set of visual tasks to be solved, given the correct sequence of methods is performed. A recent realization of Cognitive Programs in challenging environments has been presented in [@New1] where an agent is able to successfully play two video games by using a set of methods to control and tune the Visual Hierarchy and decide on the next move for the player.
Nevertheless, some open questions remain: (1) how to design the structure and parameters of the VH so that it can, given the proper task-biasing / priming, deal with a broad range of visual inputs? (2) How does one learn the type of tuning that is to be applied to the VH for each given task? (3) How to create a visual task executive which is able to appropriately select a set of methods which will accomplish a visual task?
It seems that the main questions posed here have to do with control and with planning a solution given some set of tools, as well as fitting models to complex data (such as images). It is only natural to turn to the recent trends in machine learning which can facilitate the solution of such problems. The following descriptions are not meant to be very in-depth accounts of the respective methods, but rather a high-level overview, elaborating on details as required. Importantly, we highlight shortcomings that these methods face and suggest solutions for some.
Machine Learning
================
In this section, we provide an overview of the main methods in machine learning which are relevant to building an intelligent agent that observes the world and performs various complex tasks. This is a very broad subject, certainly one which is yet to be fully solved (as solving it fully would mark the start of a general AI). Yet, notable progress has been made in recent years in machine learning and pattern recognition. In this short exposition, we mention the two main methods of interest which we deem relevant to the current goal of this work.
Deep Learning
-------------
Deep learning is certainly not a new field and has its roots back in the 1960s. Due to various reasons which are out of the scope of this work, it has not always been as popular as today, and certainly there are still those who claim that the current hype around it is exaggerated. A turning point responsible for its current surge in popularity is the 2012 paper [@krizhevsky2012imagenet] which won the ImageNet [@russakovsky2015imagenet] large-scale visual recognition challenge. This is a massive benchmark for computer-vision methods where a classifier is required to predict the class of an object in an image out of a possible 1000 different classes. Significantly outperforming all other results, the work spurred an avalanche of follow-ups and modifications, both from an optimization point of view and in terms of different architectures, as well as theoretical works attempting to justify the success of such methods over others. To date, it is rare to see a leading method in computer vision which is not based on deep learning, be it in the sub-tasks of object recognition, detection (i.e., localization), segmentation, tracking, 3D reconstruction, face recognition, fine-grained categorization and others. Specifically, deep convolutional neural networks, a form of neural nets which exploits assumptions about the structure of natural images, are a main class of deep networks. The success of deep learning has also spread to other media such as audio (e.g., speech recognition), natural language processing (translation) and other sub-fields involving pharmaceutical and medical applications, etc. The literature in recent years on Deep Learning is vast and the reader is encouraged to turn to it for more in-depth information [@schmidhuber2015deep; @litjens2017survey].
The crux of the various deep-learning based methods lies in their need for massive amounts of supervised data. To obtain good performance, tens of thousands (sometimes more) of examples are required. While some semi-supervised methods have been suggested, none have approached the performance of fully supervised ones. This is not to say that their utility is discarded - on the contrary, we believe that they will play a major role in the developments of the near future. An additional issue lies in their current, seemingly inherent, inability to adjust to new kinds of data or to apply compositions of already learned solutions to new problems [@rosenfeld2018challenging]. Further weaknesses of deep learning systems are discussed in [@shalev2017failures] and [@marcus2018deep], as well as a discussion about some major differences between the way humans and machines solve problems [@lake2017building].
### Semi-supervised and Unsupervised Learning
One variant of machine learning potentially holds some promise to ameliorate the need for supervision at scale which is required by methods such as Deep Learning. Such methods attempt to perform learning from a much smaller amount of supervision, for example learning how to separate the data into two classes on a dataset where only 10% of the examples are labeled and the rest are not. This can be done by exploiting observed similarities in the underlying data and/or assuming some regularities such as smoothness, etc. An extreme case would be using no labeled data at all; however, as at some point there will be a task where a system should learn in a supervised manner, the utility of the unsupervised learning will be measured by how much it benefits the supervised learner. Another form of unsupervised learning is generative models, which are able to produce at test time data points whose properties ideally resemble those observed at training time, though of course they are not identical to them. An example of semi-supervised learning is Ladder Networks [@rasmus2015semi], where an unsupervised loss is added to the network in addition to the supervised loss. A notable class of methods which has recently gained popularity is Generative Adversarial Networks (GANs) [@goodfellow2014generative], where two networks constantly compete: the goal of the Generator network is to generate images which are as realistic as possible, in the sense that they resemble images from the training set, while the Discriminator network's goal is to tell apart the images from the Generator and the images from the real dataset. This line of work has quickly evolved to produce impressive results, a recent one due to [@nguyen2016plug]; see Fig. \[fig:Output-of-Conditional\] for some results.
![image](figures/161201__ppgn_teaser){width="80.00000%"}
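To make the adversarial game described above concrete, the following is a minimal sketch (our own illustration, not the code of any cited work) of a GAN training loop in PyTorch on a toy one-dimensional data distribution; the architectures, hyper-parameters and toy data are placeholders.

```python
# Minimal GAN training loop on toy 1-D data: G maps noise to fake samples,
# D scores how likely a sample is to come from the real data.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 1, 64
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch():
    # Toy "real" data: samples from N(3, 0.5); stands in for a real training set.
    return 3.0 + 0.5 * torch.randn(batch, data_dim)

for step in range(2000):
    x_real = real_batch()
    x_fake = G(torch.randn(batch, latent_dim))

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_loss = (bce(D(x_real), torch.ones(batch, 1))
              + bce(D(x_fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator, i.e. push D(fake) toward 1.
    g_loss = bce(D(x_fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```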
Deep Reinforcement Learning
---------------------------
Reinforcement Learning (RL) refers to a set of classical and well studied methods in the field of control systems and artificial intelligence. The general setting is that of an agent who is supposed to take actions in some given environment. As a result the agent may encounter new situations and be given some reward (or penalty). The actions of the agent may affect the environment. The agent does not necessarily see the entire environment at all times; rather, it has access to some input which is its current observation. Through this loop of act-observe-receive reward, the agent must increase its total future reward. This setting is very general in the sense that it is limited only by the richness of the environment and of the agent. For an extreme example, we may say that the environment is planet Earth and the agent is some human being or animal. As simulating either of these seems like a virtual impossibility, one can model, e.g., robots in closed and well defined environments, or anything in between. Much research has gone into making agents which can learn and are able to perform well in various environments, as well as making robust control systems. RL was also one of the fields to benefit and regrow in popularity following the success of deep learning, leading to a new set of methods called Deep Reinforcement Learning. The first widely known success of this new approach was published in [@mnih2013playing], where an agent was shown to be able to learn how to perform well in multiple Atari video games, outperforming many previous methods. Notably, the system was learned end-to-end without any input except the raw pixel data and the score of the game. In some cases, it even learned to outperform human players. The reported performance was a result of using a single architecture (except for the number of output variables, as games had different numbers of possible controls) and a single set of hyper-parameters. Although there were many games on which the method performed poorly at the time (and still does), this was a significant result which led to others, such as beating a human expert in the game of Go [@silver2016mastering], which is widely acknowledged as a long-standing challenge for the artificial intelligence community. For a recent overview of this subject, please refer to [@li2017deep].
Formally, RL assumes the following setting: an agent may interact with an environment at each time t by applying an action $a_{t}$, given an observation $s_{t}$. Note that the entire state of the environment may not be observed, and in this context the state $s_{t}$ represents only the observation of the agent - it is all that it can directly measure. The interaction $a_{t}$ of the agent leads to another state $s_{t+1}$, where the agent may perform another action $a_{t+1}$, and so on. A Markov Decision Process (MDP) is defined as an environment where the probability of the next state is fully determined by the current state and the action:
$$P_{ss'}^{a}=Pr(s_{t+1}=s'\mid s_{t}=s,a_{t}=a)$$
i.e., the probability of state $s'$ following state $s$ after action $a$.
Note that this has the Markov property, i.e., that each state is dependent only on the previous one and not on ones before that. For example, in a game of Chess, where the entire board is observed as the state, nothing needs to be known about previous steps of the game to determine the next move. For each action the agent receives a reward $r_{t}$, which is a real scalar that can take on any value, be it positive, negative or zero. Hence, the entire sequence of $n$ actions of an agent in an environment is $$s_{0},a_{0},r_{1},s_{1},a_{1},r_{2},s_{2},a_{2},r_{3},\dots s_{n-1},a_{n-1},r_{n},s_{n}$$
where $s_{n}$ is the terminal state (win/lose/terminate). The goal of the agent is to maximize the total future reward: assuming that the agent performed $n$ steps and at each step received a reward $r_{t}$, the total reward is $$R=\sum_{t=1}^{n}r_{t}$$
The total *future* reward from time $t$ is $$R_{t}=\sum_{i=0}^{n-t}r_{t+i}$$
However, as the close future holds less uncertainty, it is common to consider the *discounted future reward*, that is a reward which is exponentially decayed over time:
$$\begin{aligned}
R_{t} & =r_{t}+\gamma r_{t+1}+\gamma^{2}r_{t+2}+\dots+\gamma^{n-t}r_{n}\\
& =r_{t}+\gamma(r_{t+1}+\gamma(r_{t+2}+\dots))\\
& =r_{t}+\gamma R_{t+1}\label{eq:discounted}\end{aligned}$$
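The recursion in Eq. \[eq:discounted\] is exactly how discounted returns are computed in practice, by a single backward sweep over the rewards of an episode; the snippet below is a small illustration of this (our own, not taken from any particular RL library).

```python
# Discounted future rewards via the recursion R_t = r_t + gamma * R_{t+1}.
def discounted_returns(rewards, gamma=0.99):
    returns, running = [], 0.0
    for r in reversed(rewards):          # sweep the episode backwards
        running = r + gamma * running
        returns.append(running)
    return returns[::-1]                 # R_1, ..., R_n in forward order

print(discounted_returns([0.0, 0.0, 1.0], gamma=0.9))   # approximately [0.81, 0.9, 1.0]
```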
The strategy that the agent uses to determine the next action is called a *policy*, and is usually denoted by $\pi$. A good policy would maximize the discounted future reward. An action-value function $Q$ is defined as a function which assigns the maximum discounted future reward to an action $a_{t}$ performed at a state $s_{t}$:
$$Q(s_{t},a_{t})=\max R_{t+1}$$
Given this function, the optimal policy can simply choose for each state $s$ the action $a$ which maximizes $Q$:
$$\pi(s)=\operatorname*{arg\,max}_{a}Q(s,a)$$
From Eq. \[eq:discounted\], the following relation holds:
$$Q(s,a)=r+\gamma\max_{a'}Q(s',a')$$
This means that if we find a function $Q$ for which the above holds, we can use it to generate an optimal policy. For a discrete number of states and actions, a simple sample-based method known as *Q-learning* (closely related to value iteration) is known to converge to the optimal policy [@sutton1998reinforcement], given that each state/action pair is visited an infinite number of times (and a suitable decay of the learning rate $\alpha$). This is simply implemented by continuously updating $Q$, until some stopping criterion is met:
$$Q(s_{t},a_{t})\leftarrow Q(s_{t},a_{t})+\alpha[r+\gamma\max_{a_{t+1}}Q(s_{t+1},a_{t+1})-Q(s_{t},a_{t})].$$
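As an illustration of this update rule (our own toy example, not tied to any benchmark from the cited works), the following sketch learns a tabular $Q$ function on a small chain environment, using the $\epsilon$-greedy exploration strategy discussed further below.

```python
# Tabular Q-learning on a toy chain: states 0..N-1, actions {0: left, 1: right},
# reward 1 only when the right end is reached.
import random

N, alpha, gamma, eps = 6, 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N)]               # Q[s][a]

def env_step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    reward = 1.0 if s2 == N - 1 else 0.0
    return s2, reward, s2 == N - 1               # next state, reward, episode done?

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current Q, sometimes explore at random
        a = random.randrange(2) if random.random() < eps else (0 if Q[s][0] >= Q[s][1] else 1)
        s2, r, done = env_step(s, a)
        # the update rule displayed above
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])             # values grow toward the rewarding end
```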
Such methods can work well for a finite number of states and actions. However, for many interesting environments it is challenging to define the states as a discrete set, and doing so naively would result in an exponential number of states. For example, if the task is to play a video game, the available actions form a small set, but enumerating the possible stimuli - that is, all possible combinations of pixel values on the screen - would easily lead to intractable numbers.
With this in mind, we turn to Deep Reinforcement Learning. Here, instead of representing each state explicitly as some symbol in a large set, a neural network is trained to predict the $Q$-values from the state, by being applied directly to each input frame (or to a set of a few consecutive ones, to capture motion). Hence the state is represented implicitly by the network’s weights and structure. This allows the agent to learn how to act in the environment without having privileged knowledge about its specific inner workings. There are many variants and improvements on this idea, though the basic setting remains the same. In what follows, we highlight some of the challenges and shortcomings of the current approaches to Deep RL.
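A minimal sketch of such a $Q$-network is shown below (our own illustration in PyTorch; the architecture, frame size and action count are generic placeholders rather than those of the cited work).

```python
# A small convolutional Q-network: input is a stack of 4 grayscale frames,
# output is one Q-value per possible action.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_actions, frames=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(frames, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        # 32 * 9 * 9 is the flattened feature size for 84x84 input frames
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
                                  nn.Linear(256, n_actions))

    def forward(self, x):                        # x: (batch, frames, 84, 84), pixels in [0, 1]
        return self.head(self.conv(x))

q_net = QNetwork(n_actions=4)
obs = torch.rand(1, 4, 84, 84)                   # a dummy stack of observed frames
q_values = q_net(obs)                            # one estimated Q-value per action
greedy_action = int(q_values.argmax(dim=1))      # the action a greedy policy would take
```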
### Efficient Exploration
A very big challenge currently holding back RL methods is the huge exploration space that potentially has to be searched in order to produce a good policy. This is a chicken-and-egg problem of sorts: exploration is needed to find a good policy, and a good policy is required to be able to do sufficient exploration. Consider even a simple game such as Atari Breakout, where the player is able to move a paddle left or right and hit the ball so it doesn’t fall off the bottom of the screen. If nothing else is known, it would take some amount of exploration to find out that the paddle should bounce the ball to avoid losing. Before that, the agent will probably start off just moving randomly to the left and right. As the rewards of the game are sparse, it will not be until the agent encounters its first reward that it will be able to update its policy. For this reason, it will take many iterations until it can start learning how to act to avoid losing quickly. Only then can it continue to explore further states of the game, which could not even be reached if it had not passed the very first steps of hitting the ball. The $\epsilon$-greedy strategy somewhat improves on this problem by choosing a random move with a probability $\epsilon$, a hyper-parameter which is usually decayed over time as the system learns. This helps getting out of local minima in the exploration space, though the general problem described here is certainly not solved.
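For concreteness, a common (purely illustrative) choice is to anneal $\epsilon$ linearly from 1 to a small floor over the first part of training:

```python
# Linearly annealed epsilon for epsilon-greedy exploration (illustrative values).
def epsilon(step, eps_start=1.0, eps_end=0.1, anneal_steps=1_000_000):
    frac = min(step / anneal_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

print(epsilon(0), epsilon(500_000), epsilon(2_000_000))   # 1.0, 0.55, 0.1
```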
### Exploration vs Representation
The problem of exploration is exacerbated in the case of Deep RL. In a discrete search space, each state is well recognized once encountered. When the state space is represented implicitly by a deep network, the evolution of the Q function is tied to the representation of the environment by the deep network. This means that updating the Q function can lead to unstable results. One strategy to address this is to reduce the frequency with which the network that chooses the next action is updated. We suggest here a couple of additional strategies:
- A strong visual representation: the visual system of human beings is a strong one and is able to represent stimuli very robustly, owing both to evolution and learning from prior experience. An agent usually learns the visual representation of the environment from scratch. Certainly, being able to robustly represent the observations right from the start would allow the agent to focus more on planning and less on learning the representation. Nevertheless, the representation may continue evolving as the agent encounters new situations. One way to allow this is to use as a starting point a pre-trained visual representation, be it in a supervised or unsupervised manner, and adapt it as needed for the task.
- Symbolic representation: allowing the agent to group observations into equivalence classes by assigning symbols or compact representations to them would allow policies to be learned more efficiently and probably converge to a higher level of performance. This can be done in an implicit manner by attempting to cluster the representation of the environment into a few informative clusters which carry the maximal information with respect to the task. More explicitly, the observation can somehow be parsed into objects, background, possibly other agents, etc. The representation of the scene will then be made up of the properties (speed, location, state) of the constituents of the scene. While the latter would probably carry more meaning (and presumably lead to higher performance), it seems hard to do so in a purely data-driven approach without external knowledge about the world.
### Prior and External Knowledge
A child is able to learn how to play a game reasonably well within a few minutes (a few tens of thousands of frames). Current methods require many millions of frames to do so, if they succeed at all. Why is this so? Besides the reasons stated above, we claim that additional forms of prior experience are useful.
One form of experience is having solved tasks in the past which may be related to the current task. Indeed, this has recently been shown to be effective in [@parisotto2015actor], where a single network learns to mimic the behaviour of multiple expert networks, each of which was pre-trained on a single task. Thus the new network represents simultaneously the knowledge to solve all of the learned tasks in a relatively compact manner. In most cases, such a network was shown to learn new tasks much faster than a randomly initialized version, as well as to converge in a more stable manner.
World knowledge also plays a major role in understanding a new situation. The factual knowledge we gain from experience, if written as a list of many different facts and rules, would probably make a very long one. Here are a few examples:
- An intuitive understanding of Newtonian physics - even children understand that objects tend to continue in their general direction, tend to fall down after going up, may move if pushed by some external force, etc.
- Relations and interactions between objects: doors may require keys to be opened
- Survival: falling off a cliff is usually a bad idea; if an opponent comes your way, you’d better avoid it or terminate it
- General facts: roses are red. Violets are blue. Gold gives you points.
It is difficult to imagine how all of this is learned and stored in our brains and how the relevant facts come into play in the abundance of different situations that we encounter. Being able to effectively utilize such a vast knowledge base about the behavior of the world would no doubt aid intelligent agents in many environments. Attempts at using external knowledge to aid tasks have already been made in computer vision, for image captioning and Visual Question Answering (VQA) [@wu2016image], Zero-Shot Learning (ZSL) [@frome2013devise] and, in general, to gain knowledge about unseen objects or categories by comparing their detected attributes to those of known ones [@farhadi2009describing]. Such world knowledge is collected either by data-driven approaches such as word2vec (a learned vector-space representation of words) [@mikolov2013distributed] or word relation graphs (WordNet [@miller1995wordnet]), or from datasets collected manually or by scanning online knowledge collections such as Wikipedia, for example ConceptNet [@speer2012representing].
Such collections of linguistic and factual knowledge can certainly help an agent quickly reason about its surrounding environment - *only if it is able to link its observations to items in the knowledge base*. It is interesting to ask how a person acquires such knowledge in the first years of his/her lifetime, through an experience which is quite different from simply being exposed to millions of online articles. Somehow a collection of useful facts and rules is picked up from experience despite being drowned in a pool of distracting and noisy signals.
Some recent work by [@dubey2018investigating] has demonstrated that prior knowledge is quite critical to the success of humans in simple games. The work devises a few ways to remove the semantics from gameplay by replacing graphical elements in the game with semantically meaningless ones. For example, each piece of texture in the game is switched to a random one (but consistently). This makes the game screen appear meaningless to the human observer. The performance of humans in such modified games dropped significantly while that of the tested machine-learning based method remained the same. Another type of modification was switching elements with elements of a different meaning. An example is replacing the appearance of a ladder to be climbed with a column of flames, or transposing the screen so gravity appears to work sideways. Though there is a one-to-one translation between the original and modified versions of the game, human players did much worse on these semantically modified examples, and on others. This demonstrates the heavy reliance humans have on prior knowledge. In this context, learning a game from scratch without prior knowledge is “unfair” for machine-learning methods.
Nevertheless, such knowledge bases still do not account for an intuitive physical understanding, which seems to require some other type of experience. Such knowledge could be pre-injected into the agent but, as children do not come equipped with it, we believe that the agent should learn the rules of physical interactions from its own experience or observations. An interesting attempt in this direction can be seen in [@agrawal2016learning], where robots gain a reportedly “intuitive” understanding of physical interactions by attempting to perform simple tasks on objects, such as moving them around.
### High Level Reasoning and Control
Planning can be performed at several levels of granularity. Certainly, a human being or animal does not think in terms of the force that needs to be applied by each of the muscles in order to pick up some object. It rather seems that plans are made at a higher level of abstraction, and some process then breaks them down into motor commands and everything that is required for them to be carried out. The motor commands can also be grouped into logical units above the most basic ones, such as “fully stretch out left arm”, which is only then translated to low-level commands. Newborns are not able to control their limbs and fingers immediately, but over time they acquire this ability and perform tasks with seamless movements, usually dedicating little or no conscious thought to the movement of muscles. Similarly, the exploration which goes on early in the “life” of an agent should allow the agent to learn how to perform simple and common actions and store these as routines to be later used in more elaborate plans. End-to-end learning of motor policies from raw pixel data is attempted in [@levine2016end].
The above was only the simplest level of high-level control. Further advances would require strategic thinking in terms of long-range goals and actions. We claim that this cannot be done effectively without first obtaining a hierarchy of basic control over the agent’s actions and being able to predict quite reliably their immediate future effect.
|
---
abstract: 'We have obtained $V$ and $I$ images of the lone globular cluster that belongs to the dwarf Local Group irregular galaxy known as WLM. The color-magnitude diagram of the cluster shows that it is a normal old globular cluster with a well-defined giant branch reaching to $M_V=-2.5$, a horizontal branch at $M_V=+0.5$, and a sub-giant branch extending to our photometry limit of $M_V=+2.0$. A best fit to theoretical isochrones indicates that this cluster has a metallicity of \[Fe/H\]$=-1.52\pm0.08$ and an age of $14.8\pm0.6$ Gyr, thus indicating that it is similar to normal old halo globulars in our Galaxy. From the fit we also find that the distance modulus of the cluster is $24.73\pm0.07$ and the extinction is $A_V=0.07\pm0.06$, both values that agree within the errors with data obtained for the galaxy itself by others. We conclude that this normal massive cluster was able to form during the formation of WLM, despite the parent galaxy’s very small intrinsic mass and size.'
author:
- 'Paul W. Hodge, Andrew E. Dolphin, and Toby R. Smith'
- Mario Mateo
title: 'HST Studies of the WLM Galaxy. I. The Age and Metallicity of the Globular Cluster [^1]'
---
Introduction
============
The galaxy known as WLM is a low-luminosity, dwarf irregular galaxy in the Local Group. A history of its discovery and early study was given by Sandage & Carlson (1985). Photographic surface photometry of the galaxy was published by Ables & Ables (1977). Its stellar population has been investigated from ground-based observations by Ferraro et al (1989) and by Minniti & Zijlstra (1997). The former showed that the main body of the galaxy consists of a young population, which dominates the light, while the latter added the fact that there appears to be a very old population in its faint outer regions. Cepheid variables were detected by Sandage & Carlson (1985), who derived its distance, and were reanalyzed by Feast & Walker (1987) and by Lee et al. (1992). The latter paper used $I$ photometry of the Cepheids and the RGB (red giant branch) distance criterion to conclude that the distance modulus for WLM is 24.87 $\pm$ 0.08. The extinction determined by Feast & Walker (1987) is $A_B$ = 0.1.
Humason et al. (1956), when measuring the radial velocity of WLM, noticed a bright object next to it that had the appearance of a globular cluster. Its radial velocity was the same as that of WLM, indicating membership. Ables & Ables (1977) found that the cluster’s colors were like those of a globular cluster, and Sandage & Carlson (1985) confirmed this. Its total luminosity is unusually high for a galaxy’s sole globular cluster. Sandage & Carlson (1985) quote a magnitude of $V$ = 16.06, indicating an absolute magnitude of $M_V$ = -8.8. This can be compared to the mean absolute magnitude of globulars in galaxies, which is $M_V = -7.1 \pm 0.43$ (Harris 1991). The cluster, though unusually bright, has only a small fraction of the $V$ luminosity of the galaxy, which is 5.2 magnitudes brighter in $V$.
One could ask the question of whether there are other massive clusters in the galaxy, such as luminous blue clusters similar to those in the Magellanic Clouds. Minniti & Zijlstra (1997), using the NTT and thus having a wider field than ours, searched for other globular clusters and found none. However, the central area of the galaxy has one very young, luminous cluster, designated C3 in Hodge, Skelton and Ashizawa (1999). This object is the nuclear cluster of one of the brightest HII regions (Hodge & Miller 1995). There do not appear to be any large intermediate age clusters, such as those in the Magellanic Clouds or that recently identified spectroscopically in the irregular galaxy NGC 6822 by Cohen & Blakeslee (1998) .
No other Local Group irregular galaxy fainter than $M_V$ = -16 contains a globular cluster. The elliptical dwarf galaxies NGC 147 and NGC 185 (0.8 and 1.3 absolute magnitudes brighter than WLM, respectively) do have a few globular clusters each and Fornax (1.7 absolute magnitudes fainter) has five, which makes it quite anomalous, even for an elliptical galaxy (see Harris, 1991, for references).
Another comparison can be made using the specific frequency parameter, as defined and discussed by Harris (1991). The value of the specific frequency calculated for WLM is 7.4, which can be compared to Harris’ value for late-type galaxies, $0.5 \pm 0.2$. The highest average specific frequency is found for nucleated dwarf elliptical galaxies by Miller et al (1998), which is $6.5 \pm 1.2$, while non-nucleated dwarf elliptical galaxies have an average of $3.1 \pm 0.5$. These values are similar to those found by Durrell et al (1996), implying that the specific frequency for WLM is comparable to that for dwarf elliptical galaxies but possibly higher than that for other late-type galaxies.
Because the WLM cluster is unique as a globular in an irregular dwarf galaxy, it may represent an unusual opportunity to investigate the question of whether Local Group dwarf irregulars share the early history of our Galaxy and other more luminous Group members, which formed their massive clusters some 15 Gyr ago, or whether they formed later, as the early ideas about the globular clusters of the Magellanic Clouds seemed to indicate. Of course, we now know that the LMC has several true globular clusters that are essentially identical in age to the old halo globulars of our Galaxy (Olsen et al. 1998, Johnson et al. 1998), so the evidence suggesting a delayed formation now seems to come only from the SMC. In any case, WLM gives us a rare opportunity to find the oldest cluster (and probably the oldest stars) in a more distant and intrinsically much less luminous star-forming galaxy in the Local Group.
Data and Reduction
==================
Observations
------------
As part of a Cycle 6 HST GO program, we obtained four images of the WLM globular cluster on 26 September, 1998. Two exposures of 2700 seconds each were taken with the F814W filter, and two with the F555W filter, one of 2700 seconds and one of 2600 seconds. The globular cluster was centered on the PC chip and the orientation of the camera was such that the WF chips lay approximately along the galaxy’s minor axis, providing a representative sample of the WLM field stars to allow us to separate cluster stars reliably.
Reductions
----------
With two images of equal time per filter, cosmic rays were cleaned with an algorithm nearly identical to that used by the IRAF task CRREJ. The two images were compared at each pixel, with the higher value thrown out if it exceeded 2.5 sigma of the average. The cleaned, combined F555W image is shown in Figure \[fig\_image\].
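For illustration, the pixel-by-pixel rejection rule described above can be sketched as follows; the array names, the simple noise model, and the choice of returning the pair average are our assumptions rather than details of the actual reduction.

```python
import numpy as np

def combine_pair(img1, img2, nsigma=2.5, read_noise=5.0, gain=7.0):
    """Combine two equal-exposure frames, rejecting the higher pixel value
    wherever it lies more than `nsigma` times the estimated noise above the
    pair average (a CRREJ-like rule).  The read-noise + photon-noise model
    is illustrative only."""
    avg = 0.5 * (img1 + img2)
    sigma = np.sqrt(read_noise**2 + np.clip(avg, 0.0, None) / gain)
    hit = (np.maximum(img1, img2) - avg) > nsigma * sigma
    # Where a cosmic-ray hit is flagged, keep only the lower of the two values.
    return np.where(hit, np.minimum(img1, img2), avg)
```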
Photometry was then carried out using a program specifically designed to reduce undersampled WFPC2 data. The first step was to build a library of synthetic point spread functions (PSFs), for which Tiny Tim 4.0 (Krist 1995) was used. PSFs were calculated at 49 positions on each chip in F555W and F814W, subsampled at 20 per pixel in the WF chips and 10 in the PC chip. The subsampled PSFs were adjusted for charge diffusion and estimated subpixel QE variations, and combined for various locations of a star’s center within a pixel. For example, the library would contain a PSF for the case of a star centered in the middle of a pixel, as well as for a star centered on the edge of a pixel. In all, a 10x10 grid of possible centerings was made for the WF chips and a 5x5 grid for the PC chip. This served as the PSF library for the photometry.
The photometry was run with an iterative fit that located stars and then found the best combinations of stellar profiles to match the image. Rather than using a centroid to determine which PSF to use for a star, a fit was attempted with each PSF centered near the star’s position and the best-fitting PSF was chosen. This method helped avoid the problem of centering on an undersampled image. Residual cosmic rays and other non-stellar images were removed through a chi-squared cut over the final photometry list.
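A minimal sketch of the PSF-selection step described above (choosing, for each star, the best-fitting subpixel-centred PSF from the library rather than relying on a centroid); the weighted least-squares scaling and all variable names are illustrative assumptions.

```python
import numpy as np

def best_psf(stamp, noise, psf_library):
    """Return (chi2, subpixel_offset, flux) for the library PSF that best fits
    a star's image stamp.  `psf_library` maps subpixel offsets (dx, dy) to PSF
    arrays of the same shape as `stamp`; `noise` is the per-pixel uncertainty."""
    w = 1.0 / noise**2
    best = None
    for offset, psf in psf_library.items():
        # Optimal flux for the linear model stamp ~ flux * psf (weighted LSQ).
        flux = np.sum(w * stamp * psf) / np.sum(w * psf**2)
        chi2 = np.sum(w * (stamp - flux * psf)**2)
        if best is None or chi2 < best[0]:
            best = (chi2, offset, flux)
    return best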
The PSF fit was normalized to give the total number of counts from the star falling within a 0.5 arcsec radius of the center. This count rate was then converted into magnitudes as described in Holtzman et al (1995), using the CTE correction, geometric corrections, and transformation. For the color-magnitude diagram (CMD) and luminosity function analyses below, roughly the central 20% of the image was analyzed to maximize the signal from the globular while minimizing the background star contamination. In that region, the effect of background stars is negligible. The CMD from this method, showing all stars observed, is shown in Figure \[fig\_phot\]a, with the same data reduced with DAOPHOT shown in Figure \[fig\_phot\]b. Error bars are shown corresponding to the artificial star results (which account for crowding in addition to photon statistics), rather than the standard errors from the PSF fits and transformations. The photometry list is given in Table \[tabphot\], which will appear in its entirety only in the electronic edition. The table contains X and Y positions of each star, with $V$ and $I$ magnitudes and uncertainties. Uncertainties given are from the PSF fitting and from the transformations.
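Schematically, the Holtzman et al (1995) transformation converts a count rate within the 0.5 arcsec aperture into a standard magnitude through a zero point and colour terms of the form $$V\simeq-2.5\log_{10}\!\left(\frac{\mathrm{DN}}{t_{\mathrm{exp}}}\right)+\mathrm{ZP}_{F555W}+c_{1}\,(V-I)+c_{2}\,(V-I)^{2},$$ and analogously for $I$ from F814W; the zero points and colour coefficients $c_{1,2}$ are those of Holtzman et al (1995) and are not reproduced here, and the CTE and geometric corrections mentioned above are applied separately.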
Artificial star tests were made, with each star added and analyzed one at a time to minimize additional crowding often caused by the addition of the artificial stars. The artificial stars were added to both the combined $V$ and $I$ images, so that a library of artificial star results for a given position, $V$ magnitude, and color could be built. In addition to completeness corrections, these data were employed in the generation of synthetic CMDs for the determination of the star formation history of the cluster.
Analysis
========
Luminosity Function
-------------------
The luminosity functions (LFs) of the globular cluster in $V$ and $I$ are shown in Figure \[fig\_lf\]a and \[fig\_lf\]b, respectively, binned into 0.5 magnitude bins. Theoretical luminosity functions are given as well, from interpolated Padova isochrones (Girardi et al 1996, Fagotto et al 1994) using the star formation parameters and distance obtained in the CMD analysis below.
The observed and theoretical LFs are in excellent agreement. The bump in the observed $V$ LF between magnitudes 25 and 26, and the bump in the observed $I$ LF starting at magnitude 24.5 are due to the horizontal branch stars, which cannot be separated from the rest of the CMD cleanly. The only other significant deviation, the bump in the $V$ LF between magnitudes 22.5 and 23, is the clump of stars at the tip of the RGB, which is also observed in the CMD. This seems to be a statistical fluke, a result of the relatively small number of stars in that part of the RGB, and is similar to statistical flukes seen in Monte Carlo simulations. Thus as far as can be determined, the observed LF agrees with the theoretical expectations.
Color-Magnitude Diagram
-----------------------
For the CMD analysis, a cleaner CMD was achieved by omitting all stars with PSF fits worse than a chi-squared value of 3. The observed $V, V-I$ CMD is shown in Figure \[fig\_cmd\]a, and was analyzed as described in Dolphin (1997). Interpolated Padova isochrones (Girardi et al 1996, Fagotto et al 1994) were used to generate the synthetic CMDs, with photometric errors and incompleteness simulated by application of artificial star results to the isochrones. No assumptions were made regarding the star formation history, metallicity, distance, or extinction to the cluster, and a fit was attempted with all of these parameters free. The best fits that returned single-population star formation histories were then combined with a weighted average to determine the best parameters of star formation. Uncertainties were derived by taking a standard deviation of the parameters from the fits, and thus include the fitting errors and uncertainties resulting from an age-metallicity-distance “degeneracy.” Systematic errors due to the particular choice of evolutionary models are naturally present, but are not accounted for in the uncertainties. The following parameters were obtained:
- Age: 14.8 $\pm$ 0.6 Gyr
- Fe/H: -1.52 $\pm$ 0.08
- Distance modulus: 24.73 $\pm$ 0.07
- Av: 0.07 $\pm$ 0.06
A synthetic CMD constructed from these parameters is shown in Figure \[fig\_cmd\]b, using the artificial star data to mimic photometric errors, completeness, and blending in the data. The poorly reproduced horizontal branch is a result of the isochrones we used, but the giant branch was reproduced well, with the proper shape and position.
Structure
---------
Profiles were calculated in bins of 10 pixels (0.45 arcsec) in both the $V$ and $I$ images, and are shown in Figure \[fig\_prof\], corrected for incompleteness (both as a function of magnitude and position). The cutoff magnitudes of 27 in $V$ and 26 in $I$ were chosen to minimize the corrections required due to incompleteness. Additionally, the central bin (0-10 pixels) was omitted because of extreme crowding problems. The remaining bins were fit to King models with a least-squares fit. The best parameters for the King models (assuming a distance modulus of 24.73) are as follows (shown by the solid lines in Figure \[fig\_prof\]).
- core radius: 1.09 $\pm$ 0.14 arcsec (4.6 $\pm$ 0.6 pc)
- tidal radius: 31 $\pm$ 15 arcsec (130 $\pm$ 60 pc)
- core density: 59 $\pm$ 8 stars/arcsec$^2$ (3.2 $\pm$ 0.4 stars/pc$^2$) $V$, 44 $\pm$ 6 stars/arcsec$^2$ (2.4 $\pm$ 0.3 stars/pc$^2$) $I$
- background density: 0.77 $\pm$ 0.12 stars/arcsec$^2$ (0.042 $\pm$ 0.007 stars/pc$^2$) $V$, 0.77 $\pm$ 0.12 stars/arcsec$^2$ (0.042 $\pm$ 0.007 stars/pc$^2$) $I$
For a distance modulus of 24.87 (Lee et al. 1992), the corresponding sizes would be 7% larger. For comparison, Trager et al. (1993) find that 2/3 of Milky Way clusters have core radii between approximately 5 and 60 pc.
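A minimal sketch of such a least-squares fit, assuming the analytic King (1962) profile with an added constant background; the array names, initial guesses, and the use of scipy are illustrative rather than a description of the actual code used.

```python
import numpy as np
from scipy.optimize import curve_fit

def king_profile(r, n0, rc, rt, bg):
    """King (1962) surface-density profile plus a constant background term."""
    term = 1.0 / np.sqrt(1.0 + (r / rc)**2) - 1.0 / np.sqrt(1.0 + (rt / rc)**2)
    return n0 * np.clip(term, 0.0, None)**2 + bg

# r: bin-centre radii in arcsec (central bin omitted); dens, dens_err:
# completeness-corrected star densities and their uncertainties (hypothetical names).
# popt, pcov = curve_fit(king_profile, r, dens, sigma=dens_err,
#                        p0=[50.0, 1.0, 30.0, 0.8])
# rc_fit, rt_fit = popt[1], popt[2]
```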
Conclusions
===========
Our analysis shows that the WLM globular cluster is virtually indistinguishable from a halo globular in our Galaxy. We find that a formal fit to theoretical isochrones indicates an age of 14.8 $\pm$ 0.6 Gyr, which agrees with ages currently being measured for Galactic globulars (e.g., vandenBerg 1998) and a metallicity of \[Fe/H\] of -1.52 $\pm$ 0.08, a typical globular cluster value that is similar to that obtained for the outer field giant stars along the minor axis of WLM by Minniti and Zijlstra (1997) and by us (Dolphin 1999). The distance modulus for the cluster, derived independently from the parent galaxy, is 24.73 $\pm$ 0.07, which agrees within the errors with that derived from Cepheids and the RGB (Lee et al. 1992) of the galaxy.
In structure the globular is elongated in outline, with a mean radial profile that fits a King (1962) model within the observational uncertainties. We derive a core radius of 1.09 $\pm$ 0.14 arcsec and a tidal radius of 31 $\pm$ 15 arcsec, which translate to 4.6 $\pm$ 0.6 pc and 130 $\pm$ 60 pc, respectively. The core radius is very similar to that found for massive globulars in our Galaxy (Trager et al. 1993), while the tidal radius, though quite uncertain, is rather large by comparison. The former result indicates that formation conditions in this galaxy near its conception were such that a massive, highly concentrated star cluster could form, despite the very small amount of the total mass of material available. The latter result is probably an indication that the tidal force of the galaxy on the cluster is small.
The presence of a normal, massive globular cluster in this dwarf irregular galaxy may be a useful piece of evidence regarding the early history of star, cluster and galaxy formation. Recent progress in the field of globular cluster formation has resulted from both observational and theoretical studies (Searle & Zinn 1978, Harris & Pudritz 1994, McLaughlin & Pudritz 1994, Durrell et al 1996, Miller et al 1998, and McLaughlin 1999). Although the uncertainties from a single data point are sufficiently large to discourage quantitative analysis, the presence of a globular cluster in WLM would constrain formation models that predict $\ll 1$ cluster in such a galaxy.
We are indebted to the excellent staff of the Space Telescope Science Institute for obtaining these data and to NASA for support of the analysis through grant GO-06813.
\[tabphot\]
Ables, H. D. & Ables, P. G. 1977, ApJS 34, 245 Cohen, J. G. & Blakeslee, J. P. 1998, AJ 115, 2356 Dolphin, A. E. 1997, New Astronomy 2, 397 Dolphin, A. E. 1999, in prep. Durrell, P. R., Harris, W. E., Geisler, D., & Pudritz, R. E. 1996, AJ 112, 972 Fagotto, F., Bressan, A., Bertelli, G., & Chiosi, C. 1994, A&AS 105, 29 Feast, M. W. & Walker, A. R. 1987, ARA&A 25, 345 Ferraro, F. R., Pecci, F. F., Tosi, M., & Buonanno, R. 1989, MNRAS 241, 433 Girardi, L., Bressan, A., Chiosi, C., Bertelli, G., & Nasi, E. 1996, A&AS 117, 113 Harris, W. E. 1991, Ann. Rev. Astron. Astrophys. 29, 543 Harris, W. E. & Pudritz, R. E. 1994, ApJ 429, 177 Hodge, P. & Miller, B. W. 1995, ApJ 451, 176 Hodge, P., Skelton, B., & Ashizawa, J. 1999, *An Atlas of Local Group Galaxies*, Dordrecht: Kluwer Holtzman, J. A., Burrows, C. J., Casertano, S., Hester, J. J., Trauger, J. T., Watson, A. M., & Worthey, G. 1995, PASP 107, 1065 Humason, M. L., Mayall, N. U., & Sandage, A. R. 1956, AJ 61, 97 Johnson, J. A., Bolte, M., Bond, H., Hesser, J. E., De Oliveira, C. M., Richer, H. B., Stetson, P. B., & vandenBerg, D. A. 1998, IAUS 190, 154 King, I. R. 1962, AJ 67, 471 Krist, J. 1995, Astronomical Data Analysis Software and Systems IV, 77, 349 Lee, M. G., Freedman, W. L., & Madore, B. F. 1993, in *New perspectives on stellar pulsation and pulsating variable stars*, eds. J. M. Nemec & J. M. Matthews (Cambridge: Cambridge University Press), 92 McLaughlin, D. E. 1999, AJ in press McLaughlin, D. E. & Pudritz, R. E. 1994, AAS 185, 52.10 Miller, B. W., Lotz, J. M., Ferguson, H. C., Stiavelli, M., & Whitmore, B. C. 1998, ApJL 508, 133 Minniti, D. & Zijlstra, A. A. 1997, AJ 114, 147 Olsen, K. A. G., Hodge, P. W., Mateo, M., Olszewski, E. W., Schommer, R. A., Suntzeff, N. B., & Walker, A. R. 1998, MNRAS 300, 665 Sandage, A. & Carlson, G. 1985, AJ 90, 1464 Searle, L. & Zinn, R. 1978, ApJ 225, 357 Trager, S. C., Djorgovski, S., & King, I. R. 1993, in *Structure and Dynamics of Globular Clusters*, eds. S. G. Djorgovski & G. Meylan (San Francisco: BookCrafters), 347 vandenBerg, D. A. 1998, IAUS 189, 439
[^1]: Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
|
---
abstract: 'Magnetism in the FeAs stoichiometric compounds and its interplay with superconductivity in vortex states are studied by self-consistently solving the BdG equations based on a two-orbital model including the on-site interactions between electrons in the two orbitals. It is revealed that for the parent compound, magnetism is caused by the strong Hund’s coupling, and the Fermi surface topology helps to select the spin-density-wave (SDW) pattern. The superconducting (SC) order parameter with $s_{\pm}=\Delta_{0}\cos(k_{x})\cos(k_{y})$ symmetry is found to be the most favorable pairing for both the electron- and hole-doped cases, while the local density-of-states (LDOS) exhibits the characteristics of a nodal gap for the former and a full gap for the latter. In the vortex state, the emergence of the field-induced SDW depends on the strength of the Hund’s coupling and the Coulomb repulsions. The field-induced SDW gaps the finite energy contours on the electron and hole pocket sides, leading to dual structures, with one reflecting the SC pairing and the other being related to the SDW order. These features should be discernible in STM measurements for identifying the interplay between the field-induced SDW order and the SC order around the core region.'
author:
- 'Hong-Min Jiang'
- 'Jian-Xin Li'
- 'Z. D. Wang'
title: 'Vortex states in iron-based superconductors with collinear antiferromagnetic cores'
---
introduction
============
The recently discovered iron arsenide superconductors, [@kami1; @xhchen1; @zaren1; @gfchen1; @wang1] which display superconducting transition temperatures as high as 50 K or more, appear to share a number of general features with high-$T_{c}$ cuprates, including the layered structure and proximity to a magnetically ordered state. [@kami1; @cruz1; @jdong] The accumulated evidence has subsequently established that the parent compounds are generally poor metals and undergo structural and antiferromagnetic (AFM) spin-density-wave (SDW) transitions below certain temperatures. [@cruz1; @klau1] Elastic neutron scattering experiments have shown that the antiferromagnetic order is collinear and has a wavevector $(\pi,0)$ or $(0,\pi)$ in the unfolded Brillouin zone corresponding to a unit cell with one Fe atom. [@cruz1] Chemical doping and/or pressure suppresses the AFM SDW instability and eventually results in the emergence of superconductivity. [@kami1; @taka1] The novel magnetism and superconducting properties in these compounds have been a great spur to recent research. [@zhao1; @chen1; @goto1; @luet1; @qsi1; @ccao1; @djsingh; @yao1; @huwz; @daikou; @qhwang; @ragh1]
The relation between magnetism and superconductivity and the origin of magnetic order have attracted significant attention in the current research on FeAs superconductors. Discrepancies exist in the experimental results as to whether the superconductivity and antiferromagnetic order are well separated or can coexist in the underdoped region of the phase diagram, and how they coexist if they happen to do so. For example, there is no overlap between those two phases in CeFeAsO$_{1-x}$F$_{x}$ [@zhao1], while the coexistence of the two phases was observed in a narrow doping range in SmFeAsO$_{1-x}$F$_{x}$ [@liudrew], and in a broader range in Ba$_{1-x}$K$_{x}$Fe$_{2}$As$_{2}$ [@chen1; @goto1]. Even for the same LaFeAsO$_{1-x}$F$_{x}$ system, different experiments display conflicting results. It was reported that before the orthorhombic SDW phase is completely suppressed by doping, superconductivity has already appeared at low temperatures [@kami1], while it was also observed experimentally that superconductivity appears only after the SDW is completely suppressed [@luet1]. As for the origin of the SDW phase, two distinct types of theories have been proposed: a local moment antiferromagnetic ground state for strong coupling, [@qsi1] and an itinerant ground state for weak coupling. [@ccao1; @djsingh; @yao1; @ragh1] The detection of the local moment seems to question the weak coupling scenario, but the metallic-like (or bad metal) nature, as opposed to a correlated insulator as in cuprates, renders the strong coupling theories questionable. [@huwz] More recently, a compromise scheme was adopted: the SDW instability is assumed to result from the coupling of itinerant electrons with local moments, namely, neither the Fermi surface nesting nor the local moment scenario alone is able to account for it. [@daikou]
Although many research efforts have already been made to identify the existence of magnetic order and its origin as well as its relationship with superconductivity, there have been far fewer studies of the vortex states of these systems. While the interplay between magnetism and superconductivity has yet to be experimentally clarified, the superconducting critical temperature $T_{c}$ reaches its maximum value after the antiferromagnetic spin order is completely suppressed in the materials, indicating competition between the AFM SDW instability and superconductivity. At this stage, it is valuable and interesting to investigate vortex states in the family of FeAs compounds, mainly because magnetic order may arise naturally when the superconducting order is destroyed by the magnetic vortex. Therefore, local tunneling spectroscopic probes in vortex states can be used to understand in depth the interplay between magnetic order and superconductivity.
In this paper, we investigate magnetism in the FeAs stoichiometric compounds, and the interplay between it and superconductivity upon doping in vortex states by self-consistently solving the BdG equations based on the two-orbital model including the on-site interactions between electrons in the two orbitals. It is shown that for the parent compound, magnetism is caused by the strong Hund’s coupling, and the Fermi surface topology helps to select the SDW ordering pattern. The SDW results in a pseudogap-like feature at the Fermi level in the LDOS. It is found that the SC order parameter with $s_{\pm}=\Delta_{0}\cos(k_{x})\cos(k_{y})$ symmetry is the most favorable pairing on both the electron- and hole-doped sides, while the LDOS exhibits the characteristics of a nodal gap for the former and a full gap for the latter. In the vortex states, the emergence of the field-induced SDW order depends heavily on the strength of the Hund’s coupling and the Coulomb repulsions. The coexistence of the field-induced SDW order and SC order around the core region is realized due to the fact that the two orders emerge at different energies. The corresponding LDOS at the core region displays a dual structure, with one part reflecting the SC pairing and the other being related to the SDW order.
The paper is organized as follows. In Sec. II, we introduce the model Hamiltonian and carry out analytical calculations. In Sec. III, we present numerical calculations and discuss the results. In Sec. IV, we make remarks and conclusion.
THEORY AND METHOD
=================
We start with an effective two-orbital model [@ragh1] that takes only the iron $d_{xz}$ and $d_{yz}$ orbitals into account. By assuming an effective attraction that causes the superconducting pairing and including the possible interactions between the two orbitals’ electrons, one can construct an effective model to study the vortex physics of the iron-based superconductors in the mixed state: $$\begin{aligned}
H=H_{0}+H_{pair}+H_{int}.\end{aligned}$$ The first term is a tight-binding model $$\begin{aligned}
H_{0}=&&-\sum_{ij,\alpha\beta,\sigma}e^{i\varphi_{ij}}t_{ij,\alpha\beta}
c^{\dag}_{i,\alpha,\sigma}c_{j,\beta,\sigma}
\nonumber \\
&&-\mu\sum_{i,\alpha,\sigma}c^{\dag}_{i,\alpha,\sigma}c_{i,\alpha,\sigma},\end{aligned}$$ which describes the electron effective hoppings between sites $i$ and $j$ of the Fe ions on the square lattice, including the intra- ($t_{ij,\alpha\alpha}$) and inter-orbital ($t_{ij,\alpha,\beta},
\alpha\neq\beta$) hoppings with the subscripts $\alpha$, $\beta$ ($\alpha,(\beta)=1,2$ for $xz$ and $yz$ orbital, respectively) denoting the orbitals and $\sigma$ the spin. $c^{\dag}_{i,\alpha\sigma}$ creates an $\alpha$ orbital electron with spin $\sigma$ at the site $i$ ($i\equiv(i_{x},i_{y})$), and $\mu$ is the chemical potential. The magnetic field is introduced through the Peierls phase factor $e^{i\varphi_{ij}}$ with $\varphi_{ij}=\frac{\pi}{\Phi_{0}}\int^{r_{i}}_{r_{j}}\mathbf{A(r)}\cdot
d\mathbf{r}$, where $A=(-Hy, 0, 0)$ stands for the vector potential in the Landau gauge and $\Phi_{0}=hc/2e$ is the superconducting flux quantum. The hopping integrals are chosen so as to capture the essence of the density functional theory (DFT) results. [@gxu1] Taking the hopping integral between the $d_{yz}$ orbitals $|t_{1}|=1$ as the energy unit, we have, $$\begin{aligned}
t_{i,i\pm \hat{x},xz,xz}=&&t_{i,i\pm \hat{y},yz,yz}=t_{1}=-1.0 \nonumber\\
t_{i,i\pm\hat{y},xz,xz}=&&t_{i,i\pm \hat{x},yz,yz}=t_{2}=1.3 \nonumber\\
t_{i,i\pm\hat{x}\pm\hat{y},xz,xz}=&&t_{i,i\pm\hat{x}\pm
\hat{y},yz,yz}=t_{3}=-0.9 \nonumber\\
t_{i,i+\hat{x}-\hat{y},xz,yz}=&&t_{i,i+\hat{x}-\hat{y},yz,xz}
=t_{i,i-\hat{x}+\hat{y},xz,yz} \nonumber\\
=&&t_{i,i-\hat{x}+\hat{y},yz,xz}=t_{4}=-0.85 \nonumber\\
t_{i,i+\hat{x}+\hat{y},xz,yz}=&&t_{i,i+\hat{x}+\hat{y},yz,xz}
=t_{i,i-\hat{x}-\hat{y},xz,yz} \nonumber\\
=&&t_{i,i-\hat{x}-\hat{y},yz,xz}=-t_{4}.\end{aligned}$$ Here, $\hat{x}$ and $\hat{y}$ denote the unit vector along the $x$ and $y$ direction, respectively.
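As a cross-check of these parameters, the zero-field, non-interacting part of the Hamiltonian can be diagonalised in momentum space. The sketch below is our own Fourier transform of $H_{0}$ with the hoppings listed above (the sign convention follows the $-t_{ij}$ form of $H_{0}$, and the Fourier convention is our assumption); $\mu=1.75$ is the electron-doped value quoted later in the text.

```python
import numpy as np

t1, t2, t3, t4 = -1.0, 1.3, -0.9, -0.85   # hopping integrals listed above

def h0k(kx, ky, mu=1.75):
    """Zero-field 2x2 Bloch Hamiltonian obtained by Fourier transforming H_0
    with the hoppings listed above (orbital basis: xz, yz)."""
    diag_common = -4.0 * t3 * np.cos(kx) * np.cos(ky) - mu
    e_xz = -2.0 * t1 * np.cos(kx) - 2.0 * t2 * np.cos(ky) + diag_common
    e_yz = -2.0 * t2 * np.cos(kx) - 2.0 * t1 * np.cos(ky) + diag_common
    e_xy = -4.0 * t4 * np.sin(kx) * np.sin(ky)          # inter-orbital term
    return np.array([[e_xz, e_xy], [e_xy, e_yz]])

# Example: band energies along Gamma -> M1 = (pi, 0).
for kx in np.linspace(0.0, np.pi, 5):
    print(round(kx, 2), np.linalg.eigvalsh(h0k(kx, 0.0)))
```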
The second term accounts for the superconducting pairing. Considering that a main purpose here is to address the interplay between the SC and magnetism in the vortex state for the FeAs-based superconductors, we take a phenomenological form for the pairing interaction, $$\begin{aligned}
H_{pair}=&&\sum_{i\neq
j,\alpha\beta}V_{ij}(\Delta_{ij,\alpha\beta}c^{\dag}_{i,\alpha\uparrow}c^{\dag}_{j,\beta\downarrow}
+h.c.)\end{aligned}$$ with $V_{ij}$ as the strengths of effective attractions.
The third term represents the interactions between electrons, [@oles1] $$\begin{aligned}
H_{int}=&& U\sum_{i,\alpha}n_{i,\alpha\uparrow}n_{i,\alpha\downarrow}\nonumber\\
&&
+U^{'}\sum_{i,\alpha<\beta,\sigma}n_{i,\alpha,\sigma}n_{i,\beta,\bar{\sigma}}
\nonumber\\
&&+(U^{'}-J)\sum_{i,\alpha<\beta,\sigma}
n_{i,\alpha,\sigma}n_{i,\beta,\sigma}
\nonumber\\
&&+J^{'}\sum_{i,\alpha<\beta}(c^{\dag}_{i,\alpha\uparrow}c^{\dag}_{i,\alpha\downarrow}c_{i,\beta\downarrow}
c_{i,\beta\uparrow}\nonumber\\
&&+c^{\dag}_{i,\alpha\uparrow}c^{\dag}_{i,\beta\downarrow}c_{i,\alpha\downarrow}
c_{i,\beta\uparrow}+h.c.),\end{aligned}$$ which includes the intra-orbital (inter-orbital) Coulomb repulsion $U$ ($U^{'}$), the Hund’s rule coupling $J$ as well as the inter-orbital Cooper pairing hopping term $J^{'}$.
After the Hartree-Fock decomposition of the on-site interaction term, one arrives at the Bogoliubov-de Gennes equations in the mean field approximation for this model Hamiltonian $$\begin{aligned}
\sum_{j,\alpha<\beta}\left(
\begin{array}{cccc}
H_{ij,\alpha\alpha,\sigma} &
\tilde{\Delta}_{ij,\alpha\alpha} & H_{ij,\alpha\beta,\sigma} & \Delta^{\ast}_{ii,\beta\alpha} \\
\tilde{\Delta}^{\ast}_{ij,\alpha\alpha} &
-H^{\ast}_{ij,\alpha\alpha,\bar{\sigma}} &
\Delta_{ii,\alpha\beta} & -H^{\ast}_{ij,\alpha\beta,\bar{\sigma}} \\
H_{ij,\alpha\beta,\sigma} & \Delta^{\ast}_{ii,\alpha\beta} &
H_{ij,\beta\beta,\sigma} & \tilde{\Delta}_{ij,\beta\beta} \\
\Delta_{ii,\beta\alpha} & -H^{\ast}_{ij,\alpha\beta,\bar{\sigma}} &
\tilde{\Delta}^{\ast}_{ij,\beta\beta} &
-H^{\ast}_{ij,\beta\beta,\bar{\sigma}}
\end{array}
\right)\nonumber\\
\times\left(
\begin{array}{cccc}
u^{n}_{j,\alpha,\sigma} \\
v^{n}_{j,\alpha,\bar{\sigma}} \\
u^{n}_{j,\beta,\sigma} \\
v^{n}_{j,\beta,\bar{\sigma}}
\end{array}
\right)= E_{n}\left(
\begin{array}{cccc}
u^{n}_{i,\alpha,\sigma} \\
v^{n}_{i,\alpha,\bar{\sigma}} \\
u^{n}_{i,\beta,\sigma} \\
v^{n}_{i,\beta,\bar{\sigma}}
\end{array}
\right),\end{aligned}$$ where, $$\begin{aligned}
H_{ij,\alpha\alpha,\sigma}=&&-e^{i\varphi_{ij}}t_{ij,\alpha\alpha}+\delta_{ij}
[U\langle n_{i,\alpha,\bar{\sigma}}\rangle+ \nonumber\\
&&U^{'}\langle
n_{i,\beta(\beta\neq\alpha),\bar{\sigma}}\rangle+(U^{'}-J)\langle n_{i,\beta(\beta\neq\alpha),\sigma}\rangle
\nonumber\\ &&-\mu] \nonumber\\
H_{ij,\alpha\beta(\beta\neq\alpha),\sigma}=&&-e^{i\varphi_{ij}}t_{ij,\alpha\beta(\beta\neq\alpha)}
\nonumber\\
\tilde{\Delta}_{ij,\alpha\alpha}=&&\Delta_{ij,\alpha\alpha}+
\Delta^{\ast}_{ii,\beta\beta(\beta\neq\alpha)}.\end{aligned}$$ $u^{n}_{j,\alpha,\sigma}$ ($u^{n}_{j,\beta,\bar{\sigma}}$), $v^{n}_{j,\alpha,\sigma}$ ($v^{n}_{j,\beta,\bar{\sigma}}$) are the Bogoliubov quasiparticle amplitudes on the $j$-th site with corresponding eigenvalues $E_{n}$.
The pairing amplitude and electron densities are obtained through the following self-consistent equations, [@chen-wang] $$\begin{aligned}
\Delta_{ij(i\neq
j),\alpha\alpha}=&&\frac{V}{4}\sum_{n}(u^{n}_{i,\alpha,\sigma}
v^{n\ast}_{j,\alpha,\bar{\sigma}}
+v^{n\ast}_{i,\alpha,\bar{\sigma}}u^{n}_{j,\alpha,\sigma})\times \nonumber\\
&&\tanh(\frac{E_{n}}{2k_{B}T})
\nonumber\\
\Delta_{ii,\alpha\beta}=&&\frac{J}{4}\sum_{n}(u^{n}_{i,\alpha,\sigma}
v^{n\ast}_{i,\beta,\bar{\sigma}}
+v^{n\ast}_{i,\beta,\bar{\sigma}}u^{n}_{i,\alpha,\sigma})\times \nonumber\\
&&\tanh(\frac{E_{n}}{2k_{B}T})
\nonumber\\
n_{i,\alpha,\uparrow}=&&\sum_{n}|u^{n}_{i,\alpha,\uparrow}|^{2}f(E_{n}) \nonumber\\
n_{i,\alpha,\downarrow}=&&\sum_{n}|v^{n}_{i,\alpha,\downarrow}|^{2}[1-f(E_{n})].\end{aligned}$$
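The structure of the self-consistency loop implied by these equations can be illustrated with a deliberately simplified example: a single-band, zero-field square lattice with on-site $s$-wave pairing. The sketch only shows the iteration (diagonalise, update the order parameter, mix, repeat); the two-orbital, finite-field calculation of this paper additionally carries the orbital indices, the Peierls phases, and the Hartree shifts.

```python
import numpy as np

def bdg_iterate(nx=8, ny=8, t=1.0, V=2.0, mu=-0.5, T=0.01, tol=1e-6, max_iter=500):
    """Self-consistent BdG loop for a single-band square lattice with on-site
    s-wave pairing and periodic boundary conditions (illustration only)."""
    N = nx * ny
    site = lambda x, y: (x % nx) * ny + (y % ny)
    H0 = np.zeros((N, N))
    for x in range(nx):
        for y in range(ny):
            i = site(x, y)
            for j in (site(x + 1, y), site(x, y + 1)):
                H0[i, j] = H0[j, i] = -t          # nearest-neighbour hopping
    H0 -= mu * np.eye(N)

    delta = 0.1 * np.ones(N)                       # initial guess for the gap
    for _ in range(max_iter):
        HBdG = np.block([[H0, np.diag(delta)],
                         [np.diag(delta), -H0]])
        E, W = np.linalg.eigh(HBdG)
        u, v = W[:N, :], W[N:, :]
        # Gap equation: the factor 1/2 compensates the sum over both the
        # positive- and negative-energy branches of the BdG spectrum.
        delta_new = 0.5 * V * np.sum(u * v * np.tanh(E / (2.0 * T)), axis=1)
        if np.max(np.abs(delta_new - delta)) < tol:
            break
        delta = 0.5 * (delta + delta_new)          # linear mixing for stability
    return delta, E, W

# delta, E, W = bdg_iterate()   # a uniform gap is expected on a clean lattice
```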
The electronic structure associated with the SDW and the vortex states, namely, the local density-of-states (LDOS), $N(\mathbf{r}_{i},E)$ is calculated by $$\begin{aligned}
N(\mathbf{r}_{i},E)=&&
-\sum_{n,\alpha}[|u_{i,\alpha,\uparrow}^{n}|^{2}f^{'}(E_{n}-E) \nonumber\\
&&+|v_{i,\alpha,\downarrow}^{n}|^{2} f^{'}(E_{n}+E)].\end{aligned}$$ where, $f^{'}(E)$ is the derivative of the Fermi-Dirac distribution function with respect to energy.
In numerical calculations, the undoped case is determined by the equality of the areas enclosed by the electron and hole pockets in the unfolded Brillouin zone, which leads to $n_{h}=\sum_{i,\alpha,\sigma}(n_{\alpha,\sigma})/(N_{x}N_{y})=2$. The Coulomb interactions $U, U^{'}, J, J^{'}$ are expected to satisfy the conventional relations $U^{'}=U-2J$ and $J^{'}=J$. [@yao1; @wern1] In the literature, $U\sim 0.2W-0.5W$ and $J\sim 0.09W$ are expected. [@ccao1; @yao1] Here, $W$ is the energy bandwidth, which is $12.4|t_{1}|$ in our case. This gives rise to $J\sim |t_{1}|$ [@haule] and $U\sim 2.2J-5.5J$. We have found numerically that the results presented here are not subject to qualitative changes in the intermediate coupling range $U\approx 3J\sim 4J$, where the ground state is an AFM metal. [@lorenz; @rongyu] In the following, the typical result with $U=3.5J$ will be presented. We take $V_{ij}=0$ for the normal state. In the SC state, $V_{ij}$ is chosen to give a short coherence length of a few lattice spacings, consistent with experiments. [@takeshita] We use $V_{ij}=V=2.0$ and $\mu=1.75$ ($\mu=1.13$), which gives rise to the filling factor $n=\sum_{i,\alpha,\sigma}(n_{\alpha,\sigma})/(N_{x}N_{y})=2.2$ ($n=1.8$) and a coherence peak of the SC order parameter in the DOS at $\Delta_{max}\simeq0.4$. Thus, we estimate the coherence length $\xi_{0}\sim E_{F}a/|\Delta_{max}|\sim4a$. [@ydzhu] Due to this short coherence length, the system is presumably a type-II superconductor. The unit cell with size $N_{x}\times
N_{y}=40\times20$ and the number of such unit cells $M_{x}\times
M_{y}=10\times20$ are used in the numerical calculations. In view of these parameters, we estimate the upper critical field $B_{c2}\sim130$ T. Therefore, the model calculation is particularly suitable for iron-based type-II superconductors such as CaFe$_{1-x}$Co$_{x}$AsF, Eu$_{0.7}$Na$_{0.3}$Fe$_{2}$As$_{2}$ and FeTe$_{1-x}$S$_{x}$, where the typical coherence length $\xi_{0}$ deduced from experiments is a few lattice spacings [@takeshita] and the upper critical field reaches as high as dozens of tesla.
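For orientation, the quoted field scale is consistent with the standard Ginzburg-Landau estimate; the lattice constant used here to convert $\xi_{0}\simeq4a$ into physical units is our assumption: $$B_{c2}\sim\frac{\Phi_{0}}{2\pi\xi_{0}^{2}}\approx 1.3\times10^{2}\ \mathrm{T},\qquad \xi_{0}=4a\approx 1.6\ \mathrm{nm},\quad a\approx 0.4\ \mathrm{nm}.$$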
results and discussion
======================
SDW phase in the absence of the magnetic field
----------------------------------------------
In the absence of a magnetic field and pairing term, we obtain the collinear AFM SDW at the half filling. Fig. 1(a) is the typical result with $J=0.96$ for the real space distribution of the moment $M_{i}$ defined as $M_{i}=\sum_{\alpha}M_{i,\alpha}$ with $M_{i,\alpha}=\frac{1}{2}(\langle
n_{i,\alpha,\uparrow}\rangle-\langle
n_{i,\alpha,\downarrow}\rangle)$ being the spin order defined on the $\alpha$ orbital. As can be seen in Fig. 1(a), the moments $M_{i}$ align antiferromagnetically along the $x$ direction but ferromagnetically along the $y$ direction. In Fig. 1(b), the Fourier transform of $M_{i}$ gives an SDW order with wave vector $Q=(\pi,0)$, which is consistent with experimental results in the undoped systems. [@cruz1; @chen2] \[For other initial input parameters, the degenerate configuration of $M_{i}$ with wave vector $(0,\pi)$ can be obtained.\] We note that the emergence of magnetic order depends heavily on the Hund’s coupling strength $J$. For $J=0$, the magnetically ordered phase does not exist even for very large $U$ and $U^{'}$. Therefore, magnetism itself is generated by the strong Hund’s coupling, whereas the Fermi surface topology helps to select the ordering pattern. [@mazi2] This is reminiscent of the spin-freezing phase found in a three-orbital model relevant to the transition metal oxide SrRuO$_{3}$, [@wern1] and may be a common feature of the origin of magnetic order in multi-orbital systems involving the Hund’s coupling interaction.
In Fig. 1(c), we plot the LDOS $N(\mathbf{r}_{i},E)$ in the SDW state at a site with positive $M_{i}$, i.e., the spin-up site labeled $A$ in Fig. 1(a). The electronic structure in the SDW state displays a clear pseudogap-like feature with a heavily depressed but nonvanishing density-of-states (DOS) at the Fermi energy, pointing to a metallic magnetically ordered state. This pseudogap-like feature derived from the magnetic order is consistent with the experimental observation of partial gaps in the SDW state of the parent compounds [@wzhu1] and may account for the pseudogap feature seen in several experiments. [@liuzhao; @boysai]
The pseudogap feature comes from the fact that when the SDW order with wave vector $Q$ is involved, gaps open on those parts of the Fermi surface which are best connected by the wave vector $Q$, while those that are not connected by $Q$ remain untouched, leading to the partial gaps in the SDW states of the parent compounds. [@hsie1] We illustrate this point more clearly in Fig. 1(d), in which the spectral weight distribution $I(k)=\int_{\varepsilon_{F}-\Delta w}^{\varepsilon_{F}+\Delta
w}A(k,\omega)d\omega$ is shown. Here, $A(k,\omega)$ is the single-particle spectral function and $\Delta w$ is an integration window. As shown, both the electron and hole Fermi pockets are partially gapped.
Configuration of the order parameters
-------------------------------------
In the search for the most favorable pairing symmetry, we consider all possible singlet pairings, including the extended $s$- and $d$-wave symmetries, between the nearest, next-nearest, and third-nearest neighbor (NN, NNN, TNN) sites, as shown in Figs. 2(a)-2(c). The pairing amplitude of the $s$-wave symmetry has the same sign along the $x$ and $y$ directions for the NN or TNN sites pairing, resulting in the $k$ dependent pairing form $\Delta(k)=\Delta_{0}[\cos(k_{x})+\cos(k_{y})]$ for the NN sites pairing and $\Delta(k)=\Delta_{0}[\cos(2k_{x})+\cos(2k_{y})]$ for the TNN sites pairing, respectively; and the same sign along the $x=y$ and $x=-y$ directions for the NNN sites pairing, resulting in the $k$ dependent pairing form $\Delta(k)=\Delta_{0}[\cos(k_{x})\cos(k_{y})]$. The $d$-wave pairing, on the other hand, has amplitude $+\Delta_{0}$ along the $x$ direction and $-\Delta_{0}$ along the $y$ direction for the NN or TNN sites pairing, resulting in the $k$ dependent pairing form $\Delta(k)=\Delta_{0}[\cos(k_{x})-\cos(k_{y})]$ for the NN sites pairing and $\Delta(k)=\Delta_{0}[\cos(2k_{x})-\cos(2k_{y})]$ for the TNN sites pairing, respectively; and $+\Delta_{0}$ along $x=y$ direction and $-\Delta_{0}$ along $x=-y$ direction for the NNN sites pairing, resulting in the $k$ dependent pairing form $\Delta(k)=\Delta_{0}[\sin(k_{x})\sin(k_{y})]$.
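Collecting the momentum-space forms listed above (NN, NNN and TNN pairings in the three columns): $$\begin{aligned}
\Delta^{s}_{\mathrm{NN}}(k)&=\Delta_{0}[\cos k_{x}+\cos k_{y}], &
\Delta^{s}_{\mathrm{NNN}}(k)&=\Delta_{0}\cos k_{x}\cos k_{y}, &
\Delta^{s}_{\mathrm{TNN}}(k)&=\Delta_{0}[\cos 2k_{x}+\cos 2k_{y}],\\
\Delta^{d}_{\mathrm{NN}}(k)&=\Delta_{0}[\cos k_{x}-\cos k_{y}], &
\Delta^{d}_{\mathrm{NNN}}(k)&=\Delta_{0}\sin k_{x}\sin k_{y}, &
\Delta^{d}_{\mathrm{TNN}}(k)&=\Delta_{0}[\cos 2k_{x}-\cos 2k_{y}].\end{aligned}$$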
The introduction of pairing interaction suppresses the SDW order completely on both the electron- and hole-doped sides, and leads to the homogeneous SC order in real space. We carry out extensive calculations, and find that in the reasonable doping range the most favorable pairing symmetry is the intra-orbital pairing between NNN sites, $$\begin{aligned}
\Delta_{i,i+\hat{x}+\hat{y},\alpha\alpha}=&&\Delta_{i,i-\hat{x}-\hat{y},\alpha\alpha}
=\nonumber\\
\Delta_{i,i+\hat{x}-\hat{y},\alpha\alpha}=&&\Delta_{i,i-\hat{x}+\hat{y},\alpha\alpha}
=\Delta_{i,\alpha\alpha},\end{aligned}$$ which leads to the $s_{\pm}$-wave pairing $s_{\pm}=\Delta_{0}\cos(k_{x})\cos(k_{y})$, being consistent with that obtained before. [@yao1; @mazihu; @dhfcch] Then, the superconducting order parameter $\Delta_{i}$ is expressed as, $$\begin{aligned}
\Delta_{i}=\frac{1}{2}\sum_{\alpha}\Delta_{i,\alpha\alpha}.\end{aligned}$$ For the choice of $V=2.0$ in this paper, one gets an amplitude $\Delta_{i}\sim0.12$ for the $s_{\pm}$ SC order.
Vortex states
-------------
When a magnetic field is applied, the SC order parameter around the vortex core is suppressed, so that the system may be driven into a vortex state. We find that there exists a critical Hund’s coupling value $J_{c}$ separating the regimes of two kinds of vortex states associated respectively with and without the field-induced SDW order. In the following, we address these two regimes in detail.
### The vortex state without the field-induced SDW order
The vortex state without the field-induced SDW order is stable when $J$ is less than $J_{ce}=0.9$ on the electron-doped side with $n=2.2$ and less than $J_{ch}=1.25$ on the hole-doped side with $n=1.8$, respectively. Typical results on the nature of the vortex state are displayed in Fig. 3 for $n=2.2$ with the Hund’s coupling $J=0.85$, for which no magnetic order is induced. As shown in Fig. 3(a), each unit cell accommodates two superconducting vortices, each carrying a flux quantum $hc/2e$. The SC order parameter $\Delta_{i}$ vanishes at the vortex core center and recovers its bulk value at the core edge with radius $\xi_{1}$ on the scale of the coherence length $\xi_{0}$.
In Figs. 3(b) and 3(c) we plot the LDOS as a function of energy at the vortex core center in the absence of the field-induced magnetic order for the electron-doped case with $n=2.2$ and the hole-doped case with $n=1.8$, respectively. For comparison, we have also displayed the LDOS at the midpoint between two nearest-neighbor vortices along the $x$ direction, which resembles that for the bulk system. As seen from Figs. 3(b) and 3(c), when $J=0.85$, for which no local SDW order is induced, the LDOS at the core center shows a single resonant peak within the SC gap edge for both the electron- and hole-doped cases, similar to that reported by other authors for the high-$T_{c}$ cuprate superconductors in the vortex state. [@yongwang] However, clear differences appear between the electron- and hole-doped cases in the position of the resonant peak and the line shape of the bulk LDOS, in spite of the same SC pairing symmetry considered here. More specifically, for the electron-doped case, the position of the in-gap resonant peak is almost at the Fermi level and the bulk system exhibits a $V$-shaped LDOS curve, the typical characteristics of a nodal SC gap. However, for the hole-doped case, the resonant peak deviates from the Fermi level to a higher energy and the bulk system exhibits a $U$-shaped LDOS curve, from which one can conclude that the SC gap is full.
The notable differences can be qualitatively understood as follows. The Fermi surface of the FeAs superconductors consists of hole Fermi surfaces around the $\Gamma$-point at $(k_{x}, k_{y}) = (0, 0)$, forming the hole pocket, and electron Fermi surfaces around the $M_{1,2}$ points at $(\pi, 0)$ and $(0, \pi)$, forming the electron pocket. Both Fermi pockets change their size upon doping, as depicted in Fig. 3(d), where only the relevant electron pockets are displayed. The electron pocket enlarges and approaches the nodal line of the $s_{\pm}$ SC gap with electron doping, while it shrinks and moves away from the nodal line with hole doping. Thus the low-energy quasiparticles in the SC phase show nodal behavior in the electron-doped system and nodeless behavior in the hole-doped system. This may explain the discrepancy observed in experiments concerning the pairing symmetry, where evidence for a nodal gap was obtained on the electron-doped LnFeAsO (Ln stands for the rare earth elements) samples [@mushr; @grluca; @maahma] and a dominant full-gap feature was found on the hole-doped (Ba,Sr)$_{1-x}$K$_{x}$Fe$_{2}$As$_{2}$ systems [@limu; @wzhu1] in measurements of thermal and transport properties.
### The vortex state with the field-induced SDW order
As $J$ increases to about $J_{ce}=0.9$ on the electron-doped side with $n=2.2$ and $J_{ch}=1.25$ on the hole-doped side with $n=1.8$, the SDW order is induced around the core region. Fig. 4 displays the vortex structure with $J=0.96$ for the electron-doped case, where the local magnetic order is induced around the vortex core, as shown in Fig. 4(b), which presents the spatial distribution of the local SDW order as defined in Sec. III A. As seen in Fig. 4(a), the vortex core expands further, to a radius $\xi_{2}$, compared with that in Fig. 3(a). Meanwhile, the maximum strength of $M_{i}$ appears at the vortex core center and decays to zero over a scale of $\xi_{2}$ into the superconducting region, reflecting the competition between the SC and magnetic orders as observed in experiment [@prat1].
In this case, there is no obvious splitting of the in-gap bound state peak, though its intensity is heavily suppressed in the LDOS for both the electron- and hole-doped cases, as shown in Figs. 4(c) and 4(d). In addition to the in-gap state peak, an additional peak structure below the Fermi energy appears for the electron-doped case, while two peaks are located around and above the Fermi energy for the hole-doped case. These features are dramatically different from those of the high-$T_{c}$ cuprates in vortex states with field-induced antiferromagnetic order, where the in-gap resonant peak of the core bound state is split into two peaks sitting symmetrically about the Fermi energy. [@maggio]
One can analyze the present vortex state in the following simple way: two factors play a role in the physics around the vortex core region. One is the pure SC vortex state without SDW in the magnetic field, while the other is the SDW state without the SC order in the doped case. It is known that doping destroys the nesting between parts of the Fermi surface on the electron and hole pockets, owing to the change of the pocket sizes upon doping compared with the undoped case. However, as depicted in Figs. 5(c) and 5(d), the SDW wave vector $Q$ now connects the finite-energy contours in the doped case, resulting in the gap-like feature below (above) the Fermi energy in the LDOS shown in Fig. 5(a) (Fig. 5(b)) for the electron- (hole-) doped case. The combination of the in-gap resonant peak of the vortex state without SDW and the SDW-induced finite-energy gap feature produces a dual structure in the LDOS at finite doping, i.e., the in-gap bound state peak reflecting the SC pairing and the other peak structure being related to the SDW order.
remarks and conclusion
======================
Clarification of the interplay between magnetism and superconductivity is a key step toward understanding the underlying physics of the Fe-based high-$T_{c}$ superconductors. Although the competition between them has been identified in both classes of materials, some materials still show their coexistence. [@liudrew; @chen1; @goto1; @prat1] Competition between the AFM SDW and SC is natural in FeAs compounds when one considers that both originate from the multiple Fe $d$ conduction bands, but the quest for the mechanism of their coexistence is more challenging. Recently, an incommensurate SDW state with wave vector $Q^{'}=Q\pm q$ has been proposed to account for the coexistence of the AFM SDW and superconductivity at finite doping [@cvet1; @voro1]. In such a state, the mismatch between the electron and hole Fermi pockets at finite doping is compensated by the incommensurate wave vector $q$, leading to imperfect “nesting” between the electron and hole Fermi pockets and allowing for the coexistence of magnetism and superconductivity. This mechanism only works at doping levels near the AFM instabilities, where the mismatch between the electron and hole Fermi pockets is small. Here, we show that the field-induced SDW at doping levels far from the AFM instabilities is commensurate, with the same wave vector $Q$ as in the undoped case, and it gaps the finite-energy contours on the electron and hole pocket sides. Therefore, at optimal doping, the field-induced SDW and the SC order around the core region may be associated with the DOS at different energies, allowing them to coexist.
In conclusion, we have studied magnetism in the FeAs stoichiometric compounds and the interplay between it and superconductivity upon doping in the vortex state by self-consistently solving the BdG equations based on the two-orbital model including the on-site interactions between electrons in the two orbitals. It has been shown that for the parent compound, magnetism is caused by the strong Hund’s coupling, and the Fermi surface topology helps to select the SDW ordering pattern. The SDW results in a pseudogap-like feature at the Fermi level in the LDOS. The SDW order is completely suppressed upon the introduction of the SC interaction. We have also found that the SC order parameter with $s_{\pm}=\Delta_{0}\cos(k_{x})\cos(k_{y})$ symmetry is the most favorable pairing on both the electron- and hole-doped sides, while the LDOS exhibits the characteristics of a nodal gap for the former and a full gap for the latter. In vortex states, the emergence of the field-induced SDW order depends heavily on the strength of the Hund’s coupling and the Coulomb repulsions, while the coexistence of the field-induced SDW order and SC order around the core region is realized due to the fact that the two orders emerge at different energies. The LDOS at the core region for the vortex state with SDW displays a dual structure, with one part reflecting the SC pairing and the other being related to the SDW order. These features should be discernible in STM measurements for identifying the interplay between the field-induced SDW order and the SC order around the core region.
acknowledgement
===============
This project was supported by National Natural Science Foundation of China (Grant No. 10525415 and No. 10904062), the Ministry of Science and Technology of China (Grants Nos. 2006CB601002, 2006CB921800), a GRF grant of Hong Kong (HKU7055/09P), the China Postdoctoral Science Foundation (Grant No. 20080441039), and the Jiangsu Planned Projects for Postdoctoral Research Funds (Grant No. 0801008C).
[*Note added*]{}-After completing this work, we get to know a related work done by X. Hu, C. S. Ting, and J. X. Zhu [@xhu].
![(Color online) (a) The real space distribution of the moment $M_{i}$. (b) The Fourier transformation of $M_{i}$. (c) The LDOS at the site labeled as A in Fig. 1(a) in the SDW state. and (d) spectral weight distribution in the SDW state (see text).[]{data-label="fig1"}](sdw1.eps){width="260pt" height="230pt"}
![(Color online) The SC order parameter configuration in the real space between the NN sites pairing (a), the NNN sites pairing (b), and the TNN sites pairing (c), respectively. The minus signs in the parentheses correspond to the $d$-wave pairings, otherwise the $s$-wave pairings.[]{data-label="fig2"}](oc1.eps){width="270pt" height="130pt"}
![(Color online) (a) The real space distribution of the SC order amplitude $|\Delta_{i}|$ without the field-induced SDW. The LDOS curves at the core center (black line) and for the bulk system (green line) with electron doping $n=2.2$ (b), and hole doping $n=1.8$ (c), respectively. (d) The electron-pocket in the iron-pnictides in the unfolded Brillouin zone $-\pi\leq k_{x}<\pi$, $-\pi\leq k_{y}<\pi$. The solid and dashed curves correspond to electron doping with $n=2.2$ and hole doping with $n=1.8$, respectively. The dotted (red) lines mark the nodal lines at ($\pm\pi/2, k_{y}$) and ($k_{x},\pm\pi/2$) for the $s_{\pm}=\Delta_{0}\cos(k_{x})\cos(k_{y})$ order parameter.[]{data-label="fig3"}](vort11.eps){width="260pt" height="230pt"}
![(Color online) (a) The real space distribution of the SC order amplitude $|\Delta_{i}|$ in the presence of the field-induced SDW. (b) The real space distribution of the field-induced moment $M_{i}$. The LDOS curves at the core center (black line) and for the bulk system (green line) with electron doping $n=2.2$ (c), and hole doping $n=1.8$ (d), respectively.[]{data-label="fig4"}](vort21.eps){width="260pt" height="230pt"}
![(Color online) LDOS curves in a bulk system simulating the SDW state around the core region for $n=2.2$ (a), and $n=1.8$ (b), respectively. The wave vector $Q$ of the field-induced SDW connects the finite energy contour (red line) for $n=2.2$ (c), and $n=1.8$ (d), respectively (black curves denote the Fermi surface at the corresponding doping level).[]{data-label="fig5"}](sdw21.eps){width="210pt" height="210pt"}
[60]{} Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, J. Am. Chem. Soc. 130, 3269 (2008). X. H. Chen, T. Wu, G. Wu, R. H. Liu, H. Chen, and D. F. Fang, Nature **453**, 761 (2008). Z.-A. Ren, G.-C. Che, X.-L. Dong, J. Yang, W. Lu, W. Yi, X.-L. Shen, Z.-C. Li, L.-L. Sun, F. Zhou, and Z.-X. Zhao, Europhys. Lett. **83**, 17002 (2008). G. F. Chen, Z. Li, D. Wu, G. Li, W. Z. Hu, J. Dong, P. Zheng, J. L. Luo, and N. L. Wang, Phys. Rev. Lett. **100**, 247002 (2008). C. Wang, L. Li, S. Chi, Z. Zhu, Z. Ren, Y. Li, Y. Wang, X. Lin, Y. Luo, S. Jiang, X. Xu, G. Cao, and Z. Xu, Europhys. Lett. **83**, 67006 (2008). C. de la Cruz, Q. Huang, J. W. Lynn, J. Li, W. Ratcliff II, J. L. Zarestky, H. A. Mook, G. F. Chen, J. L. Luo, N. L. Wang, and P. Dai, Nature **453**, 899 (2008). J. Dong, H. J. Zhang, G. Xu, Z. Li, G. Li, W. Z. Hu, D. Wu, G. F. Chen, X. Dai, J. L. Luo, Z. Fang, and N. L. Wang, Europhys. Lett. **83**, 27006 (2008). H.-H. Klauss, H. Luetkens, R. Klingeler, C. Hess, F. J. Litterst, M. Kraken, M. M. Korshunov, I. Eremin, S.-L. Drechsler, R. Khasanov, A. Amato, J. Hamann-Borrero, N. Leps, A. Kondrat, G. Behr, J. Werner, and B. Buchner Phys. Rev. Lett. **101**, 077005 (2008). H. Takahashi, K. Igawa, K. Arii, Y. Kamihara, M. Hirano, and H. Hosono, Nature (London) **453**, 376 (2008). J. Zhao, Q. Huang, C. de la Cruz, S. Li, J. W. Lynn, Y. Chen, M. A. Green, G. F. Chen, G. Li, Z. Li, J. L. Luo, N. L. Wang, P. Dai, Nature Materials **7**, 953 (2008). H. Chen, Y. Ren, Y. Qiu, Wei Bao, R. H. Liu, G. Wu, T. Wu, Y. L. Xie, X. F. Wang, Q. Huang, and X. H. Chen, Europhys. Lett. **85**, 17006 (2009). T. Goko, A. A. Aczel, E. Baggio-Saitovitch, S. L. Bud’ko, P. C. Canfield, J. P. Carlo, G. F. Chen, P. Dai, A. C. Hamann, W. Z. Hu, H. Kageyama, G. M. Luke, J. L. Luo, B. Nachumi, N. Ni, D. Reznik, D. R. Sanchez-Candela, A. T. Savici, K. J. Sikes, N. L. Wang, C. R. Wiebe, T. J. Williams, T. Yamamoto, W. Yu, and Y. J. Uemura, arXiv:0808.1425 (unpublished). H. Luetkens, H.-H. Klauss, M. Kraken, F. J. Litterst, T. Dellmann, R. Klingeler, C. Hess, R. Khasanov, A. Amato, C. Baines, M. Kosmala, O. J. Schumann, M. Braden, J. Hamann-Borrero, N. Leps, A. Kondrat, G. Behr, J. Werner, and B. Buechner, Nature Materials **8**, 305 (2009). Q. Si, and E. Abrahams, Phys. Rev. Lett. **101**, 076401 (2008). Z. Y. Weng, Physica E **41**, 1281 (2009); C. Xu, M. Müller, and S. Sachdev, Phys. Rev. B **78**, 020501(R) (2008); C. Fang, H. Yao, W.-F. Tsai, J. Hu, and S. A. Kivelson, Phys. Rev. B **77**, 224509 (2008); G.-M. Zhang, Y.-H. Su, Z.-Y. Lu, Z.-Y. Weng, D.-H. Lee, and T. Xiang, arXiv:0809.3874 (unpublished) C. Cao, P. J. Hirschfeld, and H.-P. Cheng, Phys. Rev. B **77**, 220506(R) (2008); S. Yang, W.-L. You, S.-J. Gu, and H.-Q. Lin, Chin. Phys. B **18**, 2545 (2009). D. J. Singh and M. H. Du, Phys. Rev. Lett. **100**, 237003 (2008); K. Kuroki, S. Onari, R. Arita, H. Usui, Y. Tanaka, H. Kontani, and H. Aoki, Phys. Rev. Lett. **101**, 087004 (2008); Q. Han, Y. Chen, and Z. D. Wang, Europhys. Lett. **82**, 37007 (2008); Q. Han and Z. D. Wang, New J. Phys. **11**, 025022 (2009); V. Cvetkovic, and Z. Tesanovic, Europhys. Lett. **85**, 37002 (2009). Z.-J. Yao, J.-X. Li, and Z. D. Wang, New J. Phys. **11**, 025009 (2009); S. L. Yu, J. Kang, and J. X. Li, Phys. Rev. B **79**, 064517 (2009). S. Raghu, X.-L. Qi, C.-X. Liu, D. J. Scalapino, and S.-C. Zhang, Phys. Rev. B **77**, 220503(R) (2008). W. Z. Hu, G. Li, J. Dong, Z. Li, P. Zheng, G. F. Chen, J. L. Luo, and N. L. Wang, Phys. Rev. Lett. **101**, 257005 (2008). J. Dai, Q. 
Si, J.-X. Zhu, and E. Abrahams, arXiv:0808.0305 (unpublished); S.-P. Kou, T. Li, and Z.-Y. Weng, arXiv:0811.4111 (unpublished). Y. Wan, Q.-H. Wang, EPL **85**, 57007 (2009). R. H. Liu, G. Wu, T. Wu, D. F. Fang, H. Chen, S. Y. Li, K. Liu, Y. L. Xie, X. F. Wang, R. L. Yang, L. Ding, C. He, D. L. Feng, and X. H. Chen, Phys. Rev. Lett. **101**, 087001 (2008); A. J. Drew, F. L. Pratt, T. Lancaster, S. J. Blundell, P. J. Baker, R. H. Liu, G. Wu, X. H. Chen, I. Watanabe, V. K. Malik, A. Dubroka, K. W. Kim, M. Rössle, and C. Bernhard, Phys. Rev. Lett. **101**, 097010 (2008). G. Xu, W. Ming, Y. Yao, X. Dai, S.-C. Zhang and Z. Fang, Europhys. Lett. **82**, 67002 (2008); Z. P. Yin, S. Lebègue, M. J. Han, B. P. Neal, S. Y. Savrasov, and W. E. Pickett, Phys. Rev. Lett. **101**, 047001 (2008). A. M. Oleś, G. Khaliullin, P. Horsch, and L. F. Feiner, Phys. Rev. B **72**, 214431 (2005). Y. Chen, Z. D. Wang, J.-X. Zhu, and C. S. Ting, Phys. Rev. Lett. **89**, 217001 (2002). P. Werner, E. Gull, M. Troyer, and A. J. Millis, Phys. Rev. Lett. **101**, 166405 (2008). K. Haule, and G. Kotliar, New J. Phys. **11**, 025021 (2009). J. Lorenzana, G. Seibold, C. Ortix, and M. Grilli, Phys. Rev. Lett. **101**, 186402 (2008). R. Yu, K. T. Trinh, A. Moreo, M. Daghofer, J. A. Riera, S. Haas, and E. Dagotto, arXiv:0812.2894 (unpublished). S. Takeshita, R. Kadono, M. Hiraishi, M. Miyazaki, A. Koda, S. Matsuishi, and H. Hosono, Phys. Rev. Lett. **103**, 027002 (2009); Y. Qi, Z. Gao, L. Wang, D. Wang, X. Zhang, and Y. Ma, New J. Phys. **10**, 123003 (2008); Y. Mizuguchi, F. Tomioka, S. Tsuda, T. Yamaguchi, and Y. Takano, Appl. Phys. Lett. **94**, 012503 (2009). Y. D. Zhu, F. C. Zhang, and M. Sigrist, Phys. Rev. B **51**, 1105 (1995). Y. Chen, J. W. Lynn, J. Li, G. Li, G. F. Chen, J. L. Luo, N. L. Wang, P. Dai, C. de la Cruz, and H. A. Mook, Phys. Rev. B **78**, 064515 (2008). I. I. Mazina and J. Schmalian, Physica C **469**, 614 (2009).
W. Z. Hu, Q. M. Zhang, N. L. Wang, Physica C **469**, 545 (2009). H.-Y. Liu, X.-W. Jia, W.-T. Zhang, L. Zhao, J.-Q. Meng, G.-D. Liu, X.-L. Dong, G. Wu, R.-H. Liu, X.-H. Chen, Z.-A. Ren, W. Yi, G.-C. Che, G.-F. Chen, N.-L. Wang, G.-L. Wang, Y. Zhou, Y. Zhu, X.-Y. Wang, Z.-X. Zhao, Z.-Y. Xu, C.-T. Chen, and X.-J. Zhou, Chin. Phys. Lett. **25**, 3761 (2008); L. Zhao, H.-Y. Liu, W.-T. Zhang, J.-Q. Meng, X.-W. Jia, G.-D. Liu, X.-L. Dong, G.-F. Chen, J.-L. Luo, N.-L. Wang, W. Lu, G.-L. Wang, Y. Zhou, Y. Zhu, X.-Y. Wang, Z.-Y. Xu, C.-T. Chen, and X.-J. Zhou, Chin. Phys. Lett. **25**, 4402 (2008). M. C. Boyer, K. Chatterjee, W. D. Wise, G. F. Chen, J. L. Luo, N. L. Wang, E. W. Hudson, arXiv:0806.4400 (unpublished); T. Sato, S. Souma, K. Nakayama, K. Terashima, K. Sugawara, T. Takahashi, Y. Kamihara, M. Hirano, and H. Hosono, J. Phys. Soc. Jpn. **77**, 063708 (2008); Y. Ishida, T. Shimojima, K. Ishizaka, T. Kiss, M. Okawa, T. Togashi, S. Watanabe, X. Y. Wang, C. T. Chen, Y. Kamihara, M. Hirano, H. Hosono, and S. Shin, Phys. Rev. B **79**, 060503(R) (2009). D. Hsieh, Y. Xia, L. Wray, D. Qian, K.K. Gomes, A. Yazdani, G. F. Chen, J. L. Luo, N. L. Wang, and M. Z. Hasan, arXiv:0812.2289 (unpublished). I. I. Mazin, D. J. Singh, M. D. Johannes, and M. H. Du, Phys. Rev. Lett. **101**, 057003 (2008); K. Seo, B. A. Bernevig, and J. P. Hu, Phys. Rev. Lett. [**101**]{}, 206404 (2008). F. Wang, H. Zhai, Y. Ran, A. Vishwanath, and D. H. Lee, Phys. Rev. Lett. **102**, 047005 (2009); W.-Q. Chen, K.-Y. Yang, Y. Zhou, and F.-C. Zhang, Phys. Rev. Lett. **102**, 047006 (2009); A. V. Chubukov, Physica C **469**, 640 (2009). Y. Wang, and A. H. MacDonald, Phys. Rev. B **52**, R3876 (1995); M. Franz, and Z. Tešanović, Phys. Rev. Lett. **80**, 4763 (1998). G. Mu, X. Y. Zhu, L. Fang, L. Shan, C. Ren, and H. H. Wen, Chin. Phys. Lett. **25**, 2221 (2008); L. Shan, Y. L. Wang, X. Y. Zhu, G. Mu, L. Fang, C. Ren, and H. H. Wen, Europhys. Lett. **83**, 57004 (2008); C. Ren, Z. S. Wang, H. Yang, X. Y. Zhu, L. Fang, G. Mu, L. Shan, and H. H. Wen, arXiv:0804.1726 (unpublished). H. -J. Grafe, D. Paar, G. Lang, N. J. Curro, G. Behr, J. Werner, J. Hamann-Borrero, C. Hess, N. Leps, R. Klingeler, and B. Buechner, Phys. Rev. Lett. **101**, 047003 (2008); H. Luetkens, H.-H. Klauss, R. Khasanov, A. Amato, R. Klingeler, I. Hellmann, N. Leps, A. Kondrat, C.Hess, A. Köhler, G. Behr, J. Werner, and B. Büchner, Phys. Rev. Lett. **101**, 097009 (2008); J. P. Carlo, Y. J. Uemura, T. Goko, G. J. MacDougall, J. A. Rodriguez, W. Yu, G. M. Luke, P. C. Dai, N. Shannon, S. Miyasaka, S. Suzuki, S. Tajima, G. F. Chen, W. Z. Hu, J. L. Luo, and N. L. Wang, Phys. Rev. Lett. **102**, 087001 (2009). K. Matano, Z. A. Ren, X. L. Dong, L. L. Sun, Z. X. Zhao, and G. Q. Zheng, Europhys. Lett. **83**, 57001 (2008); K. Ahilan, F. L. Ning, T. Imai, A. S. Sefat, R. Jin, M. A. McGuire, B. C. Sales, and D. Mandrus, Phys. Rev. B **78**, 100501(R) (2008); F. Massee, Y. Huang, R. Huisman, S. de Jong, M. S. Golden, and J. B. Goedkoop, arXiv:0812.4539 (unpublished). G. Li, W. Z. Hu, J. Dong, Z. Li, P. Zheng, G. F. Chen, J. L. Luo, and N. L. Wang, Phys. Rev. Lett. **101**, 107004 (2008); G. Mu, H. Luo, Z. Wang, L. Shan, C. Ren, and Hai-Hu Wen, Phys. Rev. B **79**, 174501 (2009). D. K. Pratt, W. Tian, A. Kreyssig, J. L. Zarestky, S. Nandi, N. Ni, S. L. Bud’ko, P. C. Canfield, A. I. Goldman, and R. J. McQueeney, Phys. Rev. Lett. **103**, 087001 (2009). I. Maggio-Aprile, Ch. Renner, A. Erb, E. Walker, and [Ø]{}. Fischer, Phys. Rev. Lett. **75**, 2754 (1995); J.-X. Zhu, and C. S. 
Ting, Phys. Rev. Lett. **87**, 147002 (2001). V. Cvetkovic and Z. Tesanovic, Phys. Rev. B **80**, 024512 (2009). A. B. Vorontsov, M. G. Vavilov, and A. V. Chubukov, Phys. Rev. B **79**, 060508(R) (2009). X. Hu, C. S. Ting, and J. X. Zhu, Phys. Rev. B **80**, 014523 (2009).
|
---
abstract: 'Modern-day ‘testing’ of (perturbative) QCD is as much about pushing the boundaries of its applicability as about the verification that QCD is the correct theory of hadronic physics. This talk gives a brief discussion of a small selection of topics: factorisation and jets in diffraction, power corrections and event shapes, the apparent excess of $b$-production in a variety of experiments, and the matching of event generators and NLO calculations.'
address: 'LPTHE, Universités Paris VI et Paris VII, Paris, France.'
author:
- 'G. P. Salam'
title: 'QCD tests through hadronic final-state measurements[^1]'
---
Introduction
============
The testing of QCD is a subject that many would consider to be well into maturity. The simplest test is perhaps that ${\alpha_\mathrm{s}}$ values measured in different processes and at different scales should all be consistent. It suffices to take a look at compilations by the PDG [@PDG] or Bethke [@Bethke] to see that this condition is satisfied for a range of observables, to within the current theoretical and experimental precision, namely a few percent. There exist many other potentially more discriminatory tests, examples being explicit measurements of the QCD colour factors [@ColourFactors] or the running of the $b$-quark mass [@Bambade], and there too one finds systematic and excellent agreement with the QCD predictions. A significant amount of the data comes from HERA experiments, and to illustrate this, figure \[fig:HERAalphas\] shows a compilation of a subset of the results on ${\alpha_\mathrm{s}}$, as compiled by ZEUS [@ZEUSalphas].
In the space available however, it would be impossible to give a critical and detailed discussion of the range of different observables that are used to verify that QCD is ‘correct’. Rather let us start from the premise that, in light of the large body of data supporting it, QCD *is* the right theory of hadronic physics, and consider what then is meant by ‘testing QCD’.
One large body of activity is centred around constraining QCD. This includes such diverse activities as measuring fundamental (for the time being) unknowns such as the strong coupling and the quark masses; measuring quantities such as structure functions and fragmentation functions, which though formally predictable by the theory are beyond the scope of the tools currently at our disposal (perturbation theory, lattice methods); and the understanding, improvement and verification of the accuracy of QCD predictions, through NNLO calculations, resummations and projects such as the matching of fixed-order calculations with event-generators. One of the major purposes of such work is to provide a reliable ‘reference’ for the inputs and backgrounds in searches for new physics.
A complementary approach to testing QCD is more about exploring the less well understood aspects of the theory, for example trying to develop an understanding of non-perturbative phenomena such as hadronisation and diffraction, or the separation of perturbative and non-perturbative aspects of problems such as heavy-quark decays; pushing the theory to new limits as is done at small-$x$ and in studies of saturation; or even the search for and study of qualitatively new phenomena and phases of QCD, be they within immediate reach of experiments (the quark-gluon plasma, instantons) or not (colour superconductors)!
Of course these two branches of activity are far from being completely separated: it would in many cases be impossible to study the less well understood aspects of QCD without the solid knowledge that we have of its more ‘traditional’ aspects — and it is the exploration of novel aspects of QCD that will provide the ‘references’ of the future.
The scope of this talk is restricted to tests involving final states. Final states tend to be highly discriminatory as well as complementary to more inclusive measurements. We shall consider two examples where our understanding of QCD has seen vast progress over the past years, taking us from a purely ‘exploratory’ stage almost to the ‘reference’ stage: the question of jets and factorisation in diffraction (section \[sec:Diff\]); and that of hadronisation corrections in event shapes (section \[sec:Hadr\]). We will then consider two questions that are more directly related to the ‘reference’ stage: the topical issue of the excess of $b$-quark production seen in a range of experiments (section \[sec:Heavy\]); and then the problem of providing Monte Carlo event generators that are correct to NLO accuracy, which while currently only in its infancy is a subject whose practical importance warrants an awareness of progress and pitfalls.
For reasons of lack of space, many active and interesting areas will not be covered in this talk, among them small-$x$ physics, progress in next-to-next-to-leading order calculations, questions related to prompt photons, the topic of generalised parton distributions and deeply-virtual Compton scattering, hints (or not) of instantons, a range of measurements involving polarisation and so on. Many of these subjects are widely discussed in other contributions to both the plenary and parallel sessions of this conference, to which the reader is referred for more details.
Jets in diffraction and factorisation {#sec:Diff}
=====================================
Factorisation, for problems explicitly involving initial or final state hadrons, is the statement that to leading twist, predictions for observables can be written as a convolution of one or more non-perturbative but universal functions (typically structure or fragmentation functions) with some perturbatively calculable coefficient function.
While factorisation has long been established in inclusive processes [@GenFact] it has been realised in the past few years [@DiffFact] that it should also hold in more exclusive cases — in particular for diffraction, in terms of diffractive parton distributions $f_{a/p}^\mathrm{diff}(x,x_{\mathrm{I\!P}},\mu^2,t)$, which can be interpreted loosely as being related to the probability of finding a parton $a$ at scale $\mu^2$ with longitudinal momentum fraction $x$, inside a diffractively scattered proton $p$, which in the scattering exchanges a squared momentum $t$ and loses a longitudinal momentum fraction $x_{\mathrm{I\!P}}$. These kinematic variables are illustrated in fig. \[fig:Diffraction\].
The dependence of the diffractive parton distributions on so many variables means that without a large kinematical range (separately in $x$, $x_{\mathrm{I\!P}}$ and $Q^2$, while perhaps integrating over $t$) it is a priori difficult to thoroughly test diffractive factorisation. An interesting simplifying assumption is that of Regge factorisation, where one writes [@IngelmanSchlein] $$f_{a/p}^\mathrm{diff}(x,x_{\mathrm{I\!P}},\mu^2,t) = |\beta_p(t)|^2
x_{\mathrm{I\!P}}^{-2\alpha(t)} f_{a/{\mathrm{I\!P}}}(x/x_{\mathrm{I\!P}}, \mu^2, t)$$ the interpretation of diffraction being due to (uncut) pomeron exchange (first two factors), with the virtual photon probing the parton distribution of the pomeron (last factor).
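To make the structure of this ansatz concrete, the following minimal sketch assembles a diffractive gluon density from an assumed pomeron flux and an assumed gluon distribution in the pomeron; the trajectory parameters, the exponential $t$-slope standing in for $|\beta_p(t)|^2$, the toy gluon shape and the neglect of any $t$-dependence in $f_{g/{\mathrm{I\!P}}}$ are all illustrative assumptions rather than the fitted H1 or ZEUS parametrisations.

```python
import math

# Illustrative Regge-factorised diffractive PDF:
#   f^diff_{a/p}(x, xpom, mu2, t) = |beta_p(t)|^2 * xpom^(-2*alpha(t)) * f_{a/pom}(x/xpom, mu2)
# All numerical values below are assumptions made for this sketch only.

def pomeron_flux(xpom, t, alpha0=1.1, alpha_prime=0.25, b_slope=5.0):
    """Toy flux: exponential t-dependence for |beta_p(t)|^2 and a linear trajectory."""
    alpha_t = alpha0 + alpha_prime * t
    return math.exp(b_slope * t) * xpom ** (-2.0 * alpha_t)

def gluon_in_pomeron(beta, mu2):
    """Toy gluon density of the pomeron, mu2-independent and vanishing at the endpoints."""
    return 6.0 * beta * (1.0 - beta)

def diffractive_gluon(x, xpom, mu2, t):
    beta = x / xpom
    if not 0.0 < beta < 1.0:
        return 0.0
    return pomeron_flux(xpom, t) * gluon_in_pomeron(beta, mu2)

print(diffractive_gluon(x=0.01, xpom=0.03, mu2=25.0, t=-0.2))
```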
As yet no formal justification exists for this extra Regge factorisation. Furthermore given that diffraction is arguably related to saturation and high parton densities (assuming the AGK cutting rules [@AGK]) one could even question the validity of arguments for general diffractive factorisation, which rely on parton densities being low (as does normal inclusive factorisation).
Until recently, the experimental study of factorisation in diffraction relied exclusively on inclusive $F_2^d$ measurements. This was somewhat unsatisfactory because of the wide range of alternative models able to reproduce the data, and even the existence of significantly different forms of the $f_{a/{\mathrm{I\!P}}}(x/x_{\mathrm{I\!P}}, \mu^2, t)$ which gave a satisfactory description of the data within the Regge factorisation picture.
However diffractive factorisation allows one to predict not only inclusive cross sections but also jet cross sections. Results in the Regge factorisation framework are compared to data in figure \[fig:DiffDijets\] (taken from [@SchillingThesis]), showing remarkable agreement between the data and the predictions (based on one of the pomeron PDF fits obtained from $F_2^d$). On the other hand, when one considers certain other models that work well for $F_2^d$ the disagreement is dramatic, as for example is shown with the soft colour neutralisation models [@SCI; @BGH] in figure \[fig:DiffSCI\].
Despite this apparently strong confirmation of diffractive factorisation, a word of warning is perhaps needed. Firstly there exist other models which have not been ruled out (for example the dipole model [@DiffDipole]). In these cases it would be of interest to establish whether these models can be expressed in a way which satisfies some effective kind of factorisation.
Other important provisos are, firstly, that a diffractive PDF fit based on more recent $F_2^{d}$ data has a lower gluon distribution and so leads to diffractive dijet predictions which are a bit lower than the data, though still compatible to within experimental and theoretical uncertainties [@DijetTalks]; and secondly that the predictions themselves are based on the Rapgap event generator [@Rapgap], which incorporates only leading order dijet production. It would be of interest (and, assuming that the results depend little on the treatment of the ‘pomeron remnant,’ technically not at all difficult) to calculate diffractive dijet production to NLO with programs such as Disent [@disent] or Disaster++ [@disaster], using event generators only for the modelling of hadronisation corrections, as is done in inclusive jet studies.
Hadronisation {#sec:Hadr}
=============
Another subject that has seen considerable experimental and theoretical progress in recent years is that of hadronisation. Even at the relatively high scattering energies involved at LEP and the Tevatron, for many final state observables non-perturbative contributions associated with hadronisation are of the same order of magnitude as next-to-leading order perturbative contributions and cannot be neglected. With the advent of NNLO calculations in the foreseeable future the need for a good understanding of hadronisation becomes ever more important.
Until a few years ago, the only way of estimating hadronisation corrections in final-state measurements was by comparing the parton and hadron levels of Monte Carlo event generators. Such a procedure suffers from a number of drawbacks. In particular the separation between perturbative and non-perturbative contributions is ill-defined: for example event generators adopt a prescription for the parton level based on a cutoff; on the other hand, in fixed-order perturbative calculations no cutoff is present, and the perturbative integrals are naively extended into the non-perturbative region — furthermore the ‘illegally-perturbative’ contribution associated with this region differs order by order (and depends also on the renormalisation scale).
Additionally, hadronisation corrections obtained from event generators suffer from a lack of transparency: the hadronisation models are generally quite sophisticated, involving many parameters, and the relation between these parameters and the hadronisation corrections is rarely straightforward.
In the mid 1990’s a number of groups started examining approaches for estimating hadronisation corrections based on the perturbative estimates of observables’ sensitivity to the infrared. This leads to predictions of non-perturbative corrections which are suppressed by powers of $1/Q$ relative to the perturbative contribution (for a review see [@BenekeReview]). One of the most successful applications of these ideas has been to event shapes, for which (in the formalism of Dokshitzer and Webber [@DokWeb]) $$\langle {\mathcal{V}}_{\mathrm{NP}}\rangle = \langle {\mathcal{V}}_{\mathrm{PT}}\rangle + c_{\mathcal{V}}{\mathcal{P}}\,,
\qquad
{\mathcal{P}}\equiv \frac{2C_F}{\pi}\frac{\mu_I}{Q}
\left\{ \alpha_0(\mu_I)- {\alpha_\mathrm{s}}(Q)
- {{\mathcal{O}\left({\alpha_\mathrm{s}}^2\right)}} \right\}\>,$$
where $c_{\mathcal{V}}$ is a perturbatively calculable observable-dependent coefficient and ${\mathcal{P}}$ governs the size of the power correction. The quantity $\alpha_0(\mu_I)$, which can be interpreted as the mean value of an infrared finite effective coupling in the infrared (up to an infrared matching scale $\mu_I$, conventionally chosen to be $2$ GeV), is hypothesised to be *universal*. The terms in powers of ${\alpha_\mathrm{s}}$ are subtractions of pieces already included in the perturbative prediction for the observable.
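To get a feel for the size of $\mathcal{P}$, the following minimal sketch evaluates it with one-loop running of ${\alpha_\mathrm{s}}$ and the illustrative values $\alpha_0(2\,\mathrm{GeV})=0.5$ and a thrust-like coefficient $c_{\mathcal{V}}=2$; the higher-order terms of the subtraction in braces are dropped, so the numbers are indicative only and are not a substitute for the fits discussed below.

```python
import math

CF = 4.0 / 3.0
B0 = (11.0 * 3.0 - 2.0 * 5) / (12.0 * math.pi)   # one-loop beta coefficient, nf = 5

def alpha_s(Q, alpha_mz=0.118, MZ=91.1876):
    """One-loop running coupling, adequate for an order-of-magnitude estimate."""
    return alpha_mz / (1.0 + 2.0 * B0 * alpha_mz * math.log(Q / MZ))

def power_correction(Q, alpha0=0.5, muI=2.0):
    """Dokshitzer-Webber P, keeping only the alpha_0 - alpha_s part of the subtraction."""
    return (2.0 * CF / math.pi) * (muI / Q) * (alpha0 - alpha_s(Q))

for Q in (20.0, 91.2, 200.0):
    P = power_correction(Q)
    print(f"Q = {Q:6.1f} GeV   P = {P:.4f}   shift of <1-T> (c_V = 2) = {2.0 * P:.4f}")
```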
It is interesting to see the progress that has been made in our understanding of these effects. The first predictions for the $c_{\mathcal{V}}$ coefficients were based on calculations involving the Born configuration plus a single ‘massive’ (virtual) gluon. Fitting $\alpha_0$ and ${\alpha_\mathrm{s}}$ to data for mean values of ${e^{+}e^{-}}$ event-shapes, using the original predictions for the $c_{\mathcal{V}}$, leads to the results shown in figure \[fig:contnaive\].
At the time of the original predictions, however, much of the data used to generate fig. \[fig:contnaive\] was not yet in existence (which is perhaps fortunate — had fig. \[fig:contnaive\] been around in 1995, the field of $1/Q$ hadronisation corrections might not have made it past early childhood). Rather, various theoretical objections ([@NasonSeymour]) and the gradual appearance of new data, especially for the broadenings, forced people to refine their ideas.
Among the developments was the realisation that to control the normalisation of the $c_{\mathcal{V}}$ it is necessary to take into account the decay of the massive, virtual, gluon (the reason for the two thrust results in fig. \[fig:contnaive\] was the existence of two different conventions for dealing with the undecayed massive gluon) [@MilanFactor]. It was also realised that it is insufficient to consider a lone ‘non-perturbative’ gluon, but rather that such a gluon must be taken in the context of the full structure of soft and collinear perturbative gluon radiation [@ResummedNP]. Another discovery was that hadron-masses can be associated with universality breaking $1/Q$ power corrections in certain definitions of observables [@SalamWicke] and when testing the universality picture all observables should be measured in an appropriate common ‘hadron-mass’ scheme.
Results incorporating these theoretical developments are shown in figure \[fig:cont2002\]. As well as ${e^{+}e^{-}}$ mean event shapes we also include recent results using resummed DIS event shapes [@DasguptaSalam], fitted to H1 distributions [@H1dist]. The agreement between observables, even in different processes, is remarkable, especially compared to fig. \[fig:contnaive\], and a strong confirmation of the universality hypothesis.[^2]
This is not to say that the field has reached maturity. In the above fits the approximation has been made that non-perturbative corrections just shift the perturbative distribution [@DokWebShift]; however, there exists a considerable amount of recent work which examines the problem with the more sophisticated ‘shape-functions’ approach [@ShapeFunctions], in particular in the context of the Dressed Gluon Exponentiation approximation [@DGE]. An important point also is that all the detailed experimental tests so far are for 2-jet event shapes, where there exists a solid theoretical justification based on the Feynman tube model [@FeynmanTube] and longitudinal boost invariance. It will be of interest to see what happens in multi-jet tests of $1/Q$ hadronisation corrections, where one introduces both non-trivial geometry and the presence of gluons in the Born configuration [@MilanMultiJet]. Finally we note the provocative analysis by the Delphi collaboration [@Delphi], where they show that a renormalisation-group based fit prefers an absence of hadronisation corrections, at least for mean values of event shapes, as well as leading to highly consistent values for ${\alpha_\mathrm{s}}$ across a range of event-shapes.
Heavy quark ($b$) production {#sec:Heavy}
============================
For light quarks (and gluons) it is impossible to make purely perturbative predictions of their multiplicity or of their fragmentation functions because of soft and collinear divergences. For heavy quarks, however, these divergences are cut off by the quark mass itself, opening the way to a range of perturbative predictions and corresponding tests of QCD.
It is therefore particularly embarrassing that there should be a significant discrepancy in most experiments (but not all [@HeraB]) where the QCD bottom production cross section has been measured. The situation is shown in figure \[fig:bcSummary\] for Tevatron, HERA and LEP results, illustrating the systematic excess of a factor of three between measurements and NLO calculations. To add to the puzzle, the agreement for charm production (which, if anything, should be described less well because of the smaller mass) is considerably better across a range of experiments (see the lower-right plot of fig. \[fig:bcSummary\]).
Aside from the intrinsic interest of having a good understanding of $b$-production in QCD, one should keep in mind that $b$-quarks are widely relied upon as signals of Higgs production and in searches for physics beyond the standard model, so one needs to have confidence in predictions of the QCD background.
We shall discuss a couple of explanations that have been proposed for the excess at the Tevatron (the excesses in other experiments are more recent and have yet to be addressed in the same detail). Indeed, one hypothesis is precisely that we are seeing a signal of light(ish) gluino production. Another is that bottom fragmentation effects have been incorrectly accounted for. A third explanation, discussed in detail in another of the opening plenary talks [@deRoeckTalk] is associated with unintegrated $k_t$ distributions and small-$x$ resummations.
The SUSY hypothesis
-------------------
In [@Berger] it has been argued that a possible explanation of the Tevatron $b$-quark excess is the production of a pair of light gluinos with a mass of order $14$ GeV which then decay to sbottoms ($\sim
3.5$ GeV) and bottoms, as in fig. \[fig:gluinos\]. The mixing angles are chosen such that the sbottom decouples from the $Z$ at LEP, accounting for its non-observation there.
At moderate and larger $p_t$, the contribution from this process is about as large as that from NLO QCD and so it brings the overall production rate into agreement with the data.
There are a number of other consequences of such a scenario: one is the production of like-sign $b$ quarks (as in the Feynman graph of fig. \[fig:gluinos\]), which could in principle be observed at the Tevatron, although it would need to be disentangled from $B_0$-${\bar
B}_0$ mixing. Another is that the running of ${\alpha_\mathrm{s}}$ would be modified significantly above the gluino mass, leading to an increase of about $0.007$ in the running to $M_Z$ of low $Q$ measurements of ${\alpha_\mathrm{s}}$. This seems to be neither favoured nor totally excluded by current ${\alpha_\mathrm{s}}$ measurements.
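A crude way to see the size of this effect is to compare the one-loop running with and without a colour-octet Majorana fermion active above $14$ GeV (which lowers the one-loop coefficient $11-2n_f/3$ by $2$); the sketch below evolves an assumed low-scale measurement up to $M_Z$ and gives a shift of the same order as the quoted $0.007$, though a realistic analysis would use two-loop running and proper thresholds.

```python
import math

def run_inverse_alpha(alpha_low, Q_low, Q_high, beta0):
    """One-loop evolution: 1/alpha(Q_high) = 1/alpha(Q_low) + beta0/(2*pi) * ln(Q_high/Q_low)."""
    return 1.0 / alpha_low + beta0 / (2.0 * math.pi) * math.log(Q_high / Q_low)

MZ, Q0 = 91.1876, 14.0
alpha_Q0 = 0.160                         # assumed low-scale measurement of alpha_s(14 GeV)

beta0_qcd = 11.0 - 2.0 / 3.0 * 5         # nf = 5 active flavours
beta0_gluino = beta0_qcd - 2.0           # extra colour-octet Majorana fermion above 14 GeV

a_std = 1.0 / run_inverse_alpha(alpha_Q0, Q0, MZ, beta0_qcd)
a_glu = 1.0 / run_inverse_alpha(alpha_Q0, Q0, MZ, beta0_gluino)
print(f"alpha_s(MZ), standard running : {a_std:.4f}")
print(f"alpha_s(MZ), with light gluino: {a_glu:.4f}   shift = {a_glu - a_std:+.4f}")
```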
Though they have not provided a detailed analysis, the authors of [@Berger] also consider the implications for HERA. There it seems that the enhancement of the $b$-production rates is too small to explain the data (because of the suppression due to the gluino mass).
The fragmentation explanation
-----------------------------
In any situation where one sees a significant discrepancy from QCD expectations it is worth reexamining the elements that have gone into the theoretical calculation. Various groups have considered issues related to $b$ fragmentation and found significant effects, which could be of relevance to the Tevatron results (see for example [@OldFragB]). However a recent article by Cacciari and Nason [@CaccNas] is particularly interesting in that it makes use of the full range of available theoretical tools to carry out a unified analysis all the way from the ${e^{+}e^{-}}$ data, used to constrain the $b$-quark fragmentation function, through to expectations for the Tevatron. It raises a number of important points along the way.[^3]
To be able to follow their analysis it is worth recalling how one calculates expectations for processes involving heavy quarks. The cross section for producing a $b$-quark with a given $p_t$ (or even integrated over all $p_t$) is finite, unlike that for a light quark. This is because the quark mass regulates (cuts off) the infrared collinear and soft divergences which lead to infinities for massless quark production. But infrared finiteness does not mean infrared insensitivity, and to obtain a $B$-meson $p_t$ distribution from a $b$-quark ${\hat p}_t$ distribution one needs to convolute with a fragmentation function, $$\label{eq:frag}
\underbrace{\frac{d\sigma}{dp_t}}_{\mathrm{measured,\;e.g.\;}B_0} = \int d\hat{p}_t\, dz\,
\underbrace{\frac{d\sigma}{d\hat{p}_t}}_{\mathrm{PT\,QCD,}\;b\;\mathrm{quark}}
\underbrace{D(z)}_{\mathrm{fragmentation}}\, \delta(p_t - z \hat{p}_t) \,.$$ The details of the infrared finiteness of the $b$-quark production are such that $\langle z D(z) \rangle$ is $1 - {{\mathcal{O}\left(\Lambda/m_b\right)}}$, where the origin of the $\Lambda/m_b$ piece is closely related to that of the $\Lambda/Q$ power corrections discussed in the previous section [@NasonWebber].
There are various well-known points to bear in mind about fragmentation functions. Firstly, in close analogy to the hadronisation corrections discussed earlier (and of course structure functions), the exact form of the fragmentation function will depend on the perturbative order at which we define eq. \[eq:frag\]. Secondly, while for $p_t \sim m_b$ we are free to use fixed order (FO) perturbative predictions, for $p_t \gg m_b$ there are large logarithmically enhanced terms, which need to be resummed. The technology for doing this currently exists to next-to-leading logarithmic (NLL) order. In the intermediate region $p_t \gtrsim m_b$ the two approaches can be combined to give FONLL predictions [@FONLLb; @OleariNason] (strictly this can be used even for $p_t \gg m_b$).
Having established these points we can consider what has been done by Cacciari and Nason [@CaccNas]. Firstly they discuss moments of the fragmentation function $\langle z^{N-1} D(z)\rangle$. This is because for a steeply falling perturbative ${\hat p}_t$ distribution in eq. \[eq:frag\], $\frac{d\sigma}{d\hat{p}_t} \sim 1/{\hat p}_t^N$, after integrating out the $\delta$-function to give ${\hat p}_t = p_t/z$, one obtains the result $$\left.\frac{d\sigma}{dp_t}\right|_{B\mathrm{-meson}} =
\langle z^{N-1} D(z)\rangle \left.\frac{d\sigma}{d\hat{p}_t}\right|_{b\mathrm{-quark}} \,,$$ where for the Tevatron $N\simeq5$.
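The moment-space statement is exact for a pure power-law spectrum and is easy to check numerically: convoluting eq. \[eq:frag\] with any $D(z)$ then reduces to multiplication by $\langle z^{N-1} D(z)\rangle$. The sketch below uses a toy fragmentation function, chosen purely for illustration, to verify this.

```python
import numpy as np

N = 5                                  # effective power of the falling spectrum at the Tevatron
z = np.linspace(1e-4, 1.0, 5000)

def dsigma_dpt_hat(pth):
    """Toy b-quark spectrum, steeply falling like 1/pt_hat^N (arbitrary units)."""
    return pth ** (-float(N))

def D(zv, a=15.0, b=1.5):
    """Toy fragmentation function peaked at large z (illustrative, not a fit)."""
    unnorm = zv ** a * (1.0 - zv) ** b
    return unnorm / np.trapz(unnorm, zv)

pt = 40.0
# Exact convolution of eq. [eq:frag]: dsigma/dpt = int dz D(z) (1/z) dsigma/dpt_hat(pt/z)
exact = np.trapz(D(z) * dsigma_dpt_hat(pt / z) / z, z)
# Moment approximation: <z^(N-1) D(z)> * dsigma/dpt_hat(pt)
moment = np.trapz(z ** (N - 1) * D(z), z)
print(f"<z^(N-1) D(z)> for N={N}: {moment:.3f}")
print(f"exact convolution : {exact:.3e}")
print(f"moment x spectrum : {moment * dsigma_dpt_hat(pt):.3e}")
```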
The cleanest place to constrain $b$ fragmentation is in ${e^{+}e^{-}}$ collisions. Figure \[fig:eefrag\] shows moments of the momentum fraction (with respect to $Q/2$) carried by $B$-mesons as measured by Aleph [@AlephBFrag]. The (magenta) dot-dashed curve shows the purely perturbative NLL prediction, which is clearly above the data. The dashed curve shows what happens when one includes the convolution with an $\epsilon=0.006$ Peterson fragmentation function [@Peterson]. Why this particular function? Simply because it is the one included in certain Monte Carlo event generators and used widely by experimental collaborations that have compared measured and theoretical $p_t$ distributions. The data point for the $N=5$ moment is $50\%$ higher than the theoretical expectation with this fragmentation function.
Of course we don’t expect agreement: the $\epsilon=0.006$ Peterson is widely used in Monte Carlos where one has only leading-logs. But we are interested in NLL calculations and the fragmentation function needs to be refitted. The authors of [@CaccNas] take the functional form of [@CaccNasFragFun], fitted to the $N=2$ moment, to give the solid curve.
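For reference, the Peterson et al. shape $D(z)\propto 1/[z\,(1-1/z-\epsilon/(1-z))^2]$ and its moments are simple to evaluate numerically; the sketch below shows how sensitive the higher moments, and hence the predicted Tevatron spectrum, are to the choice of $\epsilon$ (the second value of $\epsilon$ is included only for comparison, it is not a fitted number).

```python
import numpy as np

def peterson(z, eps):
    """Peterson et al. fragmentation function (unnormalised)."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

z = np.linspace(1e-4, 1.0 - 1e-4, 200000)
for eps in (0.006, 0.002):            # 0.006 is the value discussed in the text
    d = peterson(z, eps)
    d /= np.trapz(d, z)               # normalise to unit integral
    for N in (2, 5):
        mom = np.trapz(z ** (N - 1) * d, z)
        print(f"eps = {eps:.3f}   N = {N}   <z^(N-1)> = {mom:.3f}")
```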
The next step in the Cacciari and Nason analysis should simply have been to take the FONLL calculation of bottom production at the Tevatron [@FONLLb], convolute with their new fragmentation function and then compare to data. This, however, turns out to be impossible for most of the data, because it has already been deconvoluted to ‘parton-level’ (in some cases with the $\epsilon=0.006$ Peterson fragmentation function). So they are only able to compare with the recent CDF data [@CDFhadrB] for $B$-mesons, shown in the left-hand plot of figure \[fig:GoodB\]. The dashed curve is the central result, while the solid ones are those obtained when varying the factorisation and renormalisation scales by a factor of two.[^4] The dotted curve shows the results that would have been obtained with the Peterson fragmentation function. Predictions with FO (generally used in previous comparisons) rather than FONLL would have been $20\%$ lower still.
Another interesting approach to the problem is to eliminate the fragmentation aspects altogether, which can be achieved by looking at the $E_t$ distribution of $b$-jets, without specifically looking at the $b$ momentum [@FrixioneMangano]. This has been examined by the D0 collaboration [@D0bjet] and the comparison to NLO predictions is shown in the right-hand plot of figure \[fig:GoodB\]. Though in a slightly different $E_t$ range, the relation between theory and data is similar to that in the Cacciari-Nason approach for $B$-mesons: there is a slight excess in the data but not significant compared to the uncertainties. A minor point to note in the study of $b$-jets is that there are contributions ${\alpha_\mathrm{s}}^n \ln^{2n-1} E_t/m_b$ from soft and collinear logs in the multiplicity of gluons which can then branch collinearly to $b{\bar b}$ pairs [@FrixioneManganoRef12]. At very large $E_t$ these terms would need to be resummed.
So overall, once one has a proper theoretical treatment, including both an appropriate fragmentation function and, where relevant, an FONLL perturbative calculation, it is probably fair to say that the excess of $b$-production at the Tevatron is not sufficiently significant to be worrisome (or evidence for supersymmetry).
At some of the other experiments where an excess of $b$-production is observed a number of the same issues arise, in particular relative to the use of the $\epsilon = 0.006$ Peterson fragmentation function and the presentation of results at parton level rather than hadron level. However fragmentation is less likely to be able to explain the discrepancies, because of the lower $p_t$ range.
Event generators at NLO {#sec:EvGen}
=======================
The problem of matching event generators with fixed order calculations is one of the most theoretically active areas of QCD currently, and considerable progress has been made in the past couple of years. This class of problems is both of intrinsic theoretical interest in that it requires a deep understanding of the structure of divergences in QCD and of phenomenological importance because of the need for accurate and reliable Monte Carlo predictions at current and future colliders.
Two main directions are being followed: one is the matching of event-generators with leading-order calculations of $n$-jet production (where $n$ may be relatively high), which is of particular importance for correctly estimating backgrounds for new-particle searches involving cascades of decays with many resulting jets. For a discussion of this subject we refer the reader to the contributions to the parallel sessions [@PrllLonnbladKrauss].
The second direction, still in its infancy, is the matching of event generators with next-to-leading order calculations (currently restricted to low numbers of jets), which is necessary for a variety of purposes, among them the inclusion of correct rate estimates together with consistent final states, for processes with large NLO corrections to the Born cross sections ($K$ factors in $pp$ and $\gamma p$ collisions, boson-gluon fusion at small-$x$ in DIS).
While there have been a number of proposals concerning NLO matching, many of them remain at a somewhat abstract level. We shall here concentrate on two approaches that have reached the implementational stage. As a first step, it is useful to recall why it is non-trivial to implement NLO corrections in an event generator. Let us use the toy model introduced by Frixione and Webber [@FrixioneWebber], involving the emission only of ‘photons’ (simplified, whose only degree of freedom will be their energy) from (say) a quark whose initial energy is taken to be $1$. For a system which has radiated $n$ photons we write a given observable as $O(E_q, E_{\gamma_1},\ldots,
E_{\gamma_n})$. So for example at the Born level, the observable has value $O(1)$. At NLO we have to integrate over the momentum of an emitted photon, giving the following contribution to the mean value of the observable: $$\label{eq:real}
\alpha \int_0^1 \frac{dx}{x}\, R(x)\, O(1-x, x)\,,$$ where $R(x)$ is a function associated with the real matrix element for one-photon emission. There will also be NLO virtual corrections and their contribution will be $$\label{eq:virt}
-\alpha \, O(1) \int_0^1 \frac{dx}{x}\, V(x)\,,$$ where $V(x)$ is related to the matrix element for virtual corrections.
The structure of $dx/x$ divergences is typical of field theory. Finiteness of the overall cross section implies that for $x\to0$, $R(x) = V(x)$. This means that for an infrared safe observable (one that satisfies $\lim_{x\to 0} O(1-x,x) = O(1)$), the ${{\mathcal{O}\left(\alpha\right)}}$ contribution to the mean value of the observable is also finite. However any straightforward attempt to implement eqs. \[eq:real\] and \[eq:virt\] directly into an event generator will lead to problems because of the poor convergence properties of the cancellation between divergent positively and negatively weighted events corresponding to the real and virtual pieces respectively. So a significant part of the literature on matching NLO calculations with event generators has addressed the question of how to recast these divergent integrals in a form which is practical for use in an event generator (which must have good convergence properties, especially if each event is subsequently going to be run through a detector simulation). The second part of the problem is to ensure that the normal Monte Carlo event generation (parton showering, hadronisation, etc.) can be interfaced with the NLO event generation in a consistent manner.
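The convergence problem is easy to exhibit in the toy model. In the sketch below (with the arbitrary choices $R(x)=V(x)=1$, an infrared-safe observable equal to the quark energy, and a regulator $x_{\min}$ standing in for the divergence) the real-emission contribution and the Born-plus-virtual contribution are generated as separate event samples: their individual weights grow like $\ln(1/x_{\min})$ and cancel only statistically, so the error on the mean observable deteriorates as the regulator is lowered.

```python
import math, random
random.seed(7)

alpha = 0.1
def R(x): return 1.0                  # toy real "matrix element"
def V(x): return 1.0                  # toy virtual piece, same x -> 0 limit as R
def O_real(x): return 1.0 - x         # infrared-safe observable (quark energy); Born value is 1

def naive_estimate(n, x_min):
    """Real-emission and Born+virtual contributions generated as *separate* events,
    the way a naive event-generator implementation of the regulated integrals would."""
    L = math.log(1.0 / x_min)         # size of the regulated dx/x integrals
    vals = []
    for _ in range(n):
        x = x_min ** random.random()                      # x distributed as dx/x on [x_min, 1]
        if random.random() < 0.5:                         # real-emission event (weight x observable)
            vals.append(2.0 * alpha * L * R(x) * O_real(x))
        else:                                             # Born + virtual event, observable = 1
            vals.append(2.0 * (1.0 - alpha * L * V(x)) * 1.0)
    m = sum(vals) / n
    err = math.sqrt(sum((v - m) ** 2 for v in vals)) / n  # statistical error on the mean
    return m, err

for x_min in (1e-2, 1e-4, 1e-8):
    m, e = naive_estimate(200000, x_min)
    print(f"x_min = {x_min:.0e}   <O> = {m:.4f} +- {e:.4f}   (exact, x_min -> 0: {1.0 - alpha:.4f})")
```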
One approach that has reached the implementational stage could be called a ‘patching together’ of NLO and MC. It was originally proposed in [@BaerReno] and recently further developed in [@Potter] and extended in [@Dobbs]. There one chooses a cutoff $x_\mathrm{zero}$ on the virtual corrections such that the sum of Born and virtual corrections gives zero: $$1 - \alpha \int_{x_\mathrm{zero}}^1 \frac{dx}{x} V(x) \equiv 0\,.$$ It is legitimate to sum these two contribution because they have the same (Born) final state. Then for each event, a real emission of energy $x$ is generated with the distribution $dx/x R(x)$ and with the same cutoff as on the virtuals. The NLO total cross section is guaranteed to be correct by construction: $$\sigma_\mathrm{NLO} \equiv \sigma_0 \alpha \int_{x_\mathrm{zero}}^1
\frac{dx}{x} R(x)\,.$$ The next step in the event generation is to take an arbitrary separation parameter $x_\mathrm{sep}$, satisfying $x_\mathrm{zero} <
x_\mathrm{sep} < 1$. For $x>x_\mathrm{sep}$ the NLO emission is considered hard and kept (with ideally the generation of normal Monte Carlo showering below scale $x$, as in the implementation of [@Dobbs]). For $x<x_\mathrm{sep}$ the NLO emission is thrown away and normal parton showering is allowed below scale $x$.[^5]
Among the advantages are that the events all have positive and uniform weights. And while the computation of $x_\mathrm{zero}$ is non-trivial, the method requires relatively little understanding of the internals of the event generator (which are often poorly documented and rather complicated). However the presence of the separation parameter $x_\mathrm{sep}$ is in principle problematic: there can be discontinuities in distributions at $x_\mathrm{sep}$, certain quantities (for example the probability for a quark to have radiated an amount of energy less than some $x_\mathrm{r}$ which is below $x_\mathrm{sep}$) will not quite be correct to NLO and above $x_\mathrm{sep}$ potentially large logarithms of $x_\mathrm{sep}$ are being neglected. These last two points mean that for each new observable that one studies with the Monte Carlo program, one should carry out an analysis of the $x_\mathrm{sep}$ dependence (varying it over a considerable range, not just a factor of two as is sometimes currently done).
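In the toy model the patching procedure reduces to a few lines. With the illustrative choice $V(x)=R(x)=1$ one finds $x_\mathrm{zero}=e^{-1/\alpha}$, after which only real emissions above the cutoff are generated; the sketch below is written under these assumptions and is not meant to reproduce the actual implementations of [@BaerReno; @Potter; @Dobbs].

```python
import math, random
random.seed(3)

alpha = 0.1
def R(x): return 1.0
def V(x): return 1.0

# Cutoff such that Born + virtual contributions sum to zero:
#   1 - alpha * int_{x_zero}^1 dx/x V(x) = 0   ->   x_zero = exp(-1/alpha) for V = 1
x_zero = math.exp(-1.0 / alpha)

# NLO total cross section in units of sigma_0 (equal to 1 here since R = V):
sigma_nlo = alpha * math.log(1.0 / x_zero) * 1.0     # alpha * int_{x_zero}^1 dx/x R(x), R = 1

x_sep = 1e-2                                         # arbitrary separation parameter
n, hard = 10000, 0
for _ in range(n):
    x = x_zero ** random.random()                    # real emission distributed as dx/x above x_zero
    if x > x_sep:
        hard += 1                                    # kept as a hard NLO emission
    # else: emission discarded; ordinary parton showering would start below scale x
print(f"x_zero = {x_zero:.2e}, sigma_NLO/sigma_0 = {sigma_nlo:.3f}, "
      f"fraction of hard NLO emissions = {hard / n:.3f}   (depends on x_sep)")
```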
A rather different approach (which we refer to as ‘merging’) has been developed by Frixione and Webber in [@FrixioneWebber].[^6] They specify a number of conditions that must be satisfied by a Monte Carlo at NLO (MC@NLO): i) all observables should be correct to NLO; ii) soft emissions should be treated as in a normal event generator and hard emissions as in an NLO calculation; iii) the matching between the hard and soft regions should be smooth. Their approach exploits the fact that Monte Carlo programs already contain effective real and virtual NLO corrections, $$\pm \alpha \frac{dx}{x} M(x) \quad
\mbox{for}\;\;\;^\mathrm{real}_\mathrm{virtual}\,.$$ Because Monte Carlo programs are designed to correctly reproduce the structure of soft and collinear divergences, $M(x)$ has[^7] the property that for $x\to0$, $M(x) = R(x) = V(x)$, so the divergent part of the NLO corrections is already included in the event generator. This can be exploited when adjusting the Monte Carlo to be correct to NLO, because the regions that need adjusting are the hard regions, but not the (soft) divergent regions. Specifically the method introduced in [@FrixioneWebber] can be summarised by the formula $$\begin{gathered}
I_{\mathrm{MC,Born}} - \alpha\, I_{\mathrm{MC,Born}} \int \frac{dx}{x}\left( V(x) - M(x) \right)\\
+ \alpha \int \frac{dx}{x}\left( R(x) - M(x) \right) I_{\mathrm{MC,Born}+x}\,.\end{gathered}$$ $I_{\mathrm{MC,Born}}$ is to be read ‘interface to Monte Carlo.’ It means that one should generate a Monte Carlo event starting from the Born configuration (or from the Born configuration plus a photon in the case of $I_{\mathrm{MC,Born}+x}$). Since at the Born level $I_{\mathrm{MC,Born}}$ already contains effective real and virtual corrections which go as $\pm \alpha M(x)/x$, when evaluating the NLO corrections to the MC these pieces should be subtracted from the full NLO matrix elements. Because $M(x)$ and $R(x)$ (or $V(x)$) have the same $x\to0$ limit, the real and virtual integrals are now individually finite and well-behaved, which means that the Monte Carlo needs only a small, ${{\mathcal{O}\left(\alpha\right)}}$, correction in order for it to be correct to NLO.
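The structure of this formula can be mimicked in the toy model with a simple guess for the effective emission density $M(x)$ of the event generator; the functional forms of $R$, $V$ and $M$ below are purely illustrative. The point of the sketch is that the $(R-M)/x$ and $(V-M)/x$ integrands are separately finite, so every event weight stays of order one (S events) or of order $\alpha$ (H events), with a modest negative-weight fraction whose precise size depends on the choice of $M$.

```python
import math, random
random.seed(11)

alpha = 0.1
def R(x): return 1.0 + x              # toy real matrix element
def V(x): return 1.0                  # toy virtual piece
def M(x): return 1.0 + 2.0 * x ** 2   # toy MC emission density; same x -> 0 limit as R and V

n, weights = 50000, []
for _ in range(n):
    x = 1.0 - random.random()         # x in (0, 1]; (R-M)/x and (V-M)/x are finite, so uniform sampling suffices
    if random.random() < 0.5:
        # S event: Born-like configuration handed to the shower
        w = 2.0 * (1.0 - alpha * (V(x) - M(x)) / x)
    else:
        # H event: Born plus a hard photon of energy x handed to the shower
        w = 2.0 * alpha * (R(x) - M(x)) / x
    weights.append(w)

total = sum(weights) / n
neg_frac = sum(w < 0 for w in weights) / n
print(f"sigma_MC@NLO / sigma_0 ~ {total:.3f}   (expected 1 + alpha = {1.0 + alpha:.3f})")
print(f"fraction of negative-weight events = {neg_frac:.2%}")
print(f"largest |weight| = {max(abs(w) for w in weights):.3f}  (no divergent weights)")
```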
Illustrative results from this approach are shown in figure \[fig:mcnlo\] for the transverse momentum distribution of a $W^+W^-$ pair in hadron-hadron collisions. In the low transverse momentum region (which requires resummation — the pure NLO calculation breaks down) MC@NLO clearly coincides with the Herwig results, while at high transverse momentum it agrees perfectly with the NLO calculation (default Herwig is far too low).
So this procedure has several advantages: it is a smooth procedure without cutoffs; the predictions are guaranteed to be correct at NLO; and it does not break the resummation of large logarithms. From a practical point of view it has the (minor) drawback of some events with negative weights; however, the fraction of negative-weight events is low (about $10\%$ in the example shown above) and the negative weights are uniform, so they should have little effect on the convergence of the results. Another limitation is that to implement this method it is necessary that one understand the Monte Carlo event generator sufficiently well to be able to derive the function $M(x)$, the effective NLO correction already embodied in the event generator. This, however, is almost certainly inevitable: there is no way of ensuring a truly NLO result without taking into account what is already included in the event generator.
Conclusions: testing QCD? {#sec:Concl}
=========================
An apology is perhaps due at this stage to those readers who would have preferred a detailed discussion of the evidence from final-state measurements in favour of (or against) QCD as *the* theory of hadronic physics. I rather took the liberty of reinterpreting the title as ‘Tests and perspectives of our *understanding* of QCD through final-state measurements.’ Such tests are vital if we are to extend the domain of confidence of our predictions, as has been discussed in the cases of diffraction and power corrections.
The tests of course should be well thought through: some considerations that come out of the still to be fully understood $b$-excess story are (a) the importance (as ever) of quoting results at hadron level, not some ill-defined parton level; and (b) that if carrying out a test at a given level of precision (NLO), it is necessary that all stages of the theoretical calculation (including for example the determination of the fragmentation function), be carried out at that same level of precision.
Another, general, consideration is the need for the Monte Carlo models to be reliable and accurate, whether they be used to reconstruct data or to estimate backgrounds. This is especially relevant in cases where the actual measurements are limited to corners of phase space or where large extrapolations are needed. In this context the recent advances in the extension of Monte Carlo models to NLO accuracy is a significant development, and in the medium term we should expect progress from the current ‘proof-of-concept’ implementations to a widespread availability of NLO-merged event generators.
To conclude, it could well be that a few years from now, many of the measurements and theoretical approaches discussed here will have made it to textbooks as ‘standard’ QCD. We look forward to future speakers on this topic having an equally varied (but different) range of ‘until recently controversial’ tests of QCD to discuss!
Acknowledgements {#acknowledgements .unnumbered}
================
I wish to thank Matteo Cacciari, John Collins, Yuri Dokshitzer, Stefano Frixione, Hannes Jung, and Frank Schilling for numerous helpful suggestions, discussions and comments during the preparation and writeup of this talk.
E. Rodrigues \[for the ZEUS collaboration\], talk given at the XXXVIIth Rencontres de Moriond, ‘QCD and high energy hadronic interactions,’ March 2002.
D. E. Groom [*et al.*]{} \[Particle Data Group\], Eur. Phys. J. C [**15**]{} (2000) 1. S. Bethke, J. Phys. G [**26**]{} (2000) R27. P. Tortosa, these proceedings. P. Bambade, M. J. Costa, J. Fuster and P. Tortosa, in ‘Budapest 2001, High energy physics,’ PRHEP-hep2001/005.
F. P. Schilling \[H1 collaboration\], to appear in the proceedings of DIS2001, hep-ex/0107002. See for example J. C. Collins, D. E. Soper and G. Sterman, Nucl. Phys. B [**308**]{} (1988) 833, and references therein. L. Trentadue and G. Veneziano, Phys. Lett. B [**323**]{} (1994) 201; A. Berera and D. E. Soper, Phys. Rev. D [**50**]{} (1994) 4328; M. Grazzini, L. Trentadue and G. Veneziano, Nucl. Phys. B [**519**]{} (1998) 394; J. C. Collins, Phys. Rev. D [**57**]{} (1998) 3051 \[Erratum-ibid. D [**61**]{} (2000) 019902\].
G. Ingelman and P. E. Schlein, Phys. Lett. B [**152**]{} (1985) 256. V. A. Abramovsky, V. N. Gribov and O. V. Kancheli, Yad. Fiz. [**18**]{} (1973) 595 \[Sov. J. Nucl. Phys. [**18**]{} (1974) 308\]. C. Adloff [*et al.*]{} \[H1 Collaboration\], Eur. Phys. J. C [**20**]{} (2001) 29 \[arXiv:hep-ex/0012051\]. A. Edin, G. Ingelman and J. Rathsman, Phys. Lett. B [**366**]{} (1996) 371; A. Edin, G. Ingelman and J. Rathsman, Z. Phys. C [**75**]{} (1997) 57. W. Buchmuller, T. Gehrmann and A. Hebecker, Nucl. Phys. B [**537**]{} (1999) 477.
J. Bartels, H. Lotter and M. Wusthoff, Phys. Lett. B [**379**]{} (1996) 239 \[Erratum-ibid. B [**382**]{} (1996) 449\]; J. Bartels, H. Jung and M. Wusthoff, Eur. Phys. J. C [**11**]{} (1999) 111. P. Laycock, these proceedings; F.-P. Schilling, these proceedings; P. Thompson, these proceedings.
H. Jung, Comput. Phys. Commun. [**86**]{} (1995) 147. S. Catani and M. H. Seymour, Nucl. Phys. B [**485**]{}, 291 (1997) \[Erratum-ibid. B [**510**]{}, 503 (1997)\]. D. Graudenz, hep-ph/9710244. M. Beneke, Phys. Rept. [**317**]{} (1999) 1. Y. L. Dokshitzer and B. R. Webber, Phys. Lett. B [**352**]{} (1995) 451. P. Nason and M. H. Seymour, Nucl. Phys. B [**454**]{} (1995) 291. Y. L. Dokshitzer, A. Lucenti, G. Marchesini and G. P. Salam, Nucl. Phys. [**B511**]{} (1998) 396, Y. L. Dokshitzer, A. Lucenti, G. Marchesini and G. P. Salam, JHEP [**9805**]{} (1998) 003; M. Dasgupta and B. R. Webber, JHEP [**9810**]{} (1998) 001; M. Dasgupta, L. Magnea and G. Smye, JHEP [**9911**]{} (1999) 025; G. E. Smye, JHEP [**0105**]{} (2001) 005. R. Akhoury and V. I. Zakharov, Phys. Lett. [**B357**]{} (1995) 646; R. Akhoury and V. I. Zakharov, Nucl. Phys. [**B465**]{} (1996) 295; Y. L. Dokshitzer, G. Marchesini and G. P. Salam, Eur. Phys. J. directC [**3**]{} (1999) 1 \[Erratum-ibid. C [**1**]{} (2001) 1\].
G. P. Salam and D. Wicke, JHEP [**0105**]{} (2001) 061. V. Antonelli, M. Dasgupta and G. P. Salam, JHEP [**0002**]{} (2000) 001; M. Dasgupta and G. P. Salam, Phys. Lett. B [**512**]{} (2001) 323; M. Dasgupta and G. P. Salam, Eur. Phys. J. C [**24**]{} (2002) 213. M. Dasgupta and G. P. Salam, contribution to these proceedings, hep-ph/0205161. C. Adloff [*et al.*]{} \[H1 Collaboration\], Eur. Phys. J. C [**14**]{} (2000) 255 \[Addendum-ibid. C [**18**]{} (2000) 417\]. P. A. Movilla Fernandez, S. Bethke, O. Biebel and S. Kluth, Eur. Phys. J. C [**22**]{} (2001) 1. G. J. McCance \[for the H1 and ZEUS Collaborations\], hep-ex/0008009, talk given at the 35th Rencontres de Moriond: QCD and High Energy Hadronic Interactions, France, March 2000; Y. L. Dokshitzer and B. R. Webber, Phys. Lett. B [**404**]{} (1997) 321. G. P. Korchemsky and G. Sterman, Nucl. Phys. [**B437**]{} (1995) 415; G. P. Korchemsky, G. Oderda and G. Sterman, presented at *5th International Workshop on Deep Inelastic Scattering and QCD (DIS 97)*, Chicago, IL, April 1997 hep-ph/9708346; G. P. Korchemsky and G. Sterman, Nucl. Phys. B [**555**]{} (1999) 335; E. Gardi and J. Rathsman, Nucl. Phys. B [**609**]{} (2001) 123; E. Gardi and J. Rathsman, hep-ph/0201019.
R.P. Feynman, ‘Photon Hadron Interactions’, W.A. Benjamin, New York (1972).
A. Banfi, G. Marchesini, Y. L. Dokshitzer and G. Zanderighi, JHEP [**0007**]{} (2000) 002; A. Banfi, Y. L. Dokshitzer, G. Marchesini and G. Zanderighi, Phys. Lett. B [**508**]{} (2001) 269; A. Banfi, Y. L. Dokshitzer, G. Marchesini and G. Zanderighi, JHEP [**0105**]{} (2001) 040; A. Banfi, G. Marchesini, G. Smye and G. Zanderighi, JHEP [**0108**]{} (2001) 047; D. Wicke, contributed to 37th Rencontres de Moriond on QCD and Hadronic Interactions, March 2002, hep-ph/0205299. B. Abbott [*et al.*]{} \[D0 Collaboration\], Phys. Lett. B [**487**]{} (2000) 264. S. Frixione, in ‘Budapest 2001, High energy physics,’ PRHEP-hep2001/025. I. Abt [*et al.*]{} \[HERA-B Collaboration\], arXiv:hep-ex/0205106. A. de Roeck, these proceedings.
E. L. Berger, B. W. Harris, D. E. Kaplan, Z. Sullivan, T. M. Tait and C. E. Wagner, Phys. Rev. Lett. [**86**]{} (2001) 4231. J. Binnewies, B. A. Kniehl and G. Kramer, Phys. Rev. D [**58**]{} (1998) 034016; Also, discussions in P. Nason [*et al.*]{}, in ‘Standard model physics (and more) at the LHC,’ CERN 1999, hep-ph/0003142.
M. Cacciari and P. Nason, hep-ph/0204025. P. Nason and B. R. Webber, Phys. Lett. B [**395**]{} (1997) 355. M. Cacciari, M. Greco and P. Nason, JHEP [**9805**]{} (1998) 007. P. Nason and C. Oleari, Nucl. Phys. B [**565**]{} (2000) 245 \[arXiv:hep-ph/9903541\]. A. Heister [*et al.*]{} \[ALEPH Collaboration\], Phys. Lett. B [**512**]{} (2001) 30. C. Peterson, D. Schlatter, I. Schmitt and P. M. Zerwas, Phys. Rev. D [**27**]{} (1983) 105. V. G. Kartvelishvili, A. K. Likhoded and V. A. Petrov, Phys. Lett. B [**78**]{} (1978) 615.
D. Acosta [*et al.*]{} \[CDF Collaboration\], Phys. Rev. D [**65**]{} (2002) 052005. Yu. L. Dokshitzer, private communication.
B. Abbott [*et al.*]{} \[D0 Collaboration\], Phys. Rev. Lett. [**85**]{} (2000) 5068. S. Frixione and M. L. Mangano, Nucl. Phys. B [**483**]{} (1997) 321. See M. H. Seymour, Nucl. Phys. B [**436**]{} (1995) 163. F. Krauss, these proceedings; L. Lönnblad, these proceedings.
S. Frixione and B. R. Webber, JHEP [**0206**]{} (2002) 029. H. Baer and M. H. Reno, Phys. Rev. D [**44**]{} (1991) 3375; H. Baer and M. H. Reno, Phys. Rev. D [**45**]{} (1992) 1503. B. Potter, Phys. Rev. D [**63**]{} (2001) 114017; B. Potter and T. Schorner, Phys. Lett. B [**517**]{} (2001) 86.
M. Dobbs, Phys. Rev. D [**65**]{} (2002) 094011. J. C. Collins and X. m. Zu, JHEP [**0206**]{} (2002) 018; JHEP [**0204**]{} (2002) 041. J. Collins, Phys. Rev. D [**65**]{} (2002) 094016. Y. Chen, J. C. Collins and N. Tkachuk, JHEP [**0106**]{} (2001) 015. J. C. Collins, JHEP [**0005**]{} (2000) 004. S. Mrenna, hep-ph/9902471.
[^1]: Plenary presentation at the X International Workshop on Deep Inelastic Scattering (DIS2002) Cracow, Poland.
[^2]: It should be noted that results for certain ${e^{+}e^{-}}$ distributions [@KluthFits] and DIS means [@H1dist; @ZEUSmeans] are not quite as consistent. Though this remains to be understood, it may in part be associated with the particular fit ranges that are used.
[^3]: The reader is referred to their article for full references to the ‘ingredients’ used at different stages of the analysis.
[^4]: A point worth keeping in mind [@DokPrivComm] is that the central scale choice $\mu =
\sqrt{p_t^2 + m_b^2}$ is not universally accepted as being optimal — indeed for $p_t \gtrsim m_b$, a scale choice of $\mu = p_t$ is equally justifiable, and would have a non-negligible effect on the predictions.
[^5]: For simplicity, many important but sometimes tricky technical details have been left out. This will also be the case for the merging procedure discussed lower down.
[^6]: A number of aspects of the work of Collins and collaborators [@CollinsMC] may actually be equivalent, though presented in a rather different framework. Related issues are discussed also in [@Mrenna].
[^7]: Or rather, ‘should have.’ In practice the divergence structure of large-angle soft-gluon emission is not always properly treated in event generators, which leads to some extra complications in the MC@NLO approach.
|
---
abstract: |
We study integrable systems on double Lie algebras in the absence of an Ad-invariant bilinear form by passing to the semidirect product with the $\tau$-representation. We show that at this stage a natural Ad-invariant bilinear form does exist, allowing for a straightforward application of the AKS theory and giving rise to a Manin triple structure, thus bringing the problem to the realm of Lie bialgebras and Poisson-Lie groups.
author:
- |
**S. Capriotti$^{\dag }$ & H. Montani$^{\ddag }$[[^1] ]{}**\
*Departamento de Matemática, Universidad Nacional del Sur,* \
*Av. Alem 1253, 8000* - *Bahía Blanca, Buenos Aires, Argentina.*\
\
CONICET & *Departamento de Ciencias Exactas y Naturales,*\
*Universidad Nacional de la Patagonia Austral.*\
*9011 - Caleta Olivia, Argentina*\
title: |
**Double Lie algebras, semidirect product, and integrable systems**\
---
Introduction
============
The deep relation between integrable systems and Lie algebras finds its optimal realization when the Lie algebra involved is equipped with an ad-invariant nondegenerate symmetric bilinear form. There, the coadjoint orbit setting turns out to be equivalent to the Lax pair formulation, the Adler-Kostant-Symes theory of integrability [@Adler],[@Kostant],[@Symes] works perfectly, Poisson-Lie group structures and Lie bialgebras naturally arise, etc. Semisimplicity is a usual requirement warranting a framework with this kind of bilinear form; however, outside this framework it becomes a rather stringent condition. This is the case with semidirect product Lie algebras.
Integrable systems can also be modelled on Lie groups, and their Hamiltonian version is realized on their cotangent bundle. There, reduction of the cotangent bundle of a Lie group by the action of some Lie subgroup brings the problem to the realm of semidirect products [@Guillemin-Sternberg], [@MWR] where, in spite of the lack of semisimplicity, an ad-invariant form can be defined provided the original Lie algebra had one. Also, at the Lie algebra level, in ref. [@Trofimov; @1983] the complete integrability of the Euler equations on a semidirect product of a semisimple Lie algebra with its adjoint representation was proven. In ref. [@CapMon; @JPA], the AKS theory was applied to study integrable systems on this kind of Lie groups.
However, the lack of an ad-invariant bilinear form is not an obstruction to the application of AKS ideas. In fact, in ref. [@Ovando; @1],[@Ovando; @2] the AKS theory is adapted to a context equipped with a symmetric and nondegenerate bilinear form, which also produces a decomposition of the Lie algebra into two complementary orthogonal subspaces. This is performed by using the *B operation* introduced by Arnold in [@Arnold; @1], and realizing that it amounts to an action of the Lie algebra on itself, which can be promoted to an action of the Lie group on its Lie algebra, called the $\tau$-action. Thus, the restriction of the system to one of its orthogonal components becomes integrable by factorization.
The main goal of this work is to study integrable systems on a semidirect product of a Lie algebra with its adjoint representation, disregarding the ad-invariance property of the bilinear form. So, the framework is that of a double Lie algebra $\mathfrak{g}=\mathfrak{g}_{+}\oplus \mathfrak{g}_{-}$ equipped with a symmetric nondegenerate bilinear form, and the semidirect product $\mathfrak{h}=\mathfrak{g}\ltimes _{\tau }\mathfrak{g}$ where the left $\mathfrak{g}$ acts on the other one by the $\tau$-action. The main achievement is the introduction of an $\mathrm{ad}^{\mathfrak{h}}$-invariant symmetric nondegenerate bilinear form which induces a decomposition $\mathfrak{h}=\mathfrak{h}_{+}\oplus \mathfrak{h}_{-}$, with $\mathfrak{h}_{+},\mathfrak{h}_{-}$ being Lie subalgebras and isotropic subspaces. In this way, a natural Manin triple structure arises on the Lie algebra $\mathfrak{h}$, bringing the problem into the realm of the original AKS theory and into that of Lie bialgebras and Poisson-Lie groups. In fact, we show how integrable systems on $\mathfrak{h}_{\pm }$, arising from the restriction of an almost trivial system on $\mathfrak{h}$ defined by an $\mathrm{ad}^{\mathfrak{h}}$-invariant Hamilton function, can be solved by the factorization of an exponential curve in the associated connected, simply connected Lie group $H$. Moreover, we build explicitly the Poisson-Lie structures on the factors $H_{\pm }$ of the group $H$.
As the application of main interest, we think of those derived from Lie groups with no bi-invariant Riemannian metric. From the result by Milnor [@Milnor] asserting that a Riemannian metric is bi-invariant if and only if the Lie group is a product of compact semisimple and Abelian groups, one finds a wide class of examples fitting in the above scheme among the solvable and nilpotent Lie algebras. Many examples of dimension up to 6 are studied in ref. [@ghanam]; one of them is fully developed in the present work as an example.
The work is organized as follows: in Section II we fix the algebraic tools of the problem by introducing the $\tau$-action; in Section III we present the main results of this work, dealing with many issues in the semidirect product framework. In Section IV we show how integrability by factorization works in the framework developed in the previous section. In Section V we present three examples without Ad-invariant bilinear forms to which we apply the construction developed in the previous sections. Finally, in Section VI we include some conclusions.
Double Lie algebras and the $\tau$-action
==================================================
Let us consider a *double Lie group* $\left( G,G_{+},G_{-}\right)$ and its associated *double Lie algebra* $\left( \mathfrak{g},\mathfrak{g}_{+},\mathfrak{g}_{-}\right)$. These mean that $G_{+}$ and $G_{-}$ are Lie subgroups of $G$ such that $G=G_{+}G_{-}$, and that $\mathfrak{g}_{+}$ and $\mathfrak{g}_{-}$ are Lie subalgebras of $\mathfrak{g}$ with $\mathfrak{g}=\mathfrak{g}_{+}\oplus \mathfrak{g}_{-}$. We also assume there is a symmetric nondegenerate bilinear form $\left( \cdot ,\cdot \right) _{\mathfrak{g}}$ on $\mathfrak{g}$, which induces the direct sum decomposition: $$\mathfrak{g}=\mathfrak{g}_{+}^{\perp }\oplus \mathfrak{g}_{-}^{\perp }$$ where $\mathfrak{g}_{\pm }^{\perp }$ are the annihilator subspaces of $\mathfrak{g}_{\pm }$, respectively, $$\mathfrak{g}_{\pm }^{\perp }:=\left\{ Z\in \mathfrak{g}:\left( Z,X\right) _{\mathfrak{g}}=0\quad \forall X\in \mathfrak{g}_{\pm }\right\}$$
Since the bilinear form is not assumed to be $\mathrm{Ad}^{G}$-invariant, the adjoint action is not a good symmetry for building integrable systems. Following references $\cite{Ovando 1},\cite{Ovando 2}$, where AKS ideas are adapted to a framework lacking an ad-invariant bilinear form by using the so called $\tau$*-action*, we take this symmetry as the building block of our construction. Let us briefly review the main result of these references: let $\mathfrak{g}=\mathfrak{g}_{+}\oplus \mathfrak{g}_{-}$ be as above; then the adjoint action induces the $\tau$*-action* defined as $$\begin{array}{ccc}
\mathrm{ad}^{\tau }:\mathfrak{g}\rightarrow \mathrm{End}\left( \mathfrak{g}\right) & / & \left( \mathrm{ad}_{X}^{\tau }Z,Y\right) _{\mathfrak{g}}:=-\left( Z,\left[ X,Y\right] \right) _{\mathfrak{g}}
\end{array}
\label{ad-tao 0}$$ $\forall X,Y,Z\in \mathfrak{g}$. It can be promoted to an action of the associated Lie group $G$ on $\mathfrak{g}$ through the exponential map, thereby $$\begin{array}{ccc}
\tau :G\rightarrow \mathrm{Aut}\left( \mathfrak{g}\right) & / & \left( \tau \left( g\right) X,Y\right) _{\mathfrak{g}}:=\left( X,\mathrm{Ad}_{g^{-1}}^{G}Y\right) _{\mathfrak{g}}
\end{array}$$ Often we also use the notation $\tau _{g}X=\tau \left( g\right) X$.
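A quick way to get a feel for $\mathrm{ad}^{\tau}$ is to realise it numerically: in a basis where the bilinear form has Gram matrix $B$, the defining relation is equivalent to $\mathrm{ad}^{\tau}_{X}=-B^{-1}(\mathrm{ad}_{X})^{T}B$. The sketch below checks this on the two-dimensional nonabelian Lie algebra $[e_{1},e_{2}]=e_{2}$ with the Euclidean inner product (which is not ad-invariant); both choices are made purely for illustration.

```python
import numpy as np

# Two-dimensional nonabelian Lie algebra: [e1, e2] = e2, all other brackets zero.
def bracket(X, Y):
    # [X, Y] = (X1*Y2 - X2*Y1) e2  for X = X1 e1 + X2 e2, Y = Y1 e1 + Y2 e2
    return np.array([0.0, X[0] * Y[1] - X[1] * Y[0]])

def ad(X):
    """Matrix of ad_X in the basis (e1, e2)."""
    return np.column_stack([bracket(X, e) for e in np.eye(2)])

B = np.eye(2)                          # Euclidean inner product; NOT ad-invariant here

def ad_tau(X):
    """ad^tau_X = -B^{-1} (ad_X)^T B, i.e. minus the transpose with respect to ( , )_g."""
    return -np.linalg.inv(B) @ ad(X).T @ B

rng = np.random.default_rng(0)
X, Y, Z = rng.normal(size=(3, 2))
lhs = (ad_tau(X) @ Z) @ B @ Y          # (ad^tau_X Z, Y)_g
rhs = -Z @ B @ bracket(X, Y)           # -(Z, [X, Y])_g
print(np.isclose(lhs, rhs))            # True: the defining relation holds
```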
It is worth observing that, since the bilinear form is nondegenerate, it allows for the identification of $\mathfrak{g}$ with its dual vector space $\mathfrak{g}^{\ast }$ through the isomorphism $$\begin{array}{ccc}
\psi :\mathfrak{g}\longrightarrow \mathfrak{g}^{\ast } & / & \left\langle \psi \left( X\right) ,Y\right\rangle :=\left( X,Y\right) _{\mathfrak{g}}
\end{array}$$ which connects the $\tau$*-action* with the *coadjoint* one: $$\tau \left( g\right) =\bar{\psi}\circ \mathrm{Ad}_{g^{-1}}^{G\ast }\circ \psi$$ It reduces to the adjoint action, namely $\tau \left( g\right) =\mathrm{Ad}_{g}^{G}$, when the bilinear form is Ad-invariant. Also, $\psi$ provides the isomorphisms $\psi :\mathfrak{g}_{\pm }^{\bot }\longrightarrow \mathfrak{g}_{\mp }^{\ast }$. Let $\Pi _{\mathfrak{g}_{\pm }^{\perp }}:\mathfrak{g}\rightarrow \mathfrak{g}_{\pm }^{\perp }$ be the projection map.
Observe that the annihilator subspaces $\mathfrak{g}_{\pm }^{\perp }$ can be regarded as $\tau $*-representation spaces* of $G_{\pm }$, respectively. Moreover, the $\tau $*-action* gives rise to crossed actions, as explained in the following lemma.
Lemma:
:   *The maps* $\tilde{\tau}:G_{\mp }\times \mathfrak{g}_{\pm }^{\perp }\rightarrow \mathfrak{g}_{\pm }^{\perp }$ *defined as* $$\tilde{\tau}\left( h_{\mp }\right) Z_{\pm }^{\perp }=\Pi _{\mathfrak{g}_{\pm }^{\perp }}\left( \tau \left( h_{\mp }\right) Z_{\pm }^{\perp }\right) \label{eq:NewAction}$$ *are left actions, and the infinitesimal generator associated to elements* $Y_{\mp }\in \mathfrak{g}_{\mp }$ *is* $$\left( Y_{\mp }\right) _{\mathfrak{g}_{\pm }^{\perp }}\left( Z_{\pm }^{\perp }\right) =\Pi _{\mathfrak{g}_{\pm }^{\perp }}\left( \mathrm{ad}_{Y_{\mp }}^{\tau }Z_{\pm }^{\perp }\right)$$
Thus, $\mathfrak{g}_{\pm }^{\perp }$ is a $G_{\mp }$-space through the $\mathrm{Ad}^{G\ast }$-action translated via the identification $\mathfrak{g}_{\pm }^{\perp }\simeq \mathfrak{g}_{\mp }^{\ast }$ induced by the inner product. Further, this last identification allows one to consider the annihilators as Poisson manifolds, and the orbits of $G_{\mp }$ on $\mathfrak{g}_{\pm }^{\perp }$ as symplectic manifolds. Under favorable circumstances (i.e. when $f\in C^{\infty }\left( \mathfrak{g}\right)$ is $\mathrm{Ad}^{G}$-invariant) the dynamical system defined on these symplectic spaces by the restriction of $f$ can be solved through the action of the other group. As a bonus, the curve on this group whose action produces the solution can be obtained by factorization of a simpler curve. Below we will see the way in which this can be generalized to a context where there is no Ad-invariant inner product.
Remark:
: *The assignment* $\mathrm{ad}^{\tau }:\mathfrak{g}\longrightarrow \mathrm{End}\left( \mathfrak{g}\right) $ *is a Lie algebra homomorphism*$$\mathrm{ad}_{\left[ X,Y\right] }^{\tau }=\left[ \mathrm{ad}_{X}^{\tau },\mathrm{ad}_{Y}^{\tau }\right]$$
Remark:
: $\cite{Ovando 1}$ *The* $\tilde{\tau}$*-orbits* $\mathcal{O}_{X}^{\tilde{\tau}}:=\left\{ \tilde{\tau}\left( h_{\mp }\right) X\in \mathfrak{g}_{\pm }^{\perp }/h_{\mp }\in G_{\mp }\right\} $*, whose tangent spaces are*$$T_{\tau \left( g\right) X}\mathcal{O}_{X}^{\tau }=\left\{ \mathrm{ad}_{Y}^{\tau }\tau \left( g\right) X/Y\in \mathfrak{g}\right\}$$*are symplectic manifolds with the symplectic form*$$\left\langle \omega ,\mathrm{ad}_{Y}^{\tau }\tau \left( g\right) X\otimes \mathrm{ad}_{Z}^{\tau }\tau \left( g\right) X\right\rangle _{\tau \left( g\right) X}:=\left( \tau \left( g\right) X,\left[ Y,Z\right] \right) _{\mathfrak{g}}$$
Semidirect product with the $\protect\tau $-action representation
=================================================================
The lack of an $\mathrm{Ad}$-invariant bilinear form on $\mathfrak{g}=\mathfrak{g}_{+}\oplus \mathfrak{g}_{-}$ is not an obstacle to having AKS integrable systems in the *double Lie algebra* $\left( \mathfrak{g},\mathfrak{g}_{+},\mathfrak{g}_{-}\right) $ $\cite{Ovando 1}$. However, it prevents the existence of richer structures such as the *Manin triple* one, which would bring it into the realm of *Lie bialgebras*, where the associated Lie groups acquire a natural compatible Poisson structure turning them into *Poisson-Lie groups*.
In this section we introduce the semidirect sum Lie algebra $\mathfrak{h}=%
\mathfrak{g}\ltimes _{\tau }\mathfrak{g}$ as the central object of our construction.
Lie algebra structure on $\mathfrak{h}=\mathfrak{g}\oplus
\mathfrak{g}$ and ad-invariant bilinear form
----------------------------------------------------------
Let us consider the semidirect sum Lie algebra $\mathfrak{h}=\mathfrak{g}%
\ltimes _{\tau }\mathfrak{g}$ where the first component acts on the second one through the $\mathrm{ad}^{\tau }$-action $\left( \ref{ad-tao 0}\right) $, giving rise to the Lie bracket on $\mathfrak{h}$ $$\left[ \left( X,Y\right) ,\left( X^{\prime },Y^{\prime }\right) \right]
:=\left( \left[ X,X^{\prime }\right] ,\mathrm{ad}_{X}^{\tau }Y^{\prime }-%
\mathrm{ad}_{X^{\prime }}^{\tau }Y\right) \label{Lie alg g+g 1}$$Also, we equip $\mathfrak{h}$ with the symmetric nondegenerate bilinear form $\left( -,-\right) _{\mathfrak{h}}:\mathfrak{h}\times \mathfrak{h}%
\rightarrow \mathbb{R}$$$\left( \left( X,Y\right) ,\left( X^{\prime },Y^{\prime }\right) \right) _{%
\mathfrak{h}}:=\left( X,Y^{\prime }\right) _{\mathfrak{g}}+\left(
Y,X^{\prime }\right) _{\mathfrak{g}} \label{Lie alg g+g 2}$$It is easy to prove the next result.
Proposition:
: *The bilinear form* $\left( \cdot ,\cdot \right)
_{\mathfrak{h}}\ $*on the semidirect sum of Lie algebras* $\mathfrak{h%
}$ *is* $\mathit{\mathrm{ad}}^{\mathfrak{h}}$*-invariant*$$\left( \left[ \left( X^{\prime \prime },Y^{\prime \prime }\right) ,\left(
X,Y\right) \right] ,\left( X^{\prime },Y^{\prime }\right) \right) _{%
\mathfrak{h}}+\left( \left( X,Y\right) ,\left[ \left( X^{\prime \prime
},Y^{\prime \prime }\right) ,\left( X^{\prime },Y^{\prime }\right) \right]
\right) _{\mathfrak{h}}=0$$
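The invariance can also be verified numerically in coordinates. The sketch below (a minimal check, reusing the same toy two-dimensional algebra and Euclidean Gram matrix as before, which are our own illustrative assumptions) builds the bracket $\left( \ref{Lie alg g+g 1}\right) $ and the form $\left( \ref{Lie alg g+g 2}\right) $ and tests the identity on random triples.

```python
import numpy as np

dim = 2
C = np.zeros((dim, dim, dim))          # toy algebra of the previous snippet: [e1, e2] = e2
C[1, 0, 1], C[1, 1, 0] = 1.0, -1.0
B = np.eye(dim)                        # bilinear form on g (not ad-invariant)
Binv = np.linalg.inv(B)

ad = lambda X: np.einsum('kij,i->kj', C, X)
ad_tau = lambda X: -Binv @ ad(X).T @ B

def bracket_h(a, b):
    """Bracket (Lie alg g+g 1) on h = g x g, for a = (X, Y), b = (X', Y')."""
    (X, Y), (Xp, Yp) = a, b
    return ad(X) @ Xp, ad_tau(X) @ Yp - ad_tau(Xp) @ Y

def form_h(a, b):
    """Bilinear form (Lie alg g+g 2): ((X,Y),(X',Y'))_h = (X,Y')_g + (Y,X')_g."""
    (X, Y), (Xp, Yp) = a, b
    return X @ B @ Yp + Y @ B @ Xp

rng = np.random.default_rng(1)
for _ in range(100):
    a, b, c = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(3)]
    # ad^h-invariance: ([c, a], b)_h + (a, [c, b])_h = 0
    assert abs(form_h(bracket_h(c, a), b) + form_h(a, bracket_h(c, b))) < 1e-12
```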
Let us denote by $H=G\ltimes _{\tau }\mathfrak{g}$ the Lie group associated with the above Lie algebra structure. Indeed, there are two possible semidirect product Lie group structures related to the left or right character of the action of $G$ on $\mathfrak{g}$. We adopt the right action structure for the Lie group structure in $H=G\times \mathfrak{g}$$$\left( g,X\right) \mathbf{\cdot }\left( k,Y\right) :=\left( gk,\tau
_{k^{-1}}X+Y\right) \label{semidirect prod}$$The exponential map in this case is $$\mathrm{Exp}^{\cdot }\left( t\left( X,Y\right) \right) =\left(
e^{tX},-\left( \sum_{n=1}^{\infty }\frac{\left( -1\right) ^{n}}{n!}%
t^{n}\left( \mathrm{ad}_{X}^{\tau }\right) ^{n-1}\right) Y\right)
\label{exp semid transad}$$and the adjoint action of $H$ on $\mathfrak{h}$ is$$\mathrm{Ad}_{\left( g,Z\right) }^{H}\left( X,Y\right) =\left( \mathrm{Ad}%
_{g}^{G}X\,,\tau _{g}\left( Y-\mathrm{ad}_{X}^{\tau }Z\right) \right)$$while the adjoint action of $\mathfrak{h}$ on itself retrieves the Lie bracket structure $\left( \ref{Lie alg g+g 1}\right) $.
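As a consistency check of $\left( \ref{exp semid transad}\right) $, the second component $v\left( t\right) =-\sum_{n\geq 1}\frac{\left( -1\right) ^{n}}{n!}t^{n}\left( \mathrm{ad}_{X}^{\tau }\right) ^{n-1}Y$ must make $t\mapsto \mathrm{Exp}^{\cdot }\left( t\left( X,Y\right) \right) $ a one-parameter subgroup for the product $\left( \ref{semidirect prod}\right) $, i.e. $v\left( t+s\right) =e^{-s\,\mathrm{ad}_{X}^{\tau }}v\left( t\right) +v\left( s\right) $. A numerical sketch of this check, on the same toy two-dimensional algebra used above (an illustrative assumption of ours, not one of the examples treated later):

```python
import math
import numpy as np

dim = 2
C = np.zeros((dim, dim, dim))          # toy algebra: [e1, e2] = e2
C[1, 0, 1], C[1, 1, 0] = 1.0, -1.0
B = np.eye(dim)
ad = lambda X: np.einsum('kij,i->kj', C, X)
ad_tau = lambda X: -np.linalg.inv(B) @ ad(X).T @ B

def expm(A, terms=60):
    """Truncated matrix exponential (ample accuracy for this small example)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

def v(t, X, Y, terms=60):
    """Second component of Exp.(t(X, Y)), eq. (exp semid transad)."""
    A, out, power = ad_tau(X), np.zeros_like(Y), np.eye(dim)
    for n in range(1, terms):
        out = out - ((-1.0) ** n * t ** n / math.factorial(n)) * (power @ Y)
        power = power @ A              # next power of ad^tau_X
    return out

rng = np.random.default_rng(2)
X, Y = rng.normal(size=(2, dim))
for t, s in [(0.3, 0.7), (1.1, -0.4)]:
    # one-parameter subgroup property under the product (semidirect prod):
    # v(t+s) = tau_{exp(-sX)} v(t) + v(s), with tau_{exp(-sX)} = exp(-s ad^tau_X)
    assert np.allclose(v(t + s, X, Y), expm(-s * ad_tau(X)) @ v(t, X, Y) + v(s, X, Y))
```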
Manin triple and factorization
------------------------------
The direct sum decompositions of $\mathfrak{g}$ as $\mathfrak{g}=\mathfrak{g}%
_{+}\oplus \mathfrak{g}_{-}$ and $\mathfrak{g}=\mathfrak{g}_{+}^{\perp
}\oplus \mathfrak{g}_{-}^{\perp }$ allow us to decompose the Lie algebra $%
\mathfrak{h}$ as a direct sum $\mathfrak{h}=\mathfrak{h}_{+}\oplus \mathfrak{%
h}_{-}$, where $\mathfrak{h}_{+},\mathfrak{h}_{-}$ are Lie subalgebras of $%
\mathfrak{h}$. In fact, by observing that the subspaces $\mathfrak{g}_{\pm
}^{\bot }$ are $\tau $*-representation spaces* of $G_{\pm }$, we define the semidirect products $$H_{\pm }=G_{\pm }\ltimes _{\tau }\mathfrak{g}_{\pm }^{\bot }$$with $$\left( g_{\pm },X_{\pm }^{\bot }\right) \cdot \left( k_{\pm },Y_{\pm }^{\bot
}\right) :=\left( g_{\pm }k_{\pm },\tau _{k_{\pm }^{-1}}X_{\pm }^{\bot
}+Y_{\pm }^{\bot }\right) ~,$$and the semidirect sum Lie algebras $$\mathfrak{h}_{\pm }=\mathfrak{g}_{\pm }\ltimes _{\tau }\mathfrak{g}_{\pm
}^{\bot }$$with the Lie bracket $$\left[ \left( X_{\pm },Y_{\pm }^{\bot }\right) ,\left( X_{\pm }^{\prime
},Y_{\pm }^{\prime \bot }\right) \right] =\left( \left[ X_{\pm },X_{\pm
}^{\prime }\right] ,\mathrm{ad}_{X_{\pm }}^{\tau }Y_{\pm }^{\prime \bot }-%
\mathrm{ad}_{X_{\pm }^{\prime }}^{\tau }Y_{\pm }^{\bot }\right) \in
\mathfrak{h}_{_{\pm }}~. \label{Lie subalg}$$Then, we have the factorization $$H=H_{+}H_{-}~,$$and the decomposition in direct sum of Lie subalgebras $$\mathfrak{h}=\mathfrak{h}_{+}\oplus \mathfrak{h}_{-}~.$$
In addition, the restriction of the bilinear form $\left( \ref{Lie alg g+g 2}%
\right) $ to the subspaces $\mathfrak{h}_{+},\mathfrak{h}_{-}$ vanishes $$\left( \left( X_{\pm },Y_{\pm }^{\bot }\right) ,\left( X_{\pm }^{\prime
},Y_{\pm }^{\prime \bot }\right) \right) _{\mathfrak{h}}=\left( X_{\pm
},Y_{\pm }^{\prime \bot }\right) _{\mathfrak{g}}+\left( Y_{\pm }^{\bot
},X_{\pm }^{\prime }\right) _{\mathfrak{g}}=0$$meaning that $\mathfrak{h}_{+}$ and $\mathfrak{h}_{-}$ are isotropic subspaces of $\mathfrak{h}$. These results are summarized in the following proposition.
Proposition:
: *The Lie algebras* $\mathfrak{h}$, $\mathfrak{h}%
_{_{+}}$, $\mathfrak{h}_{_{-}}$ *endowed with the bilinear form* $%
\left( \ref{Lie alg g+g 2}\right) $ *compose a Manin triple* $\left(
\mathfrak{h},\mathfrak{h}_{_{+}},\mathfrak{h}_{_{-}}\right) $.
Hence, the Lie subalgebras $\mathfrak{h}_{_{\pm }}=\mathfrak{g}_{\pm }\ltimes _{\tau }\mathfrak{g}_{\pm }^{\bot }$ are Lie bialgebras, and the factors $H_{_{\pm }}=G_{\pm }\ltimes _{\tau }\mathfrak{g}_{\pm }^{\bot }$ are Poisson-Lie groups.
The factorization $H=H_{+}H_{-}$ means that each $\left( g,X\right) \in H$ can be written as $$\left( g,X\right) =\left( g_{+},\tilde{\tau}_{g_{-}}\left( \Pi _{\mathfrak{g}%
_{+}^{\bot }}X\right) \right) \cdot \left( g_{-},\Pi _{\mathfrak{g}%
_{-}^{\bot }}\left( Id-\tau _{g_{-}^{-1}}\tilde{\tau}_{g_{-}}\Pi _{\mathfrak{%
g}_{+}^{\bot }}\right) X\right) \label{semidirect fact 2}$$with $\tilde{\tau}:G_{\mp }\times \mathfrak{g}_{\pm }^{\perp }\rightarrow
\mathfrak{g}_{\pm }^{\perp }$* *defined in $\left( \ref{eq:NewAction}%
\right) $.
Let us denote by $\gamma :\mathfrak{h}\longrightarrow \mathfrak{h}^{\ast }$ the linear bijection induced by the bilinear form $\left( \cdot ,\cdot \right) _{\mathfrak{h}}$; it also produces the linear bijections $\gamma \left( \mathfrak{h}_{_{\pm }}\right) =\mathfrak{h}_{_{\mp }}^{\ast }$.
Dressing action
---------------
Following ref. $\cite{Lu-We}$, we write the Lie bracket in the double Lie algebra $\left( \mathfrak{h},\mathfrak{h}_{+},\mathfrak{h}_{-}\right) $ as $$\left[ \left( X_{-},Y_{-}^{\bot }\right) ,\left( X_{+},Y_{+}^{\bot }\right) %
\right] =\left( X_{+},Y_{+}^{\bot }\right) ^{\left( X_{-},Y_{-}^{\bot
}\right) }+\left( X_{-},Y_{-}^{\bot }\right) ^{\left( X_{+},Y_{+}^{\bot
}\right) }$$with $\left( X_{+},Y_{+}^{\bot }\right) ^{\left( X_{-},Y_{-}^{\bot }\right)
}\in \mathfrak{h}_{+}$ and $\left( X_{+},Y_{+}^{\bot }\right) ^{\left(
X_{-},Y_{-}^{\bot }\right) }\in \mathfrak{h}_{-}$. Therefore, from the Lie algebra structure $\left( \ref{Lie alg g+g 1}\right) $ we get$$\left\{
\begin{array}{l}
\left( X_{-},Y_{-}^{\bot }\right) ^{\left( X_{+},Y_{+}^{\bot }\right)
}=\left( X_{-}^{X_{+}},\Pi _{\mathfrak{g}_{-}^{\bot }}\mathrm{ad}%
_{X_{-}}^{\tau }Y_{+}^{\bot }-\Pi _{\mathfrak{g}_{-}^{\bot }}\mathrm{ad}%
_{X_{+}}^{\tau }Y_{-}^{\bot }\right) \\
\\
\left( X_{+},Y_{+}^{\bot }\right) ^{\left( X_{-},Y_{-}^{\bot }\right)
}=\left( X_{+}^{X_{-}},\Pi _{\mathfrak{g}_{+}^{\bot }}\mathrm{ad}%
_{X_{-}}^{\tau }Y_{+}^{\bot }-\Pi _{\mathfrak{g}_{+}^{\bot }}\mathrm{ad}%
_{X_{+}}^{\tau }Y_{-}^{\bot }\right)%
\end{array}%
\right. \label{Lie alg dress action}$$
Let us consider the factorization for $\left( g,X\right) \in H$ given in $%
\left( \ref{semidirect fact 2}\right) $, and any couple $\left(
g_{+},X_{+}^{\bot }\right) \in H_{+}$, $\left( g_{-},X_{-}^{\bot }\right)
\in H_{-}$. Since the product $\left( g_{-},X_{-}^{\bot }\right) \cdot
\left( g_{+},X_{+}^{\bot }\right) $ is also in $H$, it can be decomposed as above. Then, we write $$\left( g_{-},X_{-}^{\bot }\right) \cdot \left( g_{+},X_{+}^{\bot }\right)
=\left( g_{+},X_{+}^{\bot }\right) ^{\left( g_{-},X_{-}^{\bot }\right)
}\cdot \left( g_{-},X_{-}^{\bot }\right) ^{\left( g_{+},X_{+}^{\bot }\right)
}$$for $\left( g_{+},X_{+}\right) ^{\left( g_{-},X_{-}\right) }\in H_{+}$ and $%
\left( g_{-},X_{-}\right) ^{\left( g_{+},X_{+}\right) }\in H_{-}$ given by $$\left\{
\begin{array}{l}
\left( g_{+},X_{+}^{\bot }\right) ^{\left( g_{-},X_{-}^{\bot }\right)
}=\left( g_{+}^{g_{-}},\tilde{\tau}_{g_{-}^{g_{+}}}\left( \Pi _{\mathfrak{g}%
_{+}^{\bot }}\tau _{g_{+}^{-1}}X_{-}^{\bot }+X_{+}^{\bot }\right) \right) \\
\\
\left( g_{-},X_{-}^{\bot }\right) ^{\left( g_{+},X_{+}^{\bot }\right) } \\
\qquad =\left( g_{-}^{g_{+}},\Pi _{\mathfrak{g}_{-}^{\bot }}\tau
_{g_{+}^{-1}}X_{-}^{\bot }-\Pi _{\mathfrak{g}_{-}^{\bot }}\tau _{\left(
g_{-}^{g_{+}}\right) ^{-1}}\tilde{\tau}_{g_{-}^{g_{+}}}\left( X_{+}^{\bot
}+\Pi _{\mathfrak{g}_{+}^{\bot }}\tau _{g_{+}^{-1}}X_{-}^{\bot }\right)
\right)%
\end{array}%
\right.$$The assignments $H_{-}\times H_{+}\longrightarrow H_{+}$ and $H_{+}\times H_{-}\longrightarrow H_{-}$ defined by the above relations are indeed *actions*, the so-called *dressing actions* (see [@STS] and $\cite{Lu-We}$). In fact, $H_{-}\times H_{+}\longrightarrow H_{+}$ such that $%
\left( \left( g_{-},X_{-}^{\bot }\right) ,\left( g_{+},X_{+}^{\bot }\right)
\right) \longmapsto \left( g_{+},X_{+}^{\bot }\right) ^{\left(
g_{-},X_{-}^{\bot }\right) }$ is a *right action* of $H_{-}$ on $H_{+}$ and, reciprocally, $\left( \left( g_{+},X_{+}^{\bot }\right) ,\left(
g_{-},X_{-}^{\bot }\right) \right) \longmapsto \left( g_{-},X_{-}^{\bot
}\right) ^{\left( g_{+},X_{+}^{\bot }\right) }$ is a *left action* of $%
H_{+}$ on $H_{-}$.
The infinitesimal generators of the dressing actions can be derived by considering the action of exponential elements $\left( g_{-},X_{-}^{\bot
}\right) =\mathrm{Exp}^{\cdot }\left( t\left( X_{-},Y_{-}^{\bot }\right)
\right) $ and $\left( g_{+},X_{+}^{\bot }\right) =\mathrm{Exp}^{\cdot
}\left( t\left( X_{+},Y_{+}^{\bot }\right) \right) $ to get$$\left\{
\begin{array}{l}
\left( g_{+},X_{+}^{\bot }\right) ^{\left( X_{-},Y_{-}^{\bot }\right)
}=\left( g_{+}^{X_{-}},\Pi _{\mathfrak{g}_{+}^{\bot }}\mathrm{ad}%
_{X_{-}^{g_{+}}}^{\tau }X_{+}^{\bot }+\Pi _{\mathfrak{g}_{+}^{\bot }}\tau
_{g_{+}^{-1}}Y_{-}^{\bot }\right) \\
\\
\left( g_{-},X_{-}^{\bot }\right) ^{\left( X_{+},Y_{+}^{\bot }\right) } \\
\qquad =\left( g_{-}^{X_{+}},\Pi _{\mathfrak{g}_{-}^{\bot }}\tau
_{g_{-}^{-1}}\tilde{\tau}_{g_{-}}\left( \Pi _{\mathfrak{g}_{+}^{\bot }}%
\mathrm{ad}_{X_{+}}^{\tau }X_{-}^{\bot }-Y_{+}^{\bot }\right) -\Pi _{%
\mathfrak{g}_{-}^{\bot }}\mathrm{ad}_{X_{+}}^{\tau }X_{-}^{\bot }\right)%
\end{array}%
\right. \label{inf gen dres transad}$$and taking the derivative at $t=0$, we recover the result obtained in $%
\left( \ref{Lie alg dress action}\right) $.
Also one may show that $$\left\{
\begin{array}{l}
\left( X_{+},X_{+}^{\bot }\right) ^{\left( g_{-},X_{-}^{\bot }\right)
}=\left( X_{+}^{g_{-}},\tilde{\tau}_{g_{-}}X_{+}^{\bot }-\tilde{\tau}%
_{g_{-}}\Pi _{\mathfrak{g}_{+}^{\bot }}\mathrm{ad}_{X_{+}}^{\tau
}X_{-}^{\bot }\right) \\
\\
\left( X_{-},X_{-}^{\bot }\right) ^{\left( g_{+},X_{+}^{\bot }\right)
}=\left( X_{-}^{g_{+}},\tilde{\tau}_{g_{+}^{-1}}X_{-}^{\bot }+\Pi _{%
\mathfrak{g}_{-}^{\bot }}\mathrm{ad}_{X_{-}^{g_{+}}}^{\tau }X_{+}^{\bot
}\right)%
\end{array}%
\right.$$which are relevant for the explicit form of the crossed adjoint actions
$$\left\{
\begin{array}{l}
\mathrm{Ad}_{\left( g_{+},X_{+}^{\bot }\right) ^{-1}}^{H}\left(
X_{-},Y_{-}^{\bot }\right) =\left( g_{+},X_{+}^{\bot }\right) ^{-1}\left(
g_{+},X_{+}^{\bot }\right) ^{\left( X_{-},Y_{-}^{\bot }\right) } \\
\qquad \qquad \qquad \qquad \qquad +\left( X_{-},Y_{-}^{\bot }\right)
^{\left( g_{+},X_{+}^{\bot }\right) } \\
\\
\mathrm{Ad}_{\left( g_{-},X_{-}^{\bot }\right) }^{H}\left( X_{+},Y_{+}^{\bot
}\right) =\left( g_{-},X_{-}^{\bot }\right) ^{\left( X_{+},Y_{+}^{\bot
}\right) }\left( g_{-},X_{-}^{\bot }\right) ^{-1} \\
\qquad \qquad \qquad \qquad \qquad +\left( X_{+},Y_{+}^{\bot }\right)
^{\left( g_{-},X_{-}^{\bot }\right) }%
\end{array}%
\right.$$
that are equivalent to $$\left\{
\begin{array}{l}
\mathrm{Ad}_{\left( g_{+},X_{+}^{\bot }\right) ^{-1}}^{H}\left(
X_{-},Y_{-}^{\bot }\right) =\left( \mathrm{Ad}_{g_{+}^{-1}}^{G}X_{-},\mathrm{%
ad}_{\mathrm{Ad}_{g_{+}^{-1}}^{G}X_{-}}^{\tau }X_{+}^{\bot }+\tau
_{g_{+}^{-1}}Y_{-}^{\bot }\right) \\
\\
\mathrm{Ad}_{\left( g_{-},X_{-}^{\bot }\right) }^{H}\left( X_{+},Y_{+}^{\bot
}\right) =\left( \mathrm{Ad}_{g_{-}}^{G}X_{+},-\mathrm{ad}_{\mathrm{Ad}%
_{g_{-}}^{G}X_{+}}^{\tau }\tau _{g_{-}}X_{-}^{\bot }+\tau
_{g_{-}}Y_{+}^{\bot }\right)%
\end{array}%
\right. \label{crossed adj actions}$$With these expressions we are ready to obtain the Poisson-Lie bivector on $H_{\pm }$.
Lie bialgebra and Poisson-Lie structures
----------------------------------------
Let us now work out the Lie bialgebra structures on $\mathfrak{h}_{_{\pm }}=\mathfrak{g}_{\pm }\oplus \mathfrak{g}_{\pm }^{\bot }$, and the associated Poisson-Lie group structures on $H_{_{\pm }}=G_{\pm }\ltimes _{\tau }\mathfrak{g}_{\pm }^{\bot }$. The Lie brackets on these semidirect sums were defined in $\left( %
\ref{Lie subalg}\right) $ and, in order to define the *Lie cobracket* $%
\delta :\mathfrak{h}_{_{\pm }}\longrightarrow \mathfrak{h}_{_{\pm }}\otimes
\mathfrak{h}_{_{\pm }}$, we introduce a bilinear form on $\mathfrak{h}%
\otimes \mathfrak{h}$ from the bilinear form $\left( \ref{Lie alg g+g 2}%
\right) $ as$$\left( X\otimes Y,U\otimes V\right) _{\mathfrak{h}\otimes \mathfrak{h}%
}:=\left( X,U\right) _{\mathfrak{h}}\left( Y,V\right) _{\mathfrak{h}}$$$\forall X,Y,U,V\in \mathfrak{h}$. Thus, the Lie cobracket $\delta $ arises from the relation $$\begin{aligned}
&&\left( \left( X_{\mp }^{\prime },Y_{\mp }^{\prime \bot }\right) \otimes
\left( X_{\mp }^{\prime \prime },Y_{\mp }^{\prime \prime \bot }\right)
,\delta \left( X_{\pm },Y_{\pm }^{\bot }\right) \right) _{\mathfrak{h}%
\otimes \mathfrak{h}} \\
&=&\left( \left[ \left( X_{\mp }^{\prime },Y_{\mp }^{\prime \bot }\right)
,\left( X_{\mp }^{\prime \prime },Y_{\mp }^{\prime \prime \bot }\right) %
\right] ,\left( X_{\pm },Y_{\pm }^{\bot }\right) \right) _{\mathfrak{h}}\end{aligned}$$that in $2\times 2$ block matrix form means $$\delta \left( X_{\pm },Y_{\pm }^{\bot }\right) =\left(
\begin{array}{cc}
\delta \left( Y_{\pm }^{\bot }\right) & \tau ^{\ast }\left( X_{\pm }\right)
\\
-\tau ^{\ast }\left( X_{\pm }\right) & 0%
\end{array}%
\right)$$where $\delta :\mathfrak{g}_{\pm }^{\bot }\longrightarrow \mathfrak{g}_{\pm
}^{\bot }\otimes \mathfrak{g}_{\pm }^{\bot }$ such that$$\left( X_{\mp }^{\prime }\otimes X_{\mp }^{\prime \prime },\delta \left(
Y_{\pm }^{\bot }\right) \right) _{\mathfrak{g}}:=\left( \left[ X_{\mp
}^{\prime },X_{\mp }^{\prime \prime }\right] ,Y_{\pm }^{\bot }\right) _{%
\mathfrak{g}}$$and $\tau ^{\ast }:\mathfrak{g}_{\pm }\longrightarrow \mathfrak{g}_{\pm
}^{\bot }\otimes \mathfrak{g}_{\pm }$ is$$\left( X_{\mp }^{\prime }\otimes Y_{\mp }^{\prime \prime \bot },\tau ^{\ast
}\left( X_{\pm }\right) \right) _{\mathfrak{g}}:=\left( \mathrm{ad}_{X_{\mp
}^{\prime }}^{\tau }Y_{\mp }^{\prime \prime \bot },X_{\pm }\right) _{%
\mathfrak{g}}$$
### **The Poisson-Lie bivector on** $\mathit{H}_{\mathit{+}}$
The Poisson-Lie bivector $\pi _{+}\in \Gamma \left( T^{\otimes
2}H_{+}\right) $, the sections of the vector bundle $T^{\otimes
2}H_{+}\longrightarrow H_{+}$, is defined by the relation $$\begin{aligned}
&&\left\langle \gamma \left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right)
\left( g_{+},Y_{+}^{\bot }\right) ^{-1}\otimes \gamma \left( X_{-}^{\prime
\prime },Y_{-}^{\prime \prime \bot }\right) \left( g_{+},Y_{+}^{\bot
}\right) ^{-1},\pi _{+}\left( g_{+},Z_{+}^{\bot }\right) \right\rangle
\notag \\
&=&\left( \Pi _{-}\mathrm{Ad}_{\left( g_{+},Z_{+}^{\bot }\right)
^{-1}}^{H}\left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right) ,\Pi _{+}%
\mathrm{Ad}_{\left( g_{+},Z_{+}^{\bot }\right) ^{-1}}^{H}\left(
X_{-}^{\prime \prime },Y_{-}^{\prime \prime \bot }\right) \right) _{%
\mathfrak{h}} \label{PL bivector h+ 1}\end{aligned}$$In order to simplify the notation we introduce the projectors $\mathbb{A}%
_{\pm }^{G}\left( g\right) $, defined as$$\mathbb{A}_{\pm }^{G}\left( g\right) :=\mathrm{Ad}_{g^{-1}}^{G}\Pi _{%
\mathfrak{g}_{\pm }}\mathrm{Ad}_{g}^{G}$$such that $$\left\{
\begin{array}{l}
\mathbb{A}_{\pm }^{G}\left( g\right) \mathbb{A}_{\pm }^{G}\left( g\right) =%
\mathbb{A}_{\pm }^{G}\left( g\right) \\
\mathbb{A}_{\mp }^{G}\left( g\right) \mathbb{A}_{\pm }^{G}\left( g\right) =0
\\
\mathbb{A}_{+}^{G}\left( g\right) +\mathbb{A}_{-}^{G}\left( g\right) =Id%
\end{array}%
\right.$$which will be used in the following.
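These identities hold simply because $\mathbb{A}_{\pm }^{G}\left( g\right) $ are the complementary projectors $\Pi _{\mathfrak{g}_{\pm }}$ conjugated by the invertible map $\mathrm{Ad}_{g}^{G}$; a minimal numerical illustration, where a random invertible matrix stands in for $\mathrm{Ad}_{g}^{G}$ and a coordinate splitting stands in for $\mathfrak{g}=\mathfrak{g}_{+}\oplus \mathfrak{g}_{-}$ (both assumptions made only for the sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 2                                     # dim g = 5, dim g_+ = 2 (toy choice)
Ad_g = rng.normal(size=(n, n))                  # stands in for Ad^G_g (invertible almost surely)
P_plus = np.diag([1.0] * k + [0.0] * (n - k))   # projector onto g_+
P_minus = np.eye(n) - P_plus                    # projector onto g_-

A_plus = np.linalg.inv(Ad_g) @ P_plus @ Ad_g    # A^G_+(g)
A_minus = np.linalg.inv(Ad_g) @ P_minus @ Ad_g  # A^G_-(g)

assert np.allclose(A_plus @ A_plus, A_plus)               # idempotent
assert np.allclose(A_minus @ A_plus, np.zeros((n, n)))    # mutually annihilating
assert np.allclose(A_plus + A_minus, np.eye(n))           # resolution of the identity
```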
By using the expressions obtained in eq. $\left( \ref{crossed adj actions}%
\right) $, it takes the explicit form $$\begin{aligned}
&&\left\langle \gamma \left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right)
\left( g_{+},Z_{+}^{\bot }\right) ^{-1}\otimes \gamma \left( X_{-}^{\prime
\prime },Y_{-}^{\prime \prime \bot }\right) \left( g_{+},Z_{+}^{\bot
}\right) ^{-1},\pi _{+}\left( g_{+},Z_{+}^{\bot }\right) \right\rangle
\label{PL bivector h+ 2} \\
&=&\left( \left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right) ,\left( -%
\mathbb{A}_{-}^{G}\left( g_{+}^{-1}\right) X_{-}^{\prime \prime },\tau
_{g_{+}}\Pi _{\mathfrak{g}_{+}^{\bot }}\left( \tau
_{g_{+}^{-1}}Y_{-}^{\prime \prime \bot }+\mathrm{ad}_{\Pi _{\mathfrak{g}_{-}}%
\mathrm{Ad}_{g_{+}^{-1}}^{G}X_{-}^{\prime \prime }}^{\tau }Z_{+}^{\bot
}\right) \right) \right) _{\mathfrak{h}} \notag\end{aligned}$$Introducing the linear operator $\pi _{+\left( g_{+},Z_{+}^{\bot }\right)
}^{R}:\mathfrak{h}_{-}\longrightarrow \mathfrak{h}_{+}$ such that$$\begin{aligned}
&&\left\langle \gamma \left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right)
\left( g_{+},Z_{+}^{\bot }\right) ^{-1}\otimes \gamma \left( X_{-}^{\prime
\prime },Y_{-}^{\prime \prime \bot }\right) \left( g_{+},Z_{+}^{\bot
}\right) ^{-1},\pi _{+}\left( g_{+},Z_{+}^{\bot }\right) \right\rangle \\
&=&\left( \left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right) ,\pi _{+\left(
g_{+},Z_{+}^{\bot }\right) }^{R}\left( X_{-}^{\prime \prime },Y_{-}^{\prime
\prime \bot }\right) \right) _{\mathfrak{h}}\end{aligned}$$we get, in components in $\mathfrak{g}_{+}\oplus \mathfrak{g}_{+}^{\bot }$, the operator block matrix $$\pi _{+\left( g_{+},Z_{+}^{\bot }\right) }^{R}\left(
\begin{array}{c}
X_{-}^{\prime \prime } \\
Y_{-}^{\prime \prime \bot }%
\end{array}%
\right) =\left(
\begin{array}{cc}
\mathbb{A}_{+}^{G}\left( g_{+}^{-1}\right) & 0 \\
\tau _{g_{+}}\Pi _{+}^{\bot }\tilde{\varphi}\left( Z_{+}^{\bot }\right) \Pi
_{-}\mathrm{Ad}_{g_{+}^{-1}}^{G} & \tau _{g_{+}}\Pi _{+}^{\bot }\tau
_{g_{+}^{-1}}%
\end{array}%
\right) \left(
\begin{array}{c}
X_{-}^{\prime \prime } \\
Y_{-}^{\prime \prime \bot }%
\end{array}%
\right)$$Here we introduced the map $\tilde{\varphi}:\mathfrak{g}\longrightarrow
End_{vec}\left( \mathfrak{g}\right) $ such that$$\tilde{\varphi}\left( Z_{+}^{\bot }\right) X_{-}^{\prime \prime }:=\mathrm{ad%
}_{X_{-}^{\prime \prime }}^{\tau }Z_{+}^{\bot } \label{phi orlado}$$
For a pair of functions $\mathcal{F},\mathcal{H}$ on $H_{+}=G_{+}\ltimes \mathfrak{g}_{+}^{\bot }$, with the Poisson bracket defined in terms of the bivector $\pi _{+}$ as$$\left\{ \mathcal{F},\mathcal{H}\right\} _{PL}\left( g_{+},Z_{+}^{\bot
}\right) =\left\langle d\mathcal{F}\wedge d\mathcal{H},\pi _{+}\left(
g_{+},Z_{+}^{\bot }\right) \right\rangle$$we use the expression $\left( \ref{PL bivector h+ 2}\right) $ to obtain $$\begin{aligned}
&&\left\{ \mathcal{F},\mathcal{H}\right\} _{PL}\left( g_{+},Z_{+}^{\bot
}\right) \\
&=&\left\langle g_{+}\mathbf{d}\mathcal{H},\bar{\psi}\left( \delta \mathcal{F%
}\right) \right\rangle -\left\langle g_{+}\mathbf{d}\mathcal{F},\bar{\psi}%
\left( \delta \mathcal{H}\right) \right\rangle +\left\langle \psi \left(
Z_{+}^{\bot }\right) ,\left[ \bar{\psi}\left( \delta \mathcal{F}\right) ,%
\bar{\psi}\left( \delta \mathcal{H}\right) \right] \right\rangle\end{aligned}$$for $d\mathcal{F}=\left( \mathbf{d}\mathcal{F},\delta \mathcal{F}\right) \in
T^{\ast }H_{+}$.
Also, from this Poisson-Lie bivector we can retrieve the infinitesimal generator of the dressing action $\left( \ref{inf gen dres transad}\right) $ by the relation$$\left( g_{+},Z_{+}^{\bot }\right) ^{\left( X_{-},Y_{-}^{\bot }\right)
}=\left( R_{\left( g_{+},Z_{+}^{\bot }\right) }\right) _{\ast }\left( \pi
_{+\left( g_{+},Z_{+}^{\bot }\right) }^{R}\left( X_{-},Y_{-}^{\bot }\right)
\right) \label{PL bivector h+ 7}$$It is worth recalling that the symplectic leaves of the Poisson-Lie structure are the orbits of the dressing actions, i.e., the integral submanifolds of the dressing vector fields, and that $\left( e,0\right) \in H_{+}$ is a one-point orbit.
### **The Poisson-Lie bivector on** $\mathit{H}_{\mathit{-}}$.
Analogously to the previous definition, the Poisson-Lie bivector $\pi
_{-}\in \Gamma \left( T^{\otimes 2}H_{-}\right) $ is defined by the relation$$\begin{aligned}
&&\left\langle \left( g_{-},Z_{-}^{\bot }\right) ^{-1}\gamma \left(
X_{+}^{\prime },Y_{+}^{\prime \bot }\right) \otimes \left( g_{-},Z_{-}^{\bot
}\right) ^{-1}\gamma \left( X_{+}^{\prime \prime },Y_{+}^{\prime \prime \bot
}\right) ,\pi _{-}\left( g_{-},Z_{-}^{\bot }\right) \right\rangle \notag \\
&=&\left( \Pi _{-}\mathrm{Ad}_{\left( g_{-},Z_{-}^{\bot }\right) }^{H}\left(
X_{+}^{\prime },Y_{+}^{\prime \bot }\right) ,\Pi _{+}\mathrm{Ad}_{\left(
g_{-},Z_{-}^{\bot }\right) }^{H}\left( X_{+}^{\prime \prime },Y_{+}^{\prime
\prime \bot }\right) \right) _{\mathfrak{h}} \label{PL bivector h- 1}\end{aligned}$$From the relation $\left( \ref{crossed adj actions}\right) $, we get $$\begin{aligned}
&&\left\langle \left( g_{-},Z_{-}^{\bot }\right) ^{-1}\gamma \left(
X_{+}^{\prime },Y_{+}^{\prime \bot }\right) \otimes \left( g_{-},Z_{-}^{\bot
}\right) ^{-1}\gamma \left( X_{+}^{\prime \prime },Y_{+}^{\prime \prime \bot
}\right) ,\pi _{-}\left( g_{-},Z_{-}^{\bot }\right) \right\rangle \\
&=&\left( \mathbb{A}_{+}^{G}\left( g_{-}\right) X_{+}^{\prime
},Y_{+}^{\prime \prime \bot }\right) _{\mathfrak{g}}+\left( Y_{+}^{\prime
\bot },\mathbb{A}_{-}^{G}\left( g_{-}\right) X_{+}^{\prime \prime }\right) _{%
\mathfrak{g}}-\left( Z_{-}^{\bot },\left[ X_{+}^{\prime },X_{+}^{\prime
\prime }\right] \right) _{\mathfrak{g}} \\
&&+\left( Z_{-}^{\bot },\left[ X_{+}^{\prime },\mathbb{A}_{-}^{G}\left(
g_{-}\right) X_{+}^{\prime \prime }\right] \right) _{\mathfrak{g}}+\left(
Z_{-}^{\bot },\left[ \mathbb{A}_{-}^{G}\left( g_{-}\right) X_{+}^{\prime
},X_{+}^{\prime \prime }\right] \right) _{\mathfrak{g}}\end{aligned}$$The PL bracket for functions $\mathcal{F},\mathcal{H}$ on $%
H_{-}=G_{-}\ltimes \mathfrak{g}_{-}^{\bot }$ is obtained by identifying the differentials $d\mathcal{F},d\mathcal{H}$ with $\left( g_{-},Z_{-}^{\bot
}\right) ^{-1}\gamma \left( X_{+}^{\prime },Y_{+}^{\prime \bot }\right) $, $%
\left( g_{-},Z_{-}^{\bot }\right) ^{-1}\gamma \left( X_{+}^{\prime \prime
},Y_{+}^{\prime \prime \bot }\right) $ respectively. Then, making explicit the left translation we obtain$$\begin{aligned}
\left\{ \mathcal{F},\mathcal{H}\right\} _{PL}\left( g_{-},Z_{-}^{\bot
}\right) &=&-\left\langle g_{-}\mathbf{d}\mathcal{H},\mathbb{A}%
_{-}^{G}\left( g_{-}\right) \bar{\psi}\left( \delta \mathcal{F}\right)
\right\rangle \\
&&+\left\langle g_{-}\mathbf{d}\mathcal{F},\mathbb{A}_{-}^{G}\left(
g_{-}\right) \bar{\psi}\left( \delta \mathcal{H}\right) \right\rangle \\
&&-\left\langle \psi \left( Z_{-}^{\bot }\right) ,\left[ \bar{\psi}\left(
\delta \mathcal{F}\right) ,\bar{\psi}\left( \delta \mathcal{H}\right) \right]
\right\rangle\end{aligned}$$Alternatively, we may write the PL bivector in terms of the associated linear operator $\pi _{-\left( g_{-},Z_{-}^{\bot }\right)
_{-}^{\ast }\longrightarrow \mathfrak{h}_{-}$ defined from$$\begin{aligned}
&&\left\langle \left( g_{-},Z_{-}^{\bot }\right) ^{-1}\gamma \left(
X_{+}^{\prime },Y_{+}^{\prime \bot }\right) \otimes \left( g_{-},Z_{-}^{\bot
}\right) ^{-1}\gamma \left( X_{+}^{\prime \prime },Y_{+}^{\prime \prime \bot
}\right) ,\pi _{-}\left( g_{-},Z_{-}^{\bot }\right) \right\rangle \\
&=&\left( \left( X_{+}^{\prime },Y_{+}^{\prime \bot }\right) ,\pi _{-\left(
g_{-},Z_{-}^{\bot }\right) }^{L}\left( X_{+}^{\prime \prime },Y_{+}^{\prime
\prime \bot }\right) \right) _{\mathfrak{h}}\end{aligned}$$we get, in terms of components in the direct sum, that$$\begin{aligned}
&&\medskip \pi _{-\left( g_{-},Z_{-}^{\bot }\right) }^{L}\left( \gamma
\left(
\begin{array}{c}
X_{+}^{\prime \prime } \\
Y_{+}^{\prime \prime \bot }%
\end{array}%
\right) \right) \\
&=&\left(
\begin{array}{cc}
\Pi _{-}\left( \mathbb{A}_{+}^{G}\left( g_{-}\right) \right) & 0 \\
\tilde{\varphi}\left( Z_{-}^{\bot }\right) \mathbb{A}_{+}^{G}\left(
g_{-}\right) -\tau _{g_{-}^{-1}}\Pi _{+}^{\bot }\tau _{g_{-}}\tilde{\varphi}%
\left( Z_{-}^{\bot }\right) & \tau _{g_{+}}\Pi _{+}^{\bot }\tau _{g_{+}^{-1}}%
\end{array}%
\right) \left(
\begin{array}{c}
X_{+}^{\prime \prime } \\
Y_{+}^{\prime \prime \bot }%
\end{array}%
\right)\end{aligned}$$where we used $\tilde{\varphi}:\mathfrak{g}\longrightarrow End_{vec}\left(
\mathfrak{g}\right) $ introduced in $\left( \ref{phi orlado}\right) $, such that$$\tilde{\varphi}\left( Z_{-}^{\bot }\right) X_{+}:=\mathrm{ad}_{X_{+}}^{\tau
}Z_{-}^{\bot }$$The infinitesimal generators of the dressing action are then$$\left( g_{-},Z_{-}^{\bot }\right) ^{\left( X_{+},Y_{+}^{\bot }\right)
}=\left( \pi _{-\left( e,0\right) }^{L}\left( \gamma \left(
X_{+},Y_{+}^{\bot }\right) \right) \right) \left( g_{-},Z_{-}^{\bot }\right)
\label{PL bivector h- 2}$$retrieving the result obtained in $\left( \ref{inf gen dres transad}\right) $.
Integrability on $\mathfrak{g}$ from AKS on $\mathfrak{h}$
==========================================================
Although AKS integrable system theory also works in the absence of an Ad-invariant bilinear form $\cite{Ovando 1},\cite{Ovando 2}$, the $\mathrm{Ad}^{H}$-invariant bilinear form $\left( \ref{Lie alg g+g 2}\right) $ on the semidirect product $\mathfrak{h}=\mathfrak{g}\ltimes _{\tau }\mathfrak{g}$ brings us back to the standard framework of AKS theory. We now briefly review it in the context of the semidirect product $\mathfrak{g}\ltimes _{\tau }%
\mathfrak{g}$.
Let us now consider $\mathfrak{h}^{\ast }$ equipped with the Lie-Poisson structure. Then, turning the linear isomorphism $\mathfrak{h}\overset{\gamma }{\longrightarrow }\mathfrak{h}^{\ast }$ into a Poisson map, we define a Poisson structure on $\mathfrak{h}$ such that, for any pair of functions $\mathsf{F},\mathsf{H}:\mathfrak{h}\longrightarrow \mathbb{R}$, $$\left\{ \mathsf{F},\mathsf{H}\right\} _{\mathfrak{h}}\left( X\right) :=\left\{ \mathsf{F}\circ \gamma ^{-1},\mathsf{H}\circ \gamma ^{-1}\right\} _{\mathfrak{h}^{\ast }}\left( \gamma \left( X\right) \right) =\left( X,\left[
\mathfrak{L}_{\mathsf{F}}(X),\mathfrak{L}_{\mathsf{H}}(X)\right] \right) _{%
\mathfrak{h}}$$where the Legendre transform $\mathfrak{L}_{\mathsf{F}}:\mathfrak{h}%
\longrightarrow \mathfrak{h}$ is defined as$$\left( \mathfrak{L}_{\mathsf{F}}\left( X\right) ,Y\right) _{\mathfrak{h}%
}=\left. \frac{d\mathsf{F}\left( X+tY\right) }{dt}\right\vert _{t=0}$$
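In coordinates, if $M$ denotes the Gram matrix of $\left( \cdot ,\cdot \right) _{\mathfrak{h}}$ in some basis, the definition says $M\,\mathfrak{L}_{\mathsf{F}}\left( X\right) =\nabla \mathsf{F}\left( X\right) $, i.e. $\mathfrak{L}_{\mathsf{F}}$ is the gradient of $\mathsf{F}$ with the index raised by the bilinear form. A small numerical illustration (the quadratic test function and the random nondegenerate symmetric form are our own choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 6                                   # dim h (toy choice)
M = rng.normal(size=(n, n))
M = M + M.T                             # symmetric, generically nondegenerate Gram matrix of ( , )_h
Q = rng.normal(size=(n, n))
Q = Q + Q.T
F = lambda X: 0.5 * X @ Q @ X           # a quadratic test function on h
gradF = lambda X: Q @ X

def legendre_F(X):
    """L_F(X), defined by (L_F(X), Y)_h = d/dt F(X + tY)|_{t=0}."""
    return np.linalg.solve(M, gradF(X))

X, Y = rng.normal(size=(2, n))
lhs = legendre_F(X) @ M @ Y                              # (L_F(X), Y)_h
eps = 1e-6
rhs = (F(X + eps * Y) - F(X - eps * Y)) / (2 * eps)      # numerical directional derivative
assert abs(lhs - rhs) < 1e-6
```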
Let us now consider a function $\mathsf{F}:\mathfrak{h}\longrightarrow
\mathbb{R}$, and let $\mathcal{F}:=\mathsf{F}\circ \imath _{K_{-}}$ with $%
\imath _{K_{-}}:\mathfrak{h}_{+}\longrightarrow \mathfrak{h}$, for some $%
K_{-}\in \mathfrak{h}_{-}$, such that$$\imath _{K_{-}}(X_{+})=X_{+}+K_{-}$$The Legendre transform $\mathfrak{L}_{\mathcal{F}}:\mathfrak{h}_{+}\longrightarrow \mathfrak{h}_{-}$ is related to $\mathfrak{L}_{\mathsf{F}}$ as $$\mathfrak{L}_{\mathcal{F}}(X_{+})=\Pi _{\mathfrak{h}_{-}}\mathfrak{L}_{\mathsf{F}}\left( X_{+}+K_{-}\right) .$$ Therefore, the Lie-Poisson bracket on $\mathfrak{h}_{+}$ of $\mathcal{F},%
\mathcal{H}:\mathfrak{h}_{+}\longrightarrow \mathbb{R}$ coincides with that of $\mathsf{F},\mathsf{H}:\mathfrak{h}\longrightarrow \mathbb{R}$ restricted to $\mathfrak{h}_{+}$ $$\begin{aligned}
\left\{ \mathcal{F},\mathcal{H}\right\} _{\mathfrak{h}_{+}}\left(
X_{+}\right) &=&\left( X_{+},\left[ \mathfrak{L}_{\mathcal{F}}(X_{+}),%
\mathfrak{L}_{\mathcal{H}}(X_{+})\right] _{-}\right) _{\mathfrak{h}} \\
&=&\left\langle d\mathsf{F},\Pi _{\mathfrak{h}_{+}}\mathrm{ad}_{\Pi _{%
\mathfrak{h}_{-}}\mathfrak{L}_{\mathsf{H}}\left( X_{+}+K_{-}\right) }^{%
\mathfrak{h}}\left( X_{+}+K_{-}\right) \right\rangle\end{aligned}$$where $\Pi _{\mathfrak{h}_{+}}\mathrm{ad}_{\Pi _{\mathfrak{h}_{-}}\mathfrak{L%
}_{\mathsf{H}}\left( X_{+}+K_{-}\right) }^{\mathfrak{h}}\left(
X_{+}+K_{-}\right) $ is the projection of the hamiltonian vector field of $%
\mathsf{H}$ on $\mathfrak{h}_{+}$. Then $$\begin{aligned}
\left\{ \mathcal{F},\mathcal{H}\right\} _{\mathfrak{h}_{+}}\left(
X_{+}\right) &=&\left. \left\{ \mathsf{F},\mathsf{H}\right\} _{\mathfrak{h}%
}\right\vert _{+}\left( X_{+}+K_{-}\right) \label{restr pb h+} \\
&=&\left( X_{+},\left[ \Pi _{\mathfrak{h}_{-}}\mathfrak{L}_{\mathsf{F}%
}(X_{+}+K_{-}),\Pi _{\mathfrak{h}_{-}}\mathfrak{L}_{\mathsf{H}}\left(
X_{+}+K_{-}\right) \right] _{-}\right) _{\mathfrak{h}} \notag\end{aligned}$$
Let $\mathsf{F}$ be an $\mathrm{Ad}^{H}$-invariant function on $\mathfrak{h}$; then the following relations hold: $\forall h\in H$,$$\mathrm{Ad}_{h}^{H}\mathfrak{L}_{\mathsf{F}}(X)=\mathfrak{L}_{\mathsf{F}}(\mathrm{Ad}_{h}^{H}X)$$and $\forall X\in \mathfrak{h}$,$$\begin{array}{ccc}
\mathrm{ad}_{\mathfrak{L}_{\mathsf{F}}(X)}^{\mathfrak{h}}X=0 &
\Longrightarrow & \mathrm{ad}_{\Pi _{\mathfrak{h}_{-}}\mathfrak{L}_{\mathsf{F%
}}(X)}^{\mathfrak{h}}X=-\mathrm{ad}_{\Pi _{\mathfrak{h}_{+}}\mathfrak{L}_{%
\mathsf{F}}(X)}^{\mathfrak{h}}X%
\end{array}%$$
By using these results, it is easy to show the following statement.
AKS Theorem:
: *Let* $\gamma \left( K_{-}\right) $* be a character of* $\mathfrak{h}_{+}$. *Then, the restriction to the immersed submanifold* $\imath _{K_{-}}:\mathfrak{h}_{+}\longrightarrow
\mathfrak{h}$* of* $\mathrm{Ad}^{H}$*-invariant functions on* $\mathfrak{h}$ *gives rise to a nontrivial abelian Poisson algebra.*
Remark:
: *The condition* $\gamma \left( K_{-}\right) \in
\mathrm{char~}\mathfrak{h}_{+}$ *means,* $\forall X_{+}\in \mathfrak{h%
}_{+}$,$$ad_{X_{+}}^{\ast }\gamma \left( K_{-}\right) =0\Longleftrightarrow \Pi _{%
\mathfrak{h}_{-}}\mathrm{ad}_{X_{+}}^{\mathfrak{h}}K_{-}=0$$*This condition in terms of components in* $\mathfrak{g}\oplus
\mathfrak{g}$* implies* $$\begin{cases}
\Pi _{\mathfrak{h}_{-}}\left[ \Pi _{1}X_{+},\Pi _{1}K_{-}\right] _{\mathfrak{%
g}}=0 \\
\Pi _{\mathfrak{h}_{-}}^{\bot }\mathrm{ad}_{\Pi _{1}X_{+}}^{\tau }\Pi
_{2}K_{-}=0%
\end{cases}%$$*where* $\Pi _{i}$, $i=1,2$*, stand for the projectors on the first and second component in the semidirect sum* $\mathfrak{g}\ltimes
\mathfrak{g}$*, respectively*.
The dynamics on these submanifolds is governed by the hamiltonian vector fields arising from the restriction process, as described in the following proposition.
Proposition:
: *The hamiltonian vector field associated to the Poisson bracket on* $\mathfrak{h}_{+}$*, eq.* $\left( \ref{restr pb
h+}\right) $*,* *is*$$\begin{aligned}
V_{\mathsf{H}}\left( X_{+}+K_{-}\right) &=&\Pi _{\mathfrak{h}_{+}}\mathrm{ad}%
_{\Pi _{\mathfrak{h}_{-}}\mathfrak{L}_{\mathsf{H}}\left( X_{+}+K_{-}\right)
}^{\mathfrak{h}}\left( X_{+}+K_{-}\right) \\
&=&-\mathrm{ad}_{\Pi _{\mathfrak{h}_{+}}\mathfrak{L}_{\mathsf{H}}(X_{+}+K_{-})}^{\mathfrak{h}}\left( X_{+}+K_{-}\right)\end{aligned}$$*for* $X_{+}\in \mathfrak{h}_{+}$.
The integral curves of this hamiltonian vector field, which is in fact a *dressing vector field*, are obtained through the adjoint action of a particular curve in $H$, as explained in the following proposition.
Proposition:
: *The Hamilton equations of motion* on $\imath _{K_{-}}\left( \mathfrak{h}_{+}\right) \subset \mathfrak{h}$ *are*$$\begin{cases}
\dot{Z}\left( t\right) =V_{\mathsf{H}}\left( Z\left( t\right) \right) \\
Z_{\circ }=Z\left( 0\right) =X_{+\circ }+K_{-}%
\end{cases}
\label{Ham sist on h+}$$*with* $Z\left( t\right) =X_{+}\left( t\right) +K_{-}$*, and* $%
\mathsf{H}$ *is an* $\mathrm{Ad}^{H}$*-invariant function. It is solved by factorization: if* $h_{+},h_{-}:\mathbb{R}\rightarrow H_{\pm }$ *are curves in these groups defined by*$$\exp \left( t\mathfrak{L}_{\mathsf{H}}(Z_{\circ })\right) =h_{+}\left(
t\right) h_{-}\left( t\right)$$*the solution of the above Hamilton equation is* $$Z\left( t\right) =\mathrm{Ad}_{h_{+}^{-1}\left( t\right) }^{H}\left(
Z_{\circ }\right)$$
**Proof:** The Hamilton equations of motion are$$\dot{Z}\left( t\right) =-\mathrm{ad}_{\Pi _{\mathfrak{h}_{+}}\mathfrak{L}_{%
\mathsf{H}}\left( Z\left( t\right) \right) }^{\mathfrak{h}}Z\left( t\right)
\label{Eq h+}$$It is solved by $$Z\left( t\right) =\mathrm{Ad}_{h_{+}^{-1}\left( t\right) }^{H}\left(
Z_{\circ }\right)$$with the curve $t\longmapsto h_{+}\left( t\right) \subset H_{+}$ solving $$h_{+}^{-1}\left( t\right) \dot{h}_{+}\left( t\right) =\Pi _{\mathfrak{h}_{+}}%
\mathfrak{L}_{\mathsf{H}}(Z\left( t\right) ) \label{Eq H+}$$
Now, let us consider a curve $t\longmapsto h\left( t\right) =e^{t\mathfrak{L}%
_{\mathsf{H}}(X_{+\circ }+K_{-})}\subset H$, for constant $X_{+\circ }+K_{-}$, which solves the differential equation$$\dot{h}\left( t\right) h^{-1}\left( t\right) =\mathfrak{L}_{\mathsf{H}%
}(Z_{\circ })$$so, as $H=H_{+}H_{-}$ and $\mathfrak{h}=\mathfrak{h}_{+}\oplus \mathfrak{h}%
_{-}$ we have $h\left( t\right) =h_{+}\left( t\right) h_{-}\left( t\right) $ with $h_{+}\left( t\right) \in H_{+}$ and $h_{-}\left( t\right) \in H_{-}$, hence $\dot{h}h^{-1}=\dot{h}_{+}h_{+}^{-1}+\mathrm{Ad}_{h_{+}}^{H}\dot{h}%
_{-}h_{-}^{-1}$ and, since $\mathsf{H}$ is $\mathrm{Ad}^{H}$-invariant, the equation of motion becomes $$h_{+}^{-1}\dot{h}_{+}+\dot{h}_{-}h_{-}^{-1}=\mathfrak{L}_{\mathsf{H}}(%
\mathrm{Ad}_{h_{+}^{-1}}^{H}\left( Z_{\circ }\right) )$$from where we conclude that$$\left\{
\begin{array}{c}
\medskip h_{+}^{-1}\left( t\right) \dot{h}_{+}\left( t\right) =\Pi _{%
\mathfrak{h}_{+}}\mathfrak{L}_{\mathsf{H}}(Z\left( t\right) ) \\
\dot{h}_{-}\left( t\right) h_{-}^{-1}\left( t\right) =\Pi _{\mathfrak{h}_{-}}%
\mathfrak{L}_{\mathsf{H}}(Z\left( t\right) )%
\end{array}%
\right.$$Here we can see that the first equation coincides with $\left( \ref{Eq H+}%
\right) $, thus showing that the factor $h_{+}\left( t\right) $ of the decomposition $e^{t\mathfrak{L}_{\mathsf{H}}(Z_{\circ })}=h_{+}\left( t\right)
h_{-}\left( t\right) $ solves the hamiltonian system $\left( \ref{Eq h+}%
\right) $.$\blacksquare $
In order to write the Hamilton equation $\left( \ref{Ham sist on h+}\right) $ in terms of the components $\mathfrak{h}=\mathfrak{g}\oplus \mathfrak{g}$, we write $Z\left( t\right) =X_{+}\left( t\right) +K_{-}$ as $$Z\left( t\right) =\left( \Pi _{1}\left( X_{+}\left( t\right) +K_{-}\right)
,\Pi _{2}\left( X_{+}\left( t\right) +K_{-}\right) \right)$$Then, the evolution equations for each component are $$\begin{cases}
\Pi _{1}\dot{X}_{+}\left( t\right) =\left[ \Pi _{1}X_{+}\left( t\right) +\Pi
_{1}K_{-},\Pi _{1}\Pi _{\mathfrak{h}_{+}}\mathfrak{L}_{\mathsf{H}}\left(
Z\left( t\right) \right) \right] \\
\\
\Pi _{2}\dot{X}_{+}\left( t\right) =\mathrm{ad}_{\Pi _{1}X_{+}\left(
t\right) +\Pi _{1}K_{-}}^{\tau }\Pi _{2}\Pi _{\mathfrak{h}_{+}}\mathfrak{L}_{%
\mathsf{H}}\left( Z\left( t\right) \right) \\
\qquad \qquad \qquad -\mathrm{ad}_{\Pi _{1}\Pi _{\mathfrak{h}_{+}}\mathfrak{L%
}_{\mathsf{H}}\left( Z\left( t\right) \right) }^{\tau }\left( \Pi
_{2}X_{+}\left( t\right) +\Pi _{2}K_{-}\right)%
\end{cases}
\label{Z5}$$whose solutions are obtained from the components of the curve $$Z\left( t\right) =\mathrm{Ad}_{h_{+}^{-1}\left( t\right) }^{H}\left(
Z_{\circ }\right)$$Explicitly they are $$\begin{cases}
\Pi _{1}X_{+}\left( t\right) =\mathrm{Ad}_{g_{+}^{-1}\left( t\right)
}^{G}\Pi _{1}\left( Z_{\circ }\right) \\
\\
\Pi _{2}X_{+}\left( t\right) =\tau _{g_{+}^{-1}\left( t\right) }\Pi
_{2}\left( Z_{\circ }\right) +\mathrm{ad}_{\mathrm{Ad}_{g_{+}^{-1}\left(
t\right) }^{G}\Pi _{1}\left( Z_{\circ }\right) }^{\tau }Y_{+}^{\bot }\left(
t\right)%
\end{cases}%$$where $\left( g_{+}\left( t\right) ,Y_{+}^{\bot }\left( t\right) \right) $ is the $H_{+}=G_{+}\ltimes _{\tau }\mathfrak{g}_{+}^{\bot }$ factor of the exponential curve in $H=G\ltimes \mathfrak{g}$, namely$$\mathrm{Exp}^{\cdot }\left( t\mathfrak{L}_{\mathsf{H}}\left( Z_{\circ
}\right) \right) =\left( e^{t\Pi _{1}\mathfrak{L}_{\mathsf{H}}\left(
Z_{\circ }\right) },-\sum_{n=1}^{\infty }\frac{\left( -1\right) ^{n}}{n!}%
t^{n}\left( \mathrm{ad}_{\Pi _{1}\mathfrak{L}_{\mathsf{H}}\left( Z_{\circ
}\right) }^{\tau }\right) ^{n-1}\Pi _{2}\mathfrak{L}_{\mathsf{H}}\left(
Z_{\circ }\right) \right)$$where $\mathfrak{L}_{\mathsf{H}}\left( Z_{\circ }\right) =\left( \Pi _{1}%
\mathfrak{L}_{\mathsf{H}}\left( Z_{\circ }\right) ,\Pi _{2}\mathfrak{L}_{%
\mathsf{H}}\left( Z_{\circ }\right) \right) \in \mathfrak{g}\oplus \mathfrak{%
g}$.
Let us restrict to the subspace $$\mathfrak{g}_{2}:=\left\{ 0\right\} \oplus \mathfrak{g}:=\left\{ \left(
0,Y\right) /Y\in \mathfrak{g}\right\}$$Then, the $\mathrm{Ad}^{H}$-invariant function $\mathsf{H}$ reduces to a $\tau ^{G}$-invariant function on $\mathfrak{g}_{2}$, since $$\left. \mathrm{Ad}_{\left( g,Z\right) }^{H}\left( X,Y\right) \right\vert _{\mathfrak{g}_{2}}=\left( 0,\tau _{g}Y\right)$$Hence, if $\imath :\mathfrak{g}_{2}\longrightarrow \mathfrak{h}$ is the injection, then $\mathsf{h}:=\mathsf{H}\circ \imath :\mathfrak{g}_{2}\longrightarrow \mathbb{R}$ is the restriction of $\mathsf{H}$ to $\mathfrak{g}_{2}$, and $$\begin{aligned}
\left( \mathfrak{L}_{\mathsf{h}}\left( X\right) ,Y\right) _{\mathfrak{h}}
&=&\left. \frac{d\mathsf{H}\circ \imath \left( X+tY\right) )}{dt}\right\vert
_{t=0} \\
&=&\left( \mathfrak{L}_{\mathsf{H}}\imath \left( X\right) ,\left( 0,Y\right)
\right) _{\mathfrak{h}}=\left( \Pi _{1}\mathfrak{L}_{\mathsf{H}}\imath
\left( X\right) ,Y\right) _{\mathfrak{g}}\end{aligned}$$so, we conclude that$$\Pi _{1}\mathfrak{L}_{\mathsf{H}}\imath \left( X\right) =\mathfrak{L}_{%
\mathsf{h}}\left( X\right)$$The restriction of the differential equations $\left( \ref{Z5}\right) $ to this subspace is$$\Pi _{2}\dot{X}_{+}\left( t\right) =-\mathrm{ad}_{\Pi _{\mathfrak{g}_{+}}%
\mathfrak{L}_{\mathsf{h}}\left( \Pi _{2}X_{+}\left( t\right) \right) }^{\tau
}\left( \Pi _{2}Z_{\circ }\right)$$which reproduces the AKS-integrable equation obtained in refs. $\cite{Ovando 1},\cite{Ovando 2}$.
Examples of Lie algebras with no bi-invariant metrics
=====================================================
A three step nilpotent Lie algebra
----------------------------------
Let us consider the three step nilpotent Lie algebra $\mathfrak{g}$ generated by $\left\{ e_{1},e_{2},e_{3},e_{4}\right\} $, see ref. $\cite{Ovando 1}$, defined by the nonvanishing Lie brackets $$\begin{array}{ccc}
\left[ e_{4},e_{1}\right] =e_{2} & , & \left[ e_{4},e_{2}\right] =e_{3}%
\end{array}
\label{3step 0}$$and the metric is determined by the nonvanishing pairings $$\begin{array}{ccc}
\left( e_{2},e_{2}\right) _{\mathfrak{g}}=\left( e_{4},e_{4}\right) _{%
\mathfrak{g}}=1 & , & \left( e_{1},e_{3}\right) _{\mathfrak{g}}=-1%
\end{array}%$$It can be decomposed into two different direct sums $\mathfrak{g}=\mathfrak{g}%
_{+}\oplus \mathfrak{g}_{-}$ or $\mathfrak{g}=\mathfrak{g}_{+}^{\bot }\oplus
\mathfrak{g}_{-}^{\bot }$ with$$\begin{array}{lll}
\mathfrak{g}_{+}=lspan\left\{ e_{2},e_{3},e_{4}\right\} & , & \mathfrak{g}%
_{+}^{\bot }=lspan\left\{ e_{3}\right\} \\
\mathfrak{g}_{-}=lspan\left\{ e_{1}\right\} & , & \mathfrak{g}_{-}^{\bot
}=lspan\left\{ e_{1},e_{2},e_{4}\right\}%
\end{array}%$$where $\mathfrak{g}_{+}$ and $\mathfrak{g}_{-}$ are Lie subalgebras of $%
\mathfrak{g}$.
The nonvanishing components of the $\tau $-action of $\mathfrak{g}$ on itself are$$\begin{array}{lllllll}
\mathrm{ad}_{e_{2}}^{\tau }e_{1}=-e_{4} & , & \mathrm{ad}_{e_{1}}^{\tau
}e_{2}=e_{4} & , & \mathrm{ad}_{e_{4}}^{\tau }e_{1}=e_{2} & , & \mathrm{ad}%
_{e_{4}}^{\tau }e_{2}=e_{3}%
\end{array}%$$and the Lie bracket in the semidirect sum Lie algebra $\mathfrak{h}=%
\mathfrak{g}\ltimes \mathfrak{g}$, see eq. $\left( \ref{Lie alg g+g 1}%
\right) $, is defined by the following nontrivial brackets$$\begin{array}{ccc}
\left[ \left( e_{4},0\right) ,\left( e_{1},0\right) \right] =\left(
e_{2},0\right) & & \left[ \left( e_{2},0\right) ,\left( 0,e_{1}\right) %
\right] =\left( 0,-e_{4}\right) \\
\left[ \left( e_{4},0\right) ,\left( e_{2},0\right) \right] =\left(
e_{3},0\right) & & \left[ \left( e_{4},0\right) ,\left( 0,e_{2}\right) %
\right] =\left( 0,e_{3}\right) \\
\left[ \left( e_{1},0\right) ,\left( 0,e_{2}\right) \right] =\left(
0,e_{4}\right) & & \left[ \left( e_{4},0\right) ,\left( 0,e_{1}\right) %
\right] =\left( 0,e_{2}\right)%
\end{array}%$$
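Both tables can be checked directly in coordinates with $\mathrm{ad}_{X}^{\tau }=-B^{-1}\left( \mathrm{ad}_{X}\right) ^{T}B$, where $B$ is the Gram matrix of the metric above; a brief numerical sketch (helper names are ours):

```python
import numpy as np

dim = 4                                  # basis e1, ..., e4, stored 0-indexed as e[0..3]
C = np.zeros((dim, dim, dim))            # [e_i, e_j] = C[k, i, j] e_k
C[1, 3, 0], C[1, 0, 3] = 1.0, -1.0       # [e4, e1] = e2
C[2, 3, 1], C[2, 1, 3] = 1.0, -1.0       # [e4, e2] = e3

B = np.zeros((dim, dim))                 # (e2,e2) = (e4,e4) = 1, (e1,e3) = -1
B[1, 1] = B[3, 3] = 1.0
B[0, 2] = B[2, 0] = -1.0

ad = lambda X: np.einsum('kij,i->kj', C, X)
ad_tau = lambda X: -np.linalg.inv(B) @ ad(X).T @ B
e = np.eye(dim)

# nonvanishing values of the tau-action quoted above
assert np.allclose(ad_tau(e[1]) @ e[0], -e[3])   # ad^tau_{e2} e1 = -e4
assert np.allclose(ad_tau(e[0]) @ e[1],  e[3])   # ad^tau_{e1} e2 =  e4
assert np.allclose(ad_tau(e[3]) @ e[0],  e[1])   # ad^tau_{e4} e1 =  e2
assert np.allclose(ad_tau(e[3]) @ e[1],  e[2])   # ad^tau_{e4} e2 =  e3

def bracket_h(a, b):
    """Bracket (Lie alg g+g 1) on pairs (X, Y)."""
    (X, Y), (Xp, Yp) = a, b
    return ad(X) @ Xp, ad_tau(X) @ Yp - ad_tau(Xp) @ Y

zero = np.zeros(dim)
Xg, Yg = bracket_h((e[1], zero), (zero, e[0]))            # [(e2,0),(0,e1)]
assert np.allclose(Xg, zero) and np.allclose(Yg, -e[3])   # = (0, -e4)
Xg, Yg = bracket_h((e[3], zero), (zero, e[1]))            # [(e4,0),(0,e2)]
assert np.allclose(Xg, zero) and np.allclose(Yg, e[2])    # = (0, e3)
```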
The Ad-invariant symmetric nondegenerate bilinear form $\left( \cdot ,\cdot
\right) _{\mathfrak{h}}:\mathfrak{h}\times \mathfrak{h}\rightarrow \mathbb{R}
$, see eq. $\left( \ref{Lie alg g+g 2}\right) $, has the following nontrivial pairings $$\begin{aligned}
\left( \left( e_{2},0\right) ,\left( 0,e_{2}\right) \right) _{\mathfrak{h}}
&=&\left( \left( e_{4},0\right) ,\left( 0,e_{4}\right) \right) _{\mathfrak{h}%
}=1 \\
\left( \left( e_{1},0\right) ,\left( 0,e_{3}\right) \right) _{\mathfrak{h}}
&=&\left( \left( e_{3},0\right) ,\left( 0,e_{1}\right) \right) _{\mathfrak{h}%
}=-1\end{aligned}$$giving rise to the *Manin triple* $\left( \mathfrak{h},\mathfrak{h}_{+},%
\mathfrak{h}_{-}\right) $ with $$\begin{aligned}
\mathfrak{h}_{+} &=&\mathfrak{g}_{+}\oplus \mathfrak{g}_{+}^{\bot
}=lspan\left\{ \left( e_{2},0\right) ,\left( e_{3},0\right) ,\left(
e_{4},0\right) ,\left( 0,e_{3}\right) \right\} \\
\mathfrak{h}_{-} &=&\mathfrak{g}_{-}\oplus \mathfrak{g}_{-}^{\bot
}=lspan\left\{ \left( e_{1},0\right) ,\left( 0,e_{1}\right) ,\left(
0,e_{2}\right) ,\left( 0,e_{4}\right) \right\}\end{aligned}$$
The Lie subalgebras $\mathfrak{h}_{+},\mathfrak{h}_{-}$ are indeed *Lie bialgebras* with the nonvanishing Lie brackets and cobrackets $$\begin{aligned}
\mathfrak{h}_{+} &\longrightarrow &\left\{
\begin{array}{l}
\left[ \left( e_{4},0\right) ,\left( e_{2},0\right) \right] =\left(
e_{3},0\right) \\
\delta _{+}\left( e_{4},0\right) =\left( e_{3},0\right) \wedge \left(
0,e_{3}\right) -\left( 0,e_{3}\right) \wedge \left( e_{2},0\right)%
\end{array}%
\right. \notag \\
&& \label{3step 4} \\
\mathfrak{h}_{-} &\longrightarrow &\left\{
\begin{array}{l}
\left[ \left( e_{1},0\right) ,\left( 0,e_{2}\right) \right] =\left(
0,e_{4}\right) \\
\delta _{-}\left( 0,e_{1}\right) =\left( 0,e_{2}\right) \wedge \left(
0,e_{4}\right)%
\end{array}%
\right. \notag\end{aligned}$$so the associated Lie groups $H_{\pm }=G_{\pm }\ltimes _{\tau }\mathfrak{g}_{\pm }^{\bot }$ become *Poisson-Lie groups.* We determine the Poisson-Lie structure in the matrix representation of the Lie algebra $\left( \ref{3step 0}\right) $ on a four-dimensional vector space, where the Lie algebra generators, in terms of the $4\times 4$ elementary matrices $\left( E_{ij}\right) _{kl}=\delta _{ik}\delta _{jl}$, are
$$\begin{array}{ccccccc}
e_{1}=E_{34} & , & e_{2}=E_{24} & , & e_{3}=E_{14} & , & e_{4}=E_{12}+E_{23}%
\end{array}%$$
In this representation, any vector $X=\left( x_{1},x_{2},x_{3},x_{4}\right) $ of this Lie algebra is 4-step nilpotent, $X^{4}=0$.
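The commutation relations $\left( \ref{3step 0}\right) $ and the nilpotency claim are immediate to verify in this representation; a short numerical check (helper names are ours):

```python
import numpy as np

def E(i, j, n=4):
    """Elementary matrix (E_ij)_kl = delta_ik delta_jl, 1-based indices."""
    m = np.zeros((n, n))
    m[i - 1, j - 1] = 1.0
    return m

e1, e2, e3, e4 = E(3, 4), E(2, 4), E(1, 4), E(1, 2) + E(2, 3)
comm = lambda a, b: a @ b - b @ a

assert np.allclose(comm(e4, e1), e2)             # [e4, e1] = e2
assert np.allclose(comm(e4, e2), e3)             # [e4, e2] = e3
for a, b in [(e1, e2), (e1, e3), (e2, e3), (e3, e4)]:
    assert np.allclose(comm(a, b), 0)            # all remaining brackets vanish

# a generic element is nilpotent with X^4 = 0
x1, x2, x3, x4 = np.random.default_rng(4).normal(size=4)
X = x1 * e1 + x2 * e2 + x3 * e3 + x4 * e4
assert np.allclose(np.linalg.matrix_power(X, 4), 0)
```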
The *Poisson-Lie bivector* $\pi _{+}$ on $H_{+}$ is defined by the relation $\left( \ref{PL bivector h+ 1}\right) $. Since the exponential map is surjective on nilpotent Lie groups, we may write $$\begin{aligned}
\left( g_{+},Z_{+}^{\bot }\right) &=&\left(
e^{u_{2}e_{2}+u_{3}e_{3}+u_{4}e_{4}},z_{3}e_{3}\right) \\
\left( X_{-},Y_{-}^{\bot }\right) &=&\left(
x_{1}e_{1},y_{1}e_{1}+y_{2}e_{2}+y_{4}e_{4}\right)\end{aligned}$$to get $$\begin{aligned}
&&\left\langle \gamma \left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right)
\left( g_{+},Z_{+}^{\bot }\right) ^{-1}\otimes \gamma \left( X_{-}^{\prime
\prime },Y_{-}^{\prime \prime \bot }\right) \left( g_{+},Z_{+}^{\bot
}\right) ^{-1},\pi _{+}\left( g_{+},Z_{+}^{\bot }\right) \right\rangle \\
&=&\frac{1}{2}y_{1}^{\prime }u_{4}^{2}x_{1}^{\prime \prime }-\frac{1}{2}%
x_{1}^{\prime }u_{4}^{2}y_{1}^{\prime \prime }+x_{1}^{\prime
}u_{4}y_{2}^{\prime \prime }-y_{2}^{\prime }u_{4}x_{1}^{\prime \prime }\end{aligned}$$Introducing $\pi _{+}^{R}:G\longrightarrow \mathfrak{h}_{+}\otimes \mathfrak{%
h}_{+}$ as $\pi _{+}^{R}\left( g_{+},Z_{+}^{\bot }\right) =\left( R_{\left(
g_{+},Z_{+}^{\bot }\right) ^{-1}}\right) _{\ast }^{\otimes 2}\pi _{+}\left(
g_{+},Z_{+}^{\bot }\right) $, we get $$\begin{aligned}
\pi _{+}^{R}\left( g_{+},Z_{+}^{\bot }\right) &=&\frac{1}{2}u_{4}^{2}\left(
e_{3},0\right) \otimes \left( 0,e_{3}\right) -\frac{1}{2}u_{4}^{2}\left(
0,e_{3}\right) \otimes \left( e_{3},0\right) \\
&&-u_{4}\left( 0,e_{3}\right) \otimes \left( e_{2},0\right) +u_{4}\left(
e_{2},0\right) \otimes \left( 0,e_{3}\right)\end{aligned}$$The dressing vector can be obtained from the PL bivector by using eq. $%
\left( \ref{PL bivector h+ 7}\right) $ to get $$\left( g_{+},Z_{+}^{\bot }\right) ^{\left( X_{-}^{\prime \prime
},Y_{-}^{\prime \prime \bot }\right) }=\left( -u_{4}x_{1}^{\prime \prime
}e_{2}-\frac{1}{2}u_{4}^{2}x_{1}^{\prime \prime }e_{3},\left( \frac{1}{2}%
u_{4}^{2}y_{1}^{\prime \prime }-u_{4}y_{2}^{\prime \prime }\right)
e_{3}\right)$$and the cobracket $\delta _{+}:\mathfrak{h}_{+}\longrightarrow \mathfrak{h}_{+}\wedge \mathfrak{h}_{+}$ defined from $\left( \pi _{+}^{R}\right) _{\ast \left( e,0\right) }$ reproduces $\left( \ref{3step 4}\right) $. Since $\mathfrak{h}_{+}$ has only one nontrivial Lie bracket, see eq. $\left( \ref{3step 4}\right) $, there is no object $r\in \mathfrak{h}_{+}\otimes \mathfrak{h}_{+}$ making $\delta _{+}$ a coboundary.
In order to determine *the Poisson-Lie structure on* $H_{-}$, we write the elements in $H_{-}$ and $\mathfrak{h}_{+}$ as $$\begin{aligned}
\left( g_{-},Z_{-}^{\bot }\right) &=&\left(
e^{u_{1}e_{1}},z_{1}e_{1}+z_{2}e_{2}+z_{4}e_{4}\right) \\
\left( X_{+},Y_{+}^{\bot }\right) &=&\left(
x_{2}e_{2}+x_{3}e_{3}+x_{4}e_{4},y_{3}e_{3}\right)\end{aligned}$$and the PL bivector $\pi _{-}$ on $H_{-}$ defined by the relation $\left( \ref%
{PL bivector h- 1}\right) $ is $$\left\langle \left( g_{-},Z_{-}^{\bot }\right) ^{-1}\left( \gamma \left(
X_{+}^{\prime },Y_{+}^{\prime \bot }\right) \otimes \gamma \left(
X_{+}^{\prime \prime },Y_{+}^{\prime \prime \bot }\right) \right) ,\pi
_{-}\left( g_{-},Z_{-}^{\bot }\right) \right\rangle =x_{4}^{\prime
}z_{1}x_{2}^{\prime \prime }-x_{2}^{\prime }z_{1}x_{4}^{\prime \prime }$$
Writing it in terms of the dressing vector as in eq. $\left( \ref{PL
bivector h- 2}\right) $ we obtain$$\left( g_{-},Z_{-}^{\bot }\right) ^{\left( X_{+}^{\prime \prime
},Y_{+}^{\prime \prime \bot }\right) }=\left( 0,-z_{1}x_{4}^{\prime \prime
}e_{2}+z_{1}x_{2}^{\prime \prime }e_{4}\right)$$Introducing $\pi _{-}^{L}:G\longrightarrow \mathfrak{h}_{-}\otimes \mathfrak{%
h}_{-}$ as $\pi _{-}^{L}\left( g_{-},Z_{-}^{\bot }\right) =\left( L_{\left(
g_{-},Z_{-}^{\bot }\right) ^{-1}}\right) _{\ast }^{\otimes 2}\pi _{-}\left(
g_{-},Z_{-}^{\bot }\right) $, we have$$\pi _{-}^{L}\left( g_{-},Z_{-}^{\bot }\right) =z_{1}\left( 0,e_{4}\right)
\otimes \left( 0,e_{2}\right) -z_{1}\left( 0,e_{2}\right) \otimes \left(
0,e_{4}\right)$$The cobracket on $\mathfrak{h}_{-}$, $\delta _{-}:\mathfrak{h}%
_{-}\longrightarrow \mathfrak{h}_{-}\wedge \mathfrak{h}_{-}$ given in $%
\left( \ref{3step 4}\right) $ can be easily retrieved as $\delta
_{-}:=-\left( \pi _{-}^{L}\right) _{\ast \left( e,0\right) }$. Again, the only nonvanishing Lie bracket in $\mathfrak{h}_{-}$ does not allow for an object $r$ in $\mathfrak{h}_{-}\otimes \mathfrak{h}_{-}$ giving rise to that cobracket.
The decomposition $\mathfrak{h}=\mathfrak{h}_{+}\oplus \mathfrak{h}_{-}$ implies the factorization of the Lie group $H$ as $H_{+}H_{-}$ such that$$\begin{aligned}
&&\bigskip \left(
e^{u_{1}e_{1}+u_{2}e_{2}+u_{3}e_{3}+u_{4}e_{4}},z_{1}e_{1}+z_{2}e_{2}+z_{3}e_{3}+z_{4}e_{4}\right)
\\
&=&\left( e^{\left( u_{2}-\frac{1}{2}u_{1}u_{4}\right) e_{2}+\left( u_{3}-%
\frac{1}{12}u_{1}u_{4}^{2}\right) e_{3}+u_{4}e_{4}},z_{3}e_{3}\right) \cdot
\left( e^{u_{1}e_{1}},z_{1}e_{1}+z_{2}e_{2}+z_{4}e_{4}\right)\end{aligned}$$and from here we get the reciprocal dressing actions$$\left\{
\begin{array}{l}
\left( e^{u_{2}e_{2}+u_{3}e_{3}+u_{4}e_{4}},z_{3}e_{3}\right) ^{\left(
e^{u_{1}e_{1}},z_{1}e_{1}+z_{2}e_{2}+z_{4}e_{4}\right) } \\
=\left( e^{^{\left( u_{2}-u_{1}u_{4}\right) e_{2}+\left( u_{3}-\frac{1}{2}%
u_{1}u_{4}^{2}\right) e_{3}+u_{4}e_{4}}},\left( \dfrac{1}{2}%
z_{1}u_{4}^{2}-z_{2}u_{4}+z_{3}\right) e_{3}\right) \\
\\
\left( e^{u_{1}e_{1}},z_{1}e_{1}+z_{2}e_{2}+z_{4}e_{4}\right) ^{\left(
e^{u_{2}e_{2}+u_{3}e_{3}+u_{4}e_{4}},z_{3}e_{3}\right) } \\
=\left( e^{u_{1}e_{1}},z_{1}e_{1}+\left( z_{2}-z_{1}u_{4}\right)
e_{2}+\left( z_{1}u_{2}+z_{4}\right) e_{4}\right)%
\end{array}%
\right.$$
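The $G$-component of the factorization displayed above can be checked with the matrix representation of the previous subsection: the product of the two exponentials on the right-hand side reproduces $e^{u_{1}e_{1}+u_{2}e_{2}+u_{3}e_{3}+u_{4}e_{4}}$, while the $\mathfrak{g}$-components match because $\tau _{e^{-u_{1}e_{1}}}$ fixes $e_{3}$. A numerical sketch of the group-component check:

```python
import numpy as np

def E(i, j, n=4):
    m = np.zeros((n, n))
    m[i - 1, j - 1] = 1.0
    return m

e1, e2, e3, e4 = E(3, 4), E(2, 4), E(1, 4), E(1, 2) + E(2, 3)

def expm_nilpotent(X):
    """exp(X) for the nilpotent 4x4 matrices of this representation (X^4 = 0)."""
    return np.eye(4) + X + X @ X / 2 + X @ X @ X / 6

u1, u2, u3, u4 = np.random.default_rng(5).normal(size=4)
full = expm_nilpotent(u1 * e1 + u2 * e2 + u3 * e3 + u4 * e4)
g_plus = expm_nilpotent((u2 - u1 * u4 / 2) * e2 + (u3 - u1 * u4 ** 2 / 12) * e3 + u4 * e4)
g_minus = expm_nilpotent(u1 * e1)

# G-component of the factorization H = H_+ H_- written above
assert np.allclose(g_plus @ g_minus, full)
```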
### AKS Integrable system
In order to produce an AKS integrable system, we consider an $\mathrm{ad}^{\mathfrak{h}}$-invariant function $\mathsf{H}$ on $\mathfrak{h}$, built from the bilinear form $\left( \cdot ,\cdot \right) _{\mathfrak{h}}:\mathfrak{h}\times \mathfrak{h}\rightarrow \mathbb{R}$ as$$\mathsf{H}\left( X,Y\right) =\frac{1}{2}\left( \left( \left( X,Y\right)
,\left( X,Y\right) \right) _{\mathfrak{h}}\right) =\left( X,Y\right) _{%
\mathfrak{g}}$$so $$\mathfrak{L}_{\mathsf{H}}\left( X,Y\right) =\left( X,Y\right)$$
The hamiltonian vector field is$$V_{\mathsf{H}}\left( X,Y\right) =\left[ \left( X_{-},Y_{-}^{\bot }\right)
,\left( X_{+},Y_{+}^{\bot }\right) \right]$$The condition $\gamma \left( X_{-},Y_{-}^{\bot }\right) \in \emph{char}\left( \mathfrak{h}_{+}\right) $ is satisfied provided the component along $\left( 0,e_{1}\right) $ vanishes, so the allowed motion is performed on points $$\left( X,Y\right) =\left(
x_{1}e_{1}+x_{2}e_{2}+x_{3}e_{3}+x_{4}e_{4},y_{2}e_{2}+y_{3}e_{3}+y_{4}e_{4}%
\right)$$and the Hamiltonian vector field reduces to$$V_{\mathsf{H}}\left( Z\right) =-x_{4}\left( x_{1}e_{2},y_{2}e_{3}\right)$$giving rise to the nontrivial Hamilton equations of motion $\dot{Z}\left(
t\right) =V_{\mathsf{H}}\left( Z\left( t\right) \right) $ $$\left\{
\begin{array}{l}
\dot{x}_{2}=-x_{4}x_{1}\medskip \\
\dot{y}_{3}=-x_{4}y_{2}%
\end{array}%
\right. \label{3s07}$$The coordinates $x_{1},x_{3},x_{4},y_{2},y_{4}$ remain constants of motion, unveiling a quite simple dynamical system, namely a uniform linear motion in the plane $\left( x_{2},y_{3}\right) $. Despite this simplicity, it is interesting to see how the $H_{+}$ factor of the semidirect product exponential curve succeeds in yielding these linear trajectories as orbits of the adjoint action.
In order to apply the AKS Theorem, we obtain$$\mathrm{Exp}^{\cdot }\left( t\mathfrak{L}_{\mathsf{H}}(X_{+\circ
}+K_{-})\right) =h_{+}\left( t\right) h_{-}\left( t\right)$$with$$\begin{aligned}
h_{+}\left( t\right) &=&\left( e^{\left( tx_{2}-\frac{1}{2}%
t^{2}x_{1}x_{4}\right) e_{2}+\left( tx_{3}-\frac{1}{12}t^{3}x_{1}x_{4}^{2}%
\right) e_{3}+tx_{4}e_{4}},-\left( \frac{1}{2}t^{2}x_{4}y_{2}-ty_{3}\right)
e_{3}\right) \\
h_{-}\left( t\right) &=&\left( e^{tx_{1}e_{1}},ty_{2}e_{2}-\left( \frac{1}{2}%
t^{2}x_{1}y_{2}-ty_{4}\right) e_{4}\right)\end{aligned}$$
Thus, for the initial condition$$Z\left( t_{0}\right) =\left(
x_{10}e_{1}+x_{20}e_{2}+x_{30}e_{3}+x_{40}e_{4},y_{20}e_{2}+y_{30}e_{3}+y_{40}e_{4}\right)$$the system $\left( \ref{3s07}\right) $ has the solution$$Z\left( t\right) =\mathrm{Ad}_{h_{+}^{-1}\left( t\right) }^{H}Z\left(
t_{0}\right)$$described by the curve $$h_{+}^{-1}\left( t\right) =\left( e^{-\left( tx_{20}-\frac{t^{2}}{2}%
x_{10}x_{40}\right) e_{2}-\left( tx_{30}-\frac{t^{3}}{12}x_{10}x_{40}^{2}%
\right) e_{3}-tx_{40}e_{4}},\left( \frac{t^{2}}{2}x_{40}y_{20}-ty_{30}%
\right) e_{3}\right)$$The explicit form of the adjoint curve is $$\begin{aligned}
Z\left( t\right) &=&\left( x_{10}e_{1}+\left( x_{20}-tx_{10}x_{40}\right)
e_{2}+x_{30}e_{3}+x_{40}e_{4}\right. , \\
&&\left. y_{20}e_{2}+\left( y_{30}-tx_{40}y_{20}\right)
e_{3}+y_{40}e_{4}\right)\end{aligned}$$solving the system of Hamilton equations $\left( \ref{3s07}\right) $.
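One can verify symbolically that this curve solves $\left( \ref{3s07}\right) $: only $x_{2}$ and $y_{3}$ depend on time, and their derivatives reproduce $-x_{4}x_{1}$ and $-x_{4}y_{2}$ with the remaining coordinates frozen at their initial values. A minimal check (the variable names are ours):

```python
import sympy as sp

t, x10, x20, x30, x40, y20, y30, y40 = sp.symbols('t x10 x20 x30 x40 y20 y30 y40')

# components of Z(t) = Ad_{h_+^{-1}(t)} Z(t_0) as written above
x1, x2, x3, x4 = x10, x20 - t * x10 * x40, x30, x40
y2, y3, y4 = y20, y30 - t * x40 * y20, y40

assert sp.simplify(sp.diff(x2, t) + x4 * x1) == 0      # x2' = -x4 x1
assert sp.simplify(sp.diff(y3, t) + x4 * y2) == 0      # y3' = -x4 y2
assert all(sp.diff(c, t) == 0 for c in (x1, x3, x4, y2, y4))   # constants of motion
```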
The Lie Group $G=\mathrm{A}_{6.34}$
-----------------------------------
This example was taken from reference [@ghanam]. Its Lie algebra $%
\mathfrak{g}=\mathfrak{a}_{6.34}$ is generated by the basis $\left\{
e_{1},e_{2},e_{3},e_{4},e_{5},e_{6}\right\} $ with the nonvanishing Lie brackets$$\begin{array}{ccccccc}
\left[ e_{2},e_{3}\right] =e_{1} & , & \left[ e_{2},e_{6}\right] =e_{3} & ,
& \left[ e_{3},e_{6}\right] =-e_{2} & , & \left[ e_{4},e_{6}\right] =e_{5}%
\end{array}%$$The Lie algebra can be represented by $6\times 6$ matrices, and by using the elementary matrices $\left( E_{ij}\right) _{kl}=\delta _{ik}\delta _{jl}$ they can be written as $$\left\{
\begin{array}{lllll}
e_{1}=-2E_{36} & , & e_{2}=E_{35} & , & e_{3}=E_{34}-E_{56} \\
e_{4}=E_{12} & , & e_{5}=E_{16} & , & e_{6}=E_{26}+E_{54}-E_{45}%
\end{array}%
\right. ,$$and a typical element $g\left( z,x,y,p,q,\theta \right) $ of the associated Lie group $G=\mathrm{A}_{6.34}$ is$$g\left( z,x,y,p,q,\theta \right) =\left(
\begin{array}{cccccc}
1 & p & 0 & 0 & 0 & q \\
0 & 1 & 0 & 0 & 0 & \theta \\
0 & 0 & 1 & x\sin \theta +y\cos \theta & x\cos \theta -y\sin \theta & z \\
0 & 0 & 0 & \cos \theta & -\sin \theta & x \\
0 & 0 & 0 & \sin \theta & \cos \theta & -y \\
0 & 0 & 0 & 0 & 0 & 1%
\end{array}%
\right)$$
The group $G$ can be factorized as $G=G_{+}G_{-}$ with $G_{+},G_{-}$ being the Lie subgroups $$\begin{aligned}
G_{+} &=&\left\{ g_{+}\left( x,y,z\right) =g\left( z,x,y,0,0,0\right)
/\left( x,y,z\right) \in \mathbb{R}^{3}\right\} \\
&& \\
G_{-} &=&\left\{ g_{-}\left( p,q,\theta \right) =g\left( 0,0,0,p,q,\theta
\right) /\left( p,q,\theta \right) \in \mathbb{R}^{3}\right\}\end{aligned}$$in such a way that, for $g\left( z,x,y,p,q,\theta \right) \in G$, $$g\left( z,x,y,p,q,\theta \right) =g_{+}\left( x,y,z\right) g_{-}\left(
p,q,\theta \right) ~~.$$
By using this result we determine the dressing actions$$\left\{
\begin{array}{l}
\left( g_{+}\left( x,y,z\right) \right) ^{g_{-}\left( p,q,\theta \right)
}=g_{+}\left( x\cos \theta +y\sin \theta ,y\cos \theta -x\sin \theta
,z\right) \\
\\
\left( g_{-}\left( p,q,\theta \right) \right) ^{g_{+}\left( x,y,z\right)
}=g_{-}\left( p,q,\theta \right)%
\end{array}%
\right. .$$
The Lie algebra $\mathfrak{g}$ then decomposes as $\mathfrak{g}=\mathfrak{g}%
_{+}\oplus \mathfrak{g}_{-}$ with$$\begin{array}{ccc}
\mathfrak{g}_{+}=Lspan\left\{ e_{1},e_{2},e_{3}\right\} & , & \mathfrak{g}%
_{-}=Lspan\left\{ e_{4},e_{5},e_{6}\right\}%
\end{array}%$$being Lie subalgebras of $\mathfrak{g}$. The exponential map is surjective and each element $g\left( z,x,y,p,q,\theta \right) \in \mathrm{A}_{6.34}$ can be written as$$\begin{aligned}
&&g\left( z,x,y,p,q,\theta \right) \\
&=&e^{\left( -\frac{1}{2}ze_{1}+\frac{\theta }{2}\frac{\left( x\sin \theta
-y\left( 1-\cos \theta \right) \right) }{\left( 1-\cos \theta \right) }e_{2}+%
\frac{\theta }{2}\frac{\left( x\left( 1-\cos \theta \right) +y\sin \theta
\right) }{\left( 1-\cos \theta \right) }e_{3}+pe_{4}+\left( q-\frac{1}{2}%
p\theta \right) e_{5}+\theta e_{6}\right) }\end{aligned}$$
Let us now introduce a bilinear form on $\mathfrak{g}$ inherited from the non-invariant metric on $G$ given in reference [@ghanam], $$g=dp^{2}+\left( dq-\frac{1}{2}\left( pd\theta +\theta dp\right) \right)
^{2}+dx^{2}+dy^{2}-ydxd\theta +xdyd\theta +dzd\theta$$By taking this metric at the identity element of $G$ we get the nondegenerate symmetric bilinear form $\left( ,\right) _{\mathfrak{g}}:%
\mathfrak{g}\otimes \mathfrak{g}\longrightarrow \mathbb{R}$ defined as $$\begin{aligned}
&&\left( X\left( x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\right) ,X\left(
x_{1}^{\prime },x_{2}^{\prime },x_{3}^{\prime },x_{4}^{\prime
},x_{5}^{\prime },x_{6}^{\prime }\right) \right) _{\mathfrak{g}} \\
&=&x_{2}x_{2}^{\prime }+x_{3}x_{3}^{\prime }+x_{4}x_{4}^{\prime
}+x_{5}x_{5}^{\prime }+x_{1}x_{6}^{\prime }+x_{6}x_{1}^{\prime }\end{aligned}$$It gives rise to the decomposition of $\mathfrak{g}$ as $\mathfrak{g}%
_{+}^{\bot }\oplus \mathfrak{g}_{-}^{\bot }$ where $$\begin{array}{ccc}
\mathfrak{g}_{+}^{\bot }=Lspan\left\{ e_{1},e_{4},e_{5}\right\} & , &
\mathfrak{g}_{-}^{\bot }=Lspan\left\{ e_{2},e_{3},e_{6}\right\}%
\end{array}%$$This bilinear form defines the associated $\tau $-action of $\mathfrak{g}$ on itself $$\begin{array}{lllllll}
\mathrm{ad}_{e_{3}}^{\tau }e_{2}=e_{1} & , & \mathrm{ad}_{e_{2}}^{\tau
}e_{3}=-e_{1} & , & \mathrm{ad}_{e_{4}}^{\tau }e_{5}=-e_{1} & , & \mathrm{ad}%
_{e_{2}}^{\tau }e_{6}=-e_{3} \\
\mathrm{ad}_{e_{6}}^{\tau }e_{2}=-e_{3} & , & \mathrm{ad}_{e_{6}}^{\tau
}e_{3}=e_{2} & , & \mathrm{ad}_{e_{6}}^{\tau }e_{5}=e_{4} & , & \mathrm{ad}%
_{e_{3}}^{\tau }e_{6}=e_{2}%
\end{array}%$$
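The table can be checked directly against the bilinear form. The numpy sketch below (ours) assumes the convention $\left( \mathrm{ad}_{X}^{\tau }Y,Z\right) _{\mathfrak{g}}=-\left( Y,\left[ X,Z\right] \right) _{\mathfrak{g}}$, which reproduces exactly the eight brackets listed above.

```python
import numpy as np

n = 6
# structure constants of a_{6.34}: [e_i, e_j] = sum_k C[i, j, k] e_k (1-based input below)
C = np.zeros((n, n, n))
def setbr(i, j, k, c=1.0):
    C[i-1, j-1, k-1] += c
    C[j-1, i-1, k-1] -= c
setbr(2, 3, 1); setbr(2, 6, 3); setbr(3, 6, 2, -1.0); setbr(4, 6, 5)

# Gram matrix of ( , )_g in the basis e_1,...,e_6
B = np.zeros((n, n))
for i in (1, 2, 3, 4):
    B[i, i] = 1.0
B[0, 5] = B[5, 0] = 1.0

# tau-action table: ad^tau_{e_i} e_j = sum_k T[i, j, k] e_k
T = np.zeros((n, n, n))
def settau(i, j, k, c=1.0):
    T[i-1, j-1, k-1] = c
settau(3, 2, 1); settau(2, 3, 1, -1.0); settau(4, 5, 1, -1.0); settau(2, 6, 3, -1.0)
settau(6, 2, 3, -1.0); settau(6, 3, 2); settau(6, 5, 4); settau(3, 6, 2)

Binv = np.linalg.inv(B)
for i in range(n):
    A = C[i].T        # operator matrix of ad_{e_i} (columns = images of the basis)
    Atau = T[i].T     # operator matrix of ad^tau_{e_i}
    assert np.allclose(Atau, -Binv @ A.T @ B)
print("tau-action table matches (ad^tau_X Y, Z)_g = -(Y, [X, Z])_g")
```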
So, let us now consider the semidirect sum $\mathfrak{h}=\mathfrak{g}\ltimes
\mathfrak{g}$ where the left $\mathfrak{g}$ acts on the other one by the $%
\tau $-action. We have then the decomposition $\mathfrak{h}=\mathfrak{h}%
_{+}\oplus \mathfrak{h}_{-}$ with$$\begin{aligned}
\mathfrak{h}_{+} &=&\mathfrak{g}_{+}\oplus \mathfrak{g}_{+}^{\bot
}=Lspan\left\{ \left( e_{1},0\right) ,\left( e_{2},0\right) ,\left(
e_{3},0\right) ,\left( 0,e_{1}\right) ,\left( 0,e_{4}\right) \left(
0,e_{5}\right) \right\} \\
&& \\
\mathfrak{h}_{-} &=&\mathfrak{g}_{-}\oplus \mathfrak{g}_{-}^{\bot
}=Lspan\left\{ \left( e_{4},0\right) ,\left( e_{5},0\right) ,\left(
e_{6},0\right) ,\left( 0,e_{2}\right) ,\left( 0,e_{3}\right) \left(
0,e_{6}\right) \right\}\end{aligned}$$The Lie algebra structure on $\mathfrak{h}$ is given by $\left( \ref{Lie alg
g+g 1}\right) $, with the fundamental Lie brackets$$\begin{array}{lll}
\left[ \left( e_{2},0\right) ,\left( e_{3},0\right) \right] =\left(
e_{1},0\right) & & \left[ \left( e_{3},0\right) ,\left( 0,e_{2}\right) %
\right] =\left( 0,e_{1}\right) \\
\left[ \left( e_{2},0\right) ,\left( e_{6},0\right) \right] =\left(
e_{3},0\right) & & \left[ \left( e_{3},0\right) ,\left( 0,e_{6}\right) %
\right] =\left( 0,e_{2}\right) \\
\left[ \left( e_{3},0\right) ,\left( e_{6},0\right) \right] =-\left(
e_{2},0\right) & & \left[ \left( e_{4},0\right) ,\left( 0,e_{5}\right) %
\right] =-\left( 0,e_{1}\right) \\
\left[ \left( e_{4},0\right) ,\left( e_{6},0\right) \right] =\left(
e_{5},0\right) & & \left[ \left( e_{6},0\right) ,\left( 0,e_{2}\right) %
\right] =-\left( 0,e_{3}\right) \\
\left[ \left( e_{2},0\right) ,\left( 0,e_{3}\right) \right] =-\left(
0,e_{1}\right) & & \left[ \left( e_{6},0\right) ,\left( 0,e_{3}\right) %
\right] =\left( 0,e_{2}\right) \\
\left[ \left( e_{2},0\right) ,\left( 0,e_{6}\right) \right] =-\left(
0,e_{3}\right) & & \left[ \left( e_{6},0\right) ,\left( 0,e_{5}\right) %
\right] =\left( 0,e_{4}\right)%
\end{array}%$$The semidirect product structure of $H=G\ltimes \mathfrak{g}$ is defined by eq. $\left( \ref{semidirect prod}\right) $.
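As a consistency check, the sketch below (ours) rebuilds the bracket of $\mathfrak{h}$ from the structure constants of $\mathfrak{g}$ and the $\tau $-action table, assuming the standard semidirect-sum formula $\left[ \left( X,\xi \right) ,\left( Y,\eta \right) \right] =\left( \left[ X,Y\right] ,\mathrm{ad}_{X}^{\tau }\eta -\mathrm{ad}_{Y}^{\tau }\xi \right) $, which reproduces the fundamental brackets above, and verifies the Jacobi identity on the $12$-dimensional basis.

```python
import numpy as np
from itertools import product

n = 6
C = np.zeros((n, n, n))   # brackets of g = a_{6.34}
def setbr(i, j, k, c=1.0):
    C[i-1, j-1, k-1] += c; C[j-1, i-1, k-1] -= c
setbr(2, 3, 1); setbr(2, 6, 3); setbr(3, 6, 2, -1.0); setbr(4, 6, 5)

T = np.zeros((n, n, n))   # tau-action table: ad^tau_{e_i} e_j
def settau(i, j, k, c=1.0):
    T[i-1, j-1, k-1] = c
settau(3, 2, 1); settau(2, 3, 1, -1.0); settau(4, 5, 1, -1.0); settau(2, 6, 3, -1.0)
settau(6, 2, 3, -1.0); settau(6, 3, 2); settau(6, 5, 4); settau(3, 6, 2)

# h = g (semidirect sum) g : [(X, xi), (Y, eta)] = ([X, Y], ad^tau_X eta - ad^tau_Y xi)
def bracket_h(a, b):
    X, xi = a[:n], a[n:]
    Y, eta = b[:n], b[n:]
    XY = np.einsum('i,j,ijk->k', X, Y, C)
    second = np.einsum('i,j,ijk->k', X, eta, T) - np.einsum('i,j,ijk->k', Y, xi, T)
    return np.concatenate([XY, second])

# Jacobi identity on the 12-dimensional basis
E = np.eye(2 * n)
for a, b, c in product(range(2 * n), repeat=3):
    jac = (bracket_h(E[a], bracket_h(E[b], E[c]))
           + bracket_h(E[b], bracket_h(E[c], E[a]))
           + bracket_h(E[c], bracket_h(E[a], E[b])))
    assert np.allclose(jac, 0.0)
print("the brackets above define a Lie algebra structure on h = g x g")
```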
Every element $\left( g\left( z,x,y,p,q,\theta \right)
,\sum_{i=1}^{6}x_{i}e_{i}\right) \in H$ can be factorized as $$\begin{aligned}
&&\left( g\left( z,x,y,p,q,\theta \right) ,\sum_{i=1}^{6}x_{i}e_{i}\right) \\
&=&\left( g_{+}\left( z,x,y\right) ,\left( x_{1}-x_{5}p\right) e_{1}+\left(
x_{4}+x_{5}\theta \right) e_{4}+x_{5}e_{5}\right) \\
&&\cdot \left( g_{-}\left( p,q,\theta \right)
,x_{2}e_{2}+x_{3}e_{3}+x_{6}e_{6}\right)\end{aligned}$$from which we get the reciprocal *dressing actions* $$\begin{array}{l}
\left( g_{+}\left( x,y,z\right) ,x_{1}e_{1}+x_{4}e_{4}+x_{5}e_{5}\right)
^{\left( g_{-}\left( p,q,\theta \right)
,x_{2}e_{2}+x_{3}e_{3}+x_{6}e_{6}\right) } \\
~=\left( g_{+}\left( x\cos \theta +y\sin \theta ,y\cos \theta -x\sin \theta
,z\right) \right. , \\
~~~~~\left. \left( \frac{1}{2}x_{6}\left( x^{2}+y^{2}\right) -\left(
yx_{2}-xx_{3}\right) +x_{1}-x_{5}p\right) e_{1}+\left( x_{4}+x_{5}\theta
\right) e_{4}+x_{5}e_{5}\right)%
\end{array}%$$and$$\begin{array}{l}
\left( g_{-}\left( p,q,\theta \right)
,x_{2}e_{2}+x_{3}e_{3}+x_{6}e_{6}\right) ^{\left( g_{+}\left( x,y,z\right)
,x_{1}e_{1}+x_{4}e_{4}+x_{5}e_{5}\right) } \\
~=\left( g_{-}\left( p,q,\theta \right) ,\left( x_{2}-x_{6}y\right)
e_{2}+\left( x_{3}+x_{6}x\right) e_{3}+x_{6}e_{6}\right)%
\end{array}%$$The infinitesimal generators of these dressing actions are $$\begin{array}{l}
\left( g_{+}\left( x,y,z\right) ,x_{1}e_{1}+x_{4}e_{4}+x_{5}e_{5}\right)
^{\left( z_{4}e_{4}+z_{5}e_{5}+z_{6}e_{6},z_{2}^{\prime }e_{2}+z_{3}^{\prime
}e_{3}+z_{6}^{\prime }e_{6}\right) } \\
~=\left( yz_{6}e_{2}-xz_{6}e_{3},\left( \frac{1}{2}\left( x^{2}+y^{2}\right)
z_{6}^{\prime }-x_{5}z_{4}+xz_{3}^{\prime }-yz_{2}^{\prime }\right)
e_{1}+x_{5}z_{6}e_{4}\right)%
\end{array}%$$and
$$\begin{array}{l}
\left( g_{-}\left( p,q,\theta \right)
,x_{2}e_{2}+x_{3}e_{3}+x_{6}e_{6}\right) ^{\left(
z_{1}e_{1}+z_{2}e_{2}+z_{3}e_{3},z_{1}^{\prime }e_{1}+z_{4}^{\prime
}e_{4}+z_{5}^{\prime }e_{5}\right) } \\
~=\left( 0,-x_{6}z_{3}e_{2}+x_{6}z_{2}e_{3}\right)%
\end{array}%$$
The Poisson-Lie bivector on $H_{+}$ is determined from the relation $\left( \ref{PL
bivector h+ 1}\right) $; for $$\begin{aligned}
\left( g_{+},X_{+}^{\bot }\right) &=&\left( g\left( z,x,y\right)
,x_{1}e_{1}+x_{4}e_{4}+x_{5}e_{5}\right) \\
\left( Y_{-},Y_{-}^{\bot }\right) &=&\left(
y_{4}e_{4}+y_{5}e_{5}+y_{6}e_{6},y_{2}^{\prime }e_{2}+y_{3}^{\prime
}e_{3}+y_{6}^{\prime }e_{6}\right) \\
\left( Z_{-},Z_{-}^{\bot }\right) &=&\left(
z_{4}e_{4}+z_{5}e_{5}+z_{6}e_{6},z_{2}^{\prime }e_{2}+z_{3}^{\prime
}e_{3}+z_{6}^{\prime }e_{6}\right)\end{aligned}$$we get $$\begin{aligned}
&&\left\langle \gamma \left( Y_{-},Y_{-}^{\bot }\right) \left(
g_{+},Z_{+}^{\bot }\right) ^{-1}\otimes \gamma \left( Z_{-},Z_{-}^{\bot
}\right) \left( g_{+},X_{+}^{\bot }\right) ^{-1},\pi _{+}\left(
g_{+},X_{+}^{\bot }\right) \right\rangle \\
&=&y_{4}x_{5}z_{6}+y_{6}xz_{3}^{\prime }-y_{6}x_{5}z_{4}-y_{6}yz_{2}^{\prime
} \\
&&+\frac{1}{2}y_{6}\left( x^{2}+y^{2}\right) z_{6}^{\prime }+y_{2}^{\prime
}yz_{6}-y_{3}^{\prime }xz_{6}-\frac{1}{2}y_{6}^{\prime }\left(
x^{2}+y^{2}\right) z_{6}\end{aligned}$$Introducing the linear map $\pi _{+}^{R}:\mathfrak{h}_{-}\longrightarrow
\mathfrak{h}_{+}$ defined as $$\begin{aligned}
&&\left\langle \gamma \left( Y_{-},Y_{-}^{\bot }\right) \left(
g_{+},Z_{+}^{\bot }\right) ^{-1}\otimes \gamma \left( Z_{-},Z_{-}^{\bot
}\right) \left( g_{+},X_{+}^{\bot }\right) ^{-1},\pi _{+}\left(
g_{+},X_{+}^{\bot }\right) \right\rangle \\
&=&\left( \left( Y_{-},Y_{-}^{\bot }\right) ,\pi _{+}^{R}\left(
Z_{-},Z_{-}^{\bot }\right) \right) _{\mathfrak{h}}\end{aligned}$$we get that it is characterized by the matrix $$\begin{aligned}
&&\pi _{+}^{R}\left( g\left( z,x,y\right)
,x_{1}e_{1}+x_{4}e_{4}+x_{5}e_{5}\right) \\
&& \\
&=&\left(
\begin{array}{cccccc}
0 & 0 & x_{5} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
-x_{5} & 0 & 0 & -y & x & \frac{1}{2}\left( x^{2}+y^{2}\right) \\
0 & 0 & y & 0 & 0 & 0 \\
0 & 0 & -x & 0 & 0 & 0 \\
0 & 0 & -\frac{1}{2}\left( x^{2}+y^{2}\right) & 0 & 0 & 0%
\end{array}%
\right)\end{aligned}$$
The Poisson-Lie bivector on $H_{-}$ is determined from the relation $\left( \ref%
{PL bivector h- 1}\right) $ for $$\begin{aligned}
\left( g_{-},X_{-}^{\bot }\right) &=&\left( g\left( p,q,\theta \right)
,x_{2}e_{2}+x_{3}e_{3}+x_{6}e_{6}\right) \\
\left( Y_{+},Y_{+}^{\bot }\right) &=&\left(
y_{1}e_{1}+y_{2}e_{2}+y_{3}e_{3},y_{1}^{\prime }e_{1}+y_{4}^{\prime
}e_{4}+y_{5}^{\prime }e_{5}\right) \\
\left( Z_{+},Z_{+}^{\bot }\right) &=&\left(
z_{1}e_{1}+z_{2}e_{2}+z_{3}e_{3},z_{1}^{\prime }e_{1}+z_{4}^{\prime
}e_{4}+z_{5}^{\prime }e_{5}\right)\end{aligned}$$and after some lengthy computations we get $$\left\langle \left( g_{-},X_{-}^{\bot }\right) ^{-1}\left( \gamma \left(
Y_{+},Y_{+}^{\bot }\right) \otimes \gamma \left( Z_{+},Z_{+}^{\bot }\right)
\right) ,\pi _{-}\left( g_{-},Z_{-}^{\bot }\right) \right\rangle
=y_{3}x_{6}z_{2}-y_{2}x_{6}z_{3}$$This has an associated bilinear form on $\mathfrak{h}_{+}$ which, in the given basis, is characterized by the matrix $$\pi _{-}^{L}\left( g\left( p,q,\theta \right)
,x_{2}e_{2}+x_{3}e_{3}+x_{6}e_{6}\right) =\left(
\begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -x_{6} & 0 & 0 & 0 \\
0 & x_{6} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0%
\end{array}%
\right)$$
### AKS-Integrable system on $G=\mathrm{A}_{6.34}$
Let us study a dynamical system on $\mathfrak{h}$ governed by the $\mathrm{ad}^{%
\mathfrak{h}}$-invariant Hamilton function $\mathsf{H}$ on $\mathfrak{h}$$$\mathsf{H}\left( X,X^{\prime }\right) =\frac{1}{2}\left( \left( \left(
X,X^{\prime }\right) ,\left( X,X^{\prime }\right) \right) _{\mathfrak{h}%
}\right) =\left( X,X^{\prime }\right) _{\mathfrak{g}}$$Writing $\left( X,X^{\prime }\right) \in \mathfrak{h}$ as $$\left( X,X^{\prime }\right) =\left(
\sum_{i=1}^{6}x_{i}e_{i},\sum_{i=1}^{6}x_{i}^{\prime }e_{i}\right)$$it becomes $$\mathsf{H}\left( X,X^{\prime }\right) =\left( x_{2}^{\prime
}x_{2}+x_{3}^{\prime }x_{3}+x_{4}^{\prime }x_{4}+x_{5}^{\prime
}x_{5}+x_{1}x_{6}^{\prime }+x_{6}x_{1}^{\prime }\right)$$for which $$\mathfrak{L}_{\mathsf{H}}\left( X,X^{\prime }\right) =\left( X,X^{\prime
}\right)$$
The hamiltonian vector field is then$$V_{\mathsf{H}}\left( X,X^{\prime }\right) =-\mathrm{ad}_{\Pi _{+}\mathfrak{L}%
_{\mathsf{H}}\left( X,X^{\prime }\right) }^{\mathfrak{h}}\left( X,X^{\prime
}\right) =\left[ \Pi _{-}\left( X,X^{\prime }\right) ,\Pi _{+}\left(
X,X^{\prime }\right) \right]$$Then, by explicitly computing the Lie bracket and applying the condition $\gamma \left( K_{-}\right) \in \emph{char}\left( \mathfrak{h}_{+}\right) $, which means that $x_{6}^{\prime }=0$, we get the Hamiltonian vector field $$V_{\mathsf{H}}\left( X,X^{\prime }\right) =\left(
x_{3}x_{6}e_{2}-x_{2}x_{6}e_{3},\left( x_{2}x_{3}^{\prime
}-x_{3}x_{2}^{\prime }-x_{4}x_{5}^{\prime }\right) e_{1}+x_{5}^{\prime
}x_{6}e_{4}\right)$$and the nontrivial equations of motion are then$$\left\{
\begin{array}{l}
\dot{x}_{2}=x_{3}x_{6} \\
\dot{x}_{3}=-x_{2}x_{6} \\
\dot{x}_{1}^{\prime }=\left( x_{2}x_{3}^{\prime }-x_{3}x_{2}^{\prime
}-x_{4}x_{5}^{\prime }\right) \\
\dot{x}_{4}^{\prime }=x_{5}^{\prime }x_{6}%
\end{array}%
\right. \label{A6.34 ham eqs}$$with $x_{1},x_{4},x_{5},x_{6},x_{2}^{\prime },x_{3}^{\prime },x_{5}^{\prime
},x_{6}^{\prime }$ being time independent.
Let us now integrate these equations by the AKS method. To do so, we need to factorize the exponential curve $\left( \ref{exp semid transad}%
\right) $$$\mathrm{Exp}^{\cdot }t\mathfrak{L}_{\mathsf{H}}\left( X_{0},X_{0}^{\prime
}\right) =\mathrm{Exp}^{\cdot }t\left(
\sum_{i=1}^{6}x_{i0}e_{i},\sum_{i=1}^{5}x_{i0}^{\prime }e_{i}\right)$$for some initial $\left( X_{0},X_{0}^{\prime }\right) \in \mathfrak{h}$.
Factorizing the exponential $$\mathrm{Exp}^{\cdot }t\mathfrak{L}_{\mathsf{H}}\left( X_{0},X_{0}^{\prime
}\right) =h_{+}\left( t\right) h_{-}\left( t\right)$$we obtain the curve$$t\longmapsto h_{+}\left( t\right) =\left( g_{+}\left( t\right) ,W_{+}^{\bot
}\left( t\right) \right)$$where$$g_{+}\left( t\right) =g_{+}\left( -2tx_{10},\tfrac{\left( x_{30}\left(
1-\cos tx_{60}\right) +x_{20}\sin tx_{60}\right) }{x_{60}},\tfrac{\left(
x_{20}\left( \cos tx_{60}-1\right) +x_{30}\sin tx_{60}\right) }{x_{60}}%
\right)$$and$$W_{+}^{\bot }\left( t\right) =\left( tx_{10}^{\prime
}-2t^{2}x_{40}x_{50}^{\prime }\right) e_{1}+\left( tx_{4}^{\prime
}+2t^{2}x_{60}x_{50}^{\prime }\right) e_{4}+tx_{5}^{\prime }e_{5}$$which through the adjoint action$$Z\left( t\right) =\mathrm{Ad}_{h_{+}^{-1}\left( t\right) }^{H}\left(
X_{0},X_{0}^{\prime }\right) =\left( \mathrm{Ad}_{g_{+}^{-1}\left( t\right)
}^{G}X_{0}\,,\tau _{g_{+}^{-1}\left( t\right) }\left( X_{0}^{\prime }-%
\mathrm{ad}_{X_{0}}^{\tau }W_{+}^{\bot }\left( t\right) \right) \right)$$gives rise to the solution of the Hamilton equations $\left( \ref{A6.34 ham
eqs}\right) $. In fact, after some calculations we get the nontrivial solutions $$\left\{
\begin{array}{l}
x_{2}\left( t\right) =x_{20}\cos tx_{60}+x_{30}\sin tx_{60} \\
x_{3}\left( t\right) =x_{30}\cos tx_{60}-x_{20}\sin tx_{60} \\
x_{1}^{\prime }\left( t\right) =\left( x_{10}^{\prime
}-tx_{40}x_{50}^{\prime }\right) -\dfrac{\left( x_{20}^{\prime }x_{2}\left(
t\right) +x_{30}^{\prime }x_{3}\left( t\right) \right) }{x_{60}}+\dfrac{%
\left( x_{30}x_{30}^{\prime }+x_{20}x_{20}^{\prime }\right) }{x_{60}} \\
x_{4}^{\prime }\left( t\right) =x_{40}^{\prime }+tx_{60}x_{50}^{\prime }%
\end{array}%
\right.$$
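A direct sympy check (ours) that these curves do solve the Hamilton equations $\left( \ref{A6.34 ham eqs}\right) $, with $x_{4},x_{6},x_{2}^{\prime },x_{3}^{\prime },x_{5}^{\prime }$ frozen at their initial values, is the following.

```python
import sympy as sp

t = sp.symbols('t')
x60 = sp.symbols('x60', nonzero=True)
x10p, x20, x30, x40, x20p, x30p, x40p, x50p = sp.symbols(
    'x10p x20 x30 x40 x20p x30p x40p x50p')

# the solution curves obtained above (0-subscripts denote initial values)
x2 = x20 * sp.cos(t * x60) + x30 * sp.sin(t * x60)
x3 = x30 * sp.cos(t * x60) - x20 * sp.sin(t * x60)
x1p = (x10p - t * x40 * x50p) - (x20p * x2 + x30p * x3) / x60 \
      + (x30 * x30p + x20 * x20p) / x60
x4p = x40p + t * x60 * x50p

# the equations of motion, with x4, x6, x2', x3', x5' constant in time
assert sp.simplify(sp.diff(x2, t) - x3 * x60) == 0
assert sp.simplify(sp.diff(x3, t) + x2 * x60) == 0
assert sp.simplify(sp.diff(x1p, t) - (x2 * x30p - x3 * x20p - x40 * x50p)) == 0
assert sp.simplify(sp.diff(x4p, t) - x50p * x60) == 0
print("the curves solve the Hamilton equations")
```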
$\mathfrak{sl}_{2}\left( \mathbb{C}\right) $ equipped with an inner product
---------------------------------------------------------------------------
Let us consider the Lie algebra $\mathfrak{sl}_{2}(\mathbb{C})$, with the associated decomposition $$\mathfrak{sl}_{2}(\mathbb{C})^{\mathbb{R}}=\mathfrak{su}_{2}\oplus \mathfrak{%
b} \label{iwasawa}$$where $\mathfrak{b}$ is the subalgebra of upper triangular matrices with real diagonal and zero trace, and $\mathfrak{su}_{2}$ is the real subalgebra of $\mathfrak{sl}_{2}(\mathbb{C})$ of antihermitian matrices. For $\mathfrak{%
su}_{2}$ we take the basis $$X_{1}=\left(
\begin{array}{cc}
0 & i \\
i & 0%
\end{array}%
\right) \quad ,\quad X_{2}=\left(
\begin{array}{cc}
0 & 1 \\
-1 & 0%
\end{array}%
\right) \quad ,\quad X_{3}=\left(
\begin{array}{cc}
i & 0 \\
0 & -i%
\end{array}%
\right)$$and in $\mathfrak{b}$ this one$$E=\left(
\begin{array}{cc}
0 & 1 \\
0 & 0%
\end{array}%
\right) \quad ,\quad iE=\left(
\begin{array}{cc}
0 & i \\
0 & 0%
\end{array}%
\right) \quad ,\quad H=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1%
\end{array}%
\right)$$
The Killing form in $\mathfrak{sl}_{2}(\mathbb{C})$ is$$\kappa (X,Y)=\mathrm{tr}\left( \mathrm{ad}\left( X\right) \mathrm{ad}\left( Y\right) \right) =4%
\mathrm{tr}(XY),$$and the bilinear form on $\mathfrak{sl}_{2}(\mathbb{C})$ $$\mathrm{k}_{0}(X,Y)=-\frac{1}{4}\mathrm{Im}\,\kappa (X,Y)$$is nondegenerate, symmetric and Ad-invariant, turning $\mathfrak{b}$ and $\mathfrak{su}_{2}$ into isotropic subspaces. However, it fails to be an *inner product* and, consequently, it does not give rise to a Riemannian metric on the Lie group $SL\left( 2,\mathbb{C}%
\right) $.
By introducing the involutive linear operator $\mathcal{E}:\mathfrak{sl}_{2}(%
\mathbb{C})\longrightarrow \mathfrak{sl}_{2}(\mathbb{C})$ defined as $$\begin{array}{lllll}
\mathcal{E}X_{1}=-E & , & \mathcal{E}X_{2}=iE & , & \mathcal{E}X_{3}=-H \\
\mathcal{E}E=-X_{1} & , & \mathcal{E}iE=X_{2} & , & \mathcal{E}H=-X_{3}%
\end{array}%$$we define the *non Ad-invariant* *inner product* $\mathrm{g}:%
\mathfrak{sl}_{2}(\mathbb{C})\otimes \mathfrak{sl}_{2}(\mathbb{C}%
)\longrightarrow \mathbb{R\ }$such that for $V,W\in \mathfrak{sl}_{2}(%
\mathbb{C})$ we have$$\mathrm{g}\left( V,W\right) :=\mathrm{k}_{0}(V,\mathcal{E}W)~.$$So, by left or right translations, it induces a non bi-invariant Riemannian metric on $SL\left( 2,\mathbb{C}\right) $, regarded as a real manifold.
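The claims of this paragraph can be verified in the given basis. The sympy sketch below (ours) computes the Gram matrices of $\mathrm{k}_{0}$ and $\mathrm{g}$, checking that $\mathrm{k}_{0}$ is symmetric and nondegenerate with $\mathfrak{su}_{2}$ and $\mathfrak{b}$ isotropic, and that $\mathrm{g}$ is a genuine inner product, diagonal in the basis $\left\{ X_{1},X_{2},X_{3},E,iE,H\right\} $.

```python
import sympy as sp

I = sp.I
X1 = sp.Matrix([[0, I], [I, 0]]); X2 = sp.Matrix([[0, 1], [-1, 0]]); X3 = sp.Matrix([[I, 0], [0, -I]])
E  = sp.Matrix([[0, 1], [0, 0]]); iE = I * E;                        H  = sp.Matrix([[1, 0], [0, -1]])
basis = [X1, X2, X3, E, iE, H]
calE  = [-E, iE, -H, -X1, X2, -X3]           # images of the basis under the operator \mathcal{E}

k0 = lambda A, B: -sp.im(sp.trace(A * B))    # k_0 = -(1/4) Im kappa, with kappa(X,Y) = 4 tr(XY)

G_k0 = sp.Matrix(6, 6, lambda i, j: k0(basis[i], basis[j]))
G_g  = sp.Matrix(6, 6, lambda i, j: k0(basis[i], calE[j]))   # g(V, W) = k_0(V, \mathcal{E} W)

assert G_k0.T == G_k0 and G_k0.det() != 0                            # k_0 symmetric, nondegenerate
assert G_k0[:3, :3] == sp.zeros(3) and G_k0[3:, 3:] == sp.zeros(3)   # su(2) and b isotropic
# g is symmetric with positive diagonal Gram matrix, hence an inner product,
# and the chosen basis is g-orthogonal
assert G_g.T == G_g and G_g == sp.diag(1, 1, 2, 1, 1, 2)
print("k_0 and g behave as claimed")
```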
Then, besides the Iwasawa decomposition $\left( \ref{iwasawa}\right) $, we have the vector subspace direct sum decomposition $\mathfrak{sl}_{2}(\mathbb{C})^{%
\mathbb{R}}=\left( \mathfrak{su}_{2}\right) ^{\bot }\oplus \mathfrak{b}%
^{\bot }$, where $^{\bot }$ stands for the orthogonal complement with respect to the inner product $\mathrm{g}$. Since the basis $\left\{
X_{1},X_{2},X_{3},E,\left( iE\right) ,H\right\} $ is orthogonal with respect to $\mathrm{g}$, we have that $\left( \mathfrak{su}_{2}\right)
^{\bot }=\mathfrak{b}$ and $\mathfrak{b}^{\bot }=\mathfrak{su}_{2}$.
We now build an example of the construction developed in the previous sections by considering $\mathfrak{sl}_{2}(\mathbb{C})$ equipped with the inner product $\mathrm{g}:\mathfrak{sl}_{2}(\mathbb{C})\otimes \mathfrak{sl}_{2}(%
\mathbb{C})\longrightarrow \mathbb{R}$. The $\tau $-action of $SL\left( 2,%
\mathbb{C}\right) $ on $\mathfrak{sl}_{2}(\mathbb{C})$ and of $\mathfrak{sl}%
_{2}(\mathbb{C})$ on itself are then $$\begin{array}{ccc}
\tau _{g}=\mathcal{E}\circ \mathrm{Ad}_{g}\circ \mathcal{E} & , & \mathrm{ad}%
_{X}^{\tau }=\mathcal{E}\circ \mathrm{ad}_{X}\circ \mathcal{E}%
\end{array}%$$
In the remainder of this example we often denote $\mathfrak{g}_{+}=\mathfrak{%
su}_{2}$, $\mathfrak{g}_{-}=\mathfrak{b}$, $G_{+}=SU\left( 2\right) $ and $%
G_{-}=B$. Let us denote the elements $g\in SL(2,\mathbb{C})$, $g_{+}\in
SU\left( 2\right) $ and $g_{-}\in B$ as $$\begin{array}{ccccc}
g=\left(
\begin{array}{cc}
\mu & \nu \\
\rho & \sigma%
\end{array}%
\right) & , & g_{+}=\left(
\begin{array}{cc}
\alpha & \beta \\
-\bar{\beta} & \bar{\alpha}%
\end{array}%
\right) & , & g_{-}=\left(
\begin{array}{cc}
a & z \\
0 & a^{-1}%
\end{array}%
\right)%
\end{array}%
,$$with $\mu ,\sigma ,\nu ,\rho ,\alpha ,\beta ,z\in \mathbb{C}$, $a\in \mathbb{%
R}_{>0}$, and $\mu \sigma -\nu \rho =\left\vert \alpha \right\vert
^{2}+\left\vert \beta \right\vert ^{2}=1$.
The factorization $SL(2,\mathbb{C})=SU\left( 2\right) \times B$ means that $%
g=g_{+}g_{-}$ with $$\begin{array}{ccc}
g_{+}=\frac{1}{\sqrt{\left\vert \mu \right\vert ^{2}+\left\vert \rho
\right\vert ^{2}}}\left(
\begin{array}{cc}
\mu & -\bar{\rho} \\
\rho & \bar{\mu}%
\end{array}%
\right) & , & g_{-}=\frac{1}{\sqrt{\left\vert \mu \right\vert ^{2}+\left\vert
\rho \right\vert ^{2}}}\left(
\begin{array}{cc}
\left\vert \mu \right\vert ^{2}+\left\vert \rho \right\vert ^{2} & \bar{\mu}%
\nu +\bar{\rho}\sigma \\
0 & 1%
\end{array}%
\right)%
\end{array}%$$from which we determine the dressing actions $$g_{+}^{g_{-}}=\frac{1}{\Delta \left( g_{+},g_{-}\right) }\left(
\begin{array}{cc}
a\left( \alpha -\dfrac{1}{a}z\bar{\beta}\right) & \dfrac{1}{a}\beta \\
-\dfrac{1}{a}\bar{\beta} & a\left( \bar{\alpha}-\dfrac{1}{a}\bar{z}\beta
\right)%
\end{array}%
\right)$$$$g_{-}^{g_{+}}=\left(
\begin{array}{cc}
\Delta \left( g_{+},g_{-}\right) & \frac{\left( a^{2}\beta \bar{\alpha}+az%
\bar{\alpha}^{2}-a\bar{z}\beta ^{2}-\left( \left\vert z\right\vert ^{2}+%
\dfrac{1}{a^{2}}\right) \bar{\alpha}\beta \right) }{\Delta \left(
g_{+},g_{-}\right) } \\
0 & \dfrac{1}{\Delta \left( g_{+},g_{-}\right) }%
\end{array}%
\right)$$with $$\Delta \left( g_{+},g_{-}\right) =\sqrt{a^{2}\left\vert \alpha \right\vert
^{2}-a\left( \bar{z}\alpha \beta +z\bar{\beta}\bar{\alpha}\right) +\left(
\left\vert z\right\vert ^{2}+\dfrac{1}{a^{2}}\right) \left\vert \beta
\right\vert ^{2}}~.$$
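A quick numerical spot check (ours) of the factorization and of these dressing formulas, on randomly generated $g_{+}\in SU\left( 2\right) $ and $g_{-}\in B$, assuming again that the dressing actions are characterized by $g_{-}g_{+}=g_{+}^{g_{-}}g_{-}^{g_{+}}$:

```python
import numpy as np
rng = np.random.default_rng(0)

def random_su2():
    v = rng.normal(size=4)
    a, b = complex(v[0], v[1]), complex(v[2], v[3])
    n = np.sqrt(abs(a)**2 + abs(b)**2)
    a, b = a / n, b / n
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

def random_b():
    a = float(np.exp(rng.normal()))
    z = complex(rng.normal(), rng.normal())
    return np.array([[a, z], [0.0, 1.0 / a]])

def iwasawa(g):
    """g = g_+ g_- with g_+ in SU(2), g_- in B, using the formulas displayed above."""
    mu, nu, rho, sigma = g[0, 0], g[0, 1], g[1, 0], g[1, 1]
    N = np.sqrt(abs(mu)**2 + abs(rho)**2)
    gp = np.array([[mu, -np.conj(rho)], [rho, np.conj(mu)]]) / N
    gm = np.array([[N**2, np.conj(mu) * nu + np.conj(rho) * sigma], [0.0, 1.0]]) / N
    return gp, gm

gp, gm = random_su2(), random_b()
fp, fm = iwasawa(gp @ gm)
assert np.allclose(fp, gp) and np.allclose(fm, gm)    # the factorization formulas

alpha, beta = gp[0, 0], gp[0, 1]
a, z = gm[0, 0].real, gm[0, 1]
Delta = np.sqrt(a**2 * abs(alpha)**2
                - 2 * a * np.real(np.conj(z) * alpha * beta)
                + (abs(z)**2 + 1.0 / a**2) * abs(beta)**2)
gp_dressed = np.array([[a * alpha - z * np.conj(beta), beta / a],
                       [-np.conj(beta) / a, a * np.conj(alpha) - np.conj(z) * beta]]) / Delta
off = (a**2 * beta * np.conj(alpha) + a * z * np.conj(alpha)**2
       - a * np.conj(z) * beta**2 - (abs(z)**2 + 1.0 / a**2) * np.conj(alpha) * beta) / Delta
gm_dressed = np.array([[Delta, off], [0.0, 1.0 / Delta]])

dp, dm = iwasawa(gm @ gp)     # dressing via the exchange relation
assert np.allclose(dp, gp_dressed) and np.allclose(dm, gm_dressed)
print("factorization and dressing formulas verified on a random sample")
```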
The dressing vector field associated with $%
X_{+}=x_{1}X_{1}+x_{2}X_{2}+x_{3}X_{3}\in \mathfrak{g}_{+}$ is
$$g_{-}^{X_{+}}=\dfrac{\sqrt{\mathrm{det}X_{+}}}{a}\left(
\begin{array}{cc}
-a\left( bx_{2}+cx_{1}\right) & \Omega \left( x_{1},x_{2},x_{3},a,b,c\right)
\\
0 & \dfrac{1}{a}\left( bx_{2}+cx_{1}\right)%
\end{array}%
\right)$$
where $$\begin{aligned}
\Omega \left( x_{1},x_{2},x_{3},a,b,c\right)
&=&b^{2}x_{2}+ic^{2}x_{1}+2x_{3}a\left( c-ib\right) \\
&&+\left( bc-b^{2}-c^{2}-\dfrac{1}{a^{2}}\right) \left( ix_{1}+x_{2}\right)\end{aligned}$$and those associated with the generators $\left\{ E,\left( iE\right)
,H\right\} \ $are
$$\begin{aligned}
g_{+}^{E} &=&\left(
\begin{array}{cc}
\frac{1}{2}\left( \alpha \beta +\bar{\beta}\bar{\alpha}\right) \alpha -\bar{%
\beta} & \frac{1}{2}\left( \alpha \beta +\bar{\beta}\bar{\alpha}\right) \beta
\\
-\frac{1}{2}\left( \alpha \beta +\bar{\beta}\bar{\alpha}\right) \bar{\beta}
& \frac{1}{2}\left( \alpha \beta +\bar{\beta}\bar{\alpha}\right) \bar{\alpha}%
-\beta%
\end{array}%
\right) \\
&& \\
g_{+}^{\left( iE\right) } &=&\left(
\begin{array}{cc}
-\frac{1}{2}\left( i\alpha \beta -i\bar{\beta}\bar{\alpha}\right) \alpha -i%
\bar{\beta} & -\frac{1}{2}\left( i\alpha \beta -i\bar{\beta}\bar{\alpha}%
\right) \beta \\
\frac{1}{2}\left( i\alpha \beta -i\bar{\beta}\bar{\alpha}\right) \bar{\beta}
& -\frac{1}{2}\left( i\alpha \beta -i\bar{\beta}\bar{\alpha}\right) \bar{%
\alpha}+i\beta%
\end{array}%
\right) \\
&& \\
g_{+}^{H} &=&2\left(
\begin{array}{cc}
\left\vert \beta \right\vert ^{2}\alpha & -\left\vert \alpha \right\vert
^{2}\beta \\
\left\vert \alpha \right\vert ^{2}\bar{\beta} & \left\vert \beta \right\vert
^{2}\bar{\alpha}%
\end{array}%
\right)\end{aligned}$$
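These infinitesimal formulas can be checked by finite differences: under the convention used above, $g_{+}^{X}$ should be the $t$-derivative at $t=0$ of the $SU\left( 2\right) $-factor of $\mathrm{e}^{tX}g_{+}$ for $X\in \mathfrak{b}$. The following sketch (ours) performs this spot check on random data.

```python
import numpy as np
from scipy.linalg import expm
rng = np.random.default_rng(1)

v = rng.normal(size=4)
alpha, beta = complex(v[0], v[1]), complex(v[2], v[3])
n = np.sqrt(abs(alpha)**2 + abs(beta)**2); alpha, beta = alpha / n, beta / n
gp = np.array([[alpha, beta], [-np.conj(beta), np.conj(alpha)]])

def su2_factor(g):
    mu, rho = g[0, 0], g[1, 0]
    N = np.sqrt(abs(mu)**2 + abs(rho)**2)
    return np.array([[mu, -np.conj(rho)], [rho, np.conj(mu)]]) / N

def dressing_derivative(X, h=1e-6):
    # central difference of the SU(2)-factor of exp(tX) g_+ at t = 0
    return (su2_factor(expm(h * X) @ gp) - su2_factor(expm(-h * X) @ gp)) / (2 * h)

E  = np.array([[0.0, 1.0], [0.0, 0.0]]); iE = 1j * E
H  = np.array([[1.0, 0.0], [0.0, -1.0]])
re, im = (lambda w: w.real), (lambda w: w.imag)

claim_E = np.array([[re(alpha*beta)*alpha - np.conj(beta),  re(alpha*beta)*beta],
                    [-re(alpha*beta)*np.conj(beta),         re(alpha*beta)*np.conj(alpha) - beta]])
claim_iE = np.array([[im(alpha*beta)*alpha - 1j*np.conj(beta), im(alpha*beta)*beta],
                     [-im(alpha*beta)*np.conj(beta),           im(alpha*beta)*np.conj(alpha) + 1j*beta]])
claim_H = 2 * np.array([[abs(beta)**2*alpha,           -abs(alpha)**2*beta],
                        [abs(alpha)**2*np.conj(beta),   abs(beta)**2*np.conj(alpha)]])

for X, claim in [(E, claim_E), (iE, claim_iE), (H, claim_H)]:
    assert np.allclose(dressing_derivative(X), claim, atol=1e-5)
print("infinitesimal dressing formulas verified")
```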
### Semidirect product with the $\protect\tau $-action
Now, we shall work with $G=SL(2,\mathbb{C})$ and its Lie algebra $\mathfrak{g%
}=\mathfrak{sl}_{2}(\mathbb{C})$ embodied in the semidirect product Lie group $H=SL(2,\mathbb{C})\ltimes \mathfrak{sl}_{2}(\mathbb{C})$, where the vector space $\mathfrak{sl}_{2}(\mathbb{C})$ is regarded as the representation space for the $\tau $-*action* in the right action structure of the semidirect product $\left( \ref{semidirect prod}\right) $. The associated Lie algebra is the semidirect sum Lie algebra $\mathfrak{h}=\mathfrak{sl}%
_{2}(\mathbb{C})\ltimes \mathfrak{sl}_{2}(\mathbb{C})$. It inherits a decomposition from the one in $\mathfrak{sl}_{2}(\mathbb{C})$, namely, $%
\mathfrak{h}=\mathfrak{h}_{+}\oplus \mathfrak{h}_{-}$ with $$\begin{aligned}
\mathfrak{h}_{+} &=&\mathfrak{g}_{+}\oplus \mathfrak{g}_{+}^{\bot }=%
\mathfrak{su}_{2}\oplus \mathfrak{b} \\
\mathfrak{h}_{-} &=&\mathfrak{g}_{-}\oplus \mathfrak{g}_{-}^{\bot }=%
\mathfrak{b}\oplus \mathfrak{su}_{2}\end{aligned}$$The bilinear form$$\left( \left( X,X^{\prime }\right) ,\left( Y,Y^{\prime }\right) \right) _{%
\mathfrak{h}}=\mathrm{g}\left( X,Y^{\prime }\right) +\mathrm{g}\left(
Y,X^{\prime }\right)$$turns $\mathfrak{h}_{+},\mathfrak{h}_{-}$ into isotropic subspaces of $%
\mathfrak{h}$. The Lie group $H=SL(2,\mathbb{C})\ltimes \mathfrak{sl}_{2}(%
\mathbb{C})$ also factorizes as $H=H_{+}H_{-}$ with $$\begin{array}{ccc}
H_{+}=SU\left( 2\right) \ltimes \mathfrak{b} & , & H_{-}=B\ltimes \mathfrak{%
su}_{2}%
\end{array}%$$Then, a typical element $h\in H$ can be written as $$h=\left( g,X\right) =\left( g_{+},\tau _{g_{-}}\Pi _{\mathfrak{g}_{+}^{\bot
}}X\right) \cdot \left( g_{-},\Pi _{\mathfrak{g}_{-}^{\bot }}X\right)$$
Let us determine the Poisson-Lie bivector on $H_{+}$ from the relation $$\begin{aligned}
&&\left\langle \gamma \left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right)
\left( g_{+},Y_{+}^{\bot }\right) ^{-1}\otimes \gamma \left( X_{-}^{\prime
\prime },Y_{-}^{\prime \prime \bot }\right) \left( g_{+},Y_{+}^{\bot
}\right) ^{-1},\pi _{+}\left( g_{+},Z_{+}^{\bot }\right) \right\rangle \\
&:&=\left( \Pi _{-}\mathrm{Ad}_{\left( g_{+},Z_{+}^{\bot }\right)
^{-1}}^{H}\left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right) ,\Pi _{+}%
\mathrm{Ad}_{\left( g_{+},Z_{+}^{\bot }\right) ^{-1}}^{H}\left(
X_{-}^{\prime \prime },Y_{-}^{\prime \prime \bot }\right) \right) _{%
\mathfrak{h}}\end{aligned}$$By writing $$\begin{aligned}
X_{-} &=&x_{E}E+x_{\left( iE\right) }\left( iE\right) +x_{H}H \\
Y_{-}^{\bot } &=&y_{1}X_{1}+y_{2}X_{2}+y_{3}X_{3} \\
Z_{+}^{\bot } &=&z_{E}E+z_{\left( iE\right) }\left( iE\right) +z_{H}H\end{aligned}$$and introducing $\pi _{+}^{R}:H_{+}\longrightarrow \mathfrak{h}_{+}\wedge
\mathfrak{h}_{+}$ such that$$\begin{aligned}
&&\left\langle \gamma \left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right)
\left( g_{+},Y_{+}^{\bot }\right) ^{-1}\otimes \gamma \left( X_{-}^{\prime
\prime },Y_{-}^{\prime \prime \bot }\right) \left( g_{+},Y_{+}^{\bot
}\right) ^{-1},\pi _{+}\left( g_{+},Z_{+}^{\bot }\right) \right\rangle \\
&=&\left( \left( X_{-}^{\prime },Y_{-}^{\prime \bot }\right) \otimes \left(
X_{-}^{\prime \prime },Y_{-}^{\prime \prime \bot }\right) ,\pi
_{+}^{R}\left( g_{+},Z_{+}^{\bot }\right) \right) _{\mathfrak{h}}\end{aligned}$$we obtain
$$\begin{aligned}
&&\pi _{+}^{R}\left( g_{+}\left( \alpha ,\beta \right) ,z_{E}E+z_{\left(
iE\right) }\left( iE\right) +z_{H}H\right) \\
&=&\left\vert \beta \right\vert ^{2}\left( X_{1},0\right) \wedge \left(
0,\left( iE\right) \right) \\
&&-\left( \left( \left\vert \alpha \right\vert ^{2}-\left\vert \beta
\right\vert ^{2}\right) \mathrm{Im}\left( \beta \alpha \right) +2i\mathrm{Re}%
\left( \beta ^{3}\bar{\alpha}\right) \right) \left( X_{1},0\right) \wedge
\left( 0,H\right) \\
&&-\left( 2\mathrm{Re}\left( \alpha ^{2}\beta ^{2}\right) +\left( \left\vert
\alpha \right\vert ^{2}-\left\vert \beta \right\vert ^{2}\right) \left\vert
\beta \right\vert ^{2}\right) \left( X_{2},0\right) \wedge \left( 0,E\right)
\\
&&-2\mathrm{Im}\left( \alpha ^{2}\beta ^{2}\right) \left( X_{2},0\right)
\wedge \left( 0,\left( iE\right) \right) \\
&&+\left( \left( \left\vert \alpha \right\vert ^{2}-\left\vert \beta
\right\vert ^{2}\right) \mathrm{Re}\left( \alpha \beta \right) +2\mathrm{Re}%
\left( \beta ^{3}\bar{\alpha}\right) \right) \left( X_{2},0\right) \wedge
\left( 0,H\right) \\
&&-\mathrm{Re}\left( \alpha \beta \right) \left( X_{3},0\right) \wedge \left(
0,\left( iE\right) \right) -2\mathrm{Im}\left( \alpha ^{2}\bar{\beta}%
^{2}\right) \left( X_{3},0\right) \wedge \left( 0,H\right) \\
&&-\left( z_{E}\left( \left( 1-4\left\vert \beta \right\vert ^{2}\right)
\mathrm{Re}\left( \alpha ^{2}\right) +\left( 1-4\left\vert \alpha \right\vert
^{2}\right) \mathrm{Re}\left( \beta ^{2}\right) \right) \right) \left(
0,E\right) \wedge \left( 0,H\right) \\
&&+z_{\left( iE\right) }\mathrm{Im}\left( \alpha ^{2}-\beta ^{2}\right) \left(
0,E\right) \wedge \left( 0,H\right) \\
&&+\left( z_{E}\left( \left( 1-4\left\vert \beta \right\vert ^{2}\right)
\mathrm{Im}\left( \alpha ^{2}\right) +\left( 1-4\left\vert \alpha \right\vert
^{2}\right) \mathrm{Im}\left( \beta ^{2}\right) \right) \right) \left(
0,H\right) \wedge \left( 0,\left( iE\right) \right) \\
&&+z_{\left( iE\right) }\mathrm{Re}\left( \alpha ^{2}-\beta ^{2}\right) \left(
0,H\right) \wedge \left( 0,\left( iE\right) \right) \\
&&+2\left( z_{E}\mathrm{Im}\left( \alpha \bar{\beta}\right) +z_{\left(
iE\right) }\mathrm{Re}\left( \alpha \bar{\beta}\right) \right) \left( 0,\left(
iE\right) \right) \wedge \left( 0,E\right)\end{aligned}$$
On the other side, the Poisson-Lie bivector on $H_{-}$ is defined from the relation$$\begin{aligned}
&&\left\langle \left( g_{-},Z_{-}^{\bot }\right) ^{-1}\gamma \left(
X_{+}^{\prime },Y_{+}^{\prime \bot }\right) \otimes \left( g_{-},Z_{-}^{\bot
}\right) ^{-1}\gamma \left( X_{+}^{\prime \prime },Y_{+}^{\prime \prime \bot
}\right) ,\pi _{-}\left( g_{-},Z_{-}^{\bot }\right) \right\rangle \\
&:&=\left( \Pi _{+}\mathrm{Ad}_{\left( g_{-},Z_{-}^{\bot }\right)
}^{H}\left( X_{+}^{\prime },Y_{+}^{\prime \bot }\right) ,\Pi _{-}\mathrm{Ad}%
_{\left( g_{-},Z_{-}^{\bot }\right) }^{H}\left( X_{+}^{\prime \prime
},Y_{+}^{\prime \prime \bot }\right) \right) _{\mathfrak{h}}\end{aligned}$$so that, writing$$\begin{aligned}
X_{+} &=&x_{1}X_{1}+x_{2}X_{2}+x_{3}X_{3} \\
Y_{+}^{\bot } &=&b_{1}B_{1}+b_{2}B_{2}+b_{3}B_{3}\end{aligned}$$and introducing $\pi _{H}^{-}:H_{-}\longrightarrow \mathfrak{h}_{-}\wedge
\mathfrak{h}_{-}$ such that$$\begin{aligned}
&&\left\langle \left( g_{-},Z_{-}^{\bot }\right) ^{-1}\gamma \left(
X_{+}^{\prime },Y_{+}^{\prime \bot }\right) \otimes \left( g_{-},Z_{-}^{\bot
}\right) ^{-1}\gamma \left( X_{+}^{\prime \prime },Y_{+}^{\prime \prime \bot
}\right) ,\pi _{-}\left( g_{-},Z_{-}^{\bot }\right) \right\rangle \\
&=&\left( \left( X_{+}^{\prime },Y_{+}^{\prime \bot }\right) \otimes \left(
X_{+}^{\prime \prime },Y_{+}^{\prime \prime \bot }\right) ,\pi
_{-}^{L}\left( g_{-},Z_{-}^{\bot }\right) \right) _{\mathfrak{h}}\end{aligned}$$we get $$\begin{array}{l}
\pi _{-}^{L}\left( g_{-},Z_{-}^{\bot }\right) \\
=\dfrac{1}{a^{2}}\left( c^{2}+b^{2}+\dfrac{1}{a^{2}}-a^{2}\right) \left(
B_{1},0\right) \wedge \left( 0,X_{2}\right) \\
-\dfrac{1}{a^{2}}\left( b^{2}+c^{2}+\dfrac{1}{a^{2}}-a^{2}\right) \left(
B_{2},0\right) \wedge \left( 0,X_{1}\right) \\
~+\dfrac{c}{a}\left( B_{1},0\right) \wedge \left( 0,X_{3}\right) -\dfrac{c}{a%
}\left( B_{3},0\right) \wedge \left( 0,X_{1}\right) \\
-\dfrac{b}{a}\left( B_{2},0\right) \wedge \left( 0,X_{3}\right) -\dfrac{b}{a}%
\left( B_{3},0\right) \wedge \left( 0,X_{2}\right) \\
~+2\dfrac{1}{a^{2}}\left( bcz_{1}-acz_{2}+2\left( c^{2}+b^{2}+\dfrac{1}{a^{2}%
}\right) z_{3}\right) \left( 0,X_{1}\right) \wedge \left( 0,X_{2}\right) \\
~+\dfrac{1}{a}\left( az_{1}-2bz_{3}\right) \left( 0,X_{2}\right) \wedge
\left( 0,X_{3}\right) -\dfrac{1}{a}\left( az_{2}+2cz_{3}\right) \left(
0,X_{1}\right) \wedge \left( 0,X_{3}\right)%
\end{array}%$$
### AKS integrable system on $\mathfrak{h}_{-}$
We study a Hamiltonian system on $\mathfrak{h}_{-}$, described by the $%
\mathrm{Ad}^{H}$-invariant function $\mathsf{H}$, and for which the hamiltonian vector field is$$V_{\mathsf{H}}\left( X_{-}+K_{+}\right) =\Pi _{-}\mathrm{ad}_{\Pi _{+}%
\mathfrak{L}_{\mathsf{H}}\left( X_{-}+K_{+}\right) }^{\mathfrak{h}}\left(
X_{-}+K_{+}\right) =-\mathrm{ad}_{\Pi _{-}\mathfrak{L}_{\mathsf{H}}\left(
X_{-}+K_{+}\right) }^{\mathfrak{h}}\left( X_{-}+K_{+}\right)$$It is worth recalling that $\psi \left( K_{+}\right) $ must be a *character* of $\mathfrak{h}_{-}$, which in this case means that $$K_{+}=\left( k_{3}X_{3},k_{H}H\right)$$for arbitrary $k_{3},k_{H}\in \mathbb{R}$. Thus the Hamilton equations of motion on $\imath _{K_{+}}\left( \mathfrak{h}_{-}\right) \subset \mathfrak{h}
$ are then$$\dot{Z}\left( t\right) =V_{\mathsf{H}}\left( Z\left( t\right) \right)$$for the initial condition $Z\left( 0\right) =X_{-\circ }+K_{+}$.
In particular, we consider the $\mathrm{Ad}^{H}$-invariant Hamilton function$$\mathsf{H}\left( X,X^{\prime }\right) =\mathrm{g}\left( X,X^{\prime }\right)$$where $\psi \left( \Pi _{+}\left( X,X^{\prime }\right) \right) \in \mathrm{%
char~}\mathfrak{h}_{-}$, so that $\left( X,X^{\prime }\right) =\left(
X_{-}+k_{3}X_{3},X_{-}^{\bot }+k_{H}H\right) $. Then, the Hamilton function reduces to$$\mathsf{H}\left( x_{E}E+x_{\left( iE\right) }\left( iE\right)
+x_{H}H+k_{3}X_{3},x_{1}X_{1}+x_{2}X_{2}+x_{3}X_{3}+k_{H}H\right)
=x_{H}k_{H}+x_{3}k_{3}$$and$$\mathfrak{L}_{\mathsf{H}}\left( X,X^{\prime }\right) =\left(
x_{H}H+k_{3}X_{3},k_{H}H+x_{3}X_{3}\right)$$The hamiltonian vector field is $$\begin{aligned}
&&V_{\mathsf{H}}\left( x_{E}E+x_{\left( iE\right) }\left( iE\right)
+x_{H}H+k_{3}X_{3},x_{1}X_{1}+x_{2}X_{2}+x_{3}X_{3}+k_{H}H\right) \\
&=&\left( -2x_{H}\left( x_{E}E+x_{\left( iE\right) }\left( iE\right) \right)
,-2\left( x_{E}x_{3}+x_{H}x_{1}\right) X_{1}+2\left( x_{\left( iE\right)
}x_{3}-x_{H}x_{2}\right) X_{2}\right)\end{aligned}$$Because $x_{H}$ and $x_{3}$ are time independent, we write them as $$\begin{array}{ccc}
x_{H}=\dfrac{\alpha }{2} & , & x_{3}=\dfrac{\beta }{2}%
\end{array}%$$and the nontrivial Hamilton equations are$$\left\{
\begin{array}{l}
\dot{x}_{E}=-\alpha x_{E} \\
\dot{x}_{\left( iE\right) }=-\alpha x_{\left( iE\right) } \\
\dot{x}_{1}=-\beta x_{E}-\alpha x_{1} \\
\dot{x}_{2}=\beta x_{\left( iE\right) }-\alpha x_{2}%
\end{array}%
\right. \label{ham eq sl2C}$$
The solution $Z\left( t\right) =X_{-}\left( t\right) +K_{+}$ to this set of Hamilton equations is$$Z\left( t\right) =-\mathrm{Ad}_{h_{-}\left( t\right) }^{H}\left( X_{-\circ
}+K_{+}\right)$$with $$h_{-}\left( t\right) =\left( g_{-}\left( t\right) ,tx_{30}X_{3}\right)$$being such that$$\left( t\mathfrak{L}_{\mathsf{H}}(X_{-\circ }+K_{+})\right) =h_{+}\left(
t\right) h_{-}\left( t\right)$$
So, in order to solve our problem, we deal with the exponential curve$$\mathrm{Exp}^{\cdot }t\left( x_{H0}H+k_{30}X_{3},k_{H0}H+x_{30}X_{3}\right)
=\left( e^{t\left( x_{H0}H+k_{30}X_{3}\right) },tk_{H0}H+tx_{30}X_{3}\right)$$which factorizes as $$\left( e^{t\left( x_{H0}H+k_{30}X_{3}\right) },tk_{H0}H+tx_{30}X_{3}\right)
=\left( g_{+}\left( t\right) ,tk_{H0}H\right) \cdot \left( g_{-}\left(
t\right) ,tx_{30}X_{3}\right)$$
The evolution of the system is then driven by the orbit of the curve $%
t\longmapsto h_{-}\left( t\right) =\left( g_{-}\left( t\right)
,tx_{30}X_{3}\right) $ where $$g_{-}\left( t\right) =\left(
\begin{array}{cc}
e^{tx_{H0}} & 0 \\
0 & e^{-tx_{H0}}%
\end{array}%
\right)$$Writing $$\begin{aligned}
Z\left( t\right) &=&\left( x_{E}\left( t\right) E+x_{\left( iE\right)
}\left( t\right) \left( iE\right) +x_{H}\left( t\right) H+k_{3}X_{3}\right. ,
\\
&&\left. x_{1}\left( t\right) X_{1}+x_{2}\left( t\right) X_{2}+x_{3}\left(
t\right) X_{3}+k_{H}H\right)\end{aligned}$$and $$Z\left( t_{0}\right) =\left( x_{E0}E+x_{\left( iE\right) 0}\left( iE\right)
+x_{H0}H+k_{3}X_{3},x_{10}X_{1}+x_{20}X_{2}+x_{30}X_{3}+k_{H}H\right)$$then, after some computation and introducing $$\begin{array}{ccc}
x_{H0}=\dfrac{\alpha }{2} & , & x_{30}=\dfrac{\beta }{2}%
\end{array}%$$we obtain $$\begin{aligned}
Z\left( t\right) &=&-\left( x_{E0}e^{\alpha t}E+x_{\left( iE\right)
0}e^{\alpha t}\left( iE\right) +\dfrac{\alpha }{2}H+k_{3}X_{3}\right. , \\
&&\left. \left( x_{10}+\beta x_{E0}t\right) e^{\alpha t}X_{1}+\left(
x_{20}-\beta x_{\left( iE\right) 0}t\right) e^{\alpha t}X_{2}+\dfrac{\beta }{%
2}X_{3}+k_{H}H\right)\end{aligned}$$which means that$$\left\{
\begin{array}{l}
x_{E}\left( t\right) =-x_{E0}e^{\alpha t} \\
x_{\left( iE\right) }\left( t\right) =-x_{\left( iE\right) 0}e^{\alpha t} \\
x_{1}\left( t\right) =-x_{10}e^{\alpha t}-\beta x_{E0}te^{\alpha t} \\
x_{2}\left( t\right) =-x_{20}e^{\alpha t}+\beta x_{\left( iE\right)
0}te^{\alpha t}%
\end{array}%
\right.$$that solves the equations $\left( \ref{ham eq sl2C}\right) $.
Conclusions
===========
In this work we developed a method to obtain AKS integrable systems overcoming the lack of an Ad-invariant bilinear form on the associated Lie algebra. In doing so, we promoted the original double Lie algebra to the framework of the semidirect product with the $\tau $-representation, obtaining a double semidirect product Lie algebra which naturally admits an Ad-invariant bilinear form. Actually, the structure is richer than this, since it amounts to a *Manin triple*. So, the method not only brings the problem to the very realm of AKS theory, but also to that of Lie bialgebras and Poisson-Lie groups.
We developed these issues at length, showing how the AKS theory works in the proposed framework, producing Lax pair equations, and that by freezing the first coordinate in the semidirect product the $\tau $-orbit formulation is retrieved. We built the crossed dressing actions of the factors in the semidirect product Lie group, and showed that the dynamical systems move on the associated orbits. The Lie bialgebra and Poisson-Lie structures were obtained; however, we did not pursue the coboundary ones, leaving them as a pending problem.
In the examples we explored three different situations lacking an Ad-invariant bilinear form: a nilpotent and a solvable Lie algebra, and $%
\mathfrak{sl}_{2}\left( \mathbb{C}\right) $, where the usual Ad-invariant bilinear form is set aside in favour of an inner product which gives rise to a Riemannian metric on $SL\left( 2,\mathbb{C}\right) $.
In summary, it has been shown that an AKS scheme and Lie bialgebra and Poisson-Lie structures can be associated with any Lie algebra without an Ad-invariant bilinear form by promoting it to the semidirect product with the $\tau $-representation.
Acknowledgments
===============
The authors thank CONICET (Argentina) for financial support.
[99]{} M. Adler, P. van Moerbeke, *Completely integrable systems, Euclidean Lie algebras and curves*, Adv. Math. 38 (1980), 267-317.
V. I. Arnold, *Mathematical Methods of Classical Mechanics*, Springer-Verlag, New York (1989).
S. Capriotti, H. Montani, *Integrable systems on semidirect product Lie groups*, J. Phys. A, in press. arXiv: math-phys/1307.0122.
R. Ghanam, F. Hindeleh, and G. Thompson, *Bi-invariant and noninvariant metrics on Lie groups*, J. Math. Phys. 48 (2007), 102903-17.
V. Guillemin, S. Sternberg, *Symplectic techniques in physics*, Cambridge, Cambridge Univ. Press, 1984.
B. Kostant, *The solution to a generalized Toda lattice and representation theory*, Adv. Math. 34 (1979), 195-338.
J.-H. Lu, A. Weinstein, *Poisson Lie groups, dressing transformations and Bruhat decompositions*, J. Diff. Geom. 31 (1990), 501-526.
J. Marsden, A. Weinstein, T. Ratiu, *Semidirect products and reduction in mechanics*, Trans. AMS 281 (1984), 147-177; *Reduction and hamiltonian structures on duals of semidirect product Lie algebras*, Contemporary Math. 28 (1984), 55-100.
J. Milnor, *Curvature of left invariant metrics on Lie groups*, Advances in Math. 21 (1976), 293-329.
G. Ovando, *Hamiltonian systems related to invariant metrics*, Journal of Physics: Conference Series 175 (2009) 012012-11.
G. Ovando, *Invariant metrics and Hamiltonian systems*, arXiv:math/0301332v2 \[math.DG\] 8 Mar 2003.
M.A. Semenov-Tian-Shansky, *Dressing transformations and Poisson group actions*, Publ. RIMS, Kyoto Univ. 21 (1985), 1237-1260.
W. Symes, *Systems of Toda type, inverse spectral problem and representation theory*, Inv. Math 159 (1980), 13-51.
V.V. Trofimov, *Extensions of Lie algebras and Hamiltonian systems*, Math. USSR Izvestiya 23 (1984), 561-578.
[^1]: e-mail: *hmontani@uaco.unpa.edu.ar*
---
abstract: 'We derive sharp estimates on modulus of continuity for solutions of the heat equation on a compact Riemannian manifold with a Ricci curvature bound, in terms of initial oscillation and elapsed time. As an application, we give an easy proof of the optimal lower bound on the first eigenvalue of the Laplacian on such a manifold as a function of diameter.'
address:
- 'Mathematical Sciences Institute, Australia National University; Mathematical Sciences Center, Tsinghua University; and Morningside Center for Mathematics, Chinese Academy of Sciences.'
- 'Mathematical Sciences Institute, Australia National University'
-
author:
- Ben Andrews
- Julie Clutterbuck
title: Sharp modulus of continuity for parabolic equations on manifolds and lower bounds for the first eigenvalue
---
[^1]
Introductory comments
=====================
In our previous papers [@AC1; @AC2] we proved sharp bounds on the modulus of continuity of solutions of various parabolic boundary value problems on domains in Euclidean space. In this paper, our aim is to extend these estimates to parabolic equations on manifolds. Precisely, let $(M,g)$ be a compact Riemannian manifold with induced distance function $d$, diameter $\sup\{d(x,y):\ x,y\in M\}=D$ and lower Ricci curvature bound $\text{\rm Ric}(v,v)\geq (n-1)\kappa g(v,v)$. Let $a:\ T^*M\to\text{\rm Sym}_2\left(T^*M\right)$ be a parallel equivariant map (so that $a(S^*\omega)(S^*\mu,S^*\nu)=a(\omega)(\mu,\nu)$ for any $\omega$, $\mu$, $\nu$ in $T_x^*M$ and $S\in O(T_xM)$, while $\nabla\left(a(\omega)(\mu,\nu)\right)=0$ whenever $\nabla\omega=\nabla\mu=\nabla\nu=0$). Then we consider solutions to the parabolic equation $$\begin{aligned}
\label{eq:flow}
\dfrac{\partial u}{\partial t} &= a^{ij}(Du)\nabla_i\nabla_ju. \end{aligned}$$ Our assumptions imply that the coefficients $a^{ij}$ have the form $$\label{eq:formofa}
a(Du)(\xi,\xi) =\alpha(|Du|)\frac{\left(Du\cdot\xi\right)^2}{|Du|^2} + \beta(|Du|)\left(|\xi|^2-\frac{\left(Du\cdot\xi\right)^2}{|Du|^2}\right)$$ for some smooth positive functions $\alpha$ and $\beta$. Of particular interest are the cases of the heat equation (with $\alpha=\beta=1$) and the $p$-laplacian heat flows (with $\alpha=(p-1)|Du|^{p-2}$ and $\beta = |Du|^{p-2}$). Here we are principally concerned with the case of manifolds without boundary, but can also allow $M$ to have a nontrivial convex boundary (in which case we impose Neumann boundary conditions $D_\nu u=0$). Our main aim is to provide the following estimates on the modulus of continuity of solutions in terms of the initial oscillation, elapsed time, $\kappa$ and $D$:
\[thm:moc\] Let $(M,g)$ be a compact Riemannian manifold (possibly with smooth, uniformly locally convex boundary) with diameter $D$ and Ricci curvature bound $\operatorname{Ric}\ge (n-1)\kappa g$ for some constant $\kappa\in{\mathbb{R}}$. Let $u: M\times [0,T)\rightarrow {{\mathbb R}}$ be a smooth solution to equation \[eq:flow\], with Neumann boundary conditions if $\partial M\neq\emptyset$. Suppose that
- $u(\cdot,0)$ has a smooth modulus of continuity $\varphi_0:[0,D/2]\rightarrow{{\mathbb R}}$ with $\varphi_0(0)=0$ and $\varphi_0'\geq 0$;
- $\varphi:[0,D/2]\times {\mathbb{R}}_+\rightarrow {\mathbb{R}}$ satisfies
1. $\varphi(z,0)=\varphi_0(z)$ for each $z\in[0,D/2]$;
2. \[1deqn\] $\frac{\partial \varphi}{\partial t}\ge \alpha(\varphi')\varphi'' - (n-1) {\mathbf{T}_\kappa}\beta(\varphi')\varphi'$;
3. $\varphi'\geq 0$ on $[0,D/2]\times {\mathbb{R}}_+$.
Then $\varphi(\cdot,t)$ is a modulus of continuity for $u(\cdot,t)$ for each $t\in[0,T)$: $$|u(x,t)-u(y,t)|\le 2\varphi\left( \frac{d(x,y)}2,t\right).$$
Here we use the notation $$\label{defn of ck}
{\mathbf{C_\kappa}}(\tau )=\begin{cases} \cos\sqrt{\kappa}\tau , & \kappa>0 \\
1, &\kappa =0 \\
\cosh \sqrt{-\kappa} \tau , & \kappa <0,
\end{cases} \quad \text{ and } \quad
{\mathbf{S_\kappa}}(\tau )=\begin{cases} \frac1{\sqrt{\kappa}}\sin\sqrt{\kappa}\tau , & \kappa>0 \\
\tau , &\kappa =0 \\
\frac1{\sqrt{-\kappa}}\sinh \sqrt{-\kappa} \tau , & \kappa <0,
\end{cases}$$ and $${\mathbf{T}_\kappa}(s) := \kappa\frac{{\mathbf{S_\kappa}}(s)}{{\mathbf{C_\kappa}}(s)}=\begin{cases}
\sqrt{\kappa}\tan\left(\sqrt{\kappa}s\right),&\kappa>0\\
0,& \kappa=0\\
-\sqrt{-\kappa}\tanh\left(\sqrt{-\kappa}s\right),&\kappa<0.
\end{cases}$$
These estimates are sharp, holding exactly for certain symmetric solutions on particular warped product spaces. The modulus of continuity estimates also imply sharp gradient bounds which hold in the same situation. The central ingredient in our argument is a comparison result for the second derivatives of the distance function (Theorem \[thm:dist-comp\]) which is a close relative of the well-known Laplacian comparison theorem. We remark that the assumption of smoothness can be weakened: For example in the case of the $p$-laplacian heat flow we do not expect solutions to be smooth near spatial critical points, but nevertheless solutions are smooth at other points and this is sufficient for our argument.
As an immediate application of the modulus of continuity estimates, we provide a new proof of the optimal lower bound on the smallest positive eigenvalue of the Laplacian in terms of $D$ and $\kappa$: Precisely, if we define $$\lambda_1(M,g) = \inf\left\{\int_M |Du|_g^2\,d\text{\rm Vol}(g):\ \int_M u^2d\text{\rm Vol}(g)=1,\ \int_Mu\,d\text{\rm Vol}(g)=0\right\},$$ and $$\lambda_1(D,\kappa,n) =
\inf\left\{\lambda_1(M,g):\ \text{\rm dim}(M)=n,\ \text{\rm diam}(M)\leq D,\ \text{\rm Ric}\geq (n-1)\kappa g\right\},$$ then we characterise $\lambda_1(D,\kappa)$ precisely as the first eigenvalue of a certain one-dimensional Sturm-Liouville problem:
\[first eigenvalue estimate\] Let $\mu$ be the first eigenvalue of the Sturm–Liouville problem $$\begin{gathered}
\label{SL equation} \begin{split}
\frac1{{\mathbf{C_\kappa}}^{n-1}}\left(\Phi' {\mathbf{C_\kappa}}^{n-1}\right)' +\mu\Phi&=0 \text{ on }[-D/2,D/2],\\
\Phi'(\pm D/2 )&=0.
\end{split}\end{gathered}$$ Then $\lambda_1(D,\kappa,n)=\mu$.
Previous results in this direction include the results derived from gradient estimates due to Li [@Li-ev] and Li and Yau [@LiYau], with the sharp result for non-negative Ricci curvature first proved by Zhong and Yang [@ZY]. The complete result as stated above is implicit in the results of Kröger [@Kroeger]\*[Theorem 2]{} and explicit in those of Bakry and Qian [@BakryQian]\*[Theorem 14]{}, which are also based on gradient estimate methods. Our contribution is the rather simple proof using the long-time behaviour of the heat equation (a method which was also central in our work on the fundamental gap conjecture [@AC3], and which has also been employed successfully in [@Ni]) which seems considerably easier than the previously available arguments. In particular the complications arising in previous works from possible asymmetry of the first eigenfunction are avoided in our argument. A similar argument proving the sharp lower bound for $\lambda_1$ on a Bakry-Emery manifold may be found in [@Andrews-Ni].
The estimate in Theorem \[first eigenvalue estimate\] is sharp (that is, we obtain an equality and not just an inequality), since for a given diameter $D$ and Ricci curvature bound $\kappa$, we can construct a sequence of manifolds satisfying these bounds on which the first eigenvalue approaches $\mu_1$ (see the remarks after Corollary 1 in [@Kroeger]). We include a discussion of these examples in section \[sec:examples\], since the examples required for our purposes are a simpler subset of those constructed in [@Kroeger]. We also include in section \[sec:Li\] a discussion of the implications for a conjectured inequality of Li.
A comparison theorem for the second derivatives of distance {#sec:dist-comp}
===========================================================
\[thm:dist-comp\] Let $(M,g)$ be a complete connected Riemannian manifold with a lower Ricci curvature bound ${\mathrm{Ric}}\geq (n-1)\kappa g$, and let $\varphi$ be a smooth function with $\varphi'\geq0$. Then on $(M\times M)\setminus \{(x,x):\ x\in M\}$ the function $v(x,y)=2\varphi(d(x,y)/2)$ is a viscosity supersolution of $$\mathcal{L}[\nabla^2 v,\nabla v]=2\left[\alpha(\varphi')\varphi''-(n-1){\mathbf{T}_\kappa}\beta(\varphi')\varphi'\right]\big|_{d/2},$$ where $$\mathcal{L}[B,\omega] = \inf\left\{\operatorname{tr}(AB):\
\begin{aligned}
&A\in\text{\rm Sym}_2(T^*_{x,y}(M\times M))\\
&A\geq0\\
&A|_{T^*_xM}=a(\omega\big|_{T_xM})\\
&A|_{T^*_yM}=a(\omega\big|_{T_yM})
\end{aligned}\right\}$$ for any $B\in\text{\rm Sym}_2(T_{x,y}(M\times M)$ and $\omega\in T^*_{(x,y)}(M\times M)$.
By approximation it suffices to consider the case where $\varphi'$ is strictly positive. Let $x$ and $y$ be fixed, with $y\neq x$ and $d=d(x,y)$, and let $\gamma:\ [-d/2,d/2]\to M$ be a minimizing geodesic from $x$ to $y$ (that is, with $\gamma(-d/2)=x$ and $\gamma(d/2)=y$) parametrized by arc length. Choose an orthonormal basis $\{E_i\}_{1\leq i\leq n}$ for $T_xM$ with $E_n=\gamma'(-d/2)$. Parallel transport along $\gamma$ to produce an orthonormal basis $\{E_i(s)\}_{1\leq i\leq n}$ for $T_{\gamma(s)}M$ with $E_n(s)=\gamma'(s)$ for each $s\in [-d/2,d/2]$. Let $\{E_*^i\}_{1\leq i\leq n}$ be the dual basis for $T^*_{\gamma(s)}M$.
To prove the theorem, consider any smooth function $\psi$ defined on a neighbourhood of $(x,y)$ in $M\times M$ such that $\psi\leq v$ and $\psi(x,y)=v(x,y)$. We must prove that $\mathcal{L}[\nabla^2\psi,\nabla\psi]\big|_{(x,y)}\leq 2\left[\alpha(\varphi')\varphi''-(n-1)\beta(\varphi')\varphi'{\mathbf{T}_\kappa}\right]\big|_{d(x,y)/2}$. By definition of $\mathcal{L}$ it suffices to find a non-negative $A\in\text{\rm Sym}_2(T^*_{x,y}(M\times M))$ such that $A|_{T_xM}=a\left(\nabla\psi|_{T_xM}\right)$ and $A|_{T_yM}=a\left(\nabla\psi|_{T_yM}\right)$, with $\operatorname{tr}(AD^2\psi)\leq 2\left[\alpha(\varphi')\varphi''-(n-1)\beta(\varphi')\varphi'{\mathbf{T}_\kappa}\right]\big|_{d/2}$.
Before choosing this we observe that $\nabla\psi$ is determined by $d$ and $\varphi$: We have $\psi\leq 2\varphi\circ d/2$ with equality at $(x,y)$. In particular we have (since $\varphi$ is nondecreasing) $$\psi(\gamma(s),\gamma(t))\leq 2\varphi(d(\gamma(s),\gamma(t))/2)\leq 2\varphi(L[\gamma|_{[s,t]}]/2)\leq 2\varphi(|t-s|/2),$$ for all $s\neq t$, with equality when $t=d/2$ and $s=-d/2$. This gives $\nabla\psi(E_n,0)=-\varphi'(d/2)$ and $\nabla\psi(0,E_n)=\varphi'(d/2)$. To identify the remaining components of $\nabla\psi$, we define $\gamma^y_i(r,s) = \exp_{\gamma(s)}(r(1/2+s/d)E_i(s))$ for $1\leq i\leq n-1$. Then we have $$\psi(x,\exp_y(rE_i))\leq 2\varphi(L[\gamma_i^y(r,.)]/2)$$ with equality at $r=0$. The right-hand side is a smooth function of $r$ with derivative zero, from which it follows that $\nabla\psi(0,E_i)=0$. Similarly we have $\nabla\psi(E_i,0)=0$ for $i=1,\dots,n-1$. Therefore we have $$\nabla\psi\big|_{(x,y)} = \varphi'(d(x,y)/2) (-E^n_*,E^n_*).$$ In particular we have by $$a(\nabla\psi|_{T_xM}) = \alpha(\varphi')E_n\otimes E_n+\beta(\varphi')\sum_{i=1}^{n-1}E_i\otimes E_i,$$ and similarly for $y$.
Now we choose $A$ as follows: $$\label{choice of A}
A = \alpha(\varphi')(E_n,-E_n)\otimes(E_n,-E_n)+\beta(\varphi')\sum_{i=1}^{n-1}(E_i,E_i)\otimes(E_i,E_i).$$ This is manifestly non-negative, and agrees with $a$ on $T_xM$ and $T_yM$ as required. This choice gives $$\label{eq:trace}
\operatorname{tr}(A\nabla^2\psi) = \alpha(\varphi')\nabla^2\psi\left((E_n,-E_n),(E_n,-E_n)\right)+\beta(\varphi')\sum_{i=1}^{n-1}\nabla^2\psi\left((E_i,E_i),(E_i,E_i)\right).$$ For each $i\in\{1,\dots,n-1\}$ let $\gamma_{i}:\ (-\varepsilon,\varepsilon)\times[-d/2,d/2]\to M$ be any smooth one-parameter family of curves with $\gamma_{i}(r,\pm d/2) = \exp_{\gamma(\pm d/2)}(rE_i(\pm d/2))$ for $i=1,\dots,n-1$, and $\gamma_i(0,s)=\gamma(s)$. Then $d(\exp_x(rE_i),\exp_y(rE_i))\leq L[\gamma_i(r,.)]$ and hence $$\begin{aligned}
\psi(\exp_x(rE_i),\exp_y(rE_i))&\le v(\exp_x(rE_i),\exp_y(rE_i))\\
&=2\varphi\left(\frac{d(\exp_x(rE_i),\exp_y(rE_i))}{2}\right)\\
&\leq 2\varphi\left(\frac{L[\gamma_i(r,.)]}{2}\right)\end{aligned}$$ since $\varphi$ is nondecreasing. Since the functions on the left and the right are both smooth functions of $r$ and equality holds for $r=0$, it follows that $$\label{eq:D2i}
\nabla^2\psi((E_i,E_i),(E_i,E_i))\leq2\frac{d^2}{dr^2}\left(\varphi\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)\right)
\Big|_{r=0}.$$ Similarly, since $d-2r=L[\gamma\big|_{[-d/2+r,d/2-r]}]\geq d(\gamma(-d/2+r),\gamma(d/2-r))$ we have $$\label{eq:D2n}
\nabla^2\psi((E_n,-E_n),(E_n,-E_n))\leq 2\frac{d^2}{dr^2}\left(\varphi\left(\frac{d}{2}-r\right)\right)\Big|_{r=0} = 2\varphi''\left(\frac{d}{2}\right).$$ Now we make a careful choice of the curves $\gamma_i(r,.)$ motivated by the situation in the model space, in order to get a useful result on the right-hand side in the inequality \[eq:D2i\]: To begin with, if $\kappa>0$ then we assume that $d<\frac{\pi}{\sqrt{\kappa}}$ (we will return to deal with this case later). We choose $$\gamma_i(r,s) = \exp_{\gamma(s)}\left(\frac{r{\mathbf{C_\kappa}}(s)E_i}{{\mathbf{C_\kappa}}(d/2)}\right),$$ where ${\mathbf{C_\kappa}}$ is given by \[defn of ck\]. Now we proceed to compute the right-hand side of \[eq:D2i\]: Denoting $s$ derivatives of $\gamma_i$ by $\gamma'$ and $r$ derivatives by $\dot\gamma$, we find $$\begin{aligned}
\frac{d}{dr}\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)
&=\frac{d}{dr}\left(
\int_{-d/2}^{d/2}\left\|\gamma'(r,s)\right\|\,ds\right)\\
&=\int_{-d/2}^{d/2}\frac{\left\langle \gamma',\nabla_r\gamma'\right\rangle}{\|\gamma'\|}\,ds.\end{aligned}$$ In particular this gives zero when $r=0$. Differentiating again we obtain (using $\|\gamma'(0,s)\|=1$ and the expression $\dot\gamma(0,s)=\frac{{\mathbf{C_\kappa}}(s)}{{\mathbf{C_\kappa}}(d/2)}E_i$) $$\frac{d^2}{dr^2}\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)\Big|_{r=0}
=\int_{-d/2}^{d/2}\|\nabla_r\gamma'\|^2-\left\langle\gamma',\nabla_r\gamma'\right\rangle^2+
\left\langle \gamma',\nabla_r\nabla_r\gamma'\right\rangle\,ds.$$ Now we observe that $\nabla_r\gamma'=\nabla_s\dot\gamma = \nabla_s\left(\frac{{\mathbf{C_\kappa}}(s)}{{\mathbf{C_\kappa}}(d/2)}E_i\right) = \frac{{\mathbf{C_\kappa}}'(s)}{{\mathbf{C_\kappa}}(d/2)}E_i$, while $$\nabla_r\nabla_r\gamma' = \nabla_r\nabla_s\dot\gamma
=\nabla_s\nabla_r\dot\gamma -R(\dot\gamma,\gamma')\dot\gamma
=-\frac{{\mathbf{C_\kappa}}(s)^2}{{\mathbf{C_\kappa}}(d/2)^2} R(E_i,E_n)E_i,$$ since by the definition of $\gamma_i(r,s)$ we have $\nabla_r\dot\gamma=0$. This gives $$\frac{d^2}{dr^2}\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)\Big|_{r=0}
=\frac{1}{{\mathbf{C_\kappa}}(d/2)^2}\int_{-d/2}^{d/2}\left\{{\mathbf{C_\kappa}}'(s)^2-{\mathbf{C_\kappa}}(s)^2 R(E_i,E_n,E_i,E_n)\right\}\,ds.$$ Summing over $i$ from $1$ to $n-1$ gives $$\begin{aligned}
\left.\sum_{i=1}^{n-1}\frac{d^2}{dr^2}\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)\right|_{r=0}
&=\frac{1}{{\mathbf{C_\kappa}}(d/2)^2}\int_{-d/2}^{d/2}\left\{(n-1){\mathbf{C_\kappa}}'(s)^2 - {\mathbf{C_\kappa}}(s)^2\sum_{i=1}^{n-1}R(E_i,E_n,E_i,E_n)\right\}\,ds\\
&=\frac{1}{{\mathbf{C_\kappa}}(d/2)^2}\int_{-d/2}^{d/2}\left\{(n-1){\mathbf{C_\kappa}}'(s)^2 - {\mathbf{C_\kappa}}(s)^2{\mathrm{Ric}}(E_n,E_n)\right\}\,ds\\
&\leq \frac{n-1}{{\mathbf{C_\kappa}}(d/2)^2}\int_{-d/2}^{d/2}\left\{{\mathbf{C_\kappa}}'(s)^2 - \kappa {\mathbf{C_\kappa}}(s)^2\right\}\,ds.\end{aligned}$$
In the case $\kappa=0$ the integral is zero; in the case $\kappa<0$, or the case $\kappa>0$ with $d<\frac{\pi}{\sqrt{\kappa}}$, we have $$\begin{aligned}
\frac{1}{{\mathbf{C_\kappa}}(d/2)^2}\int_{-d/2}^{d/2}\left\{{\mathbf{C_\kappa}}'(s)^2 - \kappa{\mathbf{C_\kappa}}(s)^2\right\}\,ds
&= \frac{1}{{\mathbf{C_\kappa}}(d/2)^2}\int_{-d/2}^{d/2} \left(-\kappa {\mathbf{S_\kappa}}{\mathbf{C_\kappa}}'- \kappa {\mathbf{S_\kappa}}' {\mathbf{C_\kappa}}\right) \,ds\\
&=-\frac{\kappa}{{\mathbf{C_\kappa}}(d/2)^2} \int_{-d/2}^{d/2} \left({\mathbf{C_\kappa}}{\mathbf{S_\kappa}}\right)' \,ds\\
&=-\frac{2\kappa {\mathbf{C_\kappa}}(d/2){\mathbf{S_\kappa}}(d/2) }{{\mathbf{C_\kappa}}(d/2)^2} \\
&=-2{\mathbf{T}_\kappa}(d/2).\end{aligned}$$ Finally, we have $$\left.\frac{d}{dr}\left(\varphi\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)\right)
\right|_{r=0} =\left. \varphi'\frac{d}{dr}\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)
\right|_{r=0} = 0,$$ and so $$\begin{aligned}
\sum_{i=1}^{n-1}\frac{d^2}{dr^2}\left(\varphi\left.\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)\right)
\right|_{r=0} &= \sum_{i=1}^{n-1}\left.\left(\varphi'\frac{d^2}{dr^2}\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)
\right|_{r=0} +\varphi''\left(\frac{d}{dr}\left.\left(\frac{L[\gamma_{i}(r,.)]}{2}\right)
\right|_{r=0}\right)^2\right)\\
&\leq -2(n-1)\left.\varphi'{\mathbf{T}_\kappa}\right|_{d/2}.\end{aligned}$$ Now using the inequalities \[eq:D2i\] and \[eq:D2n\], we have from \[eq:trace\] that $$\label{trace inequality}
\mathcal{L}[\nabla^2\psi,\nabla\psi]\leq{\mathrm{trace}}\left(A\nabla^2\psi\right)\leq 2\left[\alpha(\varphi')\varphi''-(n-1)\beta(\varphi')\varphi'{\mathbf{T}_\kappa}\right]\big|_{d/2},$$ as required.
In the case $d=\frac{\pi}{\sqrt{\kappa}}$ we choose instead $\gamma_i(r,s) = \exp_{\gamma(s)}
\left(\frac{r \mathbf{C_{\kappa'}} (s)E_i}{\mathbf{C_{\kappa'}}(d/2)}\right)$, for arbitrary $\kappa'<\kappa$. Then the computation above gives $$\sum_{i=1}^{n-1}\nabla^2\psi((E_i,E_i),(E_i,E_i))\leq -2(n-1)\varphi'\mathbf{T_{\kappa'}}(d/2).$$ Since the right-hand side approaches $-\infty$ as $\kappa'$ increases to $\kappa$, we have a contradiction to the assumption that $\psi$ is smooth. Hence no such $\psi$ exists and there is nothing to prove.
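The computation above rests on the one-dimensional identity $\int_{-d/2}^{d/2}\left({\mathbf{C_\kappa}}'(s)^2-\kappa{\mathbf{C_\kappa}}(s)^2\right)ds=-2\kappa{\mathbf{S_\kappa}}(d/2){\mathbf{C_\kappa}}(d/2)$ together with the relations ${\mathbf{C_\kappa}}'=-\kappa{\mathbf{S_\kappa}}$ and ${\mathbf{S_\kappa}}'={\mathbf{C_\kappa}}$. The sympy sketch below (ours) verifies these for both signs of $\kappa$; for $\kappa=0$ the integrand vanishes identically.

```python
import sympy as sp

d, s = sp.symbols('d s', positive=True)
k = sp.symbols('kappa', positive=True)

# (C, S, kappa) for kappa = k > 0 and kappa = -k < 0
cases = [(sp.cos(sp.sqrt(k) * s),  sp.sin(sp.sqrt(k) * s) / sp.sqrt(k),  k),
         (sp.cosh(sp.sqrt(k) * s), sp.sinh(sp.sqrt(k) * s) / sp.sqrt(k), -k)]

for Ck, Sk, kap in cases:
    assert sp.simplify(sp.diff(Ck, s) + kap * Sk) == 0          # C' = -kappa S
    assert sp.simplify(sp.diff(Sk, s) - Ck) == 0                # S' = C
    integrand = sp.diff(Ck, s)**2 - kap * Ck**2
    assert sp.simplify(integrand + kap * sp.diff(Sk * Ck, s)) == 0   # integrand = -kappa (S C)'
    lhs = sp.integrate(integrand, (s, -d / 2, d / 2))
    rhs = -2 * kap * Sk.subs(s, d / 2) * Ck.subs(s, d / 2)
    assert sp.simplify((lhs - rhs).rewrite(sp.exp)) == 0
print("integral identity verified for both curvature signs")
```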
Estimate on the modulus of continuity for solutions of heat equations {#mfld without bdy section}
=====================================================================
In this section we prove Theorem \[thm:moc\], which extends the oscillation estimate from domains in $\mathbb{R}^n$ to compact Riemannian manifolds. The estimate is analogous to [@AC2]\*[Theorem 4.1]{}, the modulus of continuity estimate for the Neumann problem on a convex Euclidean domain.
Recall that $(M,g)$ is a compact Riemannian manifold, possibly with boundary (in which case we assume that the boundary is locally convex). Define an evolving quantity, $Z$, on the product manifold ${M}\times{M}\times[0,\infty)$: $$Z(x,y,t):=u(y,t)-u(x,t)-2\varphi(d(x,y)/2,t)-\epsilon(1+t)$$ for small $\epsilon>0$.
We have assumed that $\varphi$ is a modulus of continuity for $u$ at $t=0$, and so $Z(\cdot,\cdot,0)\leq -\epsilon<0$. Note also that $Z$ is smooth on $M\times M\times[0,\infty)$, and $Z(x,x,t)=-\varepsilon(1+t)<0$ for each $x\in M$ and $t\in[0,T)$. It follows that if $Z$ ever becomes positive, there exists a first time $t_0>0$ and points $x_0\neq y_0$ in $M$ such that $Z(x_0,y_0,t_0)=0$. There are two possibilities: Either both $x_0$ and $y_0$ are in the interior of $M$, or at least one of them (say $x_0$) lies in the boundary $\partial M$.
We deal with the first case first: Clearly $Z(x,y,t)\leq 0$ for all $x,y\in M$ and $t\in[0,t_0]$. In particular if we let $v(x,y)=2\varphi\left(\frac{d(x,y)}{2},t_0\right)$ and $\psi(x,y)=u(y,t_0)-u(x,t_0)-\varepsilon(1+t_0)$ then $$\psi(x,y)\leq v(x,y)$$ for all $x,y\in M$, while $\psi(x_0,y_0)=v(x_0,y_0)$. Since $\psi$ is smooth, by Theorem \[thm:dist-comp\] we have $${\mathcal L}[\nabla^2\psi,\nabla\psi]\leq 2\left[\alpha(\varphi')\varphi''-(n-1){\mathbf{T}_\kappa}\beta(\varphi')\varphi'\right]\big|_{\frac{d(x_0,y_0)}2}.$$ Now we observe that since the mixed components of $\nabla^2\psi$ (those pairing the two factors of $M\times M$) all vanish, we have for any admissible $A$ in the definition of $\mathcal L$ that $$\text{\rm tr}\left(A\nabla^2\psi\right) = \left(a(Du)^{ij}\nabla_i\nabla_j u\right)\big|_{(y_0,t_0)}-\left(a(Du)^{ij}\nabla_i\nabla_ju\right)\big|_{(x_0,t_0)},$$ and therefore $${\mathcal L}[\nabla^2\psi,\nabla\psi] = \left(a(Du)^{ij}\nabla_i\nabla_j u\right)\big|_{(y_0,t_0)}-\left(a(Du)^{ij}\nabla_i\nabla_ju\right)\big|_{(x_0,t_0)}.$$ It follows that $$\label{secvar}
a(Du)^{ij}\nabla_i\!\nabla_j u\big|_{(y_0,t_0\!)}\!\!\!-\!a(Du)^{ij}\nabla_i\!\nabla_ju\big|_{(x_0,t_0\!)}\!\leq 2\!\left[\!\alpha(\!\varphi')\varphi''\!\!-\!(\!n\!-\!1\!)\!{\mathbf{T}_\kappa}\beta(\varphi')\varphi'\right]\!\!\big|_{{d(x_0,y_0)}/2}.$$ We also know that the time derivative of $Z$ is non-negative at $(x_0,y_0,t_0)$, since $Z(x_0,y_0,t)\leq 0$ for $t<t_0$: $$\label{tvar}
\frac{\partial Z}{\partial t}\big|_{(x_0,y_0,t_0)}=a(Du)^{ij}\nabla_i\nabla_j u\big|_{(y_0,t_0)}-\left.a(Du)^{ij}\nabla_i\nabla_j u\right|_{(x_0,t_0)}-2\frac{\partial\varphi}{\partial t}-\varepsilon\geq 0.$$ Combining the two inequalities above we obtain $$\frac{\partial\varphi}{\partial t}<\alpha(\varphi')\varphi''-(n-1){\mathbf{T}_\kappa}\beta(\varphi')\varphi'$$ where all terms are evaluated at the point $d(x_0,y_0)/2$. This contradicts the assumption \[1deqn\] in Theorem \[thm:moc\].
Now we consider the second case, where $x_0\in\partial M$. Under the assumption that $\partial M$ is convex there exists [@BGS] a length-minimizing geodesic $\gamma:\ [0,d]\to M$ from $x_0$ to $y_0$, such that $\gamma(s)$ is in the interior of $M$ for $0<s<d$ and $\gamma'(0)\cdot \nu(x_0)>0$, where $\nu(x_0)$ is the inward-pointing unit normal to $\partial M$ at $x_0$. We compute, at $s=0$, $$\frac{d}{ds}Z(\exp_{x_0}(s\nu(x_0)),y_0,t_0) = -\nabla_{\nu(x_0)}u-\varphi'(d/2)\nabla d(\nu(x_0),0)
=\varphi'(d/2)\gamma'(0)\cdot \nu(x_0)> 0,$$ where we used the Neumann boundary condition $\nabla_{\nu(x_0)}u=0$, the first variation of arc-length, and $\varphi'>0$. In particular $Z(\exp_{x_0}(s\nu(x_0)),y_0,t_0)>0$ for all small positive $s$, contradicting the fact that $Z(x,y,t_0)\leq 0$ for all $x,y\in M$.
Therefore $Z$ remains negative for all $(x,y)\in M\times M$ and $t\in[0,T)$. Letting $\varepsilon$ approach zero proves the theorem.
The eigenvalue lower bound
==========================
Now we provide the proof of the sharp lower bound on the first eigenvalue (Theorem \[first eigenvalue estimate\]), which follows very easily from the modulus of continuity estimate from Theorem \[thm:moc\].
\[mu as bound\] For $M$ and $u$ as in Theorem \[thm:moc\] applied to the heat equation ($\alpha\equiv\beta\equiv1$ in the evolution equation), we have the oscillation estimate $$|u(y,t)-u(x,t)|\le C{\mathrm e}^{-\mu t},$$ where $C$ depends on the modulus of continuity of $u(\cdot,0)$, and $\mu$ is the smallest positive eigenvalue of the Sturm-Liouville equation $$\begin{gathered}
\label{SL equation} \begin{split}
\Phi''-(n-1){\mathbf{T}_\kappa}\Phi' +\mu\Phi=\frac1{{\mathbf{C_\kappa}}^{n-1}}\left(\Phi' {\mathbf{C_\kappa}}^{n-1}\right)' +\mu \Phi&=0 \text{ on }[-D/2,D/2],\\
\Phi'(\pm D/2 )&=0.
\end{split}\end{gathered}$$
The eigenfunction-eigenvalue pair $(\Phi,\mu)$ is defined as follows: For any $\sigma\in{\mathbb{R}}$ we define $\Phi_\sigma(x)$ to be the solution of the initial value problem $$\begin{aligned}
\Phi_\sigma''-(n-1){\mathbf{T}_\kappa}\Phi_\sigma'+\sigma\Phi_\sigma&=0;\\
\Phi_\sigma(0)&=0;\\
\Phi'_\sigma(0)&=1.\end{aligned}$$ Then $\mu=\sup\{\sigma:\ x\in[-D/2,D/2]\Longrightarrow \Phi'_\sigma(x)>0\}$. In particular, for $0<\sigma<\mu$ the function $\Phi_\sigma$ is strictly increasing on $[-D/2,D/2]$; moreover, for $x\in(0,D/2]$, $\Phi_\sigma(x)$ is decreasing in $\sigma$ and converges smoothly to $\Phi(x)=\Phi_\mu(x)$ as $\sigma$ approaches $\mu$.
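As a practical aside (not part of the proof), the shooting characterization of $\mu$ above lends itself to direct numerical evaluation. The sketch below is our own illustration: it assumes the convention $\mathbf{T}_\kappa(x)=\sqrt{\kappa}\tan(\sqrt{\kappa}x)$ for $\kappa>0$ (with the hyperbolic analogue for $\kappa<0$), uses the fact that the solution is odd so that positivity of $\Phi'_\sigma$ only needs to be checked on $[0,D/2]$, and bisects on $\sigma$.

```python
# A numerical sketch of the shooting characterization of mu (not part of the proof).
# Assumed convention: T_kappa(x) = sqrt(kappa)*tan(sqrt(kappa)*x) for kappa > 0,
# with the hyperbolic analogue for kappa < 0, and D < pi/sqrt(kappa) when kappa > 0.
import numpy as np
from scipy.integrate import solve_ivp

def T_kappa(x, kappa):
    if kappa > 0:
        return np.sqrt(kappa) * np.tan(np.sqrt(kappa) * x)
    if kappa < 0:
        return -np.sqrt(-kappa) * np.tanh(np.sqrt(-kappa) * x)
    return 0.0

def derivative_stays_positive(sigma, D, kappa, n):
    """Integrate Phi'' - (n-1) T_kappa Phi' + sigma Phi = 0 with Phi(0)=0, Phi'(0)=1
    and check whether Phi' stays positive on [0, D/2] (enough by oddness of Phi)."""
    def rhs(x, y):
        phi, dphi = y
        return [dphi, (n - 1) * T_kappa(x, kappa) * dphi - sigma * phi]
    sol = solve_ivp(rhs, [0.0, D / 2], [0.0, 1.0], max_step=D / 1000)
    return bool(np.all(sol.y[1] > 0.0))

def mu(D, kappa, n, tol=1e-8):
    """Bisection for mu = sup{sigma : Phi_sigma' > 0 on [-D/2, D/2]}."""
    lo, hi = 0.0, np.pi**2 / D**2
    while derivative_stays_positive(hi, D, kappa, n):   # grow the bracket until it fails
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if derivative_stays_positive(mid, D, kappa, n):
            lo = mid
        else:
            hi = mid
    return lo

print(mu(np.pi, 0.0, 2))   # flat case: expect the Zhong-Yang value pi^2/D^2 = 1
```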
Now we apply Theorem \[thm:moc\]: Since $\Phi$ is smooth, has positive derivative at $x=0$ and is positive for $x\in(0,D/2]$, there exists $C>0$ such that $C\Phi$ is a modulus of continuity for $u(.,0)$. Then for each $\sigma\in(0,\mu)$, $\varphi_0=C\Phi_\sigma$ is also a modulus of continuity for $u(.,0)$, with $\varphi_0(0)=0$ and $\varphi'_0>0$. Defining $\varphi(x,t) = C\Phi_\sigma(x){\mathrm e}^{-\sigma t}$, all the conditions of Theorem \[thm:moc\] are satisfied, and we deduce that $\varphi(.,t)$ is a modulus of continuity for $u(.,t)$ for each $t\geq 0$. Letting $\sigma$ approach $\mu$, we deduce that $C\Phi{\mathrm e}^{-\mu t}$ is also a modulus of continuity. That is, for all $x,y$ and $t\geq 0$ $$\left|u(y,t)-u(x,t)\right|\leq C{\mathrm e}^{-\mu t}\Phi\left(\frac{d(x,y)}{2}\right)\leq C\sup\Phi\ {\mathrm e}^{-\mu t}.$$
#### **Proof of Theorem \[first eigenvalue estimate\].**
Observe that if $(\varphi,\lambda)$ is the first eigenfunction-eigenvalue pair, then $u(x,t)=e^{-\lambda t}\varphi(x)$ satisfies the heat equation on $M$ for all $t>0$. From Proposition \[mu as bound\], we have $|u(y,t)-u(x,t)|\le Ce^{-\mu t}$ and so $|\varphi(y)-\varphi(x)|\le Ce^{-(\mu-\lambda)t}$ for all $x,y\in M$ and $t>0$. Since $\varphi$ is non-constant, letting $t\rightarrow\infty$ implies that $\mu-\lambda\le 0$.
Sharpness of the estimates {#sec:examples}
==========================
In the previous section we proved that $\lambda_1(D,\kappa,n)\geq \mu$. To complete the proof of Theorem \[first eigenvalue estimate\] we must prove that $\lambda_1(D,\kappa,n)\leq \mu$. To do this we construct examples of Riemannian manifolds with given diameter bounds and Ricci curvature lower bounds such that the first eigenvalue is as close as desired to $\mu$. The construction is similar to that given in [@Kroeger] and [@BakryQian], but we include it here because the construction also produces examples proving that the modulus of continuity estimates of Theorem \[thm:moc\] are sharp.
Fix $\kappa$ and $D$, and let $M=S^{n-1}\times [-D/2,D/2]$ with the metric $$g = ds^2 + a{\mathbf{C_\kappa}}^2(s)\bar g$$ where $\bar g$ is the standard metric on $S^{n-1}$, and $a>0$. The Ricci curvatures of this metric are given by $$\begin{aligned}
\text{\rm Ric}(\partial_s,\partial_s)&=(n-1)\kappa;\\
\text{\rm Ric}(\partial_s,v)&=0\quad\text{for\ }v\in TS^{n-1};\\
\text{\rm Ric}(v,v)&=\left((n-1)\kappa + (n-2)\frac{\frac{1}{a}-\kappa}{{\mathbf{C_\kappa}}^2}\right)|v|^2\quad\text{for\ }v\in TS^{n-1}.\end{aligned}$$ In particular the lower Ricci curvature bound $\text{\rm Ric}\geq (n-1)\kappa$ is satisfied for any $a$ if $\kappa\leq 0$ and for $a\leq 1/\kappa$ if $\kappa>0$.
To demonstrate the sharpness of the modulus of continuity estimate in Theorem \[thm:moc\], we construct solutions of the evolution equation on $M$ which satisfy the conditions of the Theorem, and satisfy the conclusion with equality for positive times: Let $\varphi_0:\ [0,D/2]\to{\mathbb{R}}$ be as given in the Theorem, extend it by odd reflection to $[-D/2,D/2]$, and define $\varphi$ to be the solution of the initial-boundary value problem $$\begin{aligned}
\frac{\partial\varphi}{\partial t} &= \alpha(\varphi')\varphi''+(n-1){\mathbf{T}_\kappa}\beta(\varphi')\varphi';\\
\varphi(x,0)&=\varphi_0(x);\\
\varphi'(\pm D/2,t)&= 0.\end{aligned}$$ Now define $u(z,s,t) = \varphi(s,t)$ for $s\in[-D/2,D/2]$, $z\in S^{n-1}$, and $t\geq 0$. Then a direct calculation shows that $u$ is a solution of the evolution equation on $M$. If $\varphi_0$ is concave on $[0,D/2]$, then we have $|\varphi_0(a)-\varphi_0(b)|\leq 2\varphi_0\left(\frac{|b-a|}{2}\right)$ for all $a$ and $b$ in $[-D/2,D/2]$. For our choice of $\varphi$ this also remains true for positive times. Note also that for any $w,z\in S^{n-1}$ and $a,b\in[-D/2,D/2]$ we have $d((w,a),(z,b))\geq |b-a|$. Therefore we have $$|u(w,a,t)-u(z,b,t)|=|\varphi(a,t)-\varphi(b,t)|\leq 2\varphi\left(\frac{|b-a|}{2},t\right)\leq
2\varphi\left(\frac{d((w,a),(z,b))}{2},t\right),$$ so that $\varphi(.,t)$ is a modulus of continuity for $u(.,t)$ as claimed. Furthermore, this holds with equality whenever $w=z$ and $b=-a$, so there is no smaller modulus of continuity and the estimate is sharp.
Now we proceed to the sharpness of the eigenvalue estimate. On the manifold constructed above we have an explicit eigenfunction of the Laplacian, given by $\varphi(z,s) = \Phi(s)$ where $\Phi$ is the first eigenfunction of the one-dimensional Sturm-Liouville problem given in Proposition \[mu as bound\]. That is, we have $\lambda_1(M,g)\leq \mu$. In this example we have the required Ricci curvature lower bound, and the diameter approaches $D$ as $a\to 0$. Since $\mu$ depends continuously on $D$, the result follows.
A slightly more involved construction produces compact manifolds without boundary for which the first eigenvalue is as close as desired to $\mu$, showing that the eigenvalue bound is sharp even in this smaller class. This is achieved by smoothly attaching a small spherical region at each end of the above examples (see the similar construction in Section 2 of [@Andrews-Ni]).
Implications for the ‘Li conjecture’ {#sec:Li}
====================================
In this section we mention some implications of the sharp eigenvalue estimate for a conjecture attributed to Peter Li: The result of Lichnerowicz [@Lich] is that $\lambda_1\geq n\kappa$ whenever $\text{\rm Ric}\geq (n-1)\kappa g_{ij}$ (so that, by the Bonnet-Myers estimate, $D\leq \frac{\pi}{\sqrt{\kappa}}$). The Zhong-Yang estimate [@ZY] gives $\lambda_1\geq \frac{\pi^2}{D^2}$ for $\text{\rm Ric}\geq 0$. Both of these are sharp, and the latter estimate should also be sharp as $D\to 0$ for any lower Ricci curvature bound. Interpolating linearly (in $\kappa$) between these estimates we obtain Li’s conjecture $$\lambda_1\geq \frac{\pi^2}{D^2}+(n-1)\kappa.$$ By construction, the conjectured bound coincides with the known sharp estimates at the endpoints $\kappa=0$ and $\kappa=\frac{\pi^2}{D^2}$.
Several previous attempts to prove such inequalities have been made, particularly towards proving inequalities of the form $\lambda_1\geq \frac{\pi^2}{D^2}+a\kappa$ for some constant $a$, which are linear in $\kappa$ and have the correct limit as $\kappa\to 0$. These include works of DaGang Yang [@yang-eigenvalue], Jun Ling [@ling-eigenvalues-2007] and Ling and Lu [@ling-lu-eigenvalues], the latter showing that $a=\frac{34}{100}$ holds. These are all superseded by the result of Shi and Zhang [@SZ] which proves $\lambda_1\geq \sup_{s\in(0,1)}\left\{4s(1-s)\frac{\pi^2}{D^2}+(n-1)s\kappa\right\}$, so in particular $\lambda_1\geq \frac{\pi^2}{D^2}+\frac{n-1}{2}\kappa$ by taking $s=\frac12$.
We remark here that the inequality with $a=\frac{n-1}{2}$ is the best possible of this kind, and in particular the Li conjecture is false. This can be seen by computing an asymptotic expansion for the sharp lower bound $\mu$ given by Theorem \[first eigenvalue estimate\]: For fixed $D=\pi$ we perturb about $\kappa=0$ (as in Section 4 of [@Andrews-Ni]), obtaining $$\mu = 1+\frac{(n-1)}{2}\kappa+O(\kappa^2).$$ By scaling this amounts to the estimate $$\mu = \frac{\pi^2}{D^2}+\frac{(n-1)}{2}\kappa+O(\kappa^2 D^2).$$ Since the lower bound $\lambda_1\geq \mu$ is sharp, this shows that the inequality $\lambda_1\geq \frac{\pi^2}{D^2}+a\kappa$ is false for any $a>\frac{(n-1)}{2}$, and in particular for $a=n-1$.
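For the reader's convenience, here is a sketch of the perturbation computation behind this expansion; we assume the convention $\mathbf{C_\kappa}(x)=\cos(\sqrt{\kappa}x)$, so that the Sturm-Liouville weight satisfies ${\mathbf{C_\kappa}}^{n-1}=1-\tfrac{(n-1)\kappa}{2}x^2+O(\kappa^2)$. Writing $\mu$ as a Rayleigh quotient, $$\mu(\kappa)=\inf\left\{\frac{\int_{-\pi/2}^{\pi/2}(\Phi')^2\,{\mathbf{C_\kappa}}^{n-1}\,dx}{\int_{-\pi/2}^{\pi/2}\Phi^2\,{\mathbf{C_\kappa}}^{n-1}\,dx}\ :\ \int_{-\pi/2}^{\pi/2}\Phi\,{\mathbf{C_\kappa}}^{n-1}\,dx=0\right\},$$ the minimizer at $\kappa=0$ is $\Phi_0(x)=\sin x$ with $\mu(0)=1$. Since the perturbed weight is even, the minimizer remains odd, the constraint contributes nothing at first order, and only the explicit $\kappa$-dependence of the quotient enters: $$\mu'(0)=\frac{\int_{-\pi/2}^{\pi/2}\big((\Phi_0')^2-\Phi_0^2\big)\,\partial_\kappa{\mathbf{C_\kappa}}^{n-1}\big|_{\kappa=0}\,dx}{\int_{-\pi/2}^{\pi/2}\Phi_0^2\,dx}=-\frac{n-1}{2}\cdot\frac{\int_{-\pi/2}^{\pi/2}x^2\cos 2x\,dx}{\pi/2}=\frac{n-1}{2},$$ using $\int_{-\pi/2}^{\pi/2}x^2\cos 2x\,dx=-\frac{\pi}{2}$. This recovers the expansion displayed above.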
[^1]: Supported by Discovery Projects grants DP0985802 and DP120102462 of the Australian Research Council.
---
abstract: 'Situational awareness is crucial for effective disaster management. However, obtaining information about the actual situation is usually difficult and time-consuming. While there has been some effort in terms of incorporating the affected population as a source of information, the issue of obtaining trustworthy information has not yet received much attention. Therefore, we introduce the concept of witness-based report verification, which enables users from the affected population to evaluate reports issued by other users. We present an extensive overview of the objectives to be fulfilled by such a scheme and provide a first approach considering security and privacy. Finally, we evaluate the performance of our approach in a simulation study. Our results highlight synergetic effects of group mobility patterns that are likely in disaster situations.'
author:
-
bibliography:
- 'IEEEabrv.bib'
- 'references.bib'
title: Towards Trustworthy Mobile Social Networking Services for Disaster Response
---
**Keywords:** Security and Privacy Protection, Mobile communication systems, Multicast
Introduction {#sec:intro}
============
Responding to large-scale disasters has always been a challenging task. One of the reasons for this is the unpredictability of the actual situation at hand. With first responders usually being short on technical and human resources, an awareness of the current circumstances, e.g., the location of casualties, is essential to effectively providing help to victims within the first critical hours. In order to increase the situational awareness of officials and to support mutual first response, the concept of incorporating the affected population as a potential source of information has emerged recently [@palen2010vision]. Among the potential services for disaster response [@wozniak2011towards], one of the most important services is a reporting service that enables the affected population to issue reports about the locations of victims, remaining or evolving hazards, resource requirements, etc. With other services building upon the data collected by this service, it is essential that this information is authentic and accurate to allow appropriate decision making. Therefore, apart from ensuring a high quality of information, a crucial aspect of this service is to implement countermeasures against users trying to inject false or inaccurate information about allegedly urgent events.
In this work, we introduce a rating approach relying on the affected population to verify the correctness and urgency of reports. In our approach, which we refer to as , witnesses report certain events to so-called verifier nodes. These verifier nodes issue confirmation requests to potential witnesses of the event, asking them to decide about the accuracy and urgency of the report. Witnesses can then vote with their decision, allowing the verifier node to rate a report (see Fig.\[fig:concept\]).
Our witness-based approach is inspired by the issue of obtaining credible information in *social swarming* applications [@liu2011optimizing]. In social swarming, a swarm of users tries to cooperatively fulfill certain tasks, e.g., search and rescue. Users in the swarm may send reports to a swarm director using their smartphones. Based on his global view, the swarm director then provides instructions to users to achieve the common goal. In order to obtain credible information, the swarm director may selectively query users for confirmation. Accordingly, in our verification schemes, confirmation requests are issued to certain users. However, in their work, the authors focus on the problem of optimizing the network resources by querying the most suitable users based on their credibility under normal network conditions. In contrast, we apply the concept of querying specific users to deal with the challenges of verifying reports in disaster situations. On one hand, this concerns the need to communicate in a delay-tolerant manner due to the failure of parts of the network infrastructure. On the other hand, in order to meet legal requirements and gain acceptance among users, such a scheme has to protect the privacy of the witnesses. This is especially the case if such an approach is deployed on mobile devices that are also used in normal conditions, e.g., to provide help also in a small scale car accident.
![Witness-based report verification[]{data-label="fig:concept"}](concept.pdf)
Apart from the issue of obtaining credible information in social swarming, there are several related research areas. On one hand, there has been work on trustworthy ubiquitous emergency communication [@weber2011mundomessage]. However, it focuses on first responders and does not consider the verification of information for services. On the other hand, regarding the issue of crowdsourcing information in disasters, existing approaches are usually open-access, with no or only limited verification [@gao2011harnessing]. Furthermore, while there has been work on the trustworthiness of information obtained from microblogging services for emergency situations [@gupta2012credibility], the aspect of querying witnesses in the disaster area in order to verify reports has not been considered yet. Finally, our approach can be considered an application of the concept of *spatiotemporal multicast*, where a message is delivered to users, i.e., witnesses, encountered in the past while protecting their privacy from the sender of the message [@wozniak2012geocast].
In this article we make the following contributions: We propose the concept of witness-based report verification in the context of a reporting service for disasters and derive extensive security and privacy objectives (section\[sec:objectives\]). Furthermore, we present a first approach for such a scheme (section\[sec:approach\]) and provide a detailed discussion of its security and privacy features (section\[sec:discussion\]). Finally, we evaluate our approach by an extensive simulation study (section\[sec:evaluation\]).
Design Objectives {#sec:objectives}
=================
In this work, we consider a network model where users are able to sporadically access the Internet via a cellular network infrastructure. Furthermore, we assume that devices are able to communicate directly forming a local wireless network.
Functional Objectives
---------------------
**Proximity restriction:**
Only users close to an event should be able to vote for reports about this event.
**Deferring of votes:**
Users should be able to defer a vote, e.g., if a user has to provide first aid, he should be able to defer his vote and submit it later.
Non-functional Objectives
-------------------------
**Verification delay:**
Reports should be verified quickly.
**Robustness:**
After a disaster, parts of the infrastructure may fail. Hence, the scheme has to operate in a delay- and disruption-tolerant manner. Furthermore, it should be robust against occasional false reports and votes.
**Scalability:**
The objectives should not be severely degraded by an increasing number of users and reports.
**Efficiency:**
The service should be efficient in terms of computation, memory, and communication overhead.
Security Objectives
-------------------
**Secure communication:**
Reports and votes must be delivered confidentially, and their authenticity and integrity must be ensured.
**Resilient decision making:**
The service should be resilient against malicious reports and votes. Consequently, users must only issue one report about an event and vote once for each report. Thus, attackers must not be able to perform Sybil attacks.
**Accountability:**
Official authorities should be able to obtain the identity of a reporter or witness for the prosecution of crimes. However, restrictions must apply for access to this information in order to prevent abuse.
**Availability:**
The verification service should provide resistance against attacks. This includes spamming of reports and votes.
Privacy Objectives
------------------
**Anonymity:**
Attackers must not learn about the identities of users issuing reports and votes.
**Location privacy:**
Attackers must not determine the location of users. Otherwise, by following their movements, attackers might be able to infer their identities.
**Co-location privacy:**
Attackers must not determine whether two users have been residing at the same location at the same time. Otherwise, attackers might infer a social connection between those users.
**Absence privacy:**
Attackers must not learn about a user’s absence from a location during a certain time. This information can be harmful if a user was not present at a location although he was supposed to be.
Verification Approach {#sec:approach}
=====================
In this article, we present a verification scheme. Our approach allows users to report events to one of potentially many *verifiers* via their smartphones.
In order to verify a report, the verifier issues confirmation requests to users that have been residing close to the event at the time the report has been submitted. Delivering these confirmation requests in a privacy-preserving manner while supporting delay-tolerant communication and deferring of votes requires a scheme [@wozniak2012geocast]. It is necessary to rely on this concept as employing a scheme would require witnesses to stay close to the place of the event, which is an unrealistic assumption. Therefore, building upon the approach in [@wozniak2012geocast], we rely on ** to deliver confirmation requests in a privacy-preserving manner. This -based approach requires that users poll in regular time intervals using a *token* $\tau$ containing a *key* $K$ that has been negotiated at some location and time in the past. To allow for extensive anonymity guarantees, these tokens are negotiated between nearby users in a cryptographically secure manner. Hence, in certain time intervals, users initiate the negotiation of a *group key* $K$ with all users that are currently in communication range. Users may also forward the negotiation requests over several hops to increase the number of users within a group and therefore the number of potential witnesses for some event in the future. Tokens are considered valid up to some time after their reception, e.g., for 5 minutes. When issuing a report to the verifier, users include their currently valid tokens to allow the verifier to deposit a confirmation request at specific so that potential witnesses of an event are able to retrieve it.
Finally, witnesses having obtained a request can issue their vote to the verifier, which is then able to decide whether a report is true based on the majority of votes. In order to prevent Sybil attacks and to allow users to only issue one report or vote for each event, an ** is necessary that is able to authenticate the identity of users. Therefore, in order to issue reports or votes, users have to obtain a *voting ticket* $\lambda$ from the first. This ticket contains a *vote identifier* $\upsilon$ that is unique for each user and report. Here, it is important that the does not obtain the report itself in order to protect the privacy of users. Therefore, he issues $\lambda$ for a given $h(M)$, where $h(x)$ is a cryptographic hash function and $M$ the message of the report.
We now provide a detailed overview of the four phases of the approach in the following sections (see Fig. \[fig:overview\]).
![Overview of approach. The process of issuing a report is shown in light gray, while issuing a vote is shown in dark gray.[]{data-label="fig:overview"}](overview.pdf)
Token Negotiation
-----------------
In the first phase, users initiate a secure protocol for a group key exchange in order to negotiate a common key $K$ (e.g. using [@steiner1996diffie]) in certain, randomly distributed time intervals within their $k$-hop neighborhood. When the protocol is finished, users have negotiated a token ${\tau = h(K)}$ which is stored as a pair of $(\tau,K)$ and used to poll for confirmation requests later on.
Event Reporting
---------------
In order to report an event, a user has to interact with the and verifier. Hence, the user establishes a secure connection with the , e.g., using the protocol, and sends $h(M)$ to the . Then, the authenticates the identity of the user and responds with a voting ticket ${\lambda = (\upsilon,\left\{ h(M), \upsilon \right\}_{IS})}$, containing the unique vote identifier $\upsilon$. Here, ${\upsilon = h(id, h(M), K_{IS})}$ with $id$ representing an identifier for the identity of the user, e.g. his . Furthermore, $\left\{ \right\}_{IS}$ is the public-key signature of and $K_{IS}$ is a secret that is only known to the and used to prevent the guessing of $\upsilon$ for a known identity and report message.
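As an illustration of this exchange, the following sketch (our own, not the authors' implementation) shows how an identity server could derive the vote identifier $\upsilon = h(id, h(M), K_{IS})$ and sign the pair $(h(M),\upsilon)$; the use of Ed25519 for the signature and all concrete names and values are assumptions.

```python
# A sketch (our illustration, not the authors' implementation) of voting-ticket issuance:
# upsilon = h(id, h(M), K_IS) and lambda = (upsilon, {h(M), upsilon}_IS). The Ed25519
# signature and all concrete values are stand-ins.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def h(*parts: bytes) -> bytes:
    """Cryptographic hash of the concatenated parts (stand-in for h(x))."""
    digest = hashlib.sha256()
    for part in parts:
        digest.update(part)
    return digest.digest()

class IdentityServer:
    def __init__(self):
        self._signing_key = Ed25519PrivateKey.generate()    # public key assumed known to verifiers
        self._k_is = b"secret-known-only-to-the-identity-server"   # K_IS

    def issue_ticket(self, user_id: bytes, h_m: bytes):
        """Return (upsilon, signature) for an authenticated user and a report hash h(M)."""
        upsilon = h(user_id, h_m, self._k_is)               # unique per (identity, report)
        signature = self._signing_key.sign(h_m + upsilon)   # {h(M), upsilon}_IS
        return upsilon, signature

# The user only ever reveals h(M), never the report M itself.
server = IdentityServer()
report_hash = h(b"r|x|y|t|trapped persons at main street")
ticket = server.issue_ticket(user_id=b"user-identity-0042", h_m=report_hash)
```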
In order to prevent duplicate reports from different users, the reporting application should provide users with information about reports issued in their neighborhood, allowing them to recognize existing reports. In this case, no additional report is sent, allowing users to proactively confirm this report so that if a confirmation request is received later on, the device can reply to the request without requiring further interaction from the user.
Then, the user establishes a secure connection with the verifier and sends his report ${\rho = ( \lambda, M, \alpha_1, \ldots, \alpha_l )}$, where ${M = (r, x, y, t, m)}$ contains the location $x,y$, time $t$, message description $m$, and random number $r$, which is used to prevent guessing of $h(M)$ by the . Furthermore, ${\alpha_i = (\tau_i, E_{K_i}(M))}$ represents the tokens $\tau$ and report messages $M$ which are symmetrically encrypted with the group key $K$ for all $l$ currently valid tokens. Finally, the verifier computes $h(M)$ to verify the signature of the and checks whether there is already a vote or report for the given vote identifier $\upsilon$. If this is not the case, the report is accepted.
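A minimal sketch of how such a report could be assembled on the user's device is given below; it is our own reconstruction, with Fernet from the `cryptography` package standing in for the symmetric encryption $E_K$ and the token derived as $\tau = h(K)$ from the negotiation phase. All names and values are illustrative.

```python
# A sketch (our reconstruction) of assembling a report rho = (lambda, M, alpha_1, ..., alpha_l),
# with alpha_i = (tau_i, E_{K_i}(M)) for every currently valid group key K_i. Fernet is used
# as a stand-in for E_K; deriving a Fernet key from K via SHA-256 is an assumption.
import base64
import hashlib
import json
from cryptography.fernet import Fernet

def token_from_group_key(group_key: bytes) -> bytes:
    return hashlib.sha256(group_key).digest()               # tau = h(K)

def cipher_for(group_key: bytes) -> Fernet:
    return Fernet(base64.urlsafe_b64encode(hashlib.sha256(group_key).digest()))

def build_report(ticket, message: dict, valid_group_keys) -> dict:
    """message stands for M = (r, x, y, t, m); ticket is the voting ticket lambda."""
    m_bytes = json.dumps(message, sort_keys=True).encode()
    alphas = [(token_from_group_key(k), cipher_for(k).encrypt(m_bytes))
              for k in valid_group_keys]
    return {"ticket": ticket, "M": message, "alphas": alphas}

rho = build_report(ticket=("upsilon", "signature"),
                   message={"r": 1234, "x": 49.87, "y": 8.65, "t": 1700, "m": "fire, 2 injured"},
                   valid_group_keys=[b"group-key-A", b"group-key-B"])
```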
It should be noted here that it is possible to maintain multiple verifiers in order to provide resistance against attacks or to filter reports. Therefore, police, fire, and ambulance department could each maintain their own verifiers.
Confirmation Request
--------------------
In order to be able to decide whether a report is trustworthy, the verifier may send a confirmation request to specific , where potential witnesses are able to retrieve them. Therefore, for each $\alpha_i$ contained in the report, the verifier computes an identifier $rp_{\tau_i} = h(\tau_i)$. Like in [@wozniak2012geocast], this identifier is used to obtain the name of the where the request should be deposited. By appending the number $rp_{\tau_i} \bmod N$ to a known prefix ($N$ is the number of ), the verifier can resolve the IP address of the , e.g. by . Finally, having established a secure connection, the verifier sends $(\tau_i, E_{K_i}(M))$ to the respective , which stores $E_{K_i}(M)$ for lookup with $\tau_i$.
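The mapping from a token to the responsible server can be sketched as follows; the number of servers, the hostname prefix, and the domain are hypothetical placeholders, not values from the paper.

```python
# A small sketch (names and numbers are our own placeholders) of request routing:
# rp_tau = h(tau) is mapped to one of N servers via rp_tau mod N, appended to a known
# hostname prefix, and then resolved, e.g., via DNS.
import hashlib

N_SERVERS = 16                                   # assumed number of servers
HOST_PREFIX = "rp-"                              # hypothetical naming scheme
HOST_SUFFIX = ".verification.example"

def server_for_token(tau: bytes) -> str:
    """Return the hostname of the server responsible for requests filed under tau."""
    rp_tau = int.from_bytes(hashlib.sha256(tau).digest(), "big")   # rp_tau = h(tau)
    return f"{HOST_PREFIX}{rp_tau % N_SERVERS}{HOST_SUFFIX}"

# The verifier would resolve this name (e.g., socket.gethostbyname), open a secure
# connection, and deposit (tau, E_K(M)) there; witnesses derive the same name from
# their stored tau and poll it later.
print(server_for_token(b"example-token"))
```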
Witness Feedback
----------------
Witnesses poll in regular time intervals to retrieve requests concerning their stored tokens $\tau$. Here, the addresses of the are derived as described in the previous section. Once a user receives $E_{K}(M)$ for a token $\tau$, he decrypts it using the stored group key $K$ and decides about $M$. Then, he establishes a secure connection to the and obtains a voting ticket $\lambda$ as described above. Finally, after having established a secure connection, he sends his vote ${V = (\lambda, \delta)}$ to the verifier, where ${\delta \in \left\{ \mbox{true}, \mbox{false}, \mbox{unsure}, \mbox{defer} \right\}}$ is his decision about the report. Here, in order to support postponing of votes, if a device does not receive an input from the user within a certain time limit, it auto-replies with a *defer*. This allows the verifier to detect that a vote from a legitimate witness is still missing in order to postpone his decision making if a large number of votes is still missing. If the verifier has received several votes providing a clear majority for the validity of a report, it is considered true.
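The exact decision rule of the verifier is not specified above; the following sketch is one possible interpretation, with illustrative thresholds for the minimum number of decisive votes, the required majority margin, and the tolerated share of deferred votes.

```python
# A sketch of one possible decision rule for the verifier (thresholds are our assumptions):
# deferred votes postpone the decision, otherwise a clear majority of 'true' over 'false'
# confirms the report and vice versa.
from collections import Counter

def decide(votes, min_votes=3, majority_ratio=2.0, max_defer_share=0.3):
    """votes: list of decisions in {'true', 'false', 'unsure', 'defer'}."""
    if not votes:
        return "pending"
    counts = Counter(votes)
    if counts["defer"] / len(votes) > max_defer_share:
        return "pending"                       # too many legitimate witnesses still missing
    decisive = counts["true"] + counts["false"]
    if decisive < min_votes:
        return "pending"
    if counts["true"] >= majority_ratio * counts["false"]:
        return "confirmed"
    if counts["false"] >= majority_ratio * counts["true"]:
        return "rejected"
    return "pending"

print(decide(["true", "true", "unsure", "false", "true"]))   # -> confirmed
```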
Discussion {#sec:discussion}
==========
Security Aspects
----------------
Regarding the security objectives, we assume that potential attackers have one or more of the following goals: to obtain knowledge of the content of reports, their reporters and witnesses (see privacy discussion below), or to propagate misleading information in order to, e.g., impede relief operations or hide crimes. To achieve these goals, attackers may observe the communication between entities, send reports, vote as a witness, or even compromise . However, attackers cannot compromise verifiers, the , or parts of the cellular network infrastructure. We consider this an appropriate assumption as it may be easier to control access to a few verifiers or the than protecting a large number of , which may be required for scalability reasons. With these abilities, we now discuss the given security objectives.
### Secure communication
Confidentiality, authentication, and integrity are provided by a protocol like that is employed between the entities. Hence, by observing the communication or participating in the service, adversaries cannot violate this objective. It can also not be violated by compromising , as those only store encrypted messages.
### Resilient decision making
By employing an , attackers are not able to perform a Sybil attack and can only issue one report or vote per event. Therefore, by participating in the service, they can only obtain a malicious majority if the majority of votes is malicious. While an adversary may try to issue false reports where he holds a malicious majority (i.e. by using non-existing tokens to exclude benign witnesses), this does not provide an advantage as long as benign users issue reports about the same event. A more sophisticated attacker may also be able to compromise . In this case, while he may not manipulate votes directly, he can suppress confirmation requests to reduce the number of witnesses. Nevertheless, if a report contains more than one token, requests are distributed to different so that an adversary may only suppress a fraction of votes. Finally, an adversary may try to manipulate decisions by identity theft, i.e., stealing votes. Here, the only reasonable countermeasure is to implement a reputation scheme that allows filtering of malicious or compromised . We plan to investigate such reputation-based filtering techniques in our future work.
### Accountability
While privacy of users is an important aspect, it still has to be possible to reveal the identity of a user for the prosecution of crimes. This can be achieved by combining the knowledge of a verifier and the , i.e., the vote identifier $\upsilon$ and $K_{IS}$, to infer the identity of a reporter or witness. However, due to pre-image resistance of $h(x)$, this still requires brute-force testing of all user identifiers $id$ and comparing it to ${\upsilon = h(id, ...)}$. Hence, uncovering the identity of users is possible, but requires significant effort.
### Availability
Verifiers and the can implement countermeasures against spamming by rejecting users who send reports or votes at too high rates. Countermeasures against attacks may include techniques like, for example, client puzzles but are beyond the scope of this paper.
Privacy Aspects
---------------
In terms of the privacy objectives, we assume that potential attackers have one or more of the following goals: to infer the identity of users, their locations, co-location of users, or absence of users from a location. We assume that attackers have the same abilities as described above. Given these abilities, we discuss potential attacks against privacy.
### Observation Attack
If an attacker observes the communication between entities, he is not able to violate the anonymity of users as he can only see an encrypted traffic flow. Information about the identities belonging to the involved addresses requires additional knowledge from the cellular operator. An adversary is also not able to violate the location, co-location, or absence privacy of users. While he may observe which poll which , this does not provide an advantage since are responsible for many locations at different times in an unpredictable manner due to the pre-image resistance of $h(x)$ and ${rp_{\tau} = h(\tau)}$ [@wozniak2012geocast]. Furthermore, observing the communication with a verifier or the does also not violate any objective as the attacker cannot read $M$ due to the encrypted communication.
### Participation Attack
When having access to valid adversaries may send reports and votes. This corresponds to the knowledge of an attacker about specific $\tau$, $K$, and $M$.
#### Anonymity
As described above, user identities can only be violated if location, co-location, or absence privacy are violated as traffic is relayed only encrypted.
#### Location privacy
Observing users polling does not violate the location privacy as are responsible for many locations at different times. Knowing $\tau$ and thus which is used to deliver a confirmation request does therefore not violate this objective. However, if there is only one report and the attacker knows the location contained in $M$, he may violate the location privacy as he is able to detect users voting for this report. Still, such a temporal correlation may not be easy to detect with many reports and users reacting at different times. Furthermore, attackers can only obtain information about one location at a specific time, which is unlikely to be sufficient for inferring their identities. Nevertheless, if reports are only issued rarely, users should still contact the IS and verifiers regularly to obfuscate temporal correlations.
#### Co-location privacy
According to the location privacy, this objective may only be violated if there is just one report.
#### Absence privacy
An attacker may not violate this objective, as he may only detect absence from a location if a user does not poll a certain . This is unlikely, as ${rp_\tau = h(\tau)}$ evenly distributes the responsibility of for different times and locations. Therefore, having received several tokens, a user is likely to poll every .
### Compromising
More sophisticated attackers may also compromise one or more . This corresponds to obtaining knowledge of tokens $\tau$ being polled by users.
#### Anonymity
As described above, anonymity can only be violated if location, co-location, or absence privacy is violated as attackers only obtain IP addresses of the users.
#### Location privacy
Since the tokens $\tau$ that are stored on the do not reveal any information about location or time (this requires knowledge of the group key $K$ exchanged among users in the area), an attacker has to participate in the service in order to violate this objective. That is, he has to obtain $M$ and the corresponding $\tau$, as well as group key $K$. In this case, he can infer the being polled by users having resided at that location at this time. If he is able to compromise this , he can violate the location privacy of users having resided at the time and place contained in $M$. Still, this is again not likely to be sufficient to infer the identity of users. In order to track the movement of users, an attacker has to know several tokens $\tau$ received by a user which is only possible if he has been able to follow the user in the disaster area over some time. Moreover, he has to be able to compromise several to follow the movement.
#### Co-location privacy
An attacker may violate this objective as he can detect whether two users poll the same using the same $\tau$. Nevertheless, this only provides knowledge of a potential social connection between two unknown users which may not be of much benefit. Assuming that an attacker wants to find out if two known users (given their IP addresses) have met, he has to be able to compromise a specific responsible for the assumed place and time of the meeting. In order to avoid this potential attack, users may want to use an anonymous proxy when polling to hide the actual IP addresses.
#### Absence privacy
By observing whether a user never polls a certain $\tau$ on a , an attacker can violate the absence privacy of a user. Nevertheless, this only provides an advantage if the user is known and the attacker is able to compromise a specific for the location. As discussed above, users may prevent this by using an anonymous proxy.
Evaluation {#sec:evaluation}
==========
| **Parameter** | **Value** |
| --- | --- |
| Simulated time | 120 min (mobility warm-up 60 min) |
| Field setup | 5$\times$5 km$^2$, 2000 nodes, 100 events |
| Event radius | $\mathcal{U}$(25 m, 250 m) |
| Token negotiation interval | $\mathcal{U}$(15 min, 30 min) |
| Negotiation hop limit | 1, 2, 3, 4, 5, 6 |
| Token validity period | 5 min (starting at time of reception) |
| Ratio of malicious nodes | 0..0.4 in steps of 0.05 |
| **Mobility models** | , , |
| Movement speed | $\mathcal{U}$(0.5 m/s, 1.5 m/s) |
| Group size (, ) | $\mathcal{N}$($\mu$ = 4, $\sigma^2$ = 4) |
| Max. pause duration | 60 s (, ), 15 min () |
| Max. group/roaming radius | 5 m (), 25 m () |
| **Radio model** | IEEE 802.11 (2.4 GHz, 54 Mbit/s) |
| Transmit power | 17 dBm (max. range $\approx$ 100 m) |
| Path loss model | log-distance, log-normal shadowing |
| Path loss coefficients | $n$ = 3.0, $\sigma$ = 9.5 dB |
| Fast fading model | Jakes’ Rayleigh fading |

: Overview of simulation parameters[]{data-label="tbl:parameters"}
In order to evaluate the performance of our approach, we implemented it in OMNeT++ [@varga2001omnet++] using the MiXiM framework [@wessel2009mixim]. An overview of the simulation parameters is given in Table\[tbl:parameters\]. In our simulation, users move on a field according to a given mobility model while negotiating tokens within their $k$-hop neighborhood. For replicability, we modeled events by circles with a given radius, with users reporting an event when entering its area. As we were interested in the number of witnesses that can be expected for a report when negotiating tokens over $k$ hops, we used three different mobility models: the well-known , as well as two group mobility models: and [@camp2002survey]. We chose these models over existing mobility models for disasters since these models either do not consider the mobility of the affected population in a disaster or model it by applying existing models for group movement [@uddin2009post]. Furthermore, in order to get an impression of the abilities of our approach in terms of detecting malicious users, we randomly set a certain fraction of users to be malicious. At the end of the simulation, we collected the sets of witnesses for the reported events and calculated the ratio of benign majorities by counting the number of reports with more benign than malicious users and dividing it by the total number of reports. We used this ratio as it corresponds to either the correct confirmation of true reports or the rejection of false ones. For witnesses, we assumed that malicious users are always able to vote for a malicious and against a benign report. In contrast, benign witnesses only confirm a true or reject a false report if they were actually in the event area. Otherwise, they issue a vote with an unsure decision.
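The evaluation metric can be sketched as follows; this is our reconstruction of the stated counting rule, in which unsure votes are neutral and only benign witnesses that were actually inside the event area count towards a benign majority.

```python
# A sketch of the evaluation metric (our reconstruction of the counting rule above):
# a report is handled correctly when its benign witnesses that were actually inside
# the event area outnumber the malicious witnesses; unsure votes are neutral.
def benign_majority_ratio(reports):
    """reports: list of dicts like {'malicious': 2, 'benign_in_area': 3, 'benign_unsure': 1}."""
    def has_benign_majority(report):
        return report["benign_in_area"] > report["malicious"]
    return sum(has_benign_majority(r) for r in reports) / len(reports)

reports = [
    {"malicious": 1, "benign_in_area": 3, "benign_unsure": 2},
    {"malicious": 2, "benign_in_area": 1, "benign_unsure": 4},
]
print(benign_majority_ratio(reports))   # -> 0.5
```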
![image](plots.pdf)
As expected, for an increasing number of $k$ hops, we can see that the number of witnesses increases as well (Fig.\[fig:plots\]a). Here, we can also see the impact of the different mobility models. While for , where users just move randomly, the average number of witnesses is rather small at about 2 and only increases slightly, both group mobility models show a larger number of witnesses (between 4.2 and 5.5 for and between 8 and 11 for ) and a stronger increase with increasing $k$. This behavior can be explained by the movement in groups, which places a higher number of nodes within communication range and thus provides more witnesses. Accordingly, we can see that for the model where users move up to 25m away from the center of the group, the number of witnesses is smaller than for the model where users move closely to each other, staying within only about 5m of the center of the group. Since group mobility is more likely to appear after a disaster, we can see that our approach benefits from this with more witnesses per event.
Furthermore, according to our expectations, the ratio of unsure witnesses increases with the number of hops (Fig.\[fig:plots\]b). An interesting aspect here is the fact that for the group mobility models, at 1 hop, the ratio of unsure witnesses is smaller than for . For more than 1 hop, the group mobility models suffer from the fact that users move in groups. Here, it is more likely that witnesses using the same token have not been to the event area and are therefore unsure. Hence, negotiating group keys over multiple hops does not seem to be a good strategy for disasters where users are likely to move in groups.
Finally, regarding the ratio of benign majorities for $k=1$ (Fig.\[fig:plots\]c), we can see that the group mobility models are able to provide a higher ratio of benign majorities. This behavior can be explained with the higher number of witnesses per report. Nevertheless, for all mobility models, the approach suffers from benign users being unsure about an event. Therefore, we can see that, e.g., for 10% of malicious users, less than 90% of all reports have a benign majority.
Conclusion {#sec:conclusion}
==========
In this article, we proposed the concept of witness-based report verification. We provided an extensive overview of objectives to be fulfilled by such a scheme and presented a first approach. Our evaluation shows the benefit of group mobility, which results in a reasonable number of witnesses per event while relying on single-hop negotiation of tokens.
In our future work, we plan to investigate the impact of node densities and realistic reporting behavior that includes the aspect of witnesses voting at different times. Finally, we aim to consider incorporating reputation schemes to filter malicious users and provide a comparison with existing verification schemes that are based on data mining techniques.
Acknowledgment {#acknowledgment .unnumbered}
==============
This work is supported by the German Research Foundation (DFG Graduiertenkolleg 1487, Selbstorganisierende Mobilkommunikationssysteme für Katastrophenszenarien).
---
author:
- |
\
*1. Institute of Experimental Physics, Johannes Kepler University Linz, A-4040 Linz, Austria*\
*2. State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Weijin Road 92,*\
*Nankai District, CN-300072 Tianjin, China*\
*3. Nanchang Institute for Microtechnology of Tianjin University, Weijin Road 92, Nankai District, 300072 Tianjin, China*\
*\* Corresponding author: lidong.sun@jku.at*
bibliography:
- 'RDSofMoS2.bib'
title: ' Substrate Induced Optical Anisotropy in Monolayer MoS$_2$'
---
Abstract {#abstract .unnumbered}
========
In-plane optical anisotropy has been detected from monolayer MoS$_2$ grown on an a-plane $(11\overline{2}0)$ sapphire substrate in the ultraviolet-visible wavelength range. Based on the measured optical anisotropy, the energy differences between the optical transitions polarized along the ordinary and extraordinary directions of the underlying sapphire substrate have been determined. The results agree comprehensively with the dielectric-environment-induced modification of the electronic band structure and exciton binding energy of monolayer MoS$_2$ predicted recently by first-principles calculations. This study thus establishes symmetry as a new degree of freedom for dielectric engineering of two-dimensional materials.\
**Keywords:** Monolayer MoS$_2$, Optical anisotropy, Dielectric screening, Dielectric Engineering, Two-dimensional (2D) materials.
Introduction
============
Among the most studied two-dimensional (2D) semiconductors, monolayer transition metal dichalcogenides (TMDCs) serve as the platform for fundamental studies at the nanoscale and promise a wide range of potential applications. [@mak2010Atomically; @splendiani2010Emerging; @Radisavljevic2011Single; @Wang2012Electronics; @Geim2013Van; @qiu2013Optical; @ugeda2014giant] Recently, the dielectric-environment-induced modification of the excitonic structures of monolayer TMDCs has become a topic of intensive research efforts, [@komsa2012effects; @chernikov2014exciton; @stier2016probing; @rosner2016two; @qiu2016screening; @raja2017coulomb; @kirsten2017band; @cho2018environmentally; @wang2018Colloquium] and the potential of so-called dielectric engineering in constructing novel optoelectronic devices has also been demonstrated.[@raja2017coulomb; @utama2019dielectric; @raja2019dielectric]
For freestanding monolayer TMDCs, due to quantum confinement and reduced dielectric screening, the Coulomb interactions between charge carriers are enhanced, leading to a significant renormalization of the electronic structure and the formation of tightly bound excitons. While a freestanding monolayer in vacuum represents the utmost reduction of dielectric screening, the electronic band structure and the binding energy between charge carriers in monolayer TMDCs can also be tuned by selecting the dielectric environment. Indeed, first-principles calculations predict a monotonic decrease of both electronic bandgap and exciton binding energy with increasing dielectric screening,[@komsa2012effects; @qiu2013Optical; @kirsten2017band; @cho2018environmentally; @wang2018Colloquium] which has also been observed experimentally[@stier2016probing; @rosner2016two; @raja2017coulomb; @wang2018Colloquium]. Recently, by overlapping a homogeneous monolayer of MoS$_2$ (molybdenum disulfide) with the boundary connecting two substrates with different dielectric constants, an operational lateral heterojunction diode has been successfully constructed.[@utama2019dielectric] Even more recently, a new concept named “dielectric order” has been introduced and its strong influence on the electronic transitions and exciton propagation has been illustrated using a monolayer of WS$_2$ (tungsten disulfide)[@raja2019dielectric]. However, among these in-depth studies, the influence of a dielectric environment with reduced symmetry has not been investigated,[@neupane2019plane] and its potential for realizing anisotropic modification of the electronic and optical properties of monolayer TMDCs remains unexploited. In this letter, we report the breaking of the three-fold in-plane symmetry of the MoS$_2$ monolayer by depositing it on a low-symmetry sapphire surface, demonstrating symmetry-associated dielectric engineering of 2D materials.
Results and Discussions
=======================
![(a)The setup of the RDS measurement and its alignment to the substrate.[]{data-label="figure1"}](Fig1.pdf){width="6cm"}
Due to their attractive properties, sapphire crystals are widely applied in solid-state device fabrication and are also among the substrate candidates for 2D semiconductors.[@singh2015al2o3; @dumcenco2015large] Sapphire belongs to negative uniaxial crystals, i.e., its extraordinary dielectric function $\epsilon_e$ is smaller than its ordinary dielectric function $\epsilon_o$.[@harman1994optical; @yao1999anisotropic] So far, only the c-plane (0001) sapphire substrate has been used to investigate its dielectric screening effects on the monolayer TMDCs.[@yu2015exciton; @park2018direct] With isotropic in-plane dielectric properties defined by $\epsilon_o$, the underlying c-plane (0001) sapphire substrate induces a dielectric modification which is laterally isotropic for monolayer TMDCs. In contrast, we prepared monolayer MoS$_2$ on a-plane ($11\overline{2}0$) sapphire substrate using chemical vapor deposition (CVD).[@supplemental] By selecting low-symmetry a-plane sapphire as the substrate, we supply monolayer MoS$_2$ with an anisotropic dielectric environment defined by $\Delta \epsilon_{ext}=\epsilon_o-\epsilon_e$ (see Fig.1). The resultant anisotropic modification was then investigated by measuring the optical anisotropy in the monolayer MoS$_2$ over the ultraviolet-visible (UV-Vis) range using reflectance difference spectroscopy (RDS),[@aspnes1985anisotropies; @weightman2005reflection] which measures the reflectance difference between the light polarized along two orthogonal directions at near-normal incidence (see Fig.1). This highly sensitive technique has been successfully applied to investigate the optical properties of ultra-narrow graphene nanoribbons.[@denk2014exciton] For the a-plane ($11\overline{2}0$) substrate covered by monolayer MoS$_2$, the RD signals can be described by the following equation: $$\label{Eq.1}
\frac{\Delta r}{r}=\frac{1}{2}\frac{r_{[1\overline{1}00]}-r_{[0001]}}{r_{[1\overline{1}00]}+r_{[0001]}}$$ where $r_{[1\overline{1}00]}$ and $r_{[0001]}$ denote the reflectance of the light polarized along the $[1\overline{1}00]$ and the \[0001\] directions of the a-plane sapphire substrate, respectively.
![(a) The RD spectrum taken from the monolayer MoS$_2$ on Al$_2$O$_3$($11\overline{2}0$); (b) the absorption spectrum and (c) its first derivative measured from monolayer MoS$_2$ on Al$_2$O$_3$(0001).[]{data-label="figure2"}](Fig2.pdf){width="8cm"}
After the systematic characterization using conventional techniques,[@supplemental] RDS measurements were then applied to investigate the optical anisotropy within the plane of the MoS$_2$ monolayer. The real part of the RD spectra measured from the bare Al$_2$O$_3$(11$\overline{2}$0) surface and from the one covered by monolayer MoS$_2$ are plotted in Fig. 2(a). The bare Al$_2$O$_3$(11$\overline{2}$0) surface shows an optical anisotropy with an almost constant value which can be directly attributed to the in-plane birefringence of the a-plane sapphire substrate. Actually, the corresponding in-plane axes, namely the \[1$\overline{1}$00\] and \[0001\] axes, are parallel to the ordinary and extraordinary directions of sapphire, respectively. The result thus reveals the dielectric anisotropy $\Delta \epsilon_{ext}=\epsilon_o-\epsilon_e$ of the a-plane sapphire substrate. Furthermore, an additional optical anisotropy shows up for the a-plane sapphire substrate covered by monolayer MoS$_2$. It is worth mentioning that, above the transparent sapphire substrate, the real part of the RD signal is predominantly associated with the anisotropy of the absorption of the monolayer MoS$_2$. For comparison, the absorption spectrum of the monolayer MoS$_2$ grown on a c-plane (0001) sapphire substrate[@supplemental] is plotted in Fig. 2(b). The spectrum exhibits the typical absorption line shape of monolayer MoS$_2$ with well resolved peaks indicated as A, B and C located at 1.89eV, 2.03eV and 2.87eV, respectively. The peaks A and B are attributed to the electronic transitions from the spin-orbit split valence band (VB) to the conduction band (CB) around the critical points of K and K$'$ in the Brillouin zone, whereas the feature C is assigned to the transitions from VB to the CB in a localized region between critical points of K and $\Gamma$.[@qiu2013Optical; @li2014measurement] Furthermore, two additional absorption features indicated with D and E arise at 3.06 and 4.09eV, respectively. A recent study combining experiments and first-principles calculations attributed the D and E features to the higher-lying interband transitions located at the $\Gamma$ and K points of the Brillouin zone, respectively.[@Baokun2018Layer] Most importantly, the comparison between the spectra in Fig. 2(a) and (b) reveals apparent deviations of the RD spectrum from the spectral line shape of the absorption, indicating that the observed anisotropy does not merely arise from the polarization dependent reflectance of the substrate. In fact, the observed optical anisotropy should be the consequence of anisotropic optical transitions of monolayer MoS$_2$. The result thus indicates the breaking of the pristine three-fold rotation symmetry of monolayer MoS$_2$. Indeed, the RD spectrum closely resembles the first derivative of the absorption spectrum (see Fig. 2(c)), regarding the overall line shape and, especially, the peak positions. Each peak in the RD spectrum coincides precisely with a local maximum on the first derivative curve, which is initiated by the rising slope of an absorption peak. Recalling the configuration of the RDS measurement in Fig. 1, this coincidence reveals that, at each critical point, the energy of the optical transition for \[1$\overline{1}$00\]-polarized light (E$_{[1\overline{1}00]}$) is smaller than for \[0001\]-polarized light (E$_{[0001]}$), yielding a positive energy shift $\Delta E = E_{[0001]}-E_{[1\overline{1}00]}$.
In order to determine the energy shift $\Delta E$ for each individual optical transition, the reflectance spectra of monolayer MoS$_2$ on a-plane (11$\overline{2}$0) sapphire substrate, namely $r_{[0001]}$ and $r_{[1\overline{1}00]}$, were simulated for the light polarized along \[0001\] and \[1$\overline{1}$00\] directions, respectively. For this purpose, a three-phase model comprising vacuum, monolayer MoS$_2$, and a-plane (11$\overline{2}$0) sapphire substrate was used for the calculation (see Fig. 3(a)). The anisotropic dielectric functions of the a-plane sapphire substrate, namely $\epsilon_{[1\overline{1}00]}=\epsilon_o$ and $\epsilon_{[0001]}=\epsilon_e$, were measured using spectroscopic ellipsometry by Yao *et al*.[@yao1999anisotropic] The dielectric function of the MoS$_2$ monolayer polarized along the \[1$\overline{1}$00\] direction of the sapphire substrate, i.e., $\epsilon_{\mathrm{MoS_2}[1\overline{1}00]}$, was deduced from the absorption spectrum of the one deposited on the isotropic Al$_2$O$_3$(0001) substrate (see Fig. 2(b)). To this end, the absorption spectrum was fitted by a superposition of multiple Lorentzian oscillators, each with a well defined peak position $E_i$, amplitude $f_i$ and line width $\Gamma_i$.[@li2014measurement] Subsequently, $\epsilon_{\mathrm{MoS_2}[0001]}$ polarized along the \[0001\] direction of the substrate was modeled by introducing a center energy shift $\Delta E_i$, an amplitude deviation $\Delta f_i$ and a line width difference $\Delta \Gamma_i$ for each individual Lorentzian oscillator constituting $\epsilon_{\mathrm{MoS_2}[1\overline{1}00]}$. The resultant reflectance spectra, namely $r_{[0001]}$ and $r_{[1\overline{1}00]}$, were subsequently used to calculate the corresponding RD spectrum using Eq.1. The Lorentzian parameters for each individual optical transition were obtained by fitting the simulated RD spectrum with the one experimentally measured (see Fig. 3(b)). The real and imaginary parts of the dielectric function obtained for monolayer MoS$_2$ along the $[1\overline{1}00]$ and \[0001\] directions, respectively, are plotted in Fig.3(c). More details of the calculations can be found in Supplementary S3.
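For illustration, a stripped-down numerical version of this three-phase calculation is sketched below; the monolayer thickness, the background permittivity, the Lorentzian parameters, and the rigid energy shift are placeholder values rather than the fitted ones, and the substrate is represented by the dispersion-free static-limit values quoted further below.

```python
# A stripped-down numerical sketch of the three-phase (vacuum / MoS2 / sapphire) model.
# All oscillator parameters, the background permittivity, the layer thickness and the
# energy shift are illustrative placeholders, not the fitted values of the paper.
import numpy as np

d_film = 0.65e-9                       # assumed monolayer thickness (m)
E = np.linspace(1.5, 4.5, 600)         # photon energy (eV)
wavelength = 1239.84e-9 / E            # vacuum wavelength (m)

def lorentz_eps(E, oscillators, eps_background=4.0):
    """Dielectric function as a sum of Lorentzians; oscillators = [(E_i, f_i, Gamma_i), ...]."""
    eps = np.full_like(E, eps_background, dtype=complex)
    for E_i, f_i, G_i in oscillators:
        eps += f_i / (E_i**2 - E**2 - 1j * G_i * E)
    return eps

def reflectance(n_film, n_sub):
    """Normal-incidence amplitude reflectance of vacuum / film / substrate."""
    r01 = (1 - n_film) / (1 + n_film)
    r12 = (n_film - n_sub) / (n_film + n_sub)
    phase = np.exp(2j * 2 * np.pi * n_film * d_film / wavelength)
    return (r01 + r12 * phase) / (1 + r01 * r12 * phase)

# Oscillators for the [1-100] polarization and a rigidly blue-shifted copy for [0001].
osc_ord = [(1.89, 2.0, 0.06), (2.03, 2.0, 0.08), (2.87, 20.0, 0.40)]
osc_ext = [(E_i + 0.007, f_i, G_i) for E_i, f_i, G_i in osc_ord]

n_sub_o, n_sub_e = np.sqrt(3.064), np.sqrt(3.038)   # static-limit sapphire values from the text
r_ord = reflectance(np.sqrt(lorentz_eps(E, osc_ord)), n_sub_o)
r_ext = reflectance(np.sqrt(lorentz_eps(E, osc_ext)), n_sub_e)
rd_real = np.real(0.5 * (r_ord - r_ext) / (r_ord + r_ext))   # Eq. (1)
```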
A close inspection of Fig. 3(b) confirms the systematic blue shift of $\epsilon_{\mathrm{MoS_2}[0001]}$ relative to $\epsilon_{\mathrm{MoS_2}[1\overline{1}00]}$. The energy shifts deduced for the A and B peaks are both around 0.02meV, and the $\Delta E$ increases to values of $\sim$7.23meV and $\sim$15.88meV for the absorption features of C and E, respectively. The positive sign of the $\Delta E$s obtained agrees with the conclusion based on the coincidence between the RD and the differentiated absorption spectra. The weak feature D is excluded because its strong overlap with the predominant broad C peak prevents deducing a reliable $\Delta E$.
The observed optical anisotropy of monolayer MoS$_2$ can be explained by the anisotropic dielectric screening induced by the a-plane sapphire substrate. As introduced in the previous section, for atomically thin semiconductors, it has been predicted that the surrounding dielectric environment modifies both their electronic band structure and their exciton binding energy significantly by the dielectric screening effect. However, near the band edge, the modification of the electronic band structure is largely compensated by the simultaneous alteration of the exciton states, resulting in only a moderate variation of the excitonic transition energy.[@cho2018environmentally] This compensation, however, attenuates for optical transitions involving higher-lying bands, leading to a pronounced dielectric-environment modification of the transition energy.[@cho2018environmentally; @raja2019dielectric]
In the current case, being sandwiched between the air and the substrate, the monolayer MoS$_2$ is exposed to the anisotropic dielectric environment imposed by the a-plane sapphire substrate, and its electronic band structure and exciton states become the objects of modification. For the a-plane sapphire substrate, in the static limit, the polarization dependent dielectric constants read $\epsilon_{[1\overline{1}00]} = \epsilon_o = 3.064$ and $\epsilon_{[0001]}=\epsilon_e=3.038$. [@harman1994optical; @yao1999anisotropic] The anisotropic dielectric environment indicated by $\Delta\epsilon=\epsilon_{[1\overline{1}00]}-\epsilon_{[0001]}=0.026$ may introduce the energy shifts $\Delta E$ between the \[0001\]- and $[1\overline{1}00]$-polarized optical transitions, and concomitantly the optical anisotropy of monolayer MoS$_2$ on a-plane sapphire substrate. Actually, the observed correlation between the $\Delta E$ and the $\Delta\epsilon$ agrees nicely with the previous results in the following respects: (1) Optical transition energy is predicted to decrease with increasing dielectric permittivity. For the a-plane sapphire substrate, $\epsilon_{[1\overline{1}00]}$ is larger than $\epsilon_{[0001]}$. Consequently, the positive sign of $\Delta E=E_{[0001]}-E_{[1\overline{1}00]}$ obtained from each individual feature of the RD spectrum is consistent with the prediction. (2) The environment-induced modification of the optical transition energy is enhanced for the higher-lying interband transitions. In the experimental results presented here, the $\Delta E$ increases dramatically with the optical transition energy. Actually, the $\Delta E$s associated with the higher-lying interband transitions (C and E) are several orders of magnitude larger than the ones related to the excitonic transitions below the bandgap (A and B).
![(a) The schematic illustration of the three phase model, (b) the fitting between the simulated and the measured RD spectra, (c) the deduced dielectric functions of monolayer MoS$_2$ for light polarized along the \[0001\] and \[1$\overline{1}$00\] directions of a-plane sapphire substrate, respectively.[]{data-label="figure3"}](Fig3.pdf){width="14cm"}
The RDS measurements have also been performed with the sample enclosed in a vacuum chamber with a base pressure of $1 \times 10^{-9}$ mbar. Fig. 4 shows the RD spectra of the same sample of monolayer MoS$_2$ on $\mathrm{Al_2O_3} \ (11\overline{2}0)$ measured in the atmosphere and in vacuum, respectively. In comparison with the result obtained in air, the RD spectrum measured in vacuum clearly shows three new features between the B and C peaks (Fig. 4(a)). Based on their energetic positions, two of them can be attributed to the 2s and 3s states of the B exciton Rydberg series (see the indication in Fig. 4(a)).[@hill2015observation] The third feature, which appears as a shoulder at the right side of peak B, is most probably associated with the 2s state of the A exciton. This assignment is supported by the observation that the energy interval between this feature and the 2s state of the B exciton is similar to that between the 1s states of the A and B excitons.[@hill2015observation] In addition, an intensification of the RD signal can be recognized over the whole spectral range (Fig. 4(b)). The vacuum-induced enhancement of the optical anisotropy can be explained by an improvement of the “dielectric ordering”.[@raja2019dielectric; @cadiz2017excitonic] The atmosphere may introduce dielectric disorder in the following ways: (1) adsorption of air molecules, such as water, on the top surface of monolayer MoS$_2$; (2) molecular intercalation between the monolayer MoS$_2$ and the $\mathrm{Al_2O_3 \ (11\overline{2}0)}$ surface. [@he2012scanning] These processes introduce a nonhomogeneous dielectric environment and weaken the regular anisotropic dielectric screening induced by the substrate. By reducing the ambient pressure in vacuum, both adsorption and intercalation are hindered, leading to an improved dielectric ordering and enhanced anisotropic dielectric screening from the substrate. Consequently, the current experimental observation also suggests the potential of the substrate-induced optical anisotropy as a sensitive probe of molecular adsorption and intercalation.
![The RD spectra of monolayer MoS$_2$ on $\mathrm{Al_2O_3}(11\overline{2}0)$ measured in atmosphere and in vacuum, plotted in a narrow (a) and an extended (b) energy range. The interfacial structures without and with molecular adsorption and intercalation are schematically illustrated in the inset of (b).[]{data-label="figure4"}](Fig4.pdf){width="10cm"}
Conclusion
==========
In summary, optical anisotropy has been detected in monolayer MoS$_2$ deposited on an a-plane $(11\overline{2}0)$ sapphire substrate by CVD. The revealed breaking of the intrinsic three-fold rotation symmetry of monolayer MoS$_2$ is associated with the anisotropic dielectric environment supplied by the underlying a-plane sapphire substrate. The resultant anisotropic modification of monolayer MoS$_2$ has been quantitatively evaluated by determining the energy difference between optical transitions polarized along the extraordinary and ordinary directions of the sapphire substrate. The general tendency based on the optical transitions over the ultraviolet-visible wavelength range is in good agreement with the first-principles prediction regarding the modification of the electronic band structure and exciton binding energy. Furthermore, the detailed optical anisotropy of monolayer MoS$_2$ shows a dependence on the ambient pressure, indicating its sensitivity to molecular adsorption and intercalation. Although only one combination, namely monolayer MoS$_2$ on an a-plane (11$\overline{2}$0) sapphire substrate, has been investigated, the anisotropic dielectric modification should be a general phenomenon for atomically thin materials adjacent to substrates with low symmetry. In addition to the magnitude and ordering, the symmetry of the dielectric environment may supply a new degree of freedom for dielectric engineering of two-dimensional materials.
acknowledgements {#acknowledgements .unnumbered}
================
We acknowledge the financial support for this work by the Austrian Science Fund (FWF) under project number P25377-N20. W.F.S. and Y.X.W. acknowledge the financial support of the China Scholarship Council (CSC), Y.X.W. acknowledges the financial support from Eurasia Pacific Uninet, and C.B.L.P. acknowledges financial support from the Consejo Nacional de Ciencia y Tecnología, through the Becas Mixtas program.
---
abstract: 'At 66Mpc, AT2019qiz is the closest optical tidal disruption event (TDE) to date, with a luminosity intermediate between the bulk of the population and the faint-and-fast event iPTF16fnl. Its proximity allowed a very early detection and triggering of multiwavelength and spectroscopic follow-up well before maximum light. The velocity dispersion of the host galaxy and fits to the TDE light curve indicate a black hole mass $\approx 10^6$ M$_\odot$, disrupting a star of $\approx1$ M$_\odot$. By analysing our comprehensive UV, optical and X-ray data, we show that the early optical emission is dominated by an outflow, with a luminosity evolution $L\propto t^2$, consistent with a photosphere expanding at constant velocity ($\gtrsim 2000$kms$^{-1}$), and a line-forming region producing initially blueshifted H and He II profiles with $v=3000-10000$kms$^{-1}$. The fastest optical ejecta approach the velocity inferred from radio detections; thus the same outflow is likely responsible for both the fast optical rise and the radio emission – the first time this connection has been determined in a TDE. The light curve rise begins $35\pm1.5$ days before maximum light, peaking when AT2019qiz reaches the radius where optical photons can escape. The photosphere then undergoes a sudden transition, first cooling at constant radius then contracting at constant temperature. At the same time, the blueshifts disappear from the spectrum and Bowen fluorescence lines (N III) become prominent, implying a source of far-UV photons, while the X-ray light curve peaks at $\approx10^{41}$ergs$^{-1}$. Thus accretion began promptly in this event, favouring accretion-powered over collision-powered outflow models. The size and mass of the outflow are consistent with the reprocessing layer needed to explain the large optical to X-ray ratio in this and other optical TDEs.'
author:
- |
[M. Nicholl$^{1,2}$]{}[^1], [T. Wevers$^{3}$]{}, [S. R. Oates$^{1}$]{}, [K. D. Alexander$^{4}$[^2]]{}, [G. Leloudas$^{5}$]{}, [F. Onori$^{6}$]{}, , [S. Gomez$^{9}$]{}, [S. Campana$^{10}$]{}, [I. Arcavi$^{11,12}$]{}, [P. Charalampopoulos$^{5}$]{}, , [N. Ihanec$^{13}$]{}, [P. G. Jonker$^{14,15}$]{}, [A. Lawrence$^{2}$]{}, [I. Mandel$^{16,17,1}$]{}, , [J. Burke$^{18,19}$]{}, [D. Hiramatsu$^{18,19}$]{}, [D. A. Howell$^{18,19}$]{}, [C. Pellegrino$^{18,19}$]{}, , [J. P. Anderson$^{21}$]{}, [E. Berger$^{9}$]{}, [P. K. Blanchard$^{9}$]{}, [G. Cannizzaro$^{14,15}$]{} , [M. Dennefeld$^{22}$]{}, [L. Galbany$^{23}$]{}, [S. González-Gaitán$^{24}$]{}, [G. Hosseinzadeh$^{9}$]{}, , [I. Irani$^{26}$]{}, [P. Kuin$^{27}$]{}, [T. Muller-Bravo$^{28}$]{}, [J. Pineda$^{29}$]{}, [N. P. Ross$^{2}$]{}, , [B. Tucker$^{18}$]{}, [[Ł]{}. Wyrzykowski$^{13}$]{}, [D. R. Young$^{31}$]{}\
Affiliations at end of paper
bibliography:
- 'refs.bib'
title: 'An outflow powers the optical rise of the nearby, fast-evolving tidal disruption event AT2019qiz'
---
\[firstpage\]
transients: tidal disruption events – galaxies: nuclei – black hole physics
Introduction {#sec:intro}
============
An unfortunate star in the nucleus of a galaxy can find itself on an orbit that intersects the tidal radius of the central supermassive black hole (SMBH), $R_t\approx R_*(M_\bullet/M_*)^{1/3}$ for a black hole of mass $M_\bullet$ and a star of mass $M_*$ and radius $R_*$ [@Hills1975]. This encounter induces a spread in the specific orbital binding energy across the star that is orders of magnitude greater than the mean binding energy [@Rees1988], sufficient to tear the star apart in a ‘tidal disruption event’ (TDE). The stellar debris, confined in the vertical direction by self-gravity [@Kochanek1994; @Guillochon2014], is stretched into a long, thin stream, roughly half of which remains bound to the SMBH [@Rees1988]. As the bound debris orbits the SMBH, relativistic apsidal precession causes the stream to self-intersect and dissipate energy.
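For orientation, the scale of this encounter can be sketched with the fiducial numbers used later in this paper (a roughly solar-mass star and a $\sim10^6$ M$_\odot$ black hole). The short Python sketch below is purely illustrative and simply evaluates the expressions above in cgs units:

```python
import numpy as np

G, c = 6.674e-8, 2.998e10          # cgs constants
M_sun, R_sun = 1.989e33, 6.957e10

def tidal_radius(M_bh, M_star=1.0, R_star=1.0):
    """R_t ~ R_* (M_bh/M_*)^(1/3), masses in M_sun, R_* in R_sun; returns cm."""
    return R_star * R_sun * (M_bh / M_star)**(1.0 / 3.0)

M_bh = 1e6                                   # fiducial SMBH mass (M_sun)
R_t  = tidal_radius(M_bh)                    # ~7e12 cm
R_S  = 2 * G * M_bh * M_sun / c**2           # Schwarzschild radius, ~3e11 cm
print(R_t / R_S)                             # a few tens of R_S, i.e. disruption outside the horizon
```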
This destruction can power a very luminous flare, up to or exceeding the Eddington luminosity, either when the intersecting streams circularise and form an accretion disk [@Rees1988; @Phinney1989], or even earlier if comparable radiation is produced directly from the stream collisions [@Piran2015; @Jiang2016]. Such flares are now regularly discovered, at a rate exceeding a few per year, by the various wide-field time-domain surveys.
Observed TDEs are bright in the UV, with characteristic temperatures $\sim2-5\times10^4$K and luminosities $\sim10^{44}$ergs$^{-1}$. They are classified according to their spectra, generally exhibiting broad, low equivalent width[^3] emission lines of hydrogen, neutral and ionised helium, and Bowen fluorescence lines of doubly-ionised nitrogen and oxygen [e.g. @Gezari2012; @Holoien2014; @Arcavi2014; @Leloudas2019]. This prompted @vanVelzen2020 to suggest three sub-classes labelled TDE-H, TDE-He and TDE-Bowen, though some TDEs defy a consistent classification by changing their apparent spectral type as they evolve [@Nicholl2019].
TDE flares were initially predicted to be brightest in X-rays, due to the high temperature of an accretion disk, and indeed this is the wavelength where the earliest TDE candidates were identified [@Komossa2002]. However, the optically-discovered TDEs have proven to be surprisingly diverse in their X-ray properties. Their X-ray to optical ratios at maximum light range from $\gtrsim10^{3}$ to $<10^{-3}$ [@Auchettl2017]. Producing such luminous optical emission without significant X-ray flux can be explained in one of two ways: either X-ray faint TDEs are powered primarily by stream collisions rather than accretion, or the accretion disk emission is reprocessed through an atmosphere [@Strubbe2009; @Guillochon2014; @Roth2016].
Several lines of evidence have indicated that accretion disks do form promptly even in X-ray faint TDEs: Bowen fluorescence lines that require excitation from far-UV photons [@Blagorodnova2018; @Leloudas2019]; low-ionisation iron emission appearing shortly after maximum light [@Wevers2019b]; and recently the direct detection of double-peaked Balmer lines that match predicted disk profiles [@Short2020; @Hung2020]. Thus a critical question is to understand the nature and origin of the implied reprocessing layer. It has already been established that this cannot be simply the unbound debris stream, as the apparent cross-section is too low to intercept a significant fraction of the TDE flux [@Guillochon2014].
Inhibiting progress is the complex geometry of the debris. Colliding streams, inflowing and outflowing gas, and a viewing-angle dependence of both the broad-band [@Dai2018] and spectroscopic [@Nicholl2019] properties all contribute to a tangled picture that must be unravelled. One important clue comes from radio observations: although only a small (but growing) sample of TDEs have been detected in the radio [see recent review by @Alexander2020], in such cases we can measure the properties (energy, velocity, and density) of an outflow directly. In some TDEs this emission is from a relativistic jet [@Zauderer2011; @Bloom2011; @Burrows2011; @Cenko2012; @Mattila2018], which does not appear to be a common feature of the population, but other radio TDEs have launched sub-relativistic outflows [@vanvelzen2016; @Alexander2016; @Alexander2017].
A number of radio-quiet TDEs have exhibited indirect evidence for slower outflows in the form of blueshifted optical/UV emission and absorption lines [@Roth2018; @Hung2019; @Blanchard2017], suggesting that outflows may be common. This is crucial, as the expanding material offers a promising means to form the apparently ubiquitous reprocessing layer required by the optical/X-ray ratios. Suggested models include an Eddington envelope [@Loeb1997], possibly inflated by radiatively inefficient accretion or an optically thick disk wind [@Metzger2016; @Dai2018]; or a collision-induced outflow [@Lu2020].
Understanding whether the optical reprocessing layer is connected to the non-relativistic outflows seen in some radio TDEs is therefore a crucial, and as yet under-explored, step towards a pan-chromatic picture of TDEs. Acquiring early observations in the optical, X-ray, and radio bands can allow us to distinguish whether outflows are launched *by* accretion, or *before* accretion (i.e., in the collisions that ultimately enable disk formation).
In this paper, we present a detailed study of the UV, optical and X-ray emission from AT2019qiz: the closest TDE discovered to date, and the first optical TDE at $z<0.02$ that has been detected in the radio. We place a particular focus on the spectroscopic evolution, finding clear evidence of an outflow launched well before the light curve maximum. By studying the evolution of the photosphere and line velocities, we infer a roughly homologous structure, with the fastest optically-emitting material (the reprocessing layer) likely also responsible for the radio emission (which is analysed in detail in a forthcoming companion paper; K. D. Alexander et al., in preparation). This event suggests a closer connection between the optical and radio TDE outflows than has been appreciated to date, while a peak in the X-ray light curve and the detection of Bowen lines indicates that accretion began promptly in this event and likely drives the outflow. The rapid rise and decline of the light curve suggests that the properties of outflows may be key to understanding the fastest TDEs.
We detail the discovery and classification of AT2019qiz in section \[sec:class\], and describe our observations and data reduction in section \[sec:obs\]. We analyse the host galaxy, including evidence for a pre-existing AGN, in section \[sec:host\], and study the photometric and spectroscopic evolution of the TDE emission in sections \[sec:phot\] and \[sec:spec\]. This is then brought together into a coherent picture, discussed in section \[sec:dis\], before we conclude in section \[sec:conc\]. All data in this paper will be made publicly available via WISeREP [@Yaron2012].
Discovery and background {#sec:class}
========================
AT2019qiz was discovered in real-time alerts from the Zwicky Transient Facility [ZTF; @Bellm2019; @Masci2019; @Patterson2019], at coordinates RA$=$04:46:37.88, Dec$=-$10:13:34.90, and was given the survey designation ZTF19abzrhgq. It is coincident with the centre of the galaxy 2MASXJ04463790-1013349 (a.k.a. WISEAJ044637.88-101334.9). The transient was first identified by the ALeRCE broker, who reported it to the Transient Name Server on 2019-09-19 UT [@Forster2019]. It was also reported by the AMPEL broker a few days later. In subsequent nights the same transient was independently reported by the Asteroid Terrestrial impact Last Alert System [ATLAS; @Tonry2018] as ATLAS19vfr, by *Gaia* Science Alerts [@Hodgkin2013] as Gaia19eks, and by the Panoramic Survey Telescope And Rapid Response System (PanSTARRS) Survey for Transients [PSST; @Huber2015] as PS19gdd. The earliest detection is from ATLAS on 2019-09-04 UT.
We have been running the Classification Survey for Nuclear Transients with Liverpool and Lasair (C-SNAILS; @Nicholl2019csnails) to search for TDEs in the public ZTF alert stream. AT2019qiz passed our selection criteria (based on brightness and proximity to the centre of the host galaxy[^4]) on 2019-09-25 UT, and we triggered spectroscopy with the Liverpool Telescope. On the same day, @Siebert2019 publicly classified AT2019qiz as a TDE using spectroscopy from Keck I.
Their reported spectrum, and our own data obtained over the following nights, showed broad He II and Balmer emission lines superposed on a very blue continuum, characteristic of UV-optical TDEs. Follow-up observations from other groups showed that the source was rising in the UV [@Zhang2019], but was not initially detected in X-rays [@Auchettl2019]. Radio observations with the Australia Telescope Compact Array (ATCA) revealed rising radio emission, reaching 2.6mJy at 21.2GHz on 2019-12-02 [@OBrien2019], placing AT2019qiz among the handful of TDEs detected at radio wavelengths.
![Pre-disruption $g,i,z$ colour image of the host of AT2019qiz, 2MASXJ04463790-1013349, obtained from the PanSTARRS image server. Comparing to our highest S/N LCO image of the transient, we measure an offset of $15\pm46$pc from the centre of the host.[]{data-label="fig:ps1"}](ps1.pdf){width="7cm"}
The spectroscopic redshift of AT2019qiz, as listed for the host galaxy in the NASA Extragalactic Database (NED) and measured from narrow absorption lines in the TDE spectrum, is $z=0.01513$. This corresponds to a distance of 65.6Mpc, assuming a flat cosmology with $H_0=70$kms$^{-1}$Mpc$^{-1}$ and $\Omega_\Lambda=0.7$. This makes AT2019qiz the most nearby TDE discovered to date.
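As a cross-check (not part of the original analysis), the adopted distance is reproduced by the standard astropy cosmology tools under the stated cosmological parameters:

```python
from astropy.cosmology import FlatLambdaCDM

# Flat cosmology with H0 = 70 km/s/Mpc and Omega_Lambda = 0.7 (so Omega_m = 0.3)
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
print(cosmo.luminosity_distance(0.01513))   # ~65.6 Mpc, the distance quoted in the text
```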
AT2019qiz was included in the sample of 17 TDEs from ZTF studied by @vanVelzen2020. Their study focused primarily on the photometric evolution around peak, and correlations between TDE and host galaxy properties. In this paper, we analyse a rich dataset for AT2019qiz including densely sampled spectroscopy, very early and late photometric observations, and the first detection of the source in X-rays. We also examine the host galaxy in detail. While we link the optical properties of the TDE to its behaviour in the radio, the full radio dataset and analysis will be presented in a companion paper (K. D. Alexander et al., in preparation).
Observations {#sec:obs}
============
Ground-based imaging {#sec:ground}
--------------------
Well-sampled host-subtracted light curves of AT2019qiz were obtained by the ZTF public survey, in the $g$ and $r$ bands, and ATLAS in the $c$ and $o$ bands (effective wavelengths 5330 and 6790Å). The ZTF light curves were accessed using the Lasair alert broker[^5] [@Smith2019].
We triggered additional imaging with a typical cadence of four days in the $g,r,i$ bands using the Las Cumbres Observatory (LCO) global network of 1-m telescopes [@Brown2013]. We retrieved the reduced (de-biased and flat-fielded) images from LCO and carried out point spread function (PSF) fitting photometry using a custom wrapper for <span style="font-variant:small-caps;">daophot</span>. The zeropoint in each image was calculated by comparing the instrumental magnitudes of field stars with their catalogued magnitudes from PanSTARRS data release 1 [@Flewelling2016].
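The zeropoint step described above amounts to averaging the offsets between catalogued and instrumental magnitudes of the field stars. The sketch below uses invented magnitudes and a simple clip standing in for whatever outlier rejection the custom wrapper applies; it only illustrates the idea:

```python
import numpy as np

def zeropoint(instr_mag, cat_mag, clip=3.0):
    """Zero point from field stars: catalogue minus instrumental magnitude,
    with a simple sigma clip to reject mismatches or variable stars."""
    dm = np.asarray(cat_mag) - np.asarray(instr_mag)
    med, std = np.median(dm), np.std(dm)
    good = np.abs(dm - med) < clip * std
    return np.mean(dm[good]), np.std(dm[good]) / np.sqrt(good.sum())

# hypothetical matched stars in one LCO frame
instr = np.array([-10.21, -11.05, -9.87, -10.66, -9.95])
cat   = np.array([ 17.80,  16.95, 18.15,  17.36,  18.07])
zp, zp_err = zeropoint(instr, cat)
# calibrated transient magnitude = instrumental PSF magnitude + zp
```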
AT2019qiz resides in a bright galaxy, 2MASXJ04463790-1013349, with $m_r=14.2$mag (Section \[sec:host\]). A three-colour ($gri$) image of the host, comprised of deep stacks from PanSTARRS, is shown in Figure \[fig:ps1\]. To isolate the transient flux, we aligned each LCO image with the PanSTARRS image in the corresponding filter using the <span style="font-variant:small-caps;">geomap</span> and <span style="font-variant:small-caps;">geotran</span> tasks in <span style="font-variant:small-caps;">pyraf</span>, computing the transformation from typically $> 50$ stars, before convolving the PanSTARRS reference image to match the PSF of the science image and subtracting with <span style="font-variant:small-caps;">hotpants</span> [@Becker2015].
![Optical and UV light curves of AT2019qiz. The host contribution has been removed using image differencing (ZTF, ATLAS, LCO) or subtraction of fluxes estimated from the galaxy SED (UVOT).[]{data-label="fig:phot"}](lc.pdf){width="\columnwidth"}
Astrometry
----------
Comparing the TDE position in an LCO $r$ band image obtained on 2019-10-10 (at the peak of the optical light curve) with the centroid of the galaxy in the aligned PanSTARRS $r$ band image, we measure an offset of $0.12\pm0.37$ pixels (or $0.047''\pm0.144''$) between the transient and the host nucleus, where the uncertainty is dominated by the root-mean-square error in aligning the images. This corresponds to a physical offset of $15\pm46$pc at this distance; the transient is therefore fully consistent with a nuclear origin.
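The conversion from angular to physical offset is simply the small-angle scale at the adopted distance; a quick numerical check (values copied from the text) gives:

```python
import numpy as np

D_pc = 65.6e6                                    # adopted distance in parsec
pc_per_arcsec = D_pc * np.pi / (180. * 3600.)    # ~318 pc per arcsecond at this distance
print(0.047 * pc_per_arcsec, 0.144 * pc_per_arcsec)   # ~15 pc and ~46 pc, as quoted
```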
An alternative astrometric constraint can be obtained using the [*Gaia*]{} Science Alerts (GSA) detections [@Hodgkin2013]. Gaia19eks was discovered at a separation of 38 milli-arcseconds (mas) from the reported location of its host galaxy in [*Gaia*]{} data release 2 (GDR2; @Gaia2018). The estimated astrometric uncertainty of GSA is $\sim$100 mas [@Fabricius2016], and the coordinate systems of GSA and GDR2 are well aligned [@Kostrzewa2018; @Wevers2019b]. This measurement corresponds to an even tighter constraint on the offset of $12\pm32$pc.
*Swift* UVOT data
-----------------
Target-of-opportunity observations spanning 39 epochs (PIs Yu and Nicholl) were obtained with the UV-Optical Telescope (UVOT) and X-ray Telescope (XRT) on-board the Neil Gehrels Swift Observatory (*Swift*). The UVOT light curves were measured using a $5''$ aperture. This is approximately twice the UVOT point-spread function, ensuring the measured magnitudes capture most of the transient flux while minimising the host contribution (the coincidence loss correction for the UVOT data is also determined using a $5''$ aperture, ensuring a reliable calibration of these magnitudes). The count rates were obtained using the *Swift* <span style="font-variant:small-caps;">uvotsource</span> tools and converted to magnitudes using the UVOT photometric zero points [@Breeveld2011]. The analysis pipeline used software HEADAS 6.24 and UVOT calibration 20170922. We exclude the initial images in the $B$, $U$, $UVW1$, and $UVW2$ filters (OBSID 00012012001) due to trailing within the images. We also exclude 2 later $UVW1$ images and a $UVW2$ image due to the source being located on patches of the detector known to suffer reduced sensitivity. A correction has yet to be determined for these patches[^6].
![Stacked *Swift* XRT image (total exposure time 30ks) centred at the position of AT2019qiz, marked by a cyan cross. An X-ray source is detected at $5.6\sigma$ significance. The image has been blocked $4\times4$ and smoothed using a Gaussian kernel for display. The colour bar gives the counts per pixel.[]{data-label="fig:x"}](xray3.pdf){width="7cm"}
![Time-averaged X-ray spectrum and best-fit absorbed power-law model, used to derive the counts-to-flux conversion.[]{data-label="fig:xspec"}](xspec2.pdf){width="\columnwidth"}
![Top: XRT light curve (unabsorbed flux) in 0.3-10keV, 0.3-2keV (soft) and 2-10keV (hard) X-ray bands. The X-ray light curve peaks around 20 days after optical maximum. Bottom: evolution of the hardness ratio, defined as (hard$-$soft counts)/(hard$+$soft counts). The X-rays transition from hard to soft as the luminosity declines.[]{data-label="fig:hr"}](hr.pdf){width="\columnwidth"}
![image](all-spec.pdf){width="\textwidth"}
No host galaxy images in the UV are available for subtraction. We first estimated the host contribution using a spectral energy distribution (SED) fit for the host galaxy (section \[sec:host\]), and scaled the predicted flux by a factor of 0.2, i.e. the fraction of the host light within a $5''$ aperture in the PanSTARRS $g$-band image of the galaxy. We checked that this provides a reliable estimate of the host flux by re-extracting the UVOT light curve using a $30''$ aperture to fully capture both the transient and host flux, and subtracting the model host magnitudes with no scaling.
Comparing the $5''$ light curves to the $30''$ light curves, we find a good match in the $U$ and $UVW1$ bands (but with much better S/N ratio in the $5''$ case). In the bluer $UVM2$ and $UVW2$ bands, where the host SED and light profile is less constrained, we find that scaling the host flux by a factor 0.1 before subtraction yields better agreement, and we adopt this as our final light curve. In all cases, we propagate the uncertainty in the host contribution, determined using the standard deviation of samples drawn from the SED fit. The complete, host-subtracted UV and optical light curves from *Swift*, LCO, ZTF and ATLAS are shown in Figure \[fig:phot\].
*Swift* XRT data
----------------
We processed the XRT data using the online analysis tools provided by the UK *Swift* Science Data Centre [@Evans2007; @Evans2009]. We first combined all of the data into a single deep stack (total exposure time 30ks), which we then downloaded for local analysis. The stacked image, shown in Figure \[fig:x\], clearly exhibits an X-ray source at the position of AT2019qiz. Using a $50''$ aperture ($\sim2.5$ times the instrumental half-energy width) centered at the coordinates of the transient, we measure an excess of $46.9 \pm 8.4$ counts above the background, giving a mean count rate of $(1.6\pm0.3)\times10^{-3}$ct s$^{-1}$.
We then used the same tools to extract the mean X-ray spectrum, shown in Figure \[fig:xspec\], and light curve. Given the low number of counts, we fit the spectrum using Cash statistics, fixing the Galactic column density to $6.5\times 10^{20}$ cm$^{-2}$. The power-law fit does not require an intrinsic absorption column ($<2.8\times 10^{21}$ cm$^{-2}$, $90\%$ confidence level). The photon index of the fit is $\Gamma=1.1^{+0.6}_{-0.4}$. A blackbody model can give a comparable fit, but the inferred temperature and radius ($kT=0.9$keV, $R=6.0\times10^7$cm) are not consistent with those of other TDEs. The 0.3-10 keV unabsorbed flux from the power-law fit is $9.9^{+3.7}_{-3.4}\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$. At the distance of AT2019qiz, this corresponds to an X-ray luminosity $L_X=5.1\times10^{40}$ergs$^{-1}$.
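The flux-to-luminosity conversion is the usual $L_X = 4\pi d^2 F_X$; evaluating it with the numbers above reproduces the quoted luminosity:

```python
import numpy as np

d_cm = 65.6 * 3.086e24             # 65.6 Mpc in cm
F_X  = 9.9e-14                     # 0.3-10 keV unabsorbed flux (erg cm^-2 s^-1)
print(4 * np.pi * d_cm**2 * F_X)   # ~5.1e40 erg/s
```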
The X-ray light curve is shown in Figure \[fig:hr\]. AT2019qiz is reasonably well detected in X-rays for around 50 days after optical maximum. At later times, we find that the low count rate requires binning on a timescale greater than the maximum permitted by the online tools. Therefore beyond 50 days, we derive the flux offline using a single binned image, rather than the automated analysis software. The 0.3-10keV light curve peaks around 20 days after optical maximum and declines quickly thereafter.
We calculate the hardness ratio (where sufficient counts are available) as $(H-S)/(H+S)$, where $S$ is the count rate in the $0.3-2$keV band and $H$ the count rate in the $2-10$keV band; these ranges have been chosen to match @Auchettl2017. AT2019qiz exhibits an unusually hard ratio at early times, with $(H-S)/(H+S)=0.2\pm0.3$, but as the X-rays fade they also soften, reaching $(H-S)/(H+S)=-0.4\pm0.3$ by $\approx 50$ days after peak. This latter value is typical of the TDE sample studied by @Auchettl2017, whereas a positive ratio has only been seen previously in the jetted TDE J1644+57 [@Zauderer2011].
Optical spectroscopic data
--------------------------
Spectra of AT2019qiz were obtained from the 3.6-m New Technology Telescope (NTT), using EFOSC2 with Grism\#11, through the advanced Public ESO Spectroscopy Survey of Transient Objects (ePESSTO+; @Smartt2015); the LCO 2-m North and South telescopes with FLOYDS; the 2-m Liverpool Telescope (LT) with SPRAT [@Piascik2014] in the blue-optimised setting, as part of C-SNAILS [@Nicholl2019csnails]; the 6.5-m MMT telescope with Binospec [@Fabricant2019]; the 6.5-m Magellan Clay telescope with LDSS-3 and the VPH-ALL grism; the 4.2-m William Herschel Telescope with ISIS [@Jorden1990] and the R600 blue/red gratings; and the 8-m ESO Very Large Telescope using X-Shooter [@Vernet2011] in on-slit nodding mode, through our TDE target-of-opportunity program.
Reduction and extraction of these data were performed using instrument-specific pipelines or (in the cases of the LDSS-3 and ISIS data) standard routines in <span style="font-variant:small-caps;">iraf</span>. Reduced LCO and LT data were downloaded from the respective data archives, while we ran the pipelines [@Smartt2015; @Freudling2013] locally for the EFOSC2 and X-Shooter data[^7]. Typical reduction steps are de-biasing, flat-fielding and wavelength-calibration using standard lamps, cosmic-ray removal [@vanDokkum2012], flux calibration using spectra of standard stars obtained with the same instrument setups, and variance-weighted extraction to a one-dimensional spectrum. We also retrieved the reduced classification spectrum obtained by @Siebert2019 using the 10-m Keck-I telescope with LRIS [@Oke1995], and made public via the Transient Name Server[^8]. All spectra are corrected for redshift and a foreground extinction of $E(B-V) = 0.0939$ using the dust maps of @Schlafly2011 and the extinction curve from @Cardelli1989. All spectra are plotted in Figure \[fig:spec\]. For host-subtracted spectra (section \[sec:host\] and appendix), we apply these corrections after scaling and subtraction.
Host galaxy properties {#sec:host}
======================
Morphology {#sec:morph}
----------
The host of AT2019qiz is a face-on spiral galaxy. A large-scale bar is visible in the PanSTARRS image (Figure \[fig:ps1\]). @French2020 analysed *Hubble Space Telescope* (*HST*) images of four TDE hosts and identified bars in two. While central bars (on scales $\lesssim 100$pc) can increase the TDE rate by dynamically feeding stars towards the nucleus [@Merritt2004], there is no evidence that large-scale bars increase the TDE rate [@French2020]. Given the proximity of AT2019qiz, this galaxy is an ideal candidate for *HST* or adaptive optics imaging to resolve the structure of the nucleus.
Recent studies have shown that TDE host galaxies typically have a more central concentration of mass than the background galaxy population [@LawSmith2017; @Graur2018]. The most recent compilation [@French2020rev] shows that the Sérsic indices of TDE hosts range from $\approx 1.5-6$, consistent with the background distribution of quiescent galaxies but significantly higher than star-forming galaxies. We measure the Sérsic index for the host of AT2019qiz by fitting the light distribution in the PanSTARRS $r$-band image using <span style="font-variant:small-caps;">galfit</span> [@peng2002]. The residuals are shown in Figure \[fig:galfit\]. We do not fit for the spiral structure. Following @French2020, we investigate the effect of including an additional central point source (using the point-spread function derived from stars in the image as in section \[sec:ground\]). The residuals appear flatter when including the point source (reduced $\chi^2 = 1.36$ with the point source or 1.46 without)[^9]. The best-fit Sérsic index is 5.2 (with the point source) or 6.3 (without). In either case, this is consistent with the upper end of the observed distribution for TDE hosts.
![Left: PanSTARRS $g$-band image of the host galaxy. We fit a Sérsic function for the overall surface brightness profile using <span style="font-variant:small-caps;">galfit</span>, but make no attempt to model the spiral arms. The model also includes a point source for the nearby star to the west (this star is fitted, but not subtracted when producing the residual images), and optionally a central point source for the galactic nucleus. Middle: Subtraction residuals without a central point source. Right: Subtraction residuals when including a central point source. We find visibly smoother residuals in this case.[]{data-label="fig:galfit"}](galfit.pdf){width="\columnwidth"}
Velocity dispersion and black hole mass
---------------------------------------
Following @Wevers2017 [@Wevers2019], we fit the velocity dispersion of stellar absorption lines with the code <span style="font-variant:small-caps;">ppxf</span> [@Cappellari2017] to estimate the mass of the central SMBH. We use a late-time spectrum obtained from X-shooter, resampled to a logarithmic spacing in wavelength and with the continuum removed via polynomial fits. We find a dispersion $\sigma=69.7\pm2.3$kms$^{-1}$.
Using relations between velocity dispersion and black hole mass (the $M_\bullet-\sigma$ relation), we obtain a SMBH mass $\log (M_\bullet/M_\odot)=5.75\pm0.45$ in the calibration of @McConnell2013, or $\log (M_\bullet/M_\odot)=6.52\pm0.34$ in the calibration of @Kormendy2013. The calibration of @Gultekin2009 gives an intermediate value $\log (M_\bullet/M_\odot)=6.18\pm0.44$. The reason for the large spread in these estimates is that these relations were calibrated on samples comprising mostly black holes more massive than $10^7$ M$_\odot$. However, the estimates here are consistent, within the errors, with an independent mass measurement based on the TDE light curve (section \[sec:phot\]).
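For transparency, the spread between these estimates can be reproduced by applying the different $M_\bullet-\sigma$ calibrations to the measured dispersion. The intercepts and slopes below are approximate values taken from the literature, not from this paper; small differences from the quoted masses reflect rounding and the treatment of intrinsic scatter:

```python
import numpy as np

sigma = 69.7   # measured velocity dispersion (km/s)

# log10(M_BH/M_sun) = alpha + beta * log10(sigma / 200 km/s); coefficients are approximate
calibrations = {
    "McConnell & Ma (2013)":  (8.32, 5.64),
    "Kormendy & Ho (2013)":   (8.49, 4.38),
    "Gultekin et al. (2009)": (8.12, 4.24),
}
for name, (alpha, beta) in calibrations.items():
    logM = alpha + beta * np.log10(sigma / 200.0)
    print(f"{name}: log(M_BH/M_sun) = {logM:.2f}")
```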
Host SED model
--------------
Archival photometry of this galaxy is available from the PanSTARRS catalog in the $g,r,i,z,y$ filters, as well as in data releases from the 2 Micron All Sky Survey [2MASS; @Skrutskie2006] in the $J,H,K$ filters and the Wide-field Infrared Survey Explorer [WISE; @Wright2010] in the WISE bands $W1-4$. We retrieved the Kron magnitudes from PanSTARRS, the extended profile-fit magnitudes (‘m\_ext’) from 2MASS, and the magnitudes in a $44''$ circular aperture (chosen to fully capture the galaxy flux) from WISE. We fit the resultant spectral energy distribution (SED) with stellar population synthesis models in <span style="font-variant:small-caps;">Prospector</span> [@Leja2017] to derive key physical parameters of the galaxy. The free parameters in our model are stellar mass, metallicity, the current star-formation rate and the widths of five equal-mass bins for the star-formation history, and three parameters controlling the dust fraction and reprocessing (see @Leja2017 for details). @vanVelzen2020 also used <span style="font-variant:small-caps;">Prospector</span> to model this galaxy (but only the PanSTARRS data); the mass and metallicity we find using the full SED are consistent with their results, within the uncertainties. A difference in our modelling is that we allow for a non-parametric star-formation history to better understand the age of the system.
The best-fitting model is shown compared to the archival photometry in Figure \[fig:host\]. We find a stellar mass $\log (M_*/M_\odot) = 10.26^{+0.12}_{-0.15}$, a sub-solar metallicity $\log {Z/Z_\odot}=-0.84^{+0.28}_{-0.34}$ (but see section \[sec:bpt\]), and a low specific star-formation rate $\log {\rm sSFR}=-11.21^{+0.23}_{-0.55}$ in the last 50Myr, where the reported values and uncertainties are the median and 16th/84th percentiles of the posterior distributions. The model also prefers a modest internal dust extinction, $A_V=0.16\pm0.04$mag. The stellar mass reported by <span style="font-variant:small-caps;">prospector</span> is the integral of the star-formation history, and does not include losses due to stars reaching the ends of their active lives. From our model we measure a ‘living’ mass fraction of $0.58\pm0.02$.
![Archival photometry of the host galaxy, and SED fit using <span style="font-variant:small-caps;">prospector</span>. The best-fitting model is shown, along with the $1\sigma$ dispersion of the model realisations. The inset shows the derived star-formation history, which is approximately flat at $1-2$M$_\odot$yr$^{-1}$ prior to a steep drop in the last $\sim1$Gyr.[]{data-label="fig:host"}](prospector-fit.pdf){width="\columnwidth"}
In the same figure, we plot the median and $1\sigma$ uncertainty on the star-formation history as a function of lookback time. We find a roughly constant star-formation rate of $\approx 2$M$_\odot$yr$^{-1}$, prior to a sharp drop in the last $\approx 1.5$Gyr. A recent decline in star formation is a common feature of TDE host galaxies, as evidenced by the over-representation of quiescent Balmer-strong galaxies (and the subset of post-starburst galaxies) among this population [@Arcavi2014; @French2016; @French2020rev]. Spectroscopy of the host after the TDE has completely faded will be required to confirm whether this galaxy is also a member of this class.
Galaxy emission lines and evidence for an AGN {#sec:bpt}
---------------------------------------------
The metallicity preferred by <span style="font-variant:small-caps;">prospector</span> would be very low for a galaxy of $\gtrsim10^{10}$M$_\odot$, though @vanVelzen2020 find similarly low metallicities for all TDE hosts in their sample, including AT2019qiz, from their SED fits. Spectroscopic line ratios provide a more reliable way to measure metallicity. The TDE spectra clearly show narrow lines from the host galaxy. We measure the fluxes of diagnostic narrow lines using Gaussian fits. Specifically, we measure H$\alpha$, H$\beta$, \[O III\]$\lambda5007$, \[O I\]$\lambda6300$, \[N II\]$\lambda6584$, and \[S II\]$\lambda\lambda6717,6731$. We report the mean of each of these fluxes and the derived line ratios (averaged over the six X-shooter spectra) in Table \[tab:bpt\]. No significant time evolution is seen in the narrow line fluxes. To estimate the metallicity, we use the N2 metallicity scale (\[N II\]$\lambda6584$/H$\alpha$), adopting the calibration from @Pettini2004, to find an oxygen abundance $12+\log({\rm O/H})=8.76\pm0.14$. This corresponds to a metallicity $Z/Z_\odot=1.17$, more in keeping with a typical massive galaxy. However, this \[N II\]$\lambda6584$/H$\alpha$ ratio is outside the range used to calibrate the @Pettini2004 relation, so this metallicity may not be reliable. Applying the calibration of @Marino2013, valid over a wider range, we find a slightly lower metallicity of $12+\log({\rm O/H})=8.63$, consistent with solar metallicity.
Line Flux ($10^{-16}$ergs$^{-1}$cm$^{-2}$)
---------------------------------------- ---------------------------------------
H$\beta$ $7.9 \pm 1.7$
[\[O III\] $\lambda5007$]{} $8.0 \pm 3.1$
[\[O I\] $\lambda6300$]{} $<3.0$
H$\alpha$ $14.2 \pm 3.8$
[\[N II\] $\lambda6584$]{} $7.6 \pm 1.9$
[\[S II\] $\lambda\lambda6717,6731$]{} $6.3 \pm 1.0$
$\log$ ratio
[\[O III\]]{} / H$\beta$ $0.04 \pm 0.25$
[\[N II\]]{} / H$\alpha$ $-0.25 \pm 0.14$
[\[S II\]]{} / H$\alpha$ $-0.32 \pm 0.16$
[\[O I\]]{} / H$\alpha$ $\lesssim -0.7$
: Host emission line fluxes and BPT line ratios [@Baldwin1981], averaged over the X-shooter spectra.[]{data-label="tab:bpt"}
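The strong-line metallicity estimates above follow directly from the tabulated \[N II\]/H$\alpha$ ratio. A short numerical check, using commonly quoted forms of the two N2 calibrations (coefficients approximate, and the solar abundance of 8.69 adopted below is an assumption, not a value from this paper), gives:

```python
N2 = -0.25                         # log([N II] 6584 / H-alpha) from the table above

# N2 strong-line calibrations (approximate published coefficients)
OH_PP04   = 8.90  + 0.57  * N2     # Pettini & Pagel (2004)
OH_Marino = 8.743 + 0.462 * N2     # Marino et al. (2013)

OH_sun = 8.69                      # assumed solar 12 + log(O/H)
print(OH_PP04,   10**(OH_PP04   - OH_sun))   # ~8.76, Z/Z_sun ~ 1.2
print(OH_Marino, 10**(OH_Marino - OH_sun))   # ~8.63, roughly solar
```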
Ratios of these lines are used in the Baldwin-Phillips-Terlevich (BPT) diagram [@Baldwin1981] to probe the ionization mechanism of the gas. The ratios we measure for the host of AT2019qiz (Table \[tab:bpt\]) lie in the region intermediate between the main sequence of star-forming galaxies and galaxies with ionization dominated by an active galactic nucleus (AGN). This could be evidence of a weak AGN, or another source of ionization such as supernova shocks or evolved stars [@Kewley2001]. Several other TDE hosts lie in a similar region of the BPT parameter space [@Wevers2019; @French2020rev], while a number show direct evidence of AGN ionisation [@Prieto2016]. We note the caveat that if the lines in our spectra are excited by AGN activity, the calibrations used to estimate the metallicity may not always be valid.
To test the AGN scenario, we look at the mid-infrared colours. @Stern2012 identify a colour cut $W1-W2>0.8$Vega mag to select AGN from WISE data. For the host of AT2019qiz, we find $W1-W2\approx0$Vega mag. At most a few percent of AGN have such a blue $W1-W2$ colour [@Assef2013]. @Wright2010 employ a two-dimensional cut using the $W1-W2$ and $W2-W3$ colours. The host of AT2019qiz has $W2-W3=1.8$mag, consistent with other spiral galaxies. Thus the emission detected by WISE is dominated by the galaxy, not an AGN.
The ratio of X-ray to \[O III\] luminosity can also be used as an AGN diagnostic. Converting the X-rays to the 2-20keV band using our best-fit power-law, we measure a mean $L_X/L_{\rm [O~III]}=2.4\pm0.2$, which is consistent with a typical AGN [@Heckman2005]. However, the X-ray luminosity is only 0.03% of the Eddington luminosity for a SMBH of $10^6$. Moreover, the temporal variation in the luminosity and hardness of the X-rays during the flare suggests a significant fraction of this emission comes from the TDE itself, rather than an existing AGN. In particular, the softening of the X-rays could indicate that as time increases, more of the emission is coming from the TDE flare, relative to an underlying AGN with a harder spectrum. Taking into account the BPT diagram, WISE colours, X-rays, and the morphology of the nucleus (section \[sec:morph\]), we conclude that there is some support for a weak AGN, but that the galaxy is dominated by stellar light.
Photometric analysis {#sec:phot}
====================
Bolometric light curve
----------------------
We construct the bolometric light curve of AT2019qiz by interpolating our photometry in each band to any epoch with data in the $r$, $c$ or $o$ bands, using <span style="font-variant:small-caps;">superbol</span> [@Nicholl2018]. We then integrate under the spectral energy distribution inferred from the multi-colour data at each epoch, and fit a blackbody function to estimate the temperature, radius, and missing energy outside of the observed wavelength range. A blackbody is an excellent approximation of the UV and optical emission from TDEs [e.g. @vanVelzen2020]. However, we note that the radius is computed under the assumption of spherical symmetry, which may not reflect the potentially complex geometry in TDEs. We include foreground extinction, but do not correct for the uncertain extinction within the host galaxy (formally, this makes our inferred luminosity and temperature curves lower limits). The bolometric light curve, temperature and radius evolution are plotted in Figure \[fig:bol\].
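A minimal illustration of the blackbody-fitting step is given below. This is not the <span style="font-variant:small-caps;">superbol</span> implementation: the effective wavelengths and flux densities are placeholder values for a single hypothetical epoch, generated from the model itself, and serve only to show the functional form being fitted.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B, sigma_sb = 6.626e-27, 2.998e10, 1.381e-16, 5.670e-5   # cgs constants
D = 65.6 * 3.086e24                                              # adopted distance (cm)

def bb_flux(wav_A, T, R):
    """Observed blackbody flux density F_lambda (erg s^-1 cm^-2 A^-1)
    for temperature T (K) and photospheric radius R (cm)."""
    lam = wav_A * 1e-8                                            # wavelength in cm
    B_lam = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T))
    return np.pi * B_lam * (R / D)**2 * 1e-8                      # per-cm -> per-Angstrom

# placeholder epoch: effective wavelengths (A) and host-subtracted flux densities
wav  = np.array([2079., 2255., 2614., 3475., 4359., 5430., 6156.])
flux = bb_flux(wav, 2.0e4, 6.0e14) * (1 + np.random.normal(0, 0.05, wav.size))

(T_bb, R_bb), _ = curve_fit(bb_flux, wav, flux, p0=[1.5e4, 1e15])
L_bb = 4 * np.pi * R_bb**2 * sigma_sb * T_bb**4                   # blackbody luminosity (erg/s)
```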
![Top: Bolometric light curve of AT2019qiz derived from UV and optical photometry. The X-ray light curve is also plotted, which is $\approx10^3$ times fainter than the optical luminosity at peak but slower to fade. Middle: Temperature evolution. Bottom: Evolution of the blackbody radius. The temperature and radius are only shown for epochs covered without extrapolation by data in at least three photometric bands. Vertical lines indicate epochs of transition in the TDE spectrum, when the net blueshift of He II goes to zero and when N III becomes prominent (section \[sec:spec\]). []{data-label="fig:bol"}](bol.pdf){width="\columnwidth"}
![Comparison of the bolometric light curve to other TDEs from the literature [@vanVelzen2019; @Holoien2016b; @Gezari2012; @Chornock2014; @Arcavi2014; @Holoien2014; @Holoien2016; @Blagorodnova2017; @Nicholl2019; @Gomez2020; @Leloudas2019]. []{data-label="fig:bolcomp"}](bol-compare.pdf){width="\columnwidth"}
From the light curve we derive a peak date of MJD $58764\pm1$ (2019-10-08 UT)[^10], a peak luminosity of $L=3.6\times10^{43}$ergs$^{-1}$, and an integrated emitted energy of $E_{\rm rad}=1.0\times10^{50}$erg. Taking the black hole mass derived in section \[sec:host\], the peak luminosity corresponds to $\lesssim0.2 L_{\rm Edd}$, where $L_{\rm Edd}$ is the Eddington luminosity. We also plot the X-ray light curve to highlight the X-ray to optical ratio, which is $10^{-2.8}$ at peak. Since the X-rays appear to rise after the optical emission starts to fade, this ratio increases to $\approx 10^{-2.0}-10^{-1.8}$ between $20-50$ days after bolometric peak, and averages $\approx 10^{-1.4}$ beyond 50 days.
The rise of the bolometric light curve is extremely well constrained by the dense ATLAS and ZTF coverage, and we find that AT2019qiz brightens by a factor $\approx 160$ from the earliest ATLAS detection to the time of peak 33 days later. We plot the bolometric light curve compared to other well-observed TDEs in Figure \[fig:bolcomp\]. The fast rise (and decline), and low peak luminosity place it intermediate between the bulk of the TDE population and the original ‘faint and fast’ TDE, iPTF16fnl [@Blagorodnova2017; @Brown2018; @Onori2019].
![Power-law fits to the rising and declining phases of the light curve. The early data is best approximated by a fit with $L\propto t^{2.8}$, while the declining phase is steeper than the canonical $t^{-5/3}$.[]{data-label="fig:pl"}](bol-powerlaws.pdf){width="\columnwidth"}
The temperature is initially constant at $\approx 20,000$K before the light curve peaks, and then suddenly declines to $\approx 15,000$K over a period of $\sim 20$ days. It stays constant for the remainder of our observations, barring a possible slight dip around day 100 (though at this phase the UV data are noisier due to the large fractional host contribution that has been subtracted). The radial evolution is also interesting, showing a clear rise up to peak luminosity, remaining constant during the cooling phase identified in the temperature curve, and then decreasing smoothly at constant temperature.
We analyse the shape of the bolometric light curve using power-law fits. We fit the rise with a function $L = L_0 ((t-t_0)/\tau)^{\alpha}$ using the <span style="font-variant:small-caps;">curve\_fit</span> function in <span style="font-variant:small-caps;">scipy</span>. The best fit has $t_0=-39.1$ days, rise timescale $\tau=12.5$ days and $\alpha=2.81\pm0.25$. This suggests that the first detection of AT2019qiz (33 days before peak) is within a week after fallback began. We plot this model in Figure \[fig:pl\], along with a fit using the same function but with a fixed $\alpha=2$. This model also gives a reasonable fit to the data (with $t_0=-33$ days; i.e. AT2019qiz was discovered within one day of first fallback). However, it begins to underestimate the data as the light curve approaches peak. @Holoien2019 modelled the rise of the TDE AT2019ahk using a similar approach, also finding a power-law consistent with $\alpha\approx2$.
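The fitting step itself is straightforward; a stripped-down sketch (with the degenerate normalisation $L_0/\tau^\alpha$ folded into a single amplitude, and noiseless synthetic data standing in for the real bolometric points) looks like:

```python
import numpy as np
from scipy.optimize import curve_fit

def rise(t, A, t0, alpha):
    """L(t) = A * (t - t0)^alpha for t > t0 (amplitude absorbs L0 and tau)."""
    dt = np.clip(t - t0, 1e-6, None)   # guard against negative bases during fitting
    return A * dt**alpha

# days relative to peak; synthetic luminosities with the best-fit shape quoted above
t = np.linspace(-33., 0., 15)
L = 1.2e39 * (t + 39.1)**2.81

popt, pcov = curve_fit(rise, t, L, p0=[1e39, -40., 2.5])
print(popt)   # recovers t0 ~ -39 d and alpha ~ 2.8 for this noiseless example
```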
We fit the declining phase with a power-law of the same form. For the decline we find $\alpha=-2.54$, which is steeper than the canonical $L \propto t^{-5/3}$ predicted by simple fallback arguments [@Rees1988]. However this is not unusual among the diverse array of TDEs in the growing observed sample, and more recent theoretical work does not find a universal power-law slope for the mass return rate, nor that the light curve exactly tracks this fallback rate [e.g. @Guillochon2013; @Gafton2019].
TDE model fit
-------------
To derive physical parameters of the disruption, we fit our multiband light curves using the Modular Open Source Fitter for Transients [<span style="font-variant:small-caps;">mosfit</span>; @Guillochon2018] with the TDE model from @Mockler2019. This model assumes a mass fallback rate derived from simulated disruptions of polytropic stars by a SMBH of $10^6$ M$_\odot$ [@Guillochon2014], and uses scaling relations and interpolations for a range of black hole masses, star masses, and impact parameters. The free parameters of the model, as defined by @Mockler2019, are the masses of the black hole, $M_\bullet$, and star, $M_*$; the scaled impact parameter $b$; the efficiency $\epsilon$ of converting accreted mass to energy; the normalisation and power-law index, $R_{\rm ph,0}$ and $l_{\rm ph}$, connecting the radius to the instantaneous luminosity; the viscous delay time $T_\nu$ (the time taken for matter to circularise and/or move through the accretion disk) which acts approximately as a low pass filter on the light curve; the time of first fallback, $t_0$; the extinction, proportional to the hydrogen column density $n_{\rm H}$ in the host galaxy; and a white noise parameter, $\sigma$.
![Fits to the multicolour light curve using the TDE model in <span style="font-variant:small-caps;">mosfit</span> [@Guillochon2018; @Mockler2019].[]{data-label="fig:mosfit"}](mosfit.pdf){width="\columnwidth"}
Parameter Prior Posterior Units
--------------------------- ----------------- ---------------------------- -----------
$ \log{(M_\bullet )}$ $[5, 8]$ $ 6.07 \pm 0.04 $ M$_\odot$
$ M_*$ $[0.01, 100]$ $ 1.01^{+0.03}_{-0.02} $ M$_\odot$
$ b$ $[0, 2]$ $ 0.15^{+0.01}_{-0.02}$
$ \log(\epsilon) $ $[-2.3, -0.4] $ $ -1.95 ^{+0.13}_{-0.09}$
$ \log{(R_{\rm ph,0} )} $ $[-4, 4] $ $ 1.01 \pm 0.05 $
$ l_{\rm ph}$ $[0, 4]$ $ 0.64 \pm 0.03 $
$ \log{(T_v )} $ $[-3, 3] $ $ 0.79\pm 0.07 $ days
$ t_0 $ $[-50, 0]$ $ -1.27 ^{+0.30}_{-0.35}$ days
$ \log{(n_{\rm H,host})}$ $[16, 23]$ $ 17.63 ^{+1.29}_{-1.17}$ cm$^{-2}$
$ \log{\sigma} $ $[-4, 2] $ $ -0.58 \pm 0.02$
: Priors and marginalised posteriors for the <span style="font-variant:small-caps;">mosfit</span> TDE model. Priors are flat within the stated ranges, except for $M_*$, which uses a Kroupa initial mass function. The quoted results are the median of each distribution, and error bars are the 16th and 84th percentiles. These errors are purely statistical; @Mockler2019 provide estimates of the systematic uncertainty.[]{data-label="tab:mosfit"}
The fits are applied using a Markov Chain Monte Carlo (MCMC) method implemented in <span style="font-variant:small-caps;">emcee</span> [@Foreman2013] using the formalism of @Goodman2010. We burn in the chain for 10,000 steps, and then continue to run our simulation until the potential scale reduction factor (PSRF) is $<1.1$, indicating that the fit has converged. We plot 100 realisations of the Markov Chain in the space of our light curve data in Figure \[fig:mosfit\]. The model provides a good fit to the optical bands, but struggles slightly to resolve the sharp peak present in the UV bands. The fit suggests that first fallback occurred within a few days of the earliest datapoint.
From this fit we derive the posterior probability distributions of the parameters, listed in Table \[tab:mosfit\], with two-dimensional posteriors plotted in the appendix. The inferred date of first fallback is MJD $58729\pm1.5$, i.e. 35 days before peak, consistent with the simpler power-law models. The physical parameters point to the disruption of a roughly solar-mass main-sequence star by a black hole of mass $10^{6.1}$ M$_\odot$. This is consistent with the middle of the SMBH mass range estimated from spectroscopy and the $M_\bullet-\sigma$ relation. In this case, the peak luminosity corresponds to $0.15 L_{\rm Edd}$ (the Eddington luminosity). This is at the low end of—but fully consistent with—the distribution of Eddington ratios measured for a sample of TDEs with well-constrained SMBH masses [@Wevers2019].
The scaled impact parameter, $b=0.15\pm0.02$, corresponds to a physical impact parameter $\beta \equiv R_t / R_p = 0.78\pm0.02$, where $R_t$ is the tidal radius and $R_p$ the orbital pericentre. For the inferred SMBH mass, $R_t = 20 R_S$, where $R_S$ is the Schwarzschild radius. Using the remnant mass versus $\beta$ curve from @Ryu2020 for a 1 M$_\odot$ star, up to $\sim 50\%$ of the star could have survived this encounter. Interestingly, @Ryu2020 predict a mass fallback rate proportional to $t^{-8/3}$ in this case (which they call a ‘severe partial disruption’), which is remarkably close to our best-fit power-law decline, $t^{-2.54}$.
Spectroscopic analysis {#sec:spec}
======================
The early spectra are dominated by a steep blue continuum indicative of the high photospheric temperature (15,000-20,000K), superposed with broad emission bumps. As the spectra evolve and the continuum fades, the emission lines become more sharply peaked, while the host contribution becomes more prominent. In all the analysis that follows, we first subtract the host galaxy light using the model SED from <span style="font-variant:small-caps;">prospector</span> (section \[sec:host\]). The full set of host-subtracted spectra, along with further details of the subtraction process, are shown in the appendix. In Figure \[fig:select\], we plot a subset of high signal-to-noise ratio, host-subtracted spectra spanning the evolution from before peak to more than 100 days after.
Line identification
-------------------
To focus on the line evolution, we subtract the continuum using a 6th-order polynomial, with sigma-clipping to reject the line-dominated regions during the fit. The host- and continuum-subtracted spectra obtained with X-shooter are shown in Figure \[fig:id\] (only this subset is shown for clarity of presentation). We identify and label the strong emission lines from both the TDE and the host galaxy. <span style="font-variant:small-caps;">prospector</span> allows the user to turn nebular emission lines on and off; for the bulk of our analysis we use the predictions from <span style="font-variant:small-caps;">prospector</span> to subtract nebular lines, but in Figure \[fig:id\] we leave the nebular emission in our data for completeness. Balmer emission lines are at all times visible, with both a broad TDE component and a narrow host component. The other strong host lines are those used for the BPT analysis in section \[sec:host\].
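A minimal sketch of this kind of sigma-clipped polynomial continuum fit (illustrative only, not the exact code used here) could look like:

```python
import numpy as np

def continuum(wav, flux, order=6, nsigma=3.0, iters=5):
    """Iterative sigma-clipped polynomial fit: strong emission lines are rejected
    as outliers so that the polynomial traces the smooth underlying continuum."""
    x = (wav - wav.mean()) / wav.std()       # rescale wavelengths for numerical stability
    keep = np.ones(flux.size, dtype=bool)
    for _ in range(iters):
        coeffs = np.polyfit(x[keep], flux[keep], order)
        resid  = flux - np.polyval(coeffs, x)
        keep   = np.abs(resid) < nsigma * np.std(resid[keep])
    return np.polyval(coeffs, x)

# line-only spectrum (host already subtracted): flux_lines = flux - continuum(wav, flux)
```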
As well as hydrogen, we also identify broad emission lines of He II $\lambda4686$, the Bowen fluorescence lines of N III $\lambda4100$ and $\lambda4640$, and likely O III $\lambda3670$ [@Bowen1935; @Blagorodnova2018; @Leloudas2019], and possible weak emission of He I $\lambda5876$. The combination of hydrogen lines with the He II/Bowen blend at around 4600Å is common in TDE spectra, and qualifies AT2019qiz as a TDE-Bowen in the recent classification scheme proposed by @vanVelzen2020, or an N-rich TDE in the terminology of @Leloudas2019.
![Top: Selected spectra from X-shooter, EFOSC2 and Binospec after subtraction of the host galaxy model (following the method outlined in the appendix). We have applied the subtraction procedure to all spectra, but here show only this subset (spanning the full range of observed phases) for clarity. Bottom: Comparison of AT2019qiz to X-shooter spectra of iPTF16fnl [from @Onori2019], the only TDE with a faster light curve evolution than AT2019qiz. The continuum has been removed using polynomial fits. The spectra shortly after maximum light are quite similar for these two events, though the Balmer lines are much weaker at late times in iPTF16fnl.[]{data-label="fig:select"}](select-spec-sub.pdf "fig:"){width="\columnwidth"} ![Top: Selected spectra from X-shooter, EFOSC2 and Binospec after subtraction of the host galaxy model (following the method outlined in the appendix). We have applied the subtraction procedure to all spectra, but here show only this subset (spanning the full range of observed phases) for clarity. Bottom: Comparison of AT2019qiz to X-shooter spectra of iPTF16fnl [from @Onori2019], the only TDE with a faster light curve evolution than AT2019qiz. The continuum has been removed using polynomial fits. The spectra shortly after maximum light are quite similar for these two events, though the Balmer lines are much weaker at late times in iPTF16fnl.[]{data-label="fig:select"}](fnl-compare.pdf "fig:"){width="\columnwidth"}
![image](line-id.pdf){width="\textwidth"}
![image](vel-Ha-fit.pdf){width="5.8cm"} ![image](vel-Ha-roth.pdf){width="5.8cm"} ![image](vel-Ha-t.pdf){width="5.8cm"}
In our light curve analysis, we found that AT2019qiz appeared fainter and faster than nearly any other TDE except for iPTF16fnl [@Blagorodnova2017; @Brown2018; @Onori2019]. We plot a spectroscopic comparison in the lower panel of Figure \[fig:select\]. The spectrum of AT2019qiz at about a month after maximum shows several similarities to iPTF16fnl at a slightly earlier phase of 12 days, particularly in the blue wing of H$\alpha$, though AT2019qiz exhibits a broader red wing. The ratio of H$\alpha$ to He II is also quite consistent between these two events, modulo the slower evolution in AT2019qiz. At later times, the Balmer lines become weaker in iPTF16fnl, though H$\alpha$ narrows and becomes more symmetric, as we see in AT2019qiz. In fact, the ratios and velocity profiles of these lines evolve substantially with time, as we saw in Figure \[fig:id\]. We will now investigate this in detail in the following sections.
![image](vel-HeII-fit.pdf){width="\columnwidth"} ![image](L-HeII-t.pdf){width="\columnwidth"}
The H$\alpha$ profile {#sec:ha}
---------------------
We look first at the H$\alpha$ line. This region of the spectrum is plotted in velocity coordinates in Figure \[fig:gauss\]. We have subtracted the continuum locally using a linear fit to line-free regions on either side of the line (6170-6270Å, 7100-7150Å). The H$\alpha$ profile is initially asymmetric and shallow, with a blueshifted peak and a broad red shoulder. The red side may include some contribution from He I $\lambda6678$, but this is likely minor as we see only very weak He I $\lambda5876$. We initially fit the profile as the sum of two Gaussians whose normalisations and velocity widths vary independently, with one centroid fixed at zero velocity and the other free to vary. The three velocities (two widths and one offset) are plotted in Figure \[fig:gauss\].
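As an illustration of this kind of two-component fit (synthetic placeholder data only; the widths and normalisations below are not measurements from this paper), a minimal sketch is:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, a1, fwhm1, a2, v2, fwhm2):
    """Narrow component fixed at zero velocity plus a broad component whose
    centroid v2 is free to vary (all velocities in km/s)."""
    s1 = fwhm1 / 2.3548            # FWHM -> Gaussian sigma
    s2 = fwhm2 / 2.3548
    return (a1 * np.exp(-0.5 * (v / s1)**2) +
            a2 * np.exp(-0.5 * ((v - v2) / s2)**2))

# v: velocity grid around H-alpha; f: continuum-subtracted flux (synthetic data)
v = np.linspace(-2e4, 2e4, 400)
f = two_gauss(v, 1.0, 2500., 0.4, 4000., 15000.) + np.random.normal(0, 0.02, v.size)
popt, pcov = curve_fit(two_gauss, v, f, p0=[1., 3000., 0.5, 3000., 12000.])
```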
We find that the zero-velocity component is at all times narrower than the offset component, and over time decreases in width as the line becomes sharply peaked. The asymptotic velocity full-width at half-maximum (FWHM) is $\approx2000$kms$^{-1}$. The broader component is always redshifted, though this shift decreases as the red shoulder becomes less prominent. This component maintains a width of $\sim 15,000$kms$^{-1}$, though the scatter in measuring it is quite large at later times when the shoulder is less prominent. We confirm that this feature is not a blend with He I, as the velocity offset does not match the wavelength of that line (and varies over time). If the broadening of the redshifted component is due to rotation, the implied radius of the emitting material is $\approx 200 R_S \approx 10 R_p$. This is consistent with the size of TDE accretion disks in the simulations of @Bonnerot2020; however, the profile does not show an obviously disk-like shape [@Short2020].
Alternatively, this redshifted component could correspond to emission from the receding part of an outflow, and a Gaussian or double-Gaussian profile may be an oversimplification. @Roth2018 (hereafter, RK18) calculated line profiles including the effects of electron scattering above a hot photosphere in an outflowing gas. Qualitatively, their models show properties similar to AT2019qiz: a blueshifted peak (seen here only at early times) and a broad red shoulder. A decreasing optical depth or velocity in these models leads to a narrow core. We compare the observed H$\alpha$ profiles in Figure \[fig:gauss\] to models from RK18, but we find that the implied velocity from the data is lower than any of the available models, apart from in the earliest epochs where a model with $v=5000$ km s$^{-1}$ gives an acceptable match. We will return to the early-time line profile in detail in section \[sec:outflow\].
We fit the H$\alpha$ profiles again, this time as a sum of a Gaussian centred at zero velocity (as before) and the $v=5000$ km s$^{-1}$ outflow model from RK18. The zero-velocity component could correspond to emission from pre-existing gas, since the RK18 profile should account for the TDE emission self-consistently. The only free parameter for the outflow component is the normalisation of the RK18 spectrum. This gives a good fit at early ($\lesssim0$ days) and late ($\gtrsim50$ days) times, but gives an inferior fit around 30 days compared to the double-Gaussian model. We note that the published model assumed a photospheric radius of $2.7\times10^{14}$cm, similar to the blackbody radius of AT2019qiz well before and after peak, whereas at peak the radius of AT2019qiz is a factor of $\gtrsim2$ larger, which may explain this discrepancy. As is shown in Figure \[fig:gauss\], the velocity of the Gaussian core, and the ratio of luminosity between the broad component and the zero-velocity component, are comparable between the double-Gaussian and Gaussian+RK18 fits.
In both models, when the narrow core of the line is revealed at late times we measure $v\approx 2000$ km s$^{-1}$. Rather than line emission from the TDE itself, an alternative interpretation of the line profile is a pre-existing broad line region (BLR) illuminated by the TDE (recalling that this galaxy shows evidence for hosting an AGN; section \[sec:host\]). Interpreting the width of the narrow component as a Keplerian velocity would yield an orbital radius $\approx 5\times 10^{15}$cm ($\approx 10^4$ Schwarzschild radii, $R_{\rm S}$). We note that an outflow from the TDE can reach this distance and interact with a BLR within 100 days if the expansion velocity is $\gtrsim 6000$ km s$^{-1}$. At least one previous TDE in a galaxy hosting an AGN has shown evidence of lighting up an existing BLR (PS16dtm; @Blanchard2017), and further candidates have been discovered [@Kankare2017].
The 4650Å He-Bowen blend {#sec:he}
------------------------
Next, we examine the He II region of the spectrum. This is complicated by a blend of not only He II $\lambda4686$ and N III $\lambda4640$, but also H$\beta$ and H$\gamma$. In Figure \[fig:gaussHe\], we set zero velocity at the rest-frame wavelength of He II. The earliest spectra before maximum light show a single broad bump with a peak that is bluewards of both He II and N III. After maximum, the H$\beta$ line becomes much more prominent, with a sharp profile similar to H$\alpha$, while the broad bump fades and by $\approx20$ days after maximum is centred at zero velocity. This indicates that the early emission is dominated by He II rather than N III. However, as the spectra evolve, this line moves back to the blue, and by $\sim70$ days is centred at the rest wavelength of N III. This is where it remains over the rest of our observations.
We quantify this by fitting this region with a sum of four Gaussians (He II, N III, H$\beta$ and H$\gamma$). All profiles are centred at zero velocity. The two Balmer lines are constrained to have the same width, and to make the problem tractable we also impose a further condition that the widths of He II and N III match each other. The fits are overlaid on Figure \[fig:gaussHe\], where we also show the evolution in luminosity of the He II and N III components. The luminosities of the two components are poorly constrained pre-peak, but by $\sim20$ days after peak the He II component is clearly dominant, with almost no contribution from N III. Soon afterwards, the N III luminosity increases while that of He II drops, and by 50 days N III is the dominant component, with no significant He II flux detectable after $\gtrsim 60$ days.
Other TDEs have shown separate resolved components of He II and N III [@Blagorodnova2018; @Leloudas2019]; however, this transition from almost fully He II dominated to fully N III dominated is remarkable, particularly given that the Bowen mechanism is triggered by the recombination of ionised He II. @Leloudas2019 measured the He II/N III ratio for four TDEs with confirmed Bowen features, of which the two events with $>2$ epochs of measurement showed an increasing He II/N III up to at least 50 days, opposite to what we observe in AT2019qiz. One of these events, AT2018dyb, may have shown a turnover after 60 days, but this is based on only one epoch, and ASASSN-14li continued to increase its He II/N III ratio for at least 80 days after peak. Based on two epochs measured for iPTF16axa, it may have shown a decrease in He II/N III as in AT2019qiz.
One other TDE to date has shown a clear increase in N III while He II fades, as we see in AT2019qiz. @Onori2019 analysed a series of X-shooter spectra of iPTF16fnl, and found that the ratio of He II/N III decreased from $>5$ to $<1$ over a period of $\sim 50$ days, similar to the magnitude and timescale of the evolution in AT2019qiz. Interestingly, we note that the measured He II/N III ratios are in tension with the simplest theoretical predictions for *all* TDEs in which Bowen lines have been detected. @Netzer1985 calculated this ratio for a range of physical conditions appropriate for AGN, and found in all cases that $L_{\rm N~III}/L_{\rm He~II}\approx0.4-0.85$. The reversal in this ratio in AT2019qiz, as well as iPTF16fnl [@Onori2019], AT2018dyb [@Leloudas2019], and iPTF15af [@Blagorodnova2018], is a puzzle that requires the application of detailed line transfer calculations.
![Luminosities and ratios of the Balmer emission lines as a function of time, measured using Gaussian fits. Dashed horizontal lines show the mean ratios. These have not been corrected for the (uncertain) extinction in the host galaxy.[]{data-label="fig:ratio"}](ratios.pdf){width="\columnwidth"}
Balmer line ratios
------------------
Using the fits to this part of the spectrum and to H$\alpha$, we calculate the ratios between the Balmer lines. For the H$\alpha$ luminosity we include both components in measuring the total flux (and we confirm that the luminosity derived from the fits matches that obtained by direct integration). For H$\beta$ and H$\gamma$ the fits include only one component, though at early times there may be a weak red shoulder visible in these lines too. The ratios are plotted in Figure \[fig:ratio\]. There is no strong evidence for an evolution in the line ratios with time. We find H$\alpha$/H$\beta\approx4.7$ and H$\gamma/$H$\beta\approx0.7$. These ratios are marginally consistent with photoionisation and recombination [@Osterbrock2006]. This is what one would expect, for example, if the line emission is dominated by an AGN BLR. A moderate internal dust extinction of $E(B-V)\approx 0.5$mag would bring the H$\alpha$/H$\beta$ ratio into agreement with the theoretical Case B value of 2.86, though we note that the light curve analysis supports a lower extinction. The host galaxy modelling also favours a low (spatially-averaged) extinction, as does the rather weak Na I D absorption. The equivalent widths of each line in the doublet (D1$\approx$D2$\approx0.1$ Å) correspond to $E(B-V)=0.02$ in the calibration of @Poznanski2012. However, the extinction could be significantly higher in the nucleus.
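The quoted $E(B-V)$ follows from the standard Balmer-decrement relation; the sketch below reproduces the arithmetic, where the extinction-curve coefficients (a Cardelli-type law with $R_V=3.1$) are our assumption rather than values taken from the paper.

```python
import numpy as np

ratio_obs = 4.7       # measured H-alpha / H-beta
ratio_caseB = 2.86    # intrinsic Case B value

# k(lambda) = A(lambda)/E(B-V); assumed values for a Cardelli et al.
# (1989) curve with R_V = 3.1
k_Halpha, k_Hbeta = 2.53, 3.61

ebv = 2.5 / (k_Hbeta - k_Halpha) * np.log10(ratio_obs / ratio_caseB)
print("implied E(B-V) = %.2f mag" % ebv)   # ~0.5 mag, as quoted in the text
```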
It has recently been pointed out that a number of TDEs show much flatter Balmer decrements (H$\alpha$/H$\beta\approx1$; @Short2020 [@Leloudas2019]), which may indicate collisional excitation of these lines, e.g. in a disk chromosphere [@Short2020]. This ratio can be difficult to measure in TDEs due to line blending, but the excellent data quality available for AT2019qiz confirms a higher Balmer ratio in this event. If we assume a collisional origin for the Balmer lines in AT2019qiz, the implied temperature of the line-forming region is $\lesssim3000$K, following @Short2020. Alternatively, a large Balmer decrement may indicate shock powering, as is seen in many Type IIn supernovae [e.g. @Smith2010].
Finally, we note that the spectra in Figure \[fig:id\] appear to show a positive Balmer jump (i.e. enhanced flux below the Balmer break at 3646Å), though we caution again that this may be due to an unreliable host subtraction at short wavelengths. If the positive jump is real, this could signify an extended atmosphere, where the additional bound-free opacity below the Balmer break means that we observe a $\tau\sim1$ surface at a larger radius, and hence see a larger effective emitting surface.
Outflow signatures in the pre-maximum spectrum {#sec:outflow}
----------------------------------------------
The analysis in sections \[sec:ha\] and \[sec:he\] suggested the early blueshifted line profiles may be indicative of an outflow. We make this more explicit here by comparing the H$\alpha$ and He II/Bowen line profiles in our earliest X-shooter spectrum obtained 9 days before maximum light. Figure \[fig:blueshift\] shows these lines in velocity coordinates. We reiterate that at early times, the 4650Å blend seems to be dominated by He II rather than N III (Figure \[fig:gaussHe\] and section \[sec:he\]), but we plot the profile for both possible cases for completeness.
Both lines exhibit a peak blueshifted from their rest wavelengths (whether the 4650Å feature is He II or N III). Assuming that the He II line is strongest at early times, we see that He II shows a larger blueshift than H$\alpha$. Both lines have a similar broad and smooth red shoulder. We quantify the velocity difference using the electron-scattering outflow models from RK18, previously applied only to H$\alpha$ in section \[sec:ha\]. Line broadening in these models includes both the expansion and thermal broadening. We note the important caveat that these profiles were calculated explicitly only for H$\alpha$, but RK18 suggest that the qualitative results should apply to the other optical lines too. We proceed here under that assumption.
The fit to H$\alpha$ with a 5000 km s$^{-1}$ model (the lowest-velocity published model) is indicative of an outflow but does not fully capture the shape of the red side of the line. Interpolating/extrapolating by eye between the parameters explored by RK18, we suggest that a larger optical depth and a slightly lower velocity would likely produce a closer match to the observed profile. Alternatively, a two-component model as explored in Figure \[fig:gauss\] can produce a satisfactory fit.
The He II line shows even stronger evidence for an outflow. This profile gives an excellent match to a 10,000 km s$^{-1}$ outflow model from RK18 (with other parameters the same as the 5000 km s$^{-1}$ model). If this line profile were instead N III dominated, the inferred velocity would be closer to that of H$\alpha$.
![Velocity profiles of the strongest emission lines in the X-shooter spectrum at 9 days before peak, during the early rise/expansion phase. We show H$\alpha$, and the broad He II/Bowen bump for two different cases: (i) assuming that the line is dominated by He II (pink) or (ii) that it is dominated by N III (orange). Although the latter case gives a profile quite similar to H$\alpha$, the evolution over the next 20-30 days suggests that He II is more likely the dominant component. Overplotted are model line profiles from @Roth2018. Assuming the blend is dominated by He II, an excellent match is found for a 10,000 km s$^{-1}$ outflow. H$\alpha$ comes from slower material with velocity $\lesssim 5000$ km s$^{-1}$. []{data-label="fig:blueshift"}](Ha-HeII.pdf){width="\columnwidth"}
In our photometric analysis, we found that the blackbody radius of AT2019qiz at maximum light was $6.4\times10^{14}$cm, and the rise time was $\approx35$ days (Figure \[fig:bol\]). Taking $v_{\rm ph}=R_{\rm BB}/t$, this indicates a velocity at the photosphere of $\gtrsim 2000$ km s$^{-1}$. This suggests a velocity gradient or homologously expanding outflow, where the line-forming region above has a greater velocity than the continuum photosphere below.
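The photospheric velocity quoted here is simply the blackbody radius divided by the rise time; a short check of the arithmetic (our own, using the values from the light curve analysis):

```python
R_bb = 6.4e14              # blackbody radius at maximum light [cm]
t_rise = 35.0 * 86400.0    # rise time [s]

v_ph = R_bb / t_rise       # mean expansion velocity of the photosphere
print("v_ph = %.0f km/s" % (v_ph / 1e5))   # ~2100 km/s
```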
In this context it is somewhat surprising that He II exhibits a larger velocity than H$\alpha$, as @Roth2016 found that H$\alpha$ should be emitted further out in radial coordinates than He II due to its greater self-absorption optical depth. In this case He II, emitted deeper in the debris, would experience a larger electron scattering optical depth. We speculate that degeneracies between electron scattering optical depth and velocity may be responsible for the apparent contradiction. Alternatively, this may indicate that N III is a better identification for this feature at both early and late times (though not at the phases where we clearly do see He II at zero velocity). This could be the case if the density in the line emitting region is high enough for efficient operation of the Bowen mechanism at early and at late times, but not at intermediate phases $\approx$20-40 days after peak.
![image](spec-compare.pdf){width="10cm"}
Discussion {#sec:dis}
==========
The outflow-dominated early evolution and the onset of accretion
----------------------------------------------------------------
The detection of blueshifted emission in AT2019qiz is not unique, with other events exhibiting either blueshifted He II or asymmetric H$\alpha$ profiles, as shown in a comparison with other TDE spectra in Figure \[fig:speccomp\]. However, due to its proximity allowing detection in the radio, we can unambiguously say that AT2019qiz launched an outflow. Preliminary modelling of the radio light curve (K. D. Alexander et al., in preparation) indicates that the velocity of the radio-emitting material ($v\approx0.03c\approx10,000$ km s$^{-1}$) is consistent with the fastest ejecta we see in the optical spectrum. Thus we can finally confirm the picture that the complex line profiles in TDEs do – at least in some cases – result from outflowing gas (RK18).
The dense time-series spectroscopy of AT2019qiz and very early photometric data allow us to study how the outflow influences the optical rise of the TDE and determine the dominant power sources over the course of the light curve evolution. We now combine our results from the optical, X-rays, and spectroscopy to examine each phase of the transient.
### Pre-maximum light
The earliest detections of AT2019qiz are characterised by a power-law-like rise to maximum light over a period of $\approx 35$ days. During this time, the blackbody radius increases at a roughly constant velocity of $\gtrsim 2000$ km s$^{-1}$, with no significant temperature evolution (Figure \[fig:bol\]). Blueshifted emission lines, with $v\sim 5000-10,000$ km s$^{-1}$ (Figure \[fig:blueshift\]), and a radio detection confirm the rapid expansion.
Generally, such an outflow would cool, but in a TDE we may have a complex trade-off between cooling through adiabatic expansion and heating through continuous energy injection. A quasi-spherical expansion or wind at a perfectly constant velocity and temperature would lead to a luminosity evolution $L\propto t^2$, which is consistent with the data; however, we find that a steeper rise of $L\propto t^{2.8}$ is preferred at a level of $3.2\sigma$ (Figure \[fig:pl\]). A steeper rise could indicate acceleration due to energy injection, requiring that the expansion is driven by an ongoing process. We also note that the mass return rate at early times may influence the shape of the rising light curve, and recent simulations find $\dot{M}\propto t^2$ during the second periapsis passage (D. Liptai et al., in preparation). Alternatively, the steeper power-law could indicate a non-spherical outflow: for example, the ballistic ejection of bipolar blobs which then undergo additional expansion due to their own internal pressure. This has been used to explain some observations of classical novae [@Shore2018], but to our knowledge this geometry of mass ejection is not predicted by TDE simulations.
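The $L\propto t^2$ benchmark quoted above follows from blackbody emission by a photosphere expanding at a constant velocity and temperature (a standard argument, spelled out here for completeness): $$L = 4\pi R_{\rm BB}^2\sigma T^4 \propto (vt)^2\,T^4 \propto t^2 \qquad (v,\,T\;{\rm constant}),$$ so an index steeper than 2 requires that $v$ and/or $T$ increase during the rise.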
Several models predict a quasi-spherical outflow in TDEs, with $v\sim 10,000$ km s$^{-1}$, consistent with the observed velocity. These fall into two broad classes (much like models for the TDE luminosity), which have been revealed through detailed TDE simulations. Collisions between debris streams can launch material on unbound trajectories [@Jiang2016], or drive an outflow via shocks [@Lu2020]. Alternatively, radiatively-inefficient accretion can lead to most of the energy from the fallback going into mechanical outflows. @Metzger2016 described this process analytically, and recent work by D. Liptai (private communication) has demonstrated that this does occur in first-principles numerical simulations.
Our early observations of AT2019qiz at all wavelengths help to break the degeneracy in these models. The detection of X-rays during this early phase indicates that accretion began promptly: stream collisions are not predicted to be X-ray sources, whereas X-ray observations of TDEs to date are generally consistent with disk models [@Jonker2020; @Mummery2020; @Wevers2020]. The X-rays are not likely to arise from the outflow, as the luminosity is orders of magnitude greater than that predicted by modelling the radio SED (K. D. Alexander et al., in preparation). Moreover, we see indications of Bowen fluorescence (N III and likely O III) in the early spectra, and these require a far-UV source to photoionise He II (the Bowen lines are pumped by He II Ly$\alpha$; @Bowen1935). @Leloudas2019 also interpreted Bowen lines in several TDEs as a signature of obscured accretion.
We also note the UV excess in the light curve at peak, relative to the TDE model that best fits the optical bands (Figure \[fig:mosfit\]). The UV excess could be explained as a consequence of multiple energy sources contributing to the light curve, i.e. accretion and collisions. In other TDEs, a UV excess has been seen at late times, even in the form of a second maximum in the light curve [@Leloudas2016; @Wevers2019b], with the first peak interpreted as stream collisions and the second as the formation of a disk. If accretion begins sufficiently early, we may see both processes at once, giving a light curve at peak that cannot be easily fit with a one-component model.
### The constant-radius phase
At the point where its bolometric light curve peaks, AT2019qiz transitions to a rather different behaviour: rapid cooling at constant radius (Figure \[fig:bol\]). In the context of an outflow model, this may be a consequence of the optical depth in the ejecta. @Metzger2016 argue that the wind is initially optically thick, and that photons are advected outwards to a characteristic trapping radius where they can escape. This advection is responsible for downgrading the X-rays released by accretion to UV/optical photons.
Using our inferred SMBH mass and impact parameter in equation 24 from @Metzger2016, we estimate a trapping radius for AT2019qiz of $\sim 2\times10^{14}$cm, within a factor of a few of our measured photospheric radius at maximum light. Therefore, we suggest that the photosphere follows the debris outwards, unable to cool efficiently until the expansion reaches this trapping radius, at which point photons can free-stream from the outer ejecta and the effective photosphere stays frozen, despite continued expansion of the outflow.
The matter at the photosphere is now able to cool. From photon diffusion [@Metzger2016], this occurs on a timescale $$t_{\rm cool} \sim \tau_{\rm es} R_{\rm ej}/c \approx 3 \kappa_{\rm es} M_{\rm ej} / (4 \pi R c),$$ which is $\sim10$ days if we assume $R\sim R_{\rm ph}=6\times10^{14}$cm, with an ejected mass $M_{\rm ej}\sim 0.1\,M_\odot$ above the photosphere and an electron-scattering opacity $\kappa_{\rm es}=0.34$cm$^2$g$^{-1}$ for hydrogen-rich matter. This is reasonably well matched to the duration of this phase of the evolution.
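Plugging in the quoted numbers (a sketch of the arithmetic only):

```python
import numpy as np

kappa_es = 0.34            # electron-scattering opacity [cm^2/g]
M_ej = 0.1 * 1.989e33      # mass above the photosphere [g]
R = 6.0e14                 # photospheric radius [cm]
c = 2.998e10               # speed of light [cm/s]

t_cool = 3.0 * kappa_es * M_ej / (4.0 * np.pi * R * c)
print("t_cool = %.0f days" % (t_cool / 86400.0))   # ~10 days
```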
During this short-lived phase, the line profiles change dramatically: by around 20 days after maximum light, the He II profile is roughly symmetrical and shows no evidence of a net velocity offset. We interpret this as supporting evidence for a model where the expansion and eventual cooling of the outflow lead to a reduction in the optical depth, and hence the disappearance of the broadened, blueshifted profiles visible during the rising phase.
### The constant-temperature phase
At the point when the blueshifts vanish from the line profiles, the behaviour of the photosphere changes again: the cooling stops, with the temperature settling at $\approx 15,000$K, while the radius begins a smooth decline. This is more akin to the typical behaviour seen in other optical TDEs, which generally show constant temperatures and decreasing blackbody radii.
At the same time, we observe a rise in the X-ray light curve (Figure \[fig:hr\]), indicating that as the outflow component fades, at this phase we are seeing more directly the contribution of accretion. The X-ray/optical ratio increases, but remains low ($\lesssim10^{-2}$; Figure \[fig:bol\]), meaning that most of the accretion power is still reprocessed by the optically thick inner part of the outflow acting as the Eddington envelope [@Loeb1997]. The spectral evolution confirms this, as the presence of strong Bowen lines requires the absorption of significant far-UV/X-ray flux by the ejecta.
The X-ray and optical luminosities both decline from here, consistent with a decreasing accretion rate, as also inferred from the <span style="font-variant:small-caps;">mosfit</span> model fit. As the accretion power decreases, so too will the energy injected into the mechanical outflow, leading to a decreasing velocity and therefore possibly a higher density (for the same degree of mass-loading) in the inner regions. This may explain the increasing prominence of N III at this phase, as the Bowen mechanism is more efficient at high density [@Bowen1935; @Netzer1985]. Assuming $\sim 0.1\,M_\odot$ of hydrogen-dominated debris in the reprocessing region and taking the photospheric radius of $\approx3\times10^{14}$cm at this phase, we find an electron density $N_e \approx 10^{12} f_{\rm ion}$cm$^{-3}$, where $f_{\rm ion}$ is the ionisation fraction, which for even modest ionisation is consistent with the density regime, $N_e>10^{10}$cm$^{-3}$, where Bowen lines are expected to be strong [@Netzer1985].
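The density estimate corresponds to spreading the quoted mass uniformly over a sphere of the quoted radius and assuming pure hydrogen; a short check of the numbers (our simplification):

```python
import numpy as np

m_p = 1.673e-24        # proton mass [g]
M = 0.1 * 1.989e33     # mass of hydrogen-dominated debris [g]
R = 3.0e14             # photospheric radius at this phase [cm]

n_H = M / (4.0 / 3.0 * np.pi * R**3 * m_p)    # hydrogen number density
print("N_e ~ %.0e * f_ion cm^-3" % n_H)        # ~1e12 * f_ion
```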
We note however that other TDEs have shown a decreasing N III/He II ratio at late times [@Leloudas2019], which is harder to explain with an increasing density in the reprocessing layer. We posit that this could arise from a viewing angle effect in a geometry similar to that proposed by @Nicholl2019 to explain the late onset of blueshifted He II in AT2017eqx. If a TDE has a quasi-spherical photosphere but a faster outflow along the polar direction, and is viewed off-axis (see their Figure 13), more low-density material (emitting He II but not N III) will become visible along the dominant axis over time.
The corollary to this picture is that a TDE with an increasing N III/He II ratio (like AT2019qiz) must have been viewed more directly ‘face-on’. Such a scenario is supported by the detection of X-rays, which may be visible only for viewing angles close to the pole in the unified model of @Dai2018. In fact, consistency is also found with @Nicholl2019, who suggested that TDEs with outflow signatures in their early spectra should also exhibit X-ray emission, since both features are more likely for a polar viewing angle if the outflow is not spherical.
It is interesting to note that the X-rays in AT2019qiz are among the faintest observed in a TDE to date, and would not have been detectable at the greater distances where most TDEs have been found. Deeper X-ray observations of a larger TDE sample are still needed to test for any correlations between the X-ray properties and those of the optical spectral lines. The increase in the X-ray to optical ratio has been seen in other TDEs [@Gezari2017; @Wevers2019b; @Jonker2020], and may be a common feature even for TDEs with a low X-ray luminosity at peak.
### Overall energetics
One persistent question pertaining to TDEs is the so-called ‘missing energy’ problem [@Lu2018]. This states that the observed radiated energy, totalling $\lesssim 10^{51}$erg for most TDEs, is orders of magnitude below the available energy from the accretion of any significant fraction of a solar mass onto the SMBH. Examining our <span style="font-variant:small-caps;">mosfit</span> parameters for AT2019qiz, we have a bound mass of $\approx 0.5\,M_\odot$, and a radiative efficiency of $\approx 1\%$ [typical of observed TDEs; @Mockler2019]. This gives a total available energy of $\approx 10^{52}$erg, whereas direct integration of the bolometric light curve yields only $10^{50}$erg.
In comparison, a homologously expanding outflow of similar mass with a scale velocity $\sim 10,000$ km s$^{-1}$ carries a total kinetic energy of $3\times10^{50}$erg. This indicates that the fraction of energy going into the mechanical outflow is greater than that released as radiation. However, this is not enough to account for the apparently missing energy. Particularly at early times, accretion can be very inefficient (at producing radiation or driving outflows), and if the infall is spherically symmetric much of the energy can be simply advected across the event horizon.
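The two energy scales compared here can be reproduced as follows (a sketch under the stated assumptions; the factor of 3/10 applies to a uniform-density homologous sphere whose maximum velocity equals the scale velocity, which is our simplification):

```python
M_sun = 1.989e33       # g
c = 2.998e10           # cm/s

# Accretion budget: ~0.5 M_sun of bound debris at ~1 per cent efficiency
E_avail = 0.01 * 0.5 * M_sun * c**2
print("available accretion energy ~ %.0e erg" % E_avail)   # ~1e52 erg

# Kinetic energy of a homologous outflow of comparable mass with a
# 10,000 km/s scale velocity
E_kin = 0.3 * 0.5 * M_sun * (1.0e9)**2
print("outflow kinetic energy    ~ %.0e erg" % E_kin)       # ~3e50 erg
```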
@Lu2018 suggest that most of the energy is radiated promptly in the far-UV. This seems unlikely for AT2019qiz, as the total energy in soft X-rays over the time of our observations is only $\sim 1\%$ of the near-UV/optical total, and neither the modest blackbody temperature ($\sim 15,000-20,000$K) nor the shallow power-law in the X-rays (Figure \[fig:xspec\]) point towards a large far-UV excess. Still, this scenario can be tested by future searches for a mid-infrared echo on timescales of $\sim$ years [@Lu2018; @JiangN2016].
Alternatively, a large fraction of the energy can be released in the UV and X-rays not promptly, but rather over $\sim$ decade-long timescales via ongoing accretion in a stable viscous disk. Indications of such behaviour have been observed for a number of nearby optical TDEs via late-time flattening in the UV light curves [@vanVelzen2018] and X-ray observations [@Jonker2020]. At our estimated accretion efficiency, AT2019qiz could radiate the rest of its expected total energy ($\sim10^{52}$erg) by accreting a few percent of a solar mass per year. Recently, @Wen2020 modelled the X-ray spectra of TDEs and found that a combination of a ‘slim disk’ with a very slowly declining accretion rate, and the energy directly lost into the black hole without radiating, appeared to solve the missing energy problem for the systems they studied. The proximity of AT2019qiz makes it an ideal source to test these scenarios with continued monitoring.
The nature of faint and fast TDEs
---------------------------------
Based on the light curve comparisons in section \[sec:phot\], AT2019qiz appears to be the second faintest and fastest among known TDEs. To better quantify this statement, we examine the comprehensive TDE sample from @vanVelzen2020, and define a ‘faint and fast’ TDE as one with a peak blackbody luminosity $\log (L/{\rm erg}\,{\rm s}^{-1})<43.5$ and exponential rise time $t_{\rm r}<15$d. Four events meet these criteria: AT2019qiz, iPTF16fnl, PTF09axc, and AT2019eve. However, @vanVelzen2020 also find that there is no correlation between the rise and decline timescales of TDEs (unlike SNe; @Nicholl2015). AT2019eve is one of the slowest-fading TDEs, and so appears to be qualitatively different from AT2019qiz and iPTF16fnl, which evolve rapidly in both their rise and decline phases. The remaining event, PTF09axc, has little data available after peak to measure the decline timescale, but appears to be more symmetrical than AT2019eve.
Of all optical TDEs with host galaxy mass measurements, iPTF16fnl and AT2019eve are among only four with host masses $\log(M_*/M_\odot)<9.5$ [@vanVelzen2020], the others being AT2017eqx and AT2018lna, which have more typical light curves [@Nicholl2019; @vanVelzen2020]. This suggests that the fastest events may be linked to low-mass SMBHs, as proposed by @Blagorodnova2017. However, the case of AT2019qiz demonstrates that similar events can occur in more massive galaxies, and the SMBH mass we measure, $\gtrsim 10^6\,M_\odot$, is typical of known TDEs [@Wevers2019; @Mockler2019].
We also found that AT2019qiz is similar to iPTF16fnl in its spectroscopic properties at early times (though the strong narrow component in the Balmer lines at later times, possibly associated with an existing BLR, was not observed in iPTF16fnl; @Blagorodnova2017 [@Onori2019]). Both are classified as TDE-Bowen by @vanVelzen2020, and have characteristically small blackbody radii at maximum light. However, the rise and decay timescales for the TDE-Bowen class as a whole span a similar range to the TDE-H class, so a small radius alone is not sufficient to lead to a fast light curve. Moreover, AT2019eve and PTF09axc are both of type TDE-H.
As the early phase when AT2019qiz and iPTF16fnl are most similar is also the time when the dynamics of the outflow control the light curve evolution of AT2019qiz, it is possible that the properties of the outflow are most important for causing fast TDEs. In this case, such events may indeed be more common at lower SMBH mass, as the escape velocity from the tidal radius scales as $v_{\rm esc}\propto M_\bullet^{1/3}$, meaning an outflow can escape more easily at low $M_\bullet$; however, such events are not limited to low-mass black holes, since other factors, such as the impact parameter, can also have an effect. Low SMBH mass also lends itself to faint TDEs, if the population have similar Eddington ratios, such that many of these fast TDEs should also be faint. AT2019qiz is only somewhat underluminous, consistent with its more typical SMBH mass.
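The $M_\bullet^{1/3}$ scaling invoked here follows from evaluating the escape velocity at the tidal radius $R_{\rm T}\simeq R_*(M_\bullet/M_*)^{1/3}$ (a standard estimate, included for clarity): $$v_{\rm esc}=\left(\frac{2GM_\bullet}{R_{\rm T}}\right)^{1/2}\simeq\left(\frac{2GM_*}{R_*}\right)^{1/2}\left(\frac{M_\bullet}{M_*}\right)^{1/3}\propto M_\bullet^{1/3}$$ for fixed stellar mass and radius.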
While iPTF16fnl does show early-time line profiles consistent with an outflow (blueshifted H$\alpha$), it was not detected in the radio down to deep limits, despite being at a comparable distance to AT2019qiz. Although analysis of the radio data for AT2019qiz is deferred to a forthcoming work, we suggest that a pre-existing AGN in this galaxy may lead to a higher ambient density around the SMBH, producing a more luminous radio light curve as the TDE outflow expands into this medium. This is consistent with the overrepresentation of pre-existing AGNs within the sample of radio-detected TDEs [@Alexander2020]. Note that the outflows discussed here are not relativistic jets, which could be detectable at any plausible nuclear density [@Generozov2017].
Conclusions {#sec:conc}
===========
We have presented extensive optical, UV and X-ray observations of AT2019qiz, which is the closest optical TDE to date at only 65.6Mpc, and a comprehensive analysis of its photometric and spectroscopic evolution. We summarise our main findings here.
- AT2019qiz occurred in a galaxy likely hosting a weak AGN, as indicated by BPT line ratios and a nuclear point source. The galaxy has a high central concentration, as seen in other TDE hosts, and a strong quenching of the star formation within the last $\sim1$Gyr.
- The SMBH mass, measured using the $M_\bullet-\sigma$ relation, is $10^{5.75}-10^{6.52}\,M_\odot$, depending on the calibration used. The mass inferred from light curve modelling, $10^{6.1}\,M_\odot$, is consistent with the middle of this range.
- The bolometric light curve shows a rise in luminosity $L\propto t^{2.8\pm0.25}$, marginally consistent with expansion at constant temperature and velocity $\approx 2000$ km s$^{-1}$, but possibly steeper, which would indicate acceleration or heating of the outflow during the rise.
- The decay is steeper than the canonical $L\propto t^{-5/3}$ but close to the $t^{-8/3}$ predicted for a partial disruption [@Ryu2020]. The best fit light curve model with <span style="font-variant:small-caps;">mosfit</span> also suggests a partial disruption, of a $\approx 1\,M_\odot$ star, with about 50% of the star stripped during the encounter.
- The peak luminosity, $L=3.6\times10^{43}$ erg s$^{-1}$, and integrated emission, $E=1.0\times10^{50}$erg, are both among the lowest measured for a TDE to date.
- The early spectra show broad emission lines of H and most likely He II, with blueshifted peaks and asymmetric red wings, consistent with electron scattering in an expanding medium with $v\approx 3000-10,000$ km s$^{-1}$.
- Around maximum light, the temperature suddenly drops while the radius stays constant. This can be explained as trapped photons advected by the outflow suddenly escaping when it reaches the radius at which the optical depth is below unity.
- After peak, the lines become much more symmetrical, and He II is gradually replaced by N III, indicating efficient Bowen fluorescence and thus a source of far-UV photons. The late-time H lines show an unusually strong peak which may be from a pre-existing BLR.
- The time-varying X-ray emission (i.e. from the TDE rather than the pre-existing AGN) and Bowen lines indicate that accretion commenced early in this event.
The detection of this event at radio wavelengths, likely enabled by its fortuitous proximity and possibly a high ambient density, confirms a (non-relativistic; K. D. Alexander et al., in preparation) expansion. This removes any ambiguity in interpreting the line profiles of AT2019qiz as arising in an outflow, and confirms the velocities we have inferred from its optical properties. By extrapolation, outflows (even if undetected in the radio in the general case) are likely important in the many other TDEs with similar line profiles to AT2019qiz.
The properties of the outflow in this case (size, density, optical depth) are consistent with the reprocessing layer needed to explain the low X-ray luminosities of most optical TDEs, and in particular to provide the high-density conditions required for Bowen fluorescence in the inner regions. Thus, AT2019qiz confirms the long-standing picture that outflows are responsible for the ‘Eddington envelope’ hypothesised to do this reprocessing [@Loeb1997; @Strubbe2009; @Guillochon2014; @Metzger2016]. The evidence for accretion early in this event, including X-ray detections before maximum light, suggest that the outflow was in this case powered by radiatively inefficient accretion (@Metzger2016, D. Liptai et al., in preparation), rather than stream collisions.
The exquisite data presented here will make AT2019qiz a Rosetta stone for interpreting future TDE observations in the era of large samples expected from ZTF [@vanVelzen2020], the Rubin Observatory [@Lsst2009] and other new and ongoing time-domain surveys.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Miguel Pérez-Torres for helpful discussions. MN is supported by a Royal Astronomical Society Research Fellowship. TW is funded in part by European Research Council grant 320360 and by European Commission grant 730980. PGJ and GC acknowledge support from European Research Council Consolidator Grant 647208. GL and PC are supported by a research grant (19054) from VILLUM FONDEN. MG is supported by the Polish NCN MAESTRO grant 2014/14/A/ST9/00121. NI is partially supported by Polish NCN DAINA grant No. 2017/27/L/ST9/03221. IA is a CIFAR Azrieli Global Scholar in the Gravity and the Extreme Universe Program and acknowledges support from that program, from the Israel Science Foundation (grant numbers 2108/18 and 2752/19), from the United States - Israel Binational Science Foundation (BSF), and from the Israeli Council for Higher Education Alon Fellowship. JB, DH, and CP were supported by NASA grant 80NSSC18K0577. TWC acknowledges the EU Funding under Marie Skłodowska-Curie grant agreement No 842471. LG was funded by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 839090. This work has been partially supported by the Spanish grant PGC2018-095317-B-C21 within the European Funds for Regional Development (FEDER). SGG acknowledges support by FCT under Project CRISP PTDC/FIS-AST-31546 and UIDB/00099/2020. IM is a recipient of the Australian Research Council Future Fellowship FT190100574. TMB was funded by the CONICYT PFCHA / DOCTORADOBECAS CHILE/2017-72180113. KDA acknowledges support provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51403.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. This work is based on data collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, under ESO programmes 1103.D-0328 and 0104.B-0709 and as part of ePESSTO+ (the advanced Public ESO Spectroscopic Survey for Transient Objects Survey), observations from the Las Cumbres Observatory network, the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile, and the MMT Observatory, a joint facility of the University of Arizona and the Smithsonian Institution. *Swift* data were supplied by the UK Swift Science Data Centre at the University of Leicester. The Liverpool Telescope and William Herschel Telescope are operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council.\
Host galaxy subtraction from TDE spectra
========================================
The <span style="font-variant:small-caps;">prospector</span> model spectrum allows us to remove stellar continuum from our spectroscopic data to better study emission from the TDE. Figure \[fig:hostprocess\] illustrates the method. We first construct an $r$-band light curve measured from LCO data (without any image subtraction) in an aperture of radius $3''$, chosen to match the typical aperture size used for extracting the spectrum. This unsubtracted light curve plateaus when the TDE light falls below that of the host. We then scale each spectrum (remembering that it also contains both host and TDE light) to match this light curve; synthetic photometry is calculated on the spectrum using <span style="font-variant:small-caps;">pysynphot</span>.
The model host spectrum from <span style="font-variant:small-caps;">prospector</span> is scaled to $m_r=16.47$mag, measured in a matching $3''$ aperture in the PanSTARRS $r$-band image. At each epoch, this model spectrum is interpolated to the same wavelength grid as the data, convolved with a Gaussian function to match the instrument-specific resolution, and subtracted. We verify that the fraction of flux removed in this way is reasonable by performing synthetic photometry on the subtracted spectrum, and find that it matches the *host-subtracted* photometry to better than 0.5mag at all times. As a final step, we apply a small scaling to the subtracted spectrum to correct these $\lesssim {\rm few} \times 0.1$mag discrepancies with the subtracted light curve.
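The core of the subtraction step can be summarised with the following sketch (a simplified illustration, not the pipeline used here; the function name, its arguments, and the photometric scaling factors, which we assume have already been derived as described above, are our own):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def subtract_host(wave, flux, host_wave, host_flux,
                  spec_res_aa, host_res_aa, scale_spec, scale_host):
    """Scale, resolution-match and subtract the model host spectrum.

    `scale_spec` and `scale_host` are the factors that tie the observed
    spectrum and the host model to the 3-arcsec aperture photometry.
    """
    # Broaden the host model to the instrumental resolution of the data
    dw = np.median(np.diff(host_wave))
    sigma_aa = np.sqrt(max(spec_res_aa**2 - host_res_aa**2, 0.0))
    if sigma_aa > 0:
        host_smooth = gaussian_filter1d(host_flux, sigma_aa / dw)
    else:
        host_smooth = host_flux

    # Interpolate the broadened host model onto the data wavelength grid
    host_on_data = np.interp(wave, host_wave, host_smooth)

    # Subtract the scaled host from the scaled spectrum
    return scale_spec * flux - scale_host * host_on_data
```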
Figure \[fig:hostprocess\] also shows the spectrum of AT2019qiz at 125 days after the light curve peak (i.e. when host light is the dominant component) compared to the host model, after scaling to the observed magnitudes and convolving to match the resolution, as described above. The middle panel shows the full optical range, demonstrating the good match to the continuum shape, with some TDE emission lines clearly visible above the host level. The host model shows deep Balmer absorption lines (common in TDE host galaxies) while the subtraction residuals (i.e. the TDE-only spectrum, also plotted) exhibit prominent Balmer emission. To verify that these are real features rather than oversubtraction, we show close-ups around several of the main host absorption lines. The model gives an excellent match to the Ca II, Mg I, Na I and G-band absorptions present in the data, giving us confidence that the TDE H emission is real (and cancels out the host absorptions in the bluer Balmer lines). Moreover, the TDE shows a clear H$\alpha$ (and weak H$\beta$) emission line even before subtraction, with a profile similar to the emission lines in the subtracted spectrum. Finally, we find that the emission line fluxes decrease with time, which would not be true in the case of oversubtraction.
Figure \[fig:specsub\] shows all spectra of AT2019qiz as in Figure \[fig:spec\], but in this case after applying the host subtraction process.
![image](synthmags.pdf){width="5.8cm"} ![image](host-sub-process.pdf){width="5.8cm"} ![image](host-sub-process-zoom.pdf){width="5.8cm"}
![image](all-spec-sub.pdf){width="\textwidth"}
Mosfit Posteriors
=================
The priors and marginalised posteriors of our <span style="font-variant:small-caps;">mosfit</span> TDE model are listed in Table \[tab:mosfit\]. In Figure \[fig:post\], we plot the full two-dimensional posteriors, which show some degeneracies between $\epsilon$ and $\beta$, and $l_{\rm ph}$ and $R_{\rm ph,0}$.
![image](mosfit-posteriors.pdf){width="15cm"}
[^1]: Contact e-mail: <mnicholl@star.sr.bham.ac.uk>
[^2]: Einstein Fellow
[^3]: Compared to other nuclear transients such as active galactic nuclei
[^4]: The source could have been triggered 5 days earlier but for missing offset information in the alert packet.
[^5]: <https://lasair.roe.ac.uk>
[^6]: <https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/swift/docs/uvot/uvotcaldb_sss_01.pdf>
[^7]: The atmospheric dispersion corrector on X-Shooter failed on 2019-10-10, so we were unable to reduce the data in the UVB arm for this epoch.
[^8]: <https://wis-tns.weizmann.ac.il>
[^9]: The values of $\chi^2$ are sensitive to the size of the sky-fitting region, but are always lower with the point source. Quoted values are for a box $200\times400$px.
[^10]: The UV peaks slightly earlier (MJD 58764) than the optical (MJD 58766).
---
abstract: 'A model of localized classical electrons coupled to lattice degrees of freedom and, via the Coulomb interaction, to each other, has been studied to gain insight into the charge and orbital ordering observed in lightly doped manganese perovskites. Expressions are obtained for the minimum energy and ionic displacements caused by given hole and electron orbital configurations. The expressions are analyzed for several hole configurations, including that experimentally observed by Yamada [*et al.*]{} in ${\rm La_{7/8}Sr_{1/8}MnO_3}$. We find that, although the preferred charge and orbital ordering depend sensitively on parameters, there are ranges of the parameters in which the experimentally observed hole configuration has the lowest energy. For these parameter values we also find that the energy differences between different hole configurations are on the order of the observed charge ordering transition temperature. The effects of additional strains are also studied. Some results for ${\rm La_{1/2}Ca_{1/2}MnO_3}$ are presented, although our model may not adequately describe this material because the high temperature phase is metallic.'
address: |
Department of Physics and Astronomy, The Johns Hopkins University\
Baltimore, Maryland 21218
author:
- 'K. H. Ahn and A. J. Millis'
title: Interplay of charge and orbital ordering in manganese perovskites
---
Over the last few years much attention has been focused on manganese perovskite-based oxides, most notably the pseudocubic materials $Re_{1-x}Ak_x{\rm MnO_3}$. (Here $Re$ is a rare earth element such as La, and $Ak$ is a divalent alkali metal element such as Ca or Sr.) The initial motivation came from the observation that, for some ranges of $x$ and temperature $T$, the resistance can be reduced by a factor of up to $10^7$ in the presence of a magnetic field.[@ahn] Two other interesting physical phenomena occurring in this class of materials are charge ordering and orbital ordering.[@chen] In this paper, we study the connection between the two.
The important electrons in $Re_{1-x}Ak_x{\rm MnO_3}$ are the Mn $e_g$ electrons; their concentration is $1-x$. For many choices of $Re$, $Ak$, and $x$, especially at commensurate $x$ values, the $e_g$ charge distribution is not uniform and it indeed appears that a fraction $x$ of Mn ions have no $e_g$ electron while $1-x$ have a localized $e_g$ electron. A periodic pattern of filled and empty sites is said to exhibit charge ordering. There are two $e_g$ orbitals per Mn ion. A localized Mn $e_g$ electron will be in one linear combination of these; a periodic pattern of orbital occupancy is said to exhibit orbital ordering. Recently, Murakami [*et al.*]{} [@murakami] observed a charge ordering transition accompanied by simultaneous orbital ordering in ${\rm La_{1/2}Sr_{3/2}MnO_4}$ at 217 K (well above the magnetic phase transition temperature 110 K). This indicates that the interplay of the charge and orbital ordering to minimize the lattice energy could be the origin of the charge ordering. In this paper we present an expression for the coupling between charge and orbital ordering, with different charge ordering patterns favoring different orbital orderings. We also argue that the orbital ordering energy differences determine the observed charge ordering in lightly doped manganites. Localized charges induce local lattice distortions, which must be accommodated into the global crystal structure; the energy cost of this accommodation is different for different charge ordering patterns.
To model the charge and orbital ordering, we assume that the electrons are localized classical objects, so that each Mn site is occupied by zero or one $e_g$ electron, and each $e_g$ electron is in a definite orbital state. This assumption seems reasonable in lightly doped materials such as ${\rm La_{7/8}Sr_{1/8}MnO_3}$, which are strongly insulating at all temperatures,[@yamada] but may not be reasonable for the ${\rm La_{1/2}Ca_{1/2}MnO_3}$ composition,[@chen] where the charge ordered state emerges at a low temperature from a metallic state. We proceed by calculating the energies of different charge ordering patterns, emphasizing the 1/8 doping case. It is practically impossible to consider all possible charge ordering configurations. Therefore, we consider the three configurations shown in Fig. \[fig1\], which are the only ones consistent with the following basic features of the hole lattice implied by the experimental results of Yamada [*et al.*]{}[@yamada]: invariance under translation by two lattice constants in the $x$ or $y$ direction, four in the $z$ direction, and an alternating pattern of occupied and empty planes along the $z$ direction. The configuration in Fig. \[fig1\](b) is the one proposed by Yamada [*et al.*]{}[@yamada] to explain their experimental results for ${\rm La_{7/8}Sr_{1/8}MnO_3}$. For localized electrons there are three energy terms: the coupling to the lattice, which will be discussed at length below, the Coulomb interaction, and the magnetic interaction.
First, we argue that the Coulomb energy cannot explain the observed ordering pattern or transition temperature. We take as reference the state with one $e_g$ electron per Mn and denote by $\delta q_i$ the charge of a hole on a Mn site. From the classical Coulomb energy $$U_{\rm Coulomb}=\frac{1}{2\epsilon_0}
\sum_{i \neq j}
\frac{\delta q_i \delta q_j}{r_{ij}},$$ one finds that the difference in energy between the configurations in Fig. \[fig1\] is $$\Delta{\cal U}_{\rm Coulomb,\: per\: hole} =\frac{1}{2 \epsilon_0}
\sum_{i\neq o}
\frac{\Delta(\delta q_i)}{r_{io}},$$ where $o$ is a site containing a hole and $\Delta(\delta q)$ is the difference in charge between the two configurations. We estimated the above infinite sum by repeated numerical calculations for larger and larger volumes of the unit cells around the origin. We find that Fig. \[fig1\](c) has the lowest energy; 12 meV/$\epsilon_0$ lower than Fig. \[fig1\](b), and 27 meV/$\epsilon_0$ lower than Fig. \[fig1\](a).
To estimate the magnitude of the Coulomb energy differences, we need an estimate for the dielectric constant $\epsilon_0$, which we obtain from the measured reflectivity for ${\rm La_{0.9}Sr_{0.1}MnO_3}$,[@okimoto1] and the Lyddane-Sachs-Teller relation[@aschcroft] $\omega_L^2=\omega_T^2 \epsilon_0 / \epsilon_{\infty}$. At frequencies greater than the greatest phonon frequency the reflectivity is close to 0.1, implying $\epsilon_{\infty} \approx 3.4$; the reflectivity is near unity between $\omega_T=0.020$ eV, and $\omega_L=0.024$ eV, implying $\epsilon_0 \approx 5.0$. Because both ${\rm La_{7/8}Sr_{1/8}MnO_3}$ and ${\rm La_{0.9}Sr_{0.1}MnO_3}$ are insulating and have similar compositions, their static dielectric constants are expected to be similar. Using $\epsilon_0\approx 5.0$, the energy difference between different configurations of holes is only around 2.4 meV, or 30 K per hole, which is small compared to the observed charge ordering temperature of 150 K $-$ 200 K of these materials. The inconsistency with the experimentally observed hole configuration and the smallness of the energy difference scale indicate that the electrostatic energy is not the main origin of charge ordering for this material.
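The numbers quoted here can be reproduced directly; the short check below (our own arithmetic) uses only the values given in the text.

```python
# Static dielectric constant from the Lyddane-Sachs-Teller relation
eps_inf = 3.4
omega_T, omega_L = 0.020, 0.024               # eV
eps_0 = eps_inf * (omega_L / omega_T) ** 2
print("eps_0 ~ %.1f" % eps_0)                 # ~5

# Coulomb energy-difference scale per hole
dE = 12e-3 / 5.0                              # 12 meV / eps_0, in eV
k_B = 8.617e-5                                # Boltzmann constant [eV/K]
print("dE ~ %.1f meV ~ %.0f K" % (dE * 1e3, dE / k_B))   # ~2.4 meV, i.e. roughly 30 K per hole
```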
Even though the magnetic and charge ordering transitions show a correlation in ${\rm La_{7/8}Sr_{1/8}MnO_3}$,[@kawano] we do not think that the magnetic contribution to charge and orbital ordering is as important as the lattice contribution for three reasons. First, in undoped ${\rm LaMnO_3}$, the orbital ordering and the structural phase transition occur at around 800 K and the magnetic ordering at around 140 K,[@wollan; @kanamori] suggesting that the magnetic effects are relatively weak. Second, in ${\rm La_{7/8}Sr_{1/8}MnO_3}$ the Mn spins are ferromagnetically ordered with a moment close to the full Mn moment at temperatures greater than the charge ordering temperature,[@kawano] and ferromagnetic order does not favor one charge configuration over another. Third, although in ${\rm La_{7/8}Sr_{1/8}MnO_3}$ antiferromagnetic order appears at the charge ordering transition, the antiferromagnetic moment is very small (less than 0.1 of the full Mn moment),[@kawano] so the energy associated with this ordering must be much less than the 140 K/site associated with magnetic ordering in ${\rm LaMnO_3}$. Therefore, we think that the canted antiferromagnetism occurring upon charge ordering in ${\rm La_{7/8}Sr_{1/8}MnO_3}$ (Ref. 7) is not the cause but the effect of the charge and orbital ordering. We now turn our attention to the lattice energy.
A classical model for the lattice distortions of the insulating perovskite manganites has been derived in Ref. 10, and shown to be consistent with experimental results on ${\rm LaMnO_3}$. This model is adopted here with an additional term, an energy cost for shear strain. We now briefly outline the model, which is explained in more detail in Ref. 10 and the Appendix. The ionic displacements included are the vector displacement $\vec{\delta}_i$ of the Mn ion on site $i$, and the $\hat{a}$ directional scalar displacement $u_i^a$ ($a=x$, $y$, and $z$) of the O ion which sits between the Mn ion on site $i$ and the Mn ion on site $i+\hat{a}$. For convenience, $\vec{\delta}_i$ and $u_i^a$ are defined to be dimensionless in the following way: the lattice constant of the ideal cubic perovskite is $b$, the Mn ion position in the ideal cubic perovskite is $\vec{R}_i$, the actual Mn ion position is $\vec{R}_i+b\vec{\delta}_i$, and the actual O ion position is $\vec{R}_i+(b/2+bu_i^a)\hat{a}$. The lattice energy is taken to be harmonic and depends only on the nearest neighbor Mn-O distance and the first and second nearest neighbor Mn-Mn distances. The spring constants corresponding to these displacements are $K_1$, $K_2$, and $K_3$ as shown in Fig. \[fig2\]. Because $K_1$ and $K_2$ involve bond stretching, while $K_3$ involves bond bending, $K_1\geq K_2 \gg K_3$ is expected. Thus, $E_{\rm lattice}=E_{\text{Mn-O}}
+E_{\text{Mn-Mn,first}}+E_{\text{Mn-Mn,second}}$, where $$\begin{aligned}
E_{\text{Mn-O}} &=& \frac{1}{2} K_1
\sum_{i,a}(\delta^{a}_{i}-u^{a}_{i})^{2}
+(\delta^{a}_{i} - u^{a}_{i-a})^{2}, \\
E_{\text{Mn-Mn,first}} &=& \frac{1}{2} K_2
\sum_{i,a}(\delta^{a}_{i}-\delta^{a}_{i-a})^{2}, \\
E_{\text{Mn-Mn,second}} &=& \frac{1}{2} K_3
\sum_{
i,(a,b)
}
\left[ \left(\frac{\delta^a_{i+a+b}+\delta^b_{i+a+b}}{\sqrt{2}}\right)-
\left(\frac{\delta^a_{i}+\delta^b_{i}}{\sqrt{2}}\right) \right]^{2}
\nonumber \\
& &+
\left[ \left(\frac{\delta^a_{i+a-b}-\delta^b_{i+a-b}}{\sqrt{2}}\right)-
\left(\frac{\delta^a_{i}-\delta^b_{i}}{\sqrt{2}}\right)\right]^{2}. \end{aligned}$$ In the above equations $a$ denotes $x$, $y$, and $z$, and $(a,b)$ represents $(x,y)$, $(y,z)$, and $(z,x)$. $E_{\text{Mn-Mn,second}}$ was not considered in Ref. 10. The shear modulus produced by this term is important, because without it, a Mn ion on site $i+\hat{x}$ can have an arbitrarily large $y$-directional displacement relative to the Mn ion on site $i$ at no cost in energy. For this reason, the model with $K_3=0$ has singularities, whose proper treatment requires $K_3\neq 0$ in our model. However, we still expect $K_3$ to be much smaller than $K_1$ or $K_2$. Therefore, in order to simplify the calculation, the $K_3/K_1\rightarrow0$ limit has been taken after the expressions for the minimized energy and equilibrium ionic displacements have been obtained.
Second, we consider the electronic degree of freedom. We parameterize the electron density by the variable $h_i$. If an electron is present on site $i$, $h_i=0$; if no electron is present, $h_i=1$. If there is an electron on site $i$, the electron orbital state, which is a linear combination of the two $e_g$ orbitals, is parameterized by an angle $\theta_i$ as follows. $$|\psi_{i}(\theta_i)>=
\cos{\theta_{i}}|d_{3z^{2}-r^{2}}>+\sin{\theta_{i}}|d_{x^{2}-y^{2}}>$$ with $0 \leq \theta_{i} < \pi$. The electron orbital state couples to the distortion of the surrounding oxygen octahedra through the Jahn-Teller distortion. The coupling is given by $$\begin{aligned}
E_{\rm JT} &=& - \lambda \sum_{i} (1-h_{i})[\cos{2\theta_{i}}\{v^{z}_{i}-
\frac{1}{2}(v^{x}_{i}+v^{y}_{i})\}+
\sin{2\theta_{i}}\frac{\sqrt{3}}{2}(v^{x}_{i}-v^{y}_{i})] \nonumber \\
&=& -\lambda\sum_{i,a}(1-h_{i})v^{a}_{i} \cos{2(\theta_{i}+\psi_{a})}, \end{aligned}$$ where $$\begin{aligned}
v^{a}_{i}&=&u^{a}_{i}-u^{a}_{i-a}, \\
\psi_{x}=-\pi/3,\: \psi_{y}&=&\pi/3,\: \psi_{z}=0.\end{aligned}$$ If a hole is present on site $i$, it attracts the surrounding oxygens equally, giving rise to a breathing distortion energy given by $$E_{\rm hole}=\beta\lambda\sum_{i}h_{i}(v^{x}_{i}+v^{y}_{i}+v^{z}_{i}).$$ The parameter $\beta$ represents the strength of the breathing distortion relative to the Jahn-Teller distortion. Finally, following Kanamori,[@kanamori] we include a phenomenological cubic anharmonicity term given by $$E_{\rm anharm} = - A \sum_{i} (1-h_{i}) \cos{6\theta_i}. \label{eq:anharm}$$ The sign has been chosen so that the electron orbital states of $|3x^2-r^2>$, $|3y^2-r^2>$, or $|3z^2-r^2>$, with $\hat{x}$, $\hat{y}$, and $\hat{z}$ pointing toward nearest oxygen ions are favored when $A$ is positive. The total energy, which is the sum of all the above energy terms, is given by $$E_{\rm tot} = E_{\text{Mn-O}}+E_{\text{Mn-Mn,first}}+
E_{\text{Mn-Mn,second}}+E_{\rm JT}+E_{\rm hole}+E_{\rm anharm}.
\label{eq:TOT}$$
We minimized $E_{\rm tot}$ with respect to the $\delta_i^a$ and $u_i^a$ for fixed hole and orbital configurations. These are conveniently expressed in terms of the variables $\delta_{\vec{k}}^a$, $u_{\vec{k}}^a$, $h_{\vec{k}}$, and $c_{\vec{k}}^a$ defined in the following way. $$\begin{aligned}
\delta_{i}^{a} &=& \sum_{\vec{k}}
e^{-i \vec{k} \cdot \vec{R}_{i}}\delta_{\vec{k}}^{a}, \label{eq:ee}\\
u_{i}^{a} &=& \sum_{\vec{k}} e^{-i \vec{k}
\cdot \vec{R}_{i}} u_{\vec{k}}^{a}, \label{eq:uu}\\
h_{i} &=& \sum_{\vec{k}} e^{-i \vec{k} \cdot \vec{R}_{i}}
h_{\vec{k}},
\label{eq:hole} \\
(1-h_{i})\cos{2(\theta_{i}+\psi_{a})} &=& \sum_{\vec{k}} e^{-i \vec{k}
\cdot \vec{R}_{i}} c_{\vec{k}}^{a}. \label{eq:orbital} \end{aligned}$$ The details are shown in the Appendix. The minimized energy per Mn ion may be written as $$\frac{E_{\rm tot}}{N}={\cal E}_{\vec{k}=0}+
\sum_{\vec{k}\neq 0, a}{\cal E}^a_{\vec{k}}+
\frac{E_{\rm anharm}}{N}, \label{eq:tot}$$ where $$\begin{aligned}
{\cal E}^a_{\vec{k}}&=&
\left\{ \begin{array}{ll}
-\frac{\lambda^2}{(K_1+2K_2)K_1} [ K_1+K_2(1-\cos{k_a}) ]
(\beta h_{\vec{k}}-c^{a}_{\vec{k}})(\beta h_{-\vec{k}}
- c^a_{-\vec{k}}), &
\mbox{if $k_a\neq 0$ } \\
0, & \mbox{if $k_a= 0$ }
\end{array}
\right. \label{eq:nonzerok} \\
{\cal E}_{\vec{k}=0}&=&-\frac{\lambda^2}{K_1+2K_2}[3(\beta h_0)^2
+\sum_{a}(c^a_0)^2].
\label{eq:zerok}\end{aligned}$$ The long wave length strain $e^{ab}$, and the $\vec{k}(\neq 0)$ components of the ionic displacements are given as $$\begin{aligned}
e^{ab}&=&-\frac{2\lambda}{K_1+2K_2}(\beta h_0-c_0^a) \delta_{ab}, \\
u^a_{\vec{k}\neq 0}&=&
\left\{ \begin{array}{ll}
-\frac{\lambda [K_1+K_2(1-\cos{k_a})]}{(K_1+2K_2)K_1}
\frac{1-e^{-ik_a}}{1-\cos{k_a}}
(\beta h_{\vec{k}} -c^a_{\vec{k}}), &
\mbox{if $k_a\neq0$ } \\
0, & \mbox{if $k_a=0$ }
\end{array}
\right. \\
\delta^a_{\vec{k}\neq 0}&=&
\left\{ \begin{array}{ll}
-i\frac{\lambda\sin{k_a}}{(K_1+2K_2)(1-\cos{k_a})}
(\beta h_{\vec{k}} -c^a_{\vec{k}}), &
\mbox{if $k_a\neq 0$ } \\
0, & \mbox{if $k_a=0$. }
\end{array}
\right. \label{eq:del}\end{aligned}$$
Because the $h_i$’s and the $(1-h_{i})\cos{2(\theta_{i}+\psi_{a})}$’s are bounded by $\pm 1$, we cannot treat the $h_{\vec{k}}$’s and $c^a_{\vec{k}}$’s as independent variables in minimizing $E_{\rm tot}$. Therefore, we minimize $E_{\rm tot}$ over the orbital variables $\theta_i$ at fixed hole configurations; the ground state is then given by the hole configuration with the lowest energy.
For ${\rm La_{7/8}Sr_{1/8}MnO_3}$, we consider the three hole configurations shown in Fig. \[fig1\], each of which is a Bravais lattice, with a unit cell containing one Mn site with a hole and seven Mn sites without holes. The orbital configuration may be different in different unit cells of the lattice defined by the holes. We consider the case where the orbital configuration is the same in each unit cell. In addition, we also consider all possible two-sublattice symmetry breakings. Therefore, we have seven (no symmetry breaking) or fourteen (two-sublattice symmetry breaking) orbital variables $\theta_i$. $E_{\rm tot}/N$ in Eq. (\[eq:tot\]) for each configuration is expressed in terms of those variables through Eqs. (\[eq:anharm\]), (\[eq:hole\]), (\[eq:orbital\]), (\[eq:nonzerok\]), and (\[eq:zerok\]), and is minimized with respect to the $\theta_i$’s. For this minimization, we use the [*FindMinimum*]{} routine in [*Mathematica*]{} as follows: for each set of parameters and for each configuration, we compare the local minima obtained from 50 $-$ 200 random starting values of the $\theta_i$’s.
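For concreteness, the following is a minimal sketch (in Python, not the authors' [*Mathematica*]{} code) of this energy evaluation and multi-start minimization. It evaluates $E_{\rm tot}/N$ from Eqs. (\[eq:tot\]), (\[eq:nonzerok\]), (\[eq:zerok\]), and (\[eq:anharm\]) for a hole pattern given on a periodic grid of Mn sites, with the orbital pattern repeating with the same period (no sublattice symmetry breaking). The parameter values and the example hole pattern (the simple cubic arrangement of Fig. \[fig1\](a)) follow the text; the grid representation and the choice of optimizer are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize

K1, K2, lam, A, beta = 1.0, 0.5, 0.045, 0.0002, 2.5   # parameters in units of K1
psi = {"x": -np.pi / 3.0, "y": np.pi / 3.0, "z": 0.0}

def energy_per_site(theta, holes):
    """E_tot/N of Eq. (tot) for orbital angles theta and hole occupations holes (1 = hole)."""
    N = holes.size
    ks = np.meshgrid(*[2.0 * np.pi * np.fft.fftfreq(n) for n in holes.shape],
                     indexing="ij")
    kvec = dict(zip("xyz", ks))
    h_k = np.fft.fftn(holes) / N              # convention: h_i = sum_k e^{-i k.R_i} h_k
    # k = 0 (uniform strain) contribution, Eq. (zerok), hole part
    E = -lam**2 / (K1 + 2.0 * K2) * 3.0 * (beta * np.abs(h_k[0, 0, 0]))**2
    for a in "xyz":
        c_a = (1.0 - holes) * np.cos(2.0 * (theta + psi[a]))
        c_k = np.fft.fftn(c_a) / N
        E += -lam**2 / (K1 + 2.0 * K2) * np.abs(c_k[0, 0, 0])**2   # Eq. (zerok), orbital part
        ka = kvec[a]
        nz = np.abs(ka) > 1e-12               # Eq. (nonzerok): E^a_k vanishes when k_a = 0
        w = (K1 + K2 * (1.0 - np.cos(ka[nz]))) / ((K1 + 2.0 * K2) * K1)
        E += -lam**2 * np.sum(w * np.abs(beta * h_k[nz] - c_k[nz])**2)
    # anharmonic term, Eq. (anharm), per site
    return E - A * np.mean((1.0 - holes) * np.cos(6.0 * theta))

def minimum_energy(holes, n_starts=50, seed=0):
    """Multi-start local minimization over the electron-site orbital angles."""
    rng = np.random.default_rng(seed)
    elec = np.flatnonzero(holes.ravel() == 0)

    def objective(x):
        theta = np.zeros(holes.size)
        theta[elec] = x
        return energy_per_site(theta.reshape(holes.shape), holes)

    best = np.inf
    for _ in range(n_starts):
        res = minimize(objective, rng.uniform(0.0, np.pi, elec.size), method="L-BFGS-B")
        best = min(best, res.fun)
    return best

# Example: one hole per 8 Mn sites on a simple cubic hole lattice (cf. Fig. 1(a)).
holes_a = np.zeros((2, 2, 2))
holes_a[0, 0, 0] = 1.0
print(minimum_energy(holes_a))
```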
According to Ref. 10, $\lambda/K_1$ ranges over 0.04 $-$ 0.05, and $K_2/K_1$ is between 0 and 1. $A/K_1$ is around 0.0002, and $K_1\approx 200$ eV.[@millis] Recently, a local breathing distortion of $0.12$ $\AA$ has been directly observed in ${\rm La_{0.75}Ca_{0.25}MnO_3}$.[@billinge] The Jahn-Teller distortion is estimated to be around 0.15 $\AA$ from the Mn-O distances of ${\rm LaMnO_3}$.[@ellemans] This implies that the breathing distortion and the Jahn-Teller distortion in these materials are of similar magnitude, i.e., $\beta=O(1)$. We varied $\beta$ in the range 0 $-$ 10 and $A/K_1$ in the range 0 $-$ 0.00035, with $\lambda/K_1=0.045$, $K_2/K_1=0.5$, and $K_1= 200$ eV. For each set of these parameters, the minimum energy per hole for each fixed hole configuration in Fig. \[fig1\] has been found. By comparing them, we find the most favored hole configuration for each $\beta$ and $A/K_1$, which is shown in Fig. \[fig3\] as a plot in the $\beta$-$A/K_1$ plane.
At large $\beta$ ($\gtrsim 7$), the configuration shown in Fig. \[fig1\](c) is the most favored, and that shown in Fig. \[fig1\](a) is the least favored. This can be related to the fact that, in the $y$-$z$ and $z$-$x$ planes, Fig. \[fig1\](c) has the most even distribution of holes and Fig. \[fig1\](a) the least even distribution. For large $\beta$, the contraction of the oxygen octahedra toward holes is strong, and an uneven distribution of holes generates larger strains and elevates the minimum energies. In particular, a square net of holes squeezes the electron orbital at the center of the square along the direction perpendicular to the square plane. In the cubic hole configuration of Fig. \[fig1\](a), the six squeezed electron orbitals point toward the cube center, putting the electron orbital at the center at high energy, which is consistent with our result that Fig. \[fig1\](a) has far higher minimum energies than Figs. \[fig1\](b) and \[fig1\](c) in the large-$\beta$ limit.
As $\beta$ is decreased into the range of 2 $-$ 5, the favored hole configuration becomes that of Fig. \[fig1\](b), which is the experimentally observed hole configuration. We expect that the difference of the energy per hole between the ground state hole configuration and the next lowest energy hole configuration corresponds approximately to the charge ordering temperature. The calculation results indicate that, when $\beta$ is in the range of 2.0 $-$ 2.5 or around $5.0$ and $A/K_1=0.0002$, the charge ordering temperature is around 100 $-$ 200 K, which is consistent with experimental results. As $\beta$ is decreased further, the most favored hole configuration changes further and the temperature difference scale decreases.
Figure \[fig3\] also shows the tendency that the configuration of Fig. \[fig1\](c) becomes more favored as $A/K_1$ increases. We think this occurs because the anharmonicity energy distorts the oxygen octahedra tetragonally, which can be more easily accommodated by the tetragonal hole configuration of Fig. \[fig1\](c).
In Table \[table1\], we show an example of the orbital states, ionic displacements, and uniform strains corresponding to the minimum-energy configuration of Fig. \[fig1\](b) when $A/K_1=0.0002$, $\lambda/K_1=0.045$, $K_2/K_1=0.5$, and $\beta=2.5$. The $x$, $y$, and $z$ directions are shown in Fig. \[fig1\]. The nearest-neighbor Mn-Mn distance is taken as the unit of length. $(n_i^x,n_i^y,n_i^z)$ is defined in such a way that $(n_i^x,n_i^y,n_i^z)+N_1(2,0,0)+2N_2(0,2,0)+2N_3(1,0,2)$’s and $(n_i^x,n_i^y,n_i^z)+N_1(2,0,0)+(2N_2+1)(0,2,0)+(2N_3+1)(1,0,2)$’s, where $N_1$, $N_2$, and $N_3$ are integers, represent the coordinates of the sites indexed by $i$. The $\vec{k}=0$ parts of the ionic displacements have been subtracted to obtain the non-uniform parts of the displacements.
The energy expressions in Eqs. (\[eq:nonzerok\]) and (\[eq:zerok\]) are adequate for bulk materials. When the material is grown on a substrate as a thin film, generally there is a strain generated by lattice mismatch between the film and the substrate materials. To see the effect of this strain, we add a term proportional to $c_0^{a'}$ ($a'=x$, $y$, or $z$) to the energy, which corresponds to an $a'$ directional strain. Using a parameter $g$, we replace ${\cal E}_{\vec{k}=0}$ in Eq. (\[eq:zerok\]) by the following expression: $${\cal E'}_{\vec{k}=0}=-\frac{\lambda^2}{K_1+2K_2}[3(\beta h_0)^2
+g c^{a'}_0+\sum_{a}(c^a_0)^2
].$$ We repeated similar calculations to find the favored hole configurations for different values of the applied strain, parameterized by $g$. The applied strain breaks cubic symmetry. Some of the hole configurations also break cubic symmetry. For these cases the energy depends on the relative orientation of the strain and the hole symmetry breakings. We consider all possible orientations and find the lowest energy state. We have varied $g$ between $-0.4$ and $0.4$, and $\beta$ between 0 and 7, with $A/K_1=0.0002$, $\lambda/K_1=0.045$, and $K_2/K_1=0.5$. The results are shown as a phase diagram in the $\beta$-$g$ plane in Fig. \[fig4\]. It shows that the configuration of Fig. \[fig1\](c) is increasingly favored as $|g|$ increases. This feature can be understood in the following way. For small $g$, the leading correction to the minimum energy for each hole configuration is $ -\lambda^2 g \tilde{c^{a}_0} / (K_1+2K_2) $, where $\tilde{c^{a}_0}$ represents $c^a_0|_{g=0}$. Therefore the configuration with the larger $|\tilde{c^{a}_0}|$ shows the greater change in energy for a given $g$. Since the hole configuration in Fig. \[fig1\](c) has tetragonal symmetry, which is compatible with the Jahn-Teller distortion, it has the largest $|\tilde{c^a_0}|$. Therefore, as $|g|$ increases, Fig. \[fig1\](c) becomes more favored than Figs. \[fig1\](a) and \[fig1\](b). Because the energy changes linearly with $g$, the phase boundaries are straight lines for small $g$ and have cusps at $g=0$, as shown in Fig. \[fig4\]. Typical variations of $e^{aa}$ upon changing $|g|$ from 0 to 0.4 are about 2%. The results indicate that the strain generated by a substrate can change the ordered hole configuration and the ordering temperature.
Our results indicate that the interaction between the electronic state and the lattice can be the origin of the charge ordering in this material, even though the details of the results depend on the specific choice of $K_1$, $K_2$, $\lambda$, and $A$.
A similar calculation has been done for $Re_{1/2}Ak_{1/2}{\rm MnO_3}$, i.e., hole concentration 1/2. We choose the three hole configurations in Fig. \[fig5\] to compare the minimum energies. Each configuration has an alternating hole distribution in a different set of directions: the $x$, $y$, and $z$ directions for Fig. \[fig5\](a), the $x$ and $y$ directions for Fig. \[fig5\](b), and the $y$ direction for Fig. \[fig5\](c). Figure \[fig5\](b) is the experimentally observed hole configuration.[@chen] As we did for $x=1/8$, we consider both the case where the hole and orbital states have the same unit cell and the case where the orbital state is composed of the two hole sublattices. Calculations for $A/K_1=0.0002$, $\lambda/K_1=0.045$, and $K_2/K_1=0.5$ show that when $\beta$ is large, the configuration in Fig. \[fig5\](a) is the most favored and Fig. \[fig5\](c) is the least favored. As $\beta$ is decreased, the favored configuration changes between $\beta$=0.5 and 0.7; below this range, Fig. \[fig5\](c) is the most favored and Fig. \[fig5\](a) is the least favored. When $\beta$ is large, the holes prefer to distribute evenly for the same reason as in the $x=1/8$ case. In contrast, when $\beta$ is small, electron sites prefer to have more neighboring electron sites to gain orbital energy. Our results are not consistent with the experimental results for ${\rm La_{1/2}Ca_{1/2}MnO_3}$,[@chen] which indicate that the configuration of Fig. \[fig5\](b) is the ground state. This inconsistency may arise because our model involves only localized electrons, while for $x=1/2$ the charge-ordered state arises from a metallic phase. Modifications of our model to include hole hopping are desirable.
In summary, we have shown that the lattice effect could play an important role in the charge ordering transition observed in perovskite manganites.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported in part by NSF-DMR-9705482. We also acknowledge partial support from NSF-DMR-96322526 (The Johns Hopkins M.R.S.E.C. for Nanostructured materials).
Appendix {#section .unnumbered}
To find the minimum energy we transform $E_{\rm tot}$ in Eq. (\[eq:TOT\]) into $\vec{k}$ space, using Eqs. (\[eq:ee\]) $-$ (\[eq:orbital\]). This leads to the following energy expressions in $k$ space:
$$\begin{aligned}
E_{\rm tot}/(NK_1)&=&
\sum_{\vec{k}}
\delta_{\vec{k}}^{\dagger} M_{\vec{k}} \delta_{\vec{k}} +
\delta_{\vec{k}}^{\dagger} L_{\vec{k}}^{\dagger} u_{\vec{k}} +
u_{\vec{k}}^{\dagger} L_{\vec{k}} \delta_{\vec{k}} +
u_{\vec{k}}^{\dagger} u_{\vec{k}} +
u_{\vec{k}}^{\dagger} P_{\vec{k}} e_{\vec{k}} +
e_{\vec{k}}^{\dagger} P_{\vec{k}}^{\dagger} u_{\vec{k}} \nonumber \\
& &- \frac{A}{NK_1} \sum_{i} (1-h_{i}) \cos{6\theta_i}, \label{eq:AEqE}\end{aligned}$$
where $$\begin{aligned}
\delta_{\vec{k}}^{\dagger} &=&
( \delta^{x}_{\vec{k}}, \delta^{y}_{\vec{k}}, \delta^{z}_{\vec{k}} ), \\
u_{\vec{k}}^{\dagger} &=&
( u^{x}_{\vec{k}}, u^{y}_{\vec{k}}, u^{z}_{\vec{k}} ), \\
e_{\vec{k}}^{\dagger} &=&
( \beta h_{\vec{k}}-c^x_{\vec{k}} , \beta h_{\vec{k}}-c^y_{\vec{k}},
\beta h_{\vec{k}}-c^z_{\vec{k}} ), \end{aligned}$$
$$\begin{aligned}
&&M_{\vec{k}} =
\left(
\begin{array}{ccc}
\begin{array}{c}
1+\frac{K_2}{K_1}(1-\cos{k_x}) \\
+\frac{K_3}{K_1}(1-\cos{k_x}\cos{k_y}) \\
+\frac{K_3}{K_1}(1-\cos{k_x}\cos{k_z})
\end{array} &
\frac{K_3}{K_1}\sin{k_x}\sin{k_y} &
\frac{K_3}{K_1}\sin{k_x}\sin{k_z} \\
\frac{K_3}{K_1}\sin{k_y}\sin{k_x} &
\begin{array}{c}
1+\frac{K_2}{K_1}(1-\cos{k_y}) \\
+\frac{K_3}{K_1}(1-\cos{k_y}\cos{k_z}) \\
+\frac{K_3}{K_1}(1-\cos{k_y}\cos{k_x}) \\
\end{array} &
\frac{K_3}{K_1}\sin{k_y}\sin{k_z} \\
\frac{K_3}{K_1}\sin{k_z}\sin{k_x} &
\frac{K_3}{K_1}\sin{k_z}\sin{k_y} &
\begin{array}{c}
1+\frac{K_2}{K_1}(1-\cos{k_z}) \\
+\frac{K_3}{K_1}(1-\cos{k_z}\cos{k_x}) \\
+\frac{K_3}{K_1}(1-\cos{k_z}\cos{k_y})
\end{array}
\end{array}
\right) ,\end{aligned}$$
$$\begin{aligned}
L_{\vec{k}} &=&
\left( \begin{array}{ccc}
-\frac{1}{2}(1+e^{ik_{x}}) & 0 & 0 \\
0 & -\frac{1}{2}(1+e^{ik_y}) & 0 \\
0 & 0 & -\frac{1}{2}(1+e^{ik_z})
\end{array}
\right),
\\
P_{\vec{k}} &=&
\frac{\lambda}{2K_1}
\left( \begin{array}{ccc}
1-e^{ik_{x}} & 0 & 0 \\
0 & 1-e^{ik_y} & 0 \\
0 & 0 & 1-e^{ik_z}
\end{array}
\right), \end{aligned}$$
and $N$ is the total number of Mn sites. We obtain Eqs. (\[eq:nonzerok\]) $-$ (\[eq:del\]) by minimizing the above expression with respect to all $\delta^{a}_{\vec{k}}$ and $u^a_{\vec{k}}$. Without the second-neighbor elastic energy term, the $\delta_{\vec{k}}$ and $u_{\vec{k}}$ minimizing Eq. (\[eq:AEqE\]) become singular when any of $k_x$, $k_y$, and $k_z$ is zero. With nonzero $K_3$ this singularity is removed uniquely for $\vec{k}\neq 0$, but not at $\vec{k}=0$.
To find the energy term with $\vec{k}=0$, we take the $\vec{k}\rightarrow0$ limit. That corresponds to the uniform strain energy, i.e., the energy associated with the change of the lattice parameters from the original cubic structure. Here the question of how to take the limit arises, because the calculation shows that different directions of approach of $\vec{k}\rightarrow 0$ give different energies and different uniform strains. Since the lower-energy state is ultimately favored, the appropriate limiting process is the one that gives the minimum uniform strain energy, and it also determines the uniform strain. When $K_3/K_1 \ll 1$, this appropriate limiting process is found to satisfy the condition $k_x$, $k_y$, $k_z\neq 0$. As long as $k_x$, $k_y$, and $k_z$ are nonzero, the different limits differ only at order $K_3/K_1$. Therefore, in the $K_3/K_1\rightarrow 0$ limit, any $\vec{k}\rightarrow0$ process satisfying this condition gives the correct expression for the minimum $\vec{k}=0$ energy term. It also gives a unique uniform strain.
R. von Helmolt, J. Wecker, B. Holzapfel, L. Schultz, and K. Samwer, Phys. Rev. Lett. [**71**]{}, 2331 (1993); S. Jin, T. H. Tiefel, M. McCormack, R. A. Fastnacht, R. Ramesh, and L. H. Chen, Science [**264**]{}, 413 (1994); K. Liu, X. W. Wu, K. H. Ahn, T. Sulchek, C. L. Chien, and J. Q. Xiao, Phys. Rev. B [**54**]{}, 3007 (1996).
C. H. Chen and S-W. Cheong, Phys. Rev. Lett. [**76**]{}, 4042 (1996).
Y. Murakami, H. Kawada, H. Kawata, M. Tanaka, T. Arima, Y. Moritomo, and Y. Tokura, Phys. Rev. Lett. [**80**]{}, 1932 (1998).
Y. Yamada, O. Hino, S. Nohdo, and R. Kano, Phys. Rev. Lett. [**77**]{}, 904 (1996).
Y. Okimoto, T. Katsufuji, T. Ishikawa, T. Arima, and Y. Tokura, Phys. Rev. B [**55**]{}, 4206 (1997).
N. W. Ashcroft and N. D. Mermin, [*Solid State Physics*]{} (Saunders College Publishing, Fort Worth, TX, 1976).
H. Kawano, R. Kajimoto, M. Kubota, and H. Yoshizawa, Phys. Rev. B [**53**]{}, 2202 (1996); [**53**]{}, 14709 (1996).
E. O. Wollan and W. C. Koehler, Phys. Rev. [**100**]{}, 545 (1955).
J. Kanamori, J. Appl. Phys. [**31**]{}, 14S (1960).
A. J. Millis, Phys. Rev. B [**53**]{}, 8434 (1996).
S. J. L. Billinge, R. G. DiFrancesco, G. H. Kwei, J. J. Neumeier, and J. D. Thompson, Phys. Rev. Lett. [**77**]{}, 715 (1996).
J. B. A. A. Ellemans, B. van Laar, K. R. van der Veen, and B. O. Loopstra, J. Solid State Chem. [**3**]{}, 238 (1971).
i $(n_i^x,n_i^y,n_i^z)$ $\theta_i({\rm radian})$ $\delta_i^x-\delta_{\vec{k}=0}^x$ $\delta_i^y-\delta_{\vec{k}=0}^y$ $\delta_i^z-\delta_{\vec{k}=0}^z$ $u_i^x-u_{\vec{k}=0}^x$ $u_i^y-u_{\vec{k}=0}^y$ $u_i^z-u_{\vec{k}=0}^z$
---- ----------------------- -------------------------- ----------------------------------- ----------------------------------- ----------------------------------- ------------------------- ------------------------- -------------------------
1 (0,0,0) hole site 0 0 0 -0.135 -0.134 -0.159
2 (1,0,0) 1.11 0 0.007 0 0.135 0.007 -0.039
3 (0,1,0) 1.97 0 0 0 0.004 0.134 -0.030
4 (1,1,0) 0.03 0 0 0 -0.004 -0.007 0.047
5 (0,0,1) 0.09 0 -0.005 -0.049 0 -0.023 0.011
6 (1,0,1) 0.09 0 0.005 -0.007 0 -0.009 0.019
7 (0,1,1) 2.74 0 0 0.002 -0.043 0.023 0.037
8 (1,1,1) 1.24 0 0 0.013 0.043 0.009 -0.008
9 (0,2,0) hole site 0 0 0 -0.135 -0.133 -0.159
10 (1,2,0) 1.11 0 -0.007 0 0.135 -0.013 -0.039
11 (0,3,0) 2.28 0 0 0 -0.036 0.133 0.008
12 (1,3,0) 1.33 0 0 0 0.036 0.013 -0.037
13 (0,2,1) 0.09 0 0.005 -0.049 0 -0.009 0.011
14 (1,2,1) 0.09 0 -0.005 -0.007 0 -0.023 0.019
15 (0,3,1) 1.24 0 0 -0.013 0.043 0.009 -0.047
16 (1,3,1) 2.74 0 0 -0.002 -0.043 0.023 0.030
: Coordinates of site $i$, orbital states, ionic displacements, and uniform strains for the minimum energy configuration of Fig. \[fig1\](b), when $A/K_1=0.0002$, $\lambda/K_1=0.045$, $K_2/K_1=0.5$ and $\beta=2.5$. []{data-label="table1"}
---
abstract: 'In baseball games, the coefficient of restitution of baseballs strongly affects the flying distance of batted balls, which determines the home-run probability. In Japan, the range of the coefficient of restitution of official baseballs has changed frequently over the past five years, causing the number of home runs to vary drastically. We analyzed data from Japanese baseball games played in 2014 to investigate the statistical properties of pitched balls. In addition, we used the analysis results to develop a baseball-batting simulator for determining the home-run probability as a function of the coefficient of restitution. Our simulation results are explained by a simple theoretical argument.'
author:
- Hiroto Kuninaka
- Ikuma Kosaka
- Hiroshi Mizutani
title: 'Home-run probability as a function of the coefficient of restitution of baseballs'
---
Introduction
============
The bounce characteristics of baseballs have a large influence in baseball games; thus, baseball organizations often establish rules concerning official balls. For example, in Major League Baseball (MLB), the baseballs are made by tightly winding yarn around a small core and covering it with two strips of white horsehide or cowhide[@cross].
For the estimation of the bounce characteristics, the coefficient of restitution $e$ is widely used, which is defined as $$e = \frac{V_{r}}{V_{i}},$$ where $V_{i}$ and $V_{r}$ are the speeds of incidence and rebound, respectively, in a head-on collision of a ball with a plane. Note that the coefficient of restitution determines the loss of translational energy during the collision. The coefficient of restitution depends on the kind of material and the internal structure of the ball, as well as other factors such as the impact speed[@cross; @adair; @stronge; @johnson; @goldsmith], impact angle[@louge; @kuninaka_prl], temperature of the ball[@drane; @allen], and humidity at which the balls are stored[@kagan].
Various baseball organizations officially determine the range of the coefficient of restitution of baseballs. For example, the coefficient of restitution of an MLB baseball is required to be $0.546 \pm 0.032$[@kagan2]. Regarding Japanese professional baseballs, the Nippon Professional Baseball Organization (NPB) first introduced their official baseball in 2011, which was used in both the Pacific and Central Leagues. Table \[tb1\] lists the average coefficient of restitution of the baseballs used in Japanese professional baseball games from 2010 to 2013[@npb], along with the annual number of home runs[@brc]. Clearly, the number of home runs decreased drastically in 2011 compared with 2010, although the difference in the average coefficient of restitution is only on the order of $10^{-2}$. The average coefficient of restitution increased in 2013 because the NPB made a baseball equipment manufacturer change the specification of the baseballs in order to increase the level of offense in baseball games.
Year Coefficient of Restitution (average) Number of Home Runs
------ -------------------------------------- --------------------- -- -- --
2010 0.418 1,605
2011 0.408 939
2012 0.408 881
2013 0.416 1,311
: Coefficient of restitution of baseballs and number of home runs in Japanese professional baseball games.
\[tb1\]
Generally, the number of home runs is strongly affected not only by the coefficient of restitution of baseballs, but also by various other factors, such as the climate, the specifications of the bats, and the batting skills of the players. Sawicki et al. constructed a detailed batting model incorporating several factors, including the air resistance, friction between the bat and ball, wind velocity, and bat swing[@sawicki]. Although they investigated the optimal strategy for achieving the maximal range of a batted ball, they did not calculate the home-run probability, because it may be difficult to choose proper parameters for the home-run probability function. However, a quantitative study of the relationship between the coefficient of restitution of baseballs and the home-run probability is valuable for two reasons. First, Table \[tb1\] indicates that the home-run probability depends strongly on the coefficient of restitution of baseballs, because even small changes in the coefficient of restitution can alter the flying distances of batted balls[@nathan11]. Second, the coefficient of restitution of baseballs is a controllable factor that is important for the design of baseball equipment. The home-run probability as a function of the coefficient of restitution provides a simple criterion for evaluating the characteristics of official baseballs. In addition, such a quantitative study is also valuable for physics education, as the problem is closely related to topics covered in undergraduate physics.
In this study, we developed a batting simulator using real baseball data to quantitatively investigate the home-run probability as a function of the coefficient of restitution. This paper is structured as follows. In the next section, we describe the data analysis and the analysis results. Section 3 presents the construction of our batting simulator and the simulation results. In Sections 4 and 5, we discuss and summarize our results, respectively. Appendices A and B are devoted to the derivation of the averaged force in a binary collision between a ball and a bat and the algorithm for the collision, respectively.
Data Analysis
=============
To construct our batting simulator, we first analyzed pitching data for Japanese professional baseball games held in 2014. We used data from Sportsnavi[@sportsnavi], which provides various data about the pitched balls in an official game, including the ball speed, the pitch type, and the position at which the ball crosses the home plate. Figure \[fig1\] shows a schematic of a part of a Sportsnavi page. In the data, the pitching zone is divided into a 5 $\times$ 5 grid from the pitcher’s perspective, in which the central 3 $\times$ 3 cells, marked by thick lines, correspond to the strike zone (see the left panel of Fig. \[fig1\]). The numbers and symbols in a cell show the order and the types of the pitches, respectively, at that position. Information about each pitch, including the ball speed, is presented in the table on the right side of Fig. \[fig1\]. For later discussion, we number the horizontal and vertical positions of the cells as shown in Fig. \[fig1\].
Using the Sportsnavi database, we manually recorded all the positions and ball speeds of pitches in 12 selected games held in the Nagoya Dome Stadium in Nagoya, Japan, from August 6, 2014 to September 25, 2014. We chose games held in indoor domes because the flight of baseballs there is hardly affected by climatic factors such as wind. We collected and analyzed data for 1,548 pitched balls.
![Schematic of a part of the Sportsnavi page. []{data-label="fig1"}](fig1.eps){width="10cm"}
Figure \[fig2\] shows the distribution of the pitched-ball speed $v$, where the open circles indicate the calculated probabilities as a function of $v$. To obtain a distribution function approximating these data, we divided the ball-speed data into two categories: straight balls and breaking balls (curves, two-seam fastballs, etc.). For each category, we fit the normal distribution defined by $$f_{i}(v) = \frac{1}{\sqrt{2 \pi }\sigma_{i}} \exp \left\{
- \frac{(v - \mu_{i})^{2}}{2\sigma_{i}^{2}} \right\} \hspace{3mm} (i = 1, 2), \label{norm}$$ where $\mu_{i}$ and $\sigma_{i}$ are the mean and the standard deviation, respectively. The fitting parameters are presented in Table \[tb2\], where $i=1$ and $i=2$ correspond to the straight and breaking balls, respectively.
![Distribution of ball speeds. Open circles show the probabilities at each ball speed. Solid black curve shows Eq. (\[md\]) with the fitting parameters shown in Table \[tb2\]. Solid red and blue curves show the distributions of the straight and the breaking balls, respectively, weighted with $p=0.45$.[]{data-label="fig2"}](fig2.eps){width="7cm"}
Considering $f_{i}(v)$ ($i=1, 2$) to be components, we finally obtained the mixture distribution of the pitched-ball speed $v$ as $$\begin{aligned}
\label{md}
\phi(v) = p f_{1}(v) + (1-p) f_{2}(v) \hspace{3mm} (0 < p < 1).\end{aligned}$$ Here, $p$ is the mixing parameter. The solid black curve shown in Fig. \[fig2\] indicates Eq. (\[md\]) with $p=0.45$, which closely approximates the distribution of the pitched-ball speed, with a coefficient of determination of 0.9942. The red and blue curves show the first and second terms, respectively, on the right-hand side of Eq. (\[md\]). Generally, the value of $p$ represents the probability of selecting each component of the mixture distribution. Thus, we consider that the pitchers chose straight and breaking balls with probabilities of $p=0.45$ and $0.55$, respectively.
  $\mu_{1}$ \[km/h\]      $\sigma_{1}$ \[km/h\]   $\mu_{2}$ \[km/h\]      $\sigma_{2}$ \[km/h\]
  ----------------------- ----------------------- ----------------------- ----------------------- -- --
  139.1                   4.58                    127.8                   7.80
: Fitting parameters used in Eq. (\[norm\]).
\[tb2\]
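As an illustration, pitch speeds can be drawn from the fitted mixture of Eq. (\[md\]) with the following sketch (assumed Python/NumPy, not part of the original analysis); the numbers are those of Table \[tb2\] with $p=0.45$, and the conversion to m/s anticipates the simulation of Sec. III.

```python
import numpy as np

rng = np.random.default_rng(1)
p, mu1, s1, mu2, s2 = 0.45, 139.1, 4.58, 127.8, 7.80   # Eq. (md) and Table [tb2]

def sample_pitch_speed(n):
    """Draw n pitch speeds in km/h: straight ball with probability p, breaking ball otherwise."""
    straight = rng.random(n) < p
    return np.where(straight,
                    rng.normal(mu1, s1, n),
                    rng.normal(mu2, s2, n))

v0 = sample_pitch_speed(1000) / 3.6   # convert km/h to m/s for use as V_0 in the simulation
```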
Next, we show the distributions of the horizontal position $y$ and the vertical position $z$ of the pitched balls in the pitching zone. Figure \[fig3\] shows the distribution of the horizontal position of the pitched balls, where the horizontal axis indicates the positions of the grid cells in Fig. \[fig1\], numbered from left to right. The open circles indicate the calculated probabilities as a function of the horizontal position. To obtain a distribution function approximating these data, we fitted a mixture of two normal distributions, expressed as $$\begin{aligned}
\psi(y) = q h_{1}(y) + (1-q) h_{2}(y) \hspace{3mm} (0 < q < 1), \label{md2}\end{aligned}$$ where $h_{1}(y)$ and $h_{2}(y)$ are the normal distributions with the same standard deviation: 0.77.
The means of $h_{1}(y)$ and $h_{2}(y)$ are $y=2$ and $y=4$, respectively. In Fig. \[fig3\], the black solid curve shows Eq. (\[md2\]) with the mixing parameter $q=0.52$, whereas the blue and red curves show the first and second terms, respectively, on the right-hand side of Eq. (\[md2\]). The coefficient of determination of the fit is 0.986, indicating a good fit. This suggests that the pitchers tended to choose the right and left sides of the strike zone with similar probabilities in order to prevent home runs.
![Distribution of the horizontal position of pitched balls at the home base. Solid black curve shows Eq. (\[md2\]) with $q=0.52$.[]{data-label="fig3"}](fig3.eps){width="7cm"}
Figure \[fig4\] shows the distribution of the vertical position of the pitched balls. The probability is almost constant, except at $z=5$, which is the highest position. This indicates that the pitchers tended to choose heights all over the strike zone with similar probabilities. The frequency at $z=5$ is relatively small, presumably to avoid the risk of giving up long hits. On the other hand, the frequency at $z=1$ is almost the same as those at $z=2, 3, 4$ inside the strike zone, which may be attributed to the fact that balls at $z=1$ are difficult to hit.
![Distribution of the vertical position of pitched balls at the home base. []{data-label="fig4"}](fig4.eps){width="7cm"}
Simulation
==========
We performed a batting simulation according to the results presented in the previous section. Figure \[fig5\](a) shows our simulation setup, in which the home plate was placed at the origin of a Cartesian coordinate system. The initial position of the center of mass of a baseball was set at ${\bf r}_{0} = ($18.44 m, 0 m, 1.8 m$)$ according to the official baseball rules in Japan and the average height of Japanese professional baseball players. The baseball was pitched in the negative direction of the $x$ axis with an initial velocity of ${\bm V}_{0} = V_{0} \hat { {\bm C} }$, where $\hat { {\bm C} }$ is the unit direction vector of the pitch, which will be defined later. Here, $V_{0}$ is randomly chosen according to the distribution function given by Eq. (\[md\]) with the parameters shown in Table \[tb2\].
![Schematics showing (a) our simulation setup and (b) the definition of $z^{'}_{b}$.[]{data-label="fig5"}](fig5.eps){width="12cm"}
The pitch direction was defined as follows. A ball needs to be thrown from about shoulder height and launched in an almost horizontal direction to cross the plate at the correct height[@cross]. Thus, we assume that a pitcher throws a ball toward a point ${\bf P} = ($0, $P_{y}$ \[m\], 1.8 m$)$ on the $y$-$z$ plane. A thrown ball follows a curved path and crosses the plate at a height lower than the initial height due to the gravitational force (Fig. \[fig5\](b)). $P_{y}$ is a random variable drawn from the distribution function $\psi(P_{y})$ defined in Eq. (\[md2\]). The unit vector of the pitch direction was defined as $\hat { {\bm C} } = {\bm C} / |{\bm C}|$, where ${\bm C} \equiv {\bf P} - {\bf r}_{0}$.
Modeling of Bat and Swing
-------------------------
We consider the bat to be a uniform cylinder 1 m long with a base diameter of $6.6 \times 10^{-2}$ m, in accordance with the official baseball rules. The bat is placed along the $y$ axis, and its center of mass is positioned at $($0, 0, $z_{b}$ \[m\]$)$, where $z_{b}$ is defined later. For later discussion, we label the bases of the bat $A$ and $B$ such that the $y$ component of the center of $B$ is larger than that of $A$.
Let us assume that a thrown ball crosses the $y$-$z$ plane between the time $t$ and $t+\Delta t$, where $\Delta t= 0.05$ s is the time step of our simulation (Fig. \[fig5\](b)). First, we set $z_{b}^{'}$ equal to the $z$ component of the intersection between the $y$-$z$ plane and the line segment connecting ${\bf r}(t)$ and ${\bf r}(t+\Delta t)$, where ${\bf r}(t)$ is the position of the center of mass of the thrown ball at the time $t$. Next, we determine $z_{b}$ as follows: $$z_{b} = z_{b}^{'} + \sigma_{b},$$ where $\sigma_{b}$ is a random number drawn from a normal distribution with a mean of $0$ and a standard deviation of $0.0366$ m, the radius of an official baseball in Japan.
![A schematic of a collision between a ball and a bat.[]{data-label="sys"}](fig6.eps){width="7cm"}
Figure \[sys\] shows a schematic of a collision between a ball and a bat from the viewpoint of the home base. We assume the bat swings around an axis passing through the center of $B$. The bat swings with the angular velocity ${\boldsymbol \omega}^{'} =(-\omega_{z} \sin \theta, 0, \omega_{z} \cos \theta)$ with $\omega_{z} =34$ rad/s, which is a typical value for bat swings. Here $\theta$ is the angle between the direction of ${\boldsymbol \omega}^{'}$ and ${\boldsymbol \omega}=(0, 0, \omega_{z})$, the value of which is randomly chosen from the range $0^{\circ} \le \theta \le 10^{\circ}$. When a ball collides with the bat, we calculate the vector $\tilde{{\bf r}}$, defined as the vector from the center of $B$ to the foot of the perpendicular dropped from the center of the ball onto the central axis of the bat. The velocity of the bat at the collision point is then ${\bf V}_{b} = {\boldsymbol \omega}^{'} \times \tilde{{\bf r}}$.
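A short sketch (assumed Python/NumPy) of this bat-velocity computation is given below; the random tilt angle and $\omega_{z}=34$ rad/s follow the text, while the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
omega_z = 34.0                                          # rad/s, typical swing speed
theta = np.deg2rad(rng.uniform(0.0, 10.0))              # random tilt of the swing axis
omega_prime = omega_z * np.array([-np.sin(theta), 0.0, np.cos(theta)])

def bat_velocity(r_tilde):
    """V_b = omega' x r~, with r~ the vector from the center of base B to the foot of
    the perpendicular dropped from the ball center onto the bat axis."""
    return np.cross(omega_prime, r_tilde)
```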
Equation of Motion of Ball
--------------------------
Basically, a pitched ball obeys the equation of motion $$\begin{aligned}
\label{eqofmo}
m \frac{d^{2} {\bf r}}{dt^{2}} = - {\bf F}_{D} + {\bf F}_{L} + m {\bf g},\end{aligned}$$ where $m$ is the mass of the ball. The three terms on the right-hand side of Eq. (\[eqofmo\]) represent the drag force, the magnus force, and the gravitational force, respectively.
The drag force ${\bf F}_{D}$ generated by the resistance from the air decreases the ball speed. ${\bf F}_{D}$ is expressed as $$\begin{aligned}
{\bf F}_{D} = \frac{1}{2} C_{D} \rho V^{2} A \hat{{\bf V}},\end{aligned}$$ where $C_{D}$, $\rho$, $A$, and $\hat{{\bf V}}$ are the drag coefficient, the density of air, the cross section of a baseball, and the unit vector in the direction of the velocity, respectively. We use $C_{D} = 0.4$, corresponding to a baseball pitched at a high speed of approximately 40.2 to 44.7 m/s[@cross]. Our choice of this value will be discussed in a later section. In addition, we use $\rho=1.29$ kg/m$^{3}$, which is the value at 0 $^{\circ}$C and 1 atm.
On the other hand, the magnus force ${\bf F}_{L}$ generated by the backspin of the baseball enhances the flight of the ball after it is thrown and batted[@sawicki; @nathan08]. ${\bf F}_{L}$ is expressed as $$\begin{aligned}
{\bf F}_{L} = \frac{1}{2} C_{L} \rho V^{2} A \hat{{\bf V}}_{n},\end{aligned}$$ where $C_{L}$ and $\hat{{\bf V}}_{n} $ are the lift coefficient and the unit vector perpendicular to $\hat{{\bf V}}$, respectively. We used $C_{L}=0.2$, which is a typical value for a spinning ball[@cross]. The gravitational acceleration is indicated by the vector ${\bf g}=(0, 0, -9.8$ m/s$^{2}$). We numerically solved Eq. (\[eqofmo\]) using the velocity Verlet scheme[@frenkel].
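The following sketch (assumed Python/NumPy, not the authors' code) illustrates the integration of Eq. (\[eqofmo\]) with the velocity Verlet scheme. The ball mass and radius, the integration step, and the choice of the Magnus direction $\hat{{\bf V}}_{n}$ (taken here as perpendicular to the velocity for an assumed backspin axis) are assumptions of the sketch.

```python
import numpy as np

m, R = 0.145, 0.0366                    # assumed ball mass [kg] and radius [m]
A_cs = np.pi * R**2                     # cross section of the ball
rho, C_D, C_L = 1.29, 0.4, 0.2
g = np.array([0.0, 0.0, -9.8])
spin_axis = np.array([0.0, -1.0, 0.0])  # assumed backspin axis for a ball moving toward +x

def accel(v):
    """Acceleration from Eq. (eqofmo): drag, Magnus (lift) force, and gravity."""
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return g
    v_hat = v / speed
    n_hat = np.cross(spin_axis, v_hat)
    n_norm = np.linalg.norm(n_hat)
    lift = np.zeros(3) if n_norm < 1e-12 else 0.5 * C_L * rho * speed**2 * A_cs * n_hat / n_norm
    drag = 0.5 * C_D * rho * speed**2 * A_cs * v_hat
    return (-drag + lift) / m + g

def fly(r, v, dt=1e-3):
    """Velocity Verlet until the ball lands (z <= 0); returns the landing position."""
    a = accel(v)
    while r[2] > 0.0:
        r = r + v * dt + 0.5 * a * dt**2
        v_half = v + 0.5 * a * dt
        a_new = accel(v_half)           # velocity-dependent forces: use the half-step velocity
        v = v_half + 0.5 * a_new * dt
        a = a_new
    return r
```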
Condition for Collision
-----------------------
A thrown ball changes its flight direction when it collides with a bat. We assume that the ball collides with the bat when all of the following conditions are fulfilled:
1. The distance between the center of mass of the ball and the central axis of the bat is less than the sum of the radius of the ball and that of the bat.
2. The $y$ component of the ball’s position, $r_{y}$, satisfies $-0.5$ m $< r_{y} < 0.5$ m.
When these conditions are fulfilled, the ball is reflected by the averaged repulsive force during $\Delta t$, $$\label{collision}
\bar{\bf F} = -\frac{\mu (1 + \tilde{e}) {\tilde {\bf V}} \cdot {\bf n}}{\Delta t} {\bf n},$$ where $\mu$, ${\bf n}$ and ${\tilde {\bf V}}$ are the reduced mass of the ball and the bat, the normal unit vector that is perpendicular to the tangential plane between a colliding ball and a bat, and the relative velocity of the ball to the bat, ${\tilde {\bf V}}={\bf V} - {\bf V}_{b}$, respectively.
In Eq. (\[collision\]), we use the coefficient of restitution between a ball and a bat $\tilde{e}$, which is defined by $$\tilde{e} = e \left(1 + \frac{m}{M_{\mathrm{eff}}}\right) + \frac{m}{M_{\mathrm{eff}}},$$ where $M_{\mathrm{eff}} = I / |\tilde{{\bf r}}|^{2}$ with the moment of inertia $I$ of the bat around the center of $B$[@cross; @nathan03]. The derivation of Eq. (\[collision\]) and the algorithm for the collision are summarized in Appendices A and B, respectively. When a batted ball lands on the ground, we calculate the distance $D$ between the point of landing and the home plate.
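A compact sketch (assumed Python/NumPy) of the reflection step defined by Eq. (\[collision\]) together with Eq. (\[a1\]) is shown below; the inputs (contact normal, bat velocity, bat mass, and $M_{\mathrm{eff}}$) are placeholders to be supplied by the geometry described above.

```python
import numpy as np

def reflect(V, V_b, n, e, m, M, M_eff):
    """Post-collision ball velocity from Eqs. (collision) and (a1).

    V, V_b : ball and bat velocities at contact; n : unit normal to the tangential
    plane; e : ball coefficient of restitution; M : bat mass; M_eff = I/|r~|^2."""
    mu = m * M / (m + M)                                       # reduced mass
    e_tilde = e * (1.0 + m / M_eff) + m / M_eff                # e~ as defined in the text
    impulse = -mu * (1.0 + e_tilde) * np.dot(V - V_b, n) * n   # F_bar * Delta t
    return V + impulse / m
```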
Simulation Results
==================
![Probability density of flying distance of batted balls. Solid and dashed curves show the results of $e=0.41$ and $e=0.45$, respectively.[]{data-label="fig6"}](fig7.eps){width="9cm"}
Figure \[fig6\] shows the probability densities of the flying distance $D$ of batted balls ($e = 0.41$ and $e=0.45$), each of which was calculated from 1,000 samples. The highest peak position shifts from 75 m to 125 m with increasing coefficient of restitution of the ball. The total frequency in the region $D \ge 150$ m, which can be regarded as the number of home runs, increases with increasing coefficient of restitution, indicating an increase in the home-run probability. Thus, we investigate the relationship between the home-run probability and $e$ by changing the value of $e$ from $0.41$ to $0.45$.
![Relationship between $e$ and home-run probability $P(e)$. Solid curve shows Eq. (\[pf\_HR\]) with $C_{1}=242$ m and $\sigma=52.2$ m. []{data-label="fig7"}](fig8.eps){width="9cm"}
Figure \[fig7\] shows the relationship between the home-run probability and $e$. Here, each data point was calculated from 1,000 samples. We consider a sample with $D \ge 150$ m as a home run and define the home-run probability as the ratio of the number of home runs to 1,000. As shown in Fig. \[fig7\], the home-run probability increases with $e$, as intuitively expected.
Here, we estimate the functional form of $P(e)$ using a simple theoretical argument. Let us suppose that a projectile is launched from the ground with a launch speed $v^{'}$ and a launch angle $\theta_{0}$ under air friction. The range of the trajectory under air friction can be estimated by a linear function of $v^{'}$ (Ref. 2). Since the launch speed of a batted ball varies approximately linearly with the coefficient of restitution, we roughly estimate the range as a linear function of $e$, namely $C_{1} e$ with a constant $C_{1}$. Here we assume that the speed of the thrown ball at the collision with the bat is almost constant.
We assume that the probability density of the flying distance of a batted ball which will land at approximately $D=150$ m can be approximated by the normal distribution as $$\label{pdf_HR}
p(D) = \frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp
\left[
- \frac{(D-C_{1}e)^{2}}{2 \sigma^{2}}
\right].$$ By integrating Eq. (\[pdf\_HR\]) from $D=150$ m to infinity, we obtain the probability function $$\begin{aligned}
P(e) &=& \int_{150}^{\infty} p(D) dD\\
&=& \frac{1}{2}
\left[ 1 - {\rm erf} \left( \frac{150-C_{1} e}{\sqrt{2} \sigma}\right)
\right],\label{pf\_HR}\end{aligned}$$ where ${\rm erf}(x)$ is the error function. We treat both $C_{1}$ and $\sigma$ as fitting parameters. The solid curve in Fig. \[fig7\] corresponds to Eq. (\[pf\_HR\]) with $\sigma=59.8$ m and $C_{1}=271$ m, which provides a good description of the simulation data.
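For reference, Eq. (\[pf\_HR\]) can be evaluated directly, e.g. with the following sketch (assumed Python/SciPy), using the fitted values quoted in the running text.

```python
import numpy as np
from scipy.special import erf

def home_run_probability(e, C1=271.0, sigma=59.8, D_hr=150.0):
    """Eq. (pf_HR): probability that a batted ball travels at least D_hr meters."""
    return 0.5 * (1.0 - erf((D_hr - C1 * e) / (np.sqrt(2.0) * sigma)))

print(home_run_probability(np.array([0.41, 0.43, 0.45])))
```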
Discussion
==========
Figure \[fig3\] shows the distribution of the horizontal index $y$ of pitched balls in the pitching zone, which can be approximated by a mixture of normal distributions. Because of the coarse division of the pitching zone in the database that we used, the number of data points was too small to identify a more detailed distribution. To obtain a more detailed distribution of the position of pitched balls, we could use more accurate data from PITCHf/x[@cross]. However, from the viewpoint of pitcher strategy, it may be desirable to aim at the edges of the strike zone; thus, we may find two peaks at the right and left edges of the strike zone in the real distribution.
Figure \[fig6\] shows the probability density distributions $p(D)$ of the distance $D$, each of which has several peaks. For comparison with real data, a Japanese group attempted to draw a histogram of the distance of batted balls using data from Ultimate Zone Rating (UZR)[@uzr] in 2009 and 2010. Their results show that the histogram has two peaks, around $40$ and $100$ m, which is qualitatively close to our simulation results at $e=0.45$. However, their results show that the probability of finding a ball in the range $40 \le D \le 80$ m is very low, whereas in our simulation the probability density at $e=0.45$ remains appreciable over the same range. Notice that the group constructed the histogram using the positions at which the fielders caught or picked up the batted balls, which are sometimes different from the points of landing. Thus, their histogram can be considered a mixture of the histograms of the pickup positions of the infielders and the outfielders, which may cause the discrepancy between their results and ours. In the UZR analysis, the probability of finding a batted ball between 91.44 and 121.92 m was 0.179. In our simulation, on the other hand, the home-run probability $P(e)$ was $\thicksim 0.17$ at $e=0.42$, the value closest to the average coefficient of restitution in 2010. The discrepancy between the two values may be attributed to the fact that the UZR analysis includes data from both 2009 and 2010.
Finally, our model is a simplified model based on simple rigid-body collisions, which ignores the rotation of a batted ball. Our model includes the lift coefficient $C_{L}=0.2$ as a fixed value, although $C_{L}$ can vary depending on the rotation of the ball[@cross; @sawicki]. For example, recent experiments have demonstrated that $C_{L}$ strongly depends on the spin parameter, $S \equiv R \omega/V$, where $\omega$ and $R$ are the angular velocity and the radius of a ball, respectively[@nathan08; @kensrud_thesis; @kensrud_cl]. Similarly, $C_{D}$ varies with the Reynolds number, although our model includes the drag coefficient $C_{D}=0.4$ as a fixed value. In particular, the drag force on a baseball is known to drop sharply at speeds typical of thrown and batted balls[@nathan08; @kensrud_thesis; @kensrud_cd; @frohlich]. To improve our model, we will need to incorporate values of $C_{L}$ and $C_{D}$ that depend on the motion of the thrown and batted balls. In addition, constructing a model based on elastic collisions in which the friction between the ball and bat is considered would yield more accurate results.
Concluding Remarks
==================
In this paper, we analyzed real data for the speed and the course of pitched balls in professional baseball games in Japan. Our results show that the distribution of the ball speed can be approximated by the mixture distribution of two normal distributions. In addition, we found that the horizontal position of pitched balls in the pitching zone obeys the mixture distribution of two normal distributions, each of which has a peak at the edge of the strike zone.
We simulated collisions between baseballs and a bat, where the pitch statistics follow our analysis results. We finally obtained the probability density distribution $p(D)$ of the distance $D$ between the home plate and the landing point of the batted balls to calculate the home-run probability as a function of the coefficient of restitution. By using a simple theoretical argument with the assumption that $p(D)$ around $D=150$ m can be approximated by a normal distribution, we quantified the home-run probability as a function of the coefficient of restitution $e$ of baseballs.
As stated in the previous section, we will obtain more accurate results in future works by improving our model. Developing a methodology to calculate the home-run probability may yield useful information for designing baseballs.
AVERAGED FORCE IN BINARY COLLISION OF RIGID BODIES
==================================================
In Ref. 1, the author derived the rebound velocity of a ball in a head-on collision with a bat[@cross]. In this appendix, we derive the rebound velocity of a batted ball in a three-dimensional oblique collision. Note that we ignore the rotation of the ball and the bat in our argument.
Figure \[figa1\] shows a ball of mass $m$ colliding with a bat of mass $M$. Here we denote the colliding velocities of the ball and the bat as ${\bf V}$ and ${\bf V}_{b}$, respectively. Assuming that the ball experiences the averaged force ${\bar {\bf F}}$ from the bat during the duration $\Delta t$, we can describe the rebound velocities of the ball and the bat, ${\bf V}^{'}$ and ${\bf V}^{'}_{b}$, as $$\begin{aligned}
{\bf V}^{'} &=& {\bf V} + \frac{\bar{{\bf F}}}{m} \Delta t,\label{a1}\\
{\bf V}_{b}^{'} &=& {\bf V}_{b} - \frac{\bar{{\bf F}}}{M} \Delta t \label{a2}\end{aligned}$$ from the definition of the averaged force and Newton’s third law of motion. Note that the total momentum of the system is conserved after collision, $M {\bf V}_{b} + m {\bf V} = M {\bf V}_{b}^{'} + m {\bf V}^{'}$.
![ A schematic of a collision between a bat of mass $M$ and a ball of mass $m$. The velocities of the centers of mass of the ball and bat are denoted by ${\bf V}$ and ${\bf V}_{b}$, respectively. ${\bf n}$ is the normal unit vector perpendicular to the tangential plane. []{data-label="figa1"}](figa1.eps){width="6cm"}
By subtracting Eq. (\[a2\]) from Eq. (\[a1\]) and introducing the reduced mass $\mu $ defined by $1/\mu = 1/m + 1/M$, we obtain $$\begin{aligned}
{\tilde {\bf V}}^{'}= {\tilde {\bf V}} + \frac{1}{\mu} \bar{{\bf F}} \Delta t,\end{aligned}$$ where ${\tilde {\bf V}}$ and ${\tilde {\bf V}}^{'}$ are the relative velocities of the ball to the velocity of bat before and after collision, respectively. By introducing the normal unit vector ${\bf n}$ perpendicular to the tangential plane of the bat and ball, the scalar projection of ${\tilde {\bf V}}^{'}$ onto ${\bf n}$ is calculated as $$\begin{aligned}
{\tilde {\bf V}}^{'} \cdot {\bf n} = {\tilde {\bf V}} \cdot {\bf n} + \frac{1}{\mu} \bar{F} \Delta t,\label{a4}\end{aligned}$$ where we used $\bar{\bf F} = \bar{F} {\bf n}$.
Using Eq. (\[a4\]) in the definition of the coefficient of restitution $\tilde{e}$ between ball and bat, $$| {\tilde {\bf V}}^{'} \cdot {\bf n}| = \tilde{e} | {\tilde {\bf V}} \cdot {\bf n}|,$$ we obtain the equation $$\left[ \beta + {\tilde {\bf V}} \cdot {\bf n} (1-\tilde{e})\right]
\left[ \beta + {\tilde {\bf V}} \cdot {\bf n} (1+\tilde{e})\right] = 0,\label{qd}$$ where $\beta = \bar{F} \Delta t /\mu$. The two solutions of the quadratic equation Eq. (\[qd\]) are respectively written as $$\begin{aligned}
\beta_{1} &=& - {\tilde {\bf V}} \cdot {\bf n} (1-\tilde{e}), \\
\beta_{2} &=& - {\tilde {\bf V}} \cdot {\bf n} (1+\tilde{e}).
\end{aligned}$$ Of the two solutions, only $\beta_{2}$ corresponds to the averaged force in the collision, because $\beta_{1}$ becomes $0$ ($\bar{F} = 0$) when $\tilde{e}=1$, which would allow the ball to penetrate the bat.
Thus, the averaged force is calculated as $$\begin{aligned}
\bar{\bf F} = \bar{F} {\bf n} &=& \frac{\mu \beta_{2}}{\Delta t} {\bf n}\\
&=& -\frac{\mu (1 + \tilde{e}) {\tilde {\bf V}} \cdot {\bf n}}{\Delta t} {\bf n}. \label{a10}\end{aligned}$$
ALGORITHM FOR COLLISION BETWEEN BALL AND BAT
============================================
![ A schematic of a collision between a bat (large circle) and a ball (small circle). The ball at time $t$ will penetrate into the bat at the time $t + \Delta t$, which is unrealistic. To obtain the normal force acting on the ball, we need to calculate the time $t + \Delta t^{'}$ when the ball touches the bat. []{data-label="figa2"}](figa2.eps){width="6cm"}
In Appendix A, we explained how to calculate the averaged force acting on the ball during the collision. However, it is difficult to obtain the unit normal vector ${\bf n}$ in a naive calculation, because the ball can penetrate into the bat within a simulation time step, an artifact of the finite-difference approximation of the derivatives. In this appendix, we explain the collision algorithm used in our simulation to avoid such penetration.
Figure \[figa2\] shows a schematic of a collision between a ball and a bat, where the ball at time $t$ is about to collide with the bat. When the first condition for collision (see section III.C) is fulfilled, the ball has penetrated into the bat at the time $t + \Delta t$. Thus, we put the ball back to its previous position at time $t$ to determine the remaining time $\Delta t^{'}$ before contact. From the condition that the distance between the center of the bat and the center of the ball at the time $t + \Delta t^{'}$ equals the sum of the radius of the bat and that of the ball, we obtain a quadratic equation for $\Delta t^{'}$. Here we assume that the ball travels linearly with a constant velocity during $\Delta t^{'}$ to simplify the calculation. Of the two solutions of the quadratic equation, we choose for $\Delta t^{'}$ the positive one, which has a physical meaning.
From the position of the ball at the time $t + \Delta t^{'}$, we can calculate all the variables, such as ${\bf n}$, $\tilde{\bf r}$, ${\bf V}_{b}$, and $\tilde{e}$, in Eq. (\[collision\]). The ball is then reflected from the bat according to the following algorithm. First, we put the ball back to its position at time $t$. Next, we apply half of the averaged force to the ball during $2 \Delta t$, so that the impulse from the bat remains the same. With this two-step time evolution, the ball bounces from the bat without penetration.
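A minimal sketch (assumed Python/NumPy) of the time-to-contact step is given below. It solves the quadratic for $\Delta t^{'}$ under the straight-line assumption; taking the smaller non-negative root (first contact) as the physically meaningful solution is a choice made in this sketch.

```python
import numpy as np

def time_to_contact(d, v_rel, R_sum):
    """Solve |d + v_rel * t|^2 = R_sum^2 for the sub-step time t = Delta t'.

    d : vector from the bat (axis) center to the ball center at time t;
    v_rel : relative velocity (assumed constant); R_sum : sum of the two radii."""
    a = np.dot(v_rel, v_rel)
    b = 2.0 * np.dot(d, v_rel)
    c = np.dot(d, d) - R_sum**2
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                      # no contact during this sub-step
    roots = ((-b - np.sqrt(disc)) / (2.0 * a), (-b + np.sqrt(disc)) / (2.0 * a))
    valid = [t for t in roots if t >= 0.0]
    return min(valid) if valid else None
```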
We gratefully acknowledge Data Stadium for permitting our use of their data.
[99]{} R. Cross, *Physics of Baseball & Softball*, 1st edition (Springer, New York, 2011).
R. K. Adair, *The Physics of Baseball*, 3rd edition (Harper Perennial, New York, 2002).
W. J. Stronge, *Impact Mechanics*, 1st edition (Cambridge Univ. Press, Cambridge, 2004).
K. L. Johnson, *Contact Mechanics*, 1st edition (Cambridge Univ. Press, Cambridge, 1985).
W. Goldsmith, *Impact -The Theory and Physical Behavior of Colliding Solids-*, 1st edition (Edward Arnold (Publishers) LTD., London, 1960).
M. Y. Louge and M. E. Adams, Phys. Rev. E [**65**]{}, 021303-1–021303-6 (2002).
H. Kuninaka and H. Hayakawa, “Anomalous Behavior of Coefficient of Normal Restitution in Oblique Impact", Phys. Rev. Lett. [**93**]{}, 154301-1–154301-4 (2004).
P. J. Drane and J. A. Sherwood, “Characterization of the effect of temperature on baseball COR performance", 5th International Conference on the Engineering of Sport (UC Davis, CA), **2**, 59-65 (2004).
T. Allen, et al., “Effect of temperature on golf dynamics”, Procedia Engineering, **34**, 634-639 (2012).
D. Kagan and D. Atkinson, “The Coefficient of Restitution of Baseballs as a Function of Relative Humidity”, Phys. Teach. **42**, 330–354 (2004).
D. T. Kagan, “The effects of coefficient of restitution variations on long fly balls", Am. J. Phys., **58**, 151–154 (1990).
Nippon Professional Baseball Organization, [<http://www.npb.or.jp>](<http://www.npb.or.jp>).
Baseball-Reference.com, [<http://baseball-reference.com>](<http://baseball-reference.com>).
G. S. Sawicki, M. Hubbard, and W. J. Stronge, “How to hit home runs: Optimum baseball bat swing parameters for maximum range trajectories", Am. J. Phys., **71**, 1152–1162 (2003).
A. M. Nathan, L. V. Smith, W. M. Faber, and D. A. Russell, “Corked bats, juiced balls, and humidors: The physics of cheating in baseball", Am. J. Phys., **79**, 575–580 (2011).
Sportsnavi, [<http://baseball.yahoo.co.jp/npb/>](<http://baseball.yahoo.co.jp/npb/>).
M. Kasahara [*et al*]{}., “ Factors affecting on bat swing speed of university baseball players", NSCA Jpn J., **19**(6), 14–18 (2012).
A. M. Nathan, “The effect of spin on the flight of a baseball", Am. J. Phys., **76**, 119–124 (2008).
D. Frenkel and B. Smit, *Understanding Molecular Simulation*, 2nd edition (Academic Press, San Diego, 2002).
A. M. Nathan, “Characterizing the performance of baseball bats", Am. J. Phys., **71**, 134–143 (2003).
Baseball Lab. archives (in Japanese), <https://web.archive.org/web/20131230140449/http://archive.baseball-lab.jp/column_detail/&blog_id=17&id=155>.
J. R. Kensrud, “Determining Aerodynamic Properties of Sports Balls in Situ", MS Thesis, Washington State Univ., 2010.
J. R. Kensrud and L. V. Smith, “In situ lift measurement of sports balls", Proc. Eng., **13**, 278–283 (2011).
J. R. Kensrud and L. V. Smith, “In situ drag measurement of sports balls", Proc. Eng., **2**, 2437–2442 (2010).
C. Frohlich, “Aerodynamic Drag Crisis and Its Possible Effect on the Flight of Baseballs", Am. J. Phys., **52**, 325–334 (1984).
---
abstract: 'We report the first observation of anti-Stokes laser-induced cooling in the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal and in the Er$^{3+}$:CNBZn (CdF$_{2}$-CdCl$_{2}$-NaF-BaF$_{2}$-BaCl$_{2}$-ZnF$_{2}$) glass. The internal cooling efficiencies have been calculated by using photothermal deflection spectroscopy. Thermal scans acquired with an infrared thermal camera proved the bulk cooling capability of the studied samples. Implications of these results are discussed.'
author:
- Joaquín Fernandez
- 'Angel J. Garcia–Adeva'
- Rolindes Balda
title: 'Anti-Stokes laser cooling in bulk Erbium-doped materials'
---
The basic principle that anti-Stokes fluorescence might be used to cool a material was first postulated by Pringsheim in 1929. Twenty years later, Kastler suggested [@kastler1950] that rare-earth-doped crystals might provide a way to obtain solid-state cooling by anti-Stokes emission (CASE). A few years later, the invention of the laser prompted the first experimental attempt, by Kushida and Geusic, to demonstrate radiation cooling in a Nd$^{3+}$:YAG crystal [@kushida1968]. However, it was not until 1995 that solid-state CASE was first convincingly proven by Epstein and coworkers in an ytterbium-doped heavy-metal fluoride glass [@Epstein1995]. Since then, efforts to develop other materials doped with rare-earth (RE) ions were unsuccessful due to the inherent characteristics of the absorption and emission processes in RE ions. In most of the materials studied, the presence of nonradiative (NR) processes hindered the CASE performance. As a rule of thumb, negligible parasitic impurity absorption and near-unity quantum efficiency of the anti-Stokes emission from the RE levels involved in the cooling process are required, so that NR transition probabilities due to multiphonon emission or any other heat-generating process remain as low as possible. These constraints could explain why most of the efforts to obtain CASE in condensed matter were focused on trivalent ytterbium-doped solids (glasses [@Hoyt2003] and crystals [@Bowman2000; @Medioroz2002]), which have only one excited-state manifold, located $\sim10000$ cm$^{-1}$ above the ground state. The only exception was the observation of CASE in a thulium-doped glass by using the transitions between the $^{3}$H$_{6}$ and $^{3}$H$_{4}$ manifolds to cool down the sample [@Hoyt2000]. Therefore, identifying new optically active ions and materials capable of producing CASE remains an open problem with very important implications from both the fundamental and practical points of view.
On the other hand, the recent discovery of new low-phonon materials (both glasses [@Fernandez2000] and crystals [@Medioroz2002]) as RE hosts, which may significantly decrease the NR emissions from excited-state levels, has renewed the interest in investigating new RE anti-Stokes emission channels. In this work, we present the first experimental demonstration of anti-Stokes laser-induced cooling in two different erbium-doped matrices: a low-phonon KPb$_{2}$Cl$_{5}$ crystal and a fluorochloride glass. In order to assess the presence of internal cooling in these systems we employed the photothermal deflection technique, whereas the bulk cooling was detected by means of a calibrated thermally sensitive camera. The cooling was obtained by exciting the Er$^{3+}$ ions at the low-energy side of the $^{4}$I$_{9/2}$ manifold with a tunable Ti:sapphire laser. It is worthwhile to mention that this excited state, where cooling can be induced, is also involved in infrared-to-visible upconversion processes near the cooling spectral region [@Balda2004]. Moreover, the laser-induced cooling can be reached at wavelengths and powers at which conventional laser diodes operate, which renders these systems very convenient for applications such as compact solid-state optical cryo-coolers.
Single crystals of nonhygroscopic Er$^{3+}$:KPb$_{2}$Cl$_{5}$ were grown in our laboratory by the Bridgman technique [@Voda2004]. The rare earth content was $0.5$ mol% of ErCl$_{3}$. The fluorochloride CNBZn glass doped with $0.5$ mol% of ErF$_{3}$ was synthesized at the Laboratoire de Verres et Ceramiques of the University of Rennes. The experimental setup and procedure for photothermal deflection measurements have been described elsewhere [@Fernandez2000; @fernandez2001]. The beam of a tunable cw Ti:sapphire ring laser (Coherent 899), with a maximum output power of $2.5$ W, was modulated at low frequency ($1-10$ Hz) by a mechanical chopper and focused into the middle of the sample with a diameter of $\sim100$ $\mu$m. The copropagating helium-neon probe laser beam ($\lambda=632.8$ nm) was focused to $\sim60$ $\mu$m, co-aligned with the pump beam, and its deflection detected by a quadrant position detector. The samples (of sizes $4.5\times6.5\times2.7\,\text{mm}^{3}$ and $10.7\times10.7\times2.2\,\text{mm}^{3}$ for the crystal and glass, respectively) were freely placed on a teflon holder inside a low vacuum ($\sim10^{-2}$ mbar) cryostat chamber at room temperature.
The cooling efficiencies of the Er$^{3+}$-doped materials were evaluated at room temperature by measuring the quantum efficiency (QE) of the emission from the $^{4}$I$_{9/2}$ manifold in the heating and cooling regions by means of photothermal deflection spectroscopy in a collinear configuration [@Fernandez2000; @fernandez2001]. The QE was evaluated by considering a simplified two-level system for each of the transitions involved. In the photothermal collinear configuration, the amplitude of the angular deviation of the probe beam is always proportional to the amount of heat the sample exchanges, whatever its optical or thermal properties. The QE of the transition, $\eta$, can be obtained from the ratio of the photothermal deflection amplitude (PDS) to the sample absorption (Abs), measured as a function of the excitation wavelength $\lambda$ around the mean fluorescence wavelength $\lambda_{0}$: $$\frac{\text{PDS}}{\text{Abs}}=C\left(1-\eta\frac{\lambda}{\lambda_{0}}\right),$$ where $C$ is a proportionality constant that depends on the experimental conditions. The mean fluorescence wavelength, above which cooling is expected to occur, was calculated by taking into account the branching ratios for the emissions from the $^{4}$I$_{9/2}$ level. As expected, the calculated value is close to the transition wavelength found experimentally for the onset of the cooling region.
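As a concrete illustration of how $\eta$ follows from this relation, the sketch below performs the linear least-squares fit on synthetic PDS/Abs data. It is not the analysis code used for the measurements; the wavelength grid, the noise level, and the value $\lambda_0 = 852.5$ nm (taken from the crystal data discussed below) are merely illustrative.

```python
# Minimal sketch: extract the quantum efficiency eta from normalized
# photothermal-deflection data by fitting PDS/Abs = C*(1 - eta*lam/lam0),
# which is linear in the excitation wavelength lam.  All data are synthetic.
import numpy as np

lam0 = 852.5                                  # mean fluorescence wavelength (nm)
lam = np.linspace(790.0, 845.0, 30)           # heating-side excitation wavelengths (nm)
C_true, eta_true = 1.0, 0.9997                # illustrative "true" values
y = C_true * (1.0 - eta_true * lam / lam0)    # synthetic PDS/Abs signal
y += np.random.default_rng(0).normal(0.0, 1e-4, lam.size)   # measurement noise

# Linear least squares: y = a + b*lam, with a = C and b = -C*eta/lam0
b, a = np.polyfit(lam, y, 1)
eta_fit = -b * lam0 / a
print(f"C = {a:.4f}, eta = {eta_fit:.5f}")    # repeating the fit for lam > lam0
                                              # gives the cooling-side QE value
```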
![\[fig\_pds\_crystal\](a) Signal deflection amplitude normalized by the sample absorption as a function of pumping wavelength for the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal. (b) Phase of the photothermal deflection signal as a function of pumping wavelength. (c) Photothermal deflection signal waveforms in the heating (800 nm) and cooling (870 nm) regions and around the cooling threshold (850 nm).](fig1.eps)
Figure \[fig\_pds\_crystal\]a shows the normalized PDS spectrum of the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal around the zero-deflection signal ($852.5$ nm), obtained at an input power of $1.5$ W, together with the best least-squares fits in both the heating and cooling regions. The resulting QE values are $0.99973$ and $1.00345$, respectively, and therefore the cooling efficiency estimated from the QE measurements is $0.37\%$. As predicted by theory [@Jackson1981], a sharp jump of $180^{\circ}$ in the PDS phase measured by lock-in detection is observed during the transition from the heating to the cooling region (see Fig. \[fig\_pds\_crystal\]b). Figure \[fig\_pds\_crystal\]c shows the PDS amplitude waveforms registered on the oscilloscope at three different excitation wavelengths: $800$ nm (heating region), $852.5$ nm (mean fluorescence wavelength), and $870$ nm (cooling region). At $852.5$ nm the signal is almost zero, whereas in the cooling region, at $870$ nm, the waveform of the PDS signal shows an unmistakable phase reversal of $180^{\circ}$ when compared with the one at $800$ nm. Figure \[fig\_pds\_glass\] shows the CASE results for the Er$^{3+}$:CNBZn glass (obtained at a pump power of $1.9$ W), where the zero-deflection signal occurs around $843$ nm. The $180^{\circ}$ change of the PDS phase is also clearly attained, although a little less sharply than for the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal (see Fig. \[fig\_pds\_glass\]b). The QE values corresponding to the heating and cooling regions are $0.99764$ and $1.00446$, respectively, and the estimated cooling efficiency is $0.68\%$. The PDS waveforms corresponding to the heating and cooling regions are shown in Fig. \[fig\_pds\_glass\]c. It is worth noting that the cooling in both systems can be obtained at quite low excitation powers. As an example, for the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal, CASE is still efficient at a pump power of only $500$ mW.
![\[fig\_pds\_glass\](a) Signal deflection amplitude normalized by the sample absorption as a function of pumping wavelength for the Er$^{3+}$:CNBZn glass. (b) Phase of the photothermal deflection signal as a function of pumping wavelength. (c) Photothermal deflection signal waveforms in the heating (808 nm) and cooling (860 nm) regions.](fig2.eps)
The results described in the previous paragraphs clearly demonstrate that these systems are capable of internal laser cooling in a certain spectral range, even at small pumping powers. In order to assess their cooling potential quantitatively, we also measured the absolute temperature of these materials as a function of time for several pumping powers between $0.25$ and $1.9$ W and for wavelengths in both the heating and cooling regions described above. These measurements were performed with a Thermacam SC 2000 (FLIR Systems) infrared thermal camera, which operates between $-40^{\circ}$C and $500^{\circ}$C object temperature with a precision of $\pm0.1^{\circ}$C. The detector is an array of $320\times240$ microbolometers. The camera is connected to an acquisition-card interface that can record thermal scans at a rate of $50$ Hz. The absolute temperature was calibrated with a thermocouple located at the sample holder.
![\[fig\_temp\_crystal\]Time evolution of the average temperature of the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ at 870 nm. The insets show colormaps of the temperature field of the whole system (sample plus cryostat) at two different times as measured with the thermal camera. The rectangle in the upper inset delimits the area used for calculating the average temperature of the sample.](fig3.eps)
Thermal scans at a rate of $1$ image per second were acquired for time intervals that depend on the particular data series. The camera was placed $12$ cm from the cryostat window, so that a lens with a $45^{\circ}$ field of view allowed the camera to be focused on the sample. Figures \[fig\_temp\_crystal\] and \[fig\_temp\_glass\] show the runs performed at 870 and 860 nm for the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal and the Er$^{3+}$:CNBZn glass samples, respectively, using the same pump geometry as described above. The laser power on the sample was fixed at 1.9 W in both cases. According to the PDS measurements reported above, these pumping wavelengths are well inside the cooling region for both materials. The insets in Figs. \[fig\_temp\_crystal\] and \[fig\_temp\_glass\] depict some examples of the thermal scans obtained with the infrared camera. It is clear from those colormaps that the sample cools down as time goes by. However, it is difficult to extract quantitative information about how much the sample cools, as these changes are small compared with the absolute temperature of its surroundings. For this reason, in order to assess whether cooling occurs in the bulk, we calculated the average temperature of the area enclosed in the green rectangles depicted in the upper insets of Figs. \[fig\_temp\_crystal\] and \[fig\_temp\_glass\]; the corresponding results constitute the green curves in those figures. Both samples cool down under laser irradiation. The Er$^{3+}$:KPb$_{2}$Cl$_{5}$ sample temperature drops by $0.7\pm0.1^{\circ}$C in $1500$ s ($25$ minutes). To check that this temperature change was indeed due to laser cooling, the laser was turned off at that point; the sample temperature starts to rise as soon as the laser irradiation is stopped, which can easily be identified as an upturn in the temperature curve. On the other hand, the temperature of the Er$^{3+}$:CNBZn glass sample starts to rise when laser irradiation starts. After $\sim150$ s (two and a half minutes) this tendency is inverted and the sample starts to cool down. From that point on (and in approximately $1000$ s), the average temperature of the sample drops by $0.5\pm0.1^{\circ}$C. Estimates of the expected bulk temperature change based on the microscopic models proposed by Petrushkin, Samartsev, and Adrianov [@Petrushkin2001] yield values of $-5^{\circ}$C and $-10^{\circ}$C for the crystalline and glass samples, respectively, under the experimental conditions described above. The discrepancy between the theoretical estimates and the present experimental results can be attributed to partial re-absorption of the anti-Stokes fluorescence, not taken into account in these models, or to additional absorption of the pumping radiation involving excited states, which is known to be significant in these materials [@Balda2004]. In any case, taking into account the minute concentrations of the optically active ions in the materials studied in this work and the geometry of the cooling experiment (single-pass configuration), these results show that CASE in these materials is extremely efficient.
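For readers who wish to reproduce this kind of analysis, the following sketch shows the region-of-interest averaging step on a stack of thermal-camera frames. The array `frames`, the ROI indices, and the placeholder data are hypothetical and merely stand in for the calibrated $320\times240$ temperature maps acquired at 1 frame per second.

```python
# Illustrative sketch (not the acquisition software): average the calibrated
# temperature inside a fixed rectangle of each thermal-camera frame to obtain
# the sample temperature versus time.  `frames` is a hypothetical
# (n_frames, 240, 320) array of temperatures in degrees Celsius.
import numpy as np

rng = np.random.default_rng(1)
frames = 22.0 + 0.05 * rng.standard_normal((120, 240, 320)).astype(np.float32)

row0, row1, col0, col1 = 100, 140, 140, 200     # assumed ROI enclosing the sample
time_s = np.arange(frames.shape[0])             # 1 Hz scan rate -> time in seconds
T_avg = frames[:, row0:row1, col0:col1].mean(axis=(1, 2))

print(f"temperature change over the run: {T_avg[-1] - T_avg[0]:+.2f} C")
```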
![\[fig\_temp\_glass\]Time evolution of the average temperature of the Er$^{3+}$:CNBZn at 860 nm. The insets show colormaps of the temperature field of the whole system (sample plus cryostat) at two different times as measured with the thermal camera. The rectangle in the upper inset delimits the area used for calculating the average temperature of the sample.](fig4.eps)
In conclusion, we have demonstrated cooling by anti-Stokes emission in two Er$^{3+}$-doped materials by using a combination of photothermal deflection measurements and the time evolution of the average sample temperature acquired with an infrared camera. In particular, the photothermal deflection measurements clearly show internal cooling in the two samples analyzed. The cooling efficiencies are found to be about $0.4\%$ and $0.7\%$ for the crystal and glass samples, respectively. These figures are remarkable considering that the concentration of optically active ions in our materials is only about $0.5\%$ of Er$^{3+}$ and that our experiments are performed in a single-pass configuration. From a fundamental perspective, these results are quite important, as this ion joins the short list of rare-earth ions amenable to cooling (Yb$^{3+}$ and Tm$^{3+}$ being the only two known so far). On the other hand, the measurements performed with the infrared camera demonstrate that the Er$^{3+}$ ions present in the materials are able to cool the bulk samples by $0.7$ and $0.5^{\circ}$C for the crystalline and glass samples, respectively. This result is extremely important from the applied point of view, as it paves the way to using this ion as an efficient anti-Stokes emitter for compact solid-state optical refrigerators. Moreover, it opens a wide field of applications related to the possibility of using CASE to offset the heat generated by laser operation in Er$^{3+}$-based fiber lasers, the so-called radiation-balanced lasers [@Bowman1999], in which dual-wavelength pumping would take advantage of the cooling processes occurring at one of the wavelengths. This technique could allow the power of Er$^{3+}$-based fiber lasers to be scaled up. Similarly, Er$^{3+}$-doped nanoparticles used for bioimaging or phototherapy could take advantage of dual-wavelength pumping (at a nearby wavelength) in order to offset the thermal damage produced in soft tissue by the infrared pumping wavelength at which the upconversion process occurs.
This work was supported by the University of the Basque Country (Grant No. UPV13525/2001). A.J.G.-A. acknowledges financial support from the Spanish MEC under the “Ramón y Cajal” program.
[10]{} A. Kastler, J. Phys. Radium **11**, 255-265 (1950). T. Kushida and J. E. Geusic, Phys. Rev. Lett. **21**, 1172 (1968). R. I. Epstein, M. I. Buchwald, B. C. Edwards, T. R. Gosnell, and C. E. Mungan, Nature **377**, 500-503 (1995). C. W. Hoyt, M. P. Hasselbeck, M. Sheik-Bahae, R. I. Epstein, S. Greenfield, J. Thiede, J. Distel, and J. Valencia, J. Opt. Soc. Am. B **20**, 1066 (2003) and references therein; A. Rayner, N. R. Heckenberg, and H. Rubinsztein-Dunlop, J. Opt. Soc. Am. B **20**, 1037 (2003) and references therein. S. R. Bowman and C. E. Mungan, Appl. Phys. B **71**, 807 (2000); R. I. Epstein, J. J. Brown, B. C. Edwards, and A. Gibbs, Appl. Phys. Lett. **90**, 4815 (2001). A. Mendioroz, J. Fernandez, M. Voda, M. Al-Saleh, R. Balda, A. J. Garcia-Adeva, Opt. Lett. **27**, 1525 (2002). C. W. Hoyt, M. Sheik-Bahae, R. I. Epstein, B. C. Edwards, J. E. Anderson, Phys. Rev. Lett. **85**, 3600 (2000). J. Fernandez, A. Mendioroz, A. J. Garcia, R. Balda, J. L. Adam, Phys Rev. B, **62**, 3213 (2000). R. Balda, A. J. Garcia-Adeva, M. Voda, and J. Fernandez, Phys. Rev. B **69**, 205203 (2004). M. Voda, M. Al-Saleh, G. Lobera, R. Balda, J. Fernandez, Opt. Mat. **26**, 359 (2004). J. Fernandez, A. Mendioroz, A. J. Garcia, R. Balda, J. L. Adam, J. of Alloys and Compounds **323-324**, 239 (2001). W. B. Jackson, N. M. Amer, A. C. Boccara, and D. Fournier, Appl. Opt. **20**, 1333 (1981). S. V. Petrushkin and V. V. Samartsev, Theor. and Math. Phys. **126**, 136-145 (2001); S. N. Adrianov and V. V. Samartsev, Laser Phys. **9**, 1021 (1999). S. R. Bowman, IEEE J. Quantum Electron. **35**, 115 (1999).
---
abstract: 'Nearly all glass forming liquids display secondary relaxations, dynamical modes seemingly distinct from the primary alpha relaxations. We show that accounting for driving force fluctuations and the diversity of reconfiguring shapes in the random first order transition theory yields a low free energy tail on the activation barrier distribution which shares many of the features ascribed to secondary relaxations. While primary relaxation takes place through activated events involving compact regions, secondary relaxation corresponding to the tail is governed by more ramified, string-like, or percolation-like clusters of particles. These secondary relaxations merge with the primary relaxation peak becoming dominant near the dynamical crossover temperature $T_c$, where they smooth the transition between continuous dynamics described by mode-coupling theory and activated events.'
author:
- 'Jacob D. Stevenson'
- 'Peter G. Wolynes'
title: A universal origin for secondary relaxations in supercooled liquids and structural glasses
---
Diversity, a key feature of glassy systems, is most apparent in their relaxation properties. Dielectric, mechanical and calorimetric responses of supercooled liquids are not single exponentials in time, but manifest a distribution of relaxation times. The typical relaxation time grows upon cooling the liquid until it exceeds the preparation time, yielding a non-equilibrium glass, which can still relax but in an age-dependent fashion. In addition to the main relaxations that are responsible for the glass transition, supercooled liquids and structural glasses exhibit faster motions, some distinct enough in time scale from the typical relaxation to be called “secondary” relaxation processes[@adichtchev.2007; @kudlik.1999; @ngai.2004; @wang.2007; @lunkenheimer.2000]. These faster motions account for only a fraction of the relaxation amplitude in the liquid but become dominant features in the relaxation of otherwise frozen glass, where they are important to the mechanical properties. Secondary relaxation processes in the solvation shell of proteins also play a prominent role in protein dynamics[@frauenfelder.2009].
The phenomenology of secondary relaxation has been much discussed but, owing especially to the problem of how to subtract the main peak, the patterns observed seem to be more complex and system specific than those for the main glassy relaxation. Some of the secondary relaxation motions are, doubtless, chemically specific, occurring on the shortest length scales. Nevertheless the presence of secondary relaxation in glassy systems is nearly universal[@thayyil.2008]. In this paper we will show how secondary relaxations naturally arise in the random first order transition (RFOT) theory of glasses[@lubchenko.2007] and are predicted to scale in intensity and frequency in a manner consistent with observation.
The RFOT theory is based on the notion that there is a diversity of locally frozen free energy minima that can inter-convert via activated transitions. The inter-conversions are driven by an extensive configurational entropy. RFOT theory accounts for the well-known correlations between the primary relaxation time scale in supercooled liquids and thermodynamics[@xia.2000; @stevenson.2005] as well as the aging behavior in the glassy state[@lubchenko.2004]. By taking account of local fluctuations in the driving force, RFOT theory also gives a good account of the breadth of the rate distribution of the main relaxation[@xia.2001; @dzero.2008]. Here we will argue that RFOT theory suggests that, universally, a secondary relaxation will also appear and that its intensity and shape depend on the configurational thermodynamics of the liquid. This relaxation corresponds to the low free energy tail of the activation barrier distribution. The distinct character of this tail comes about because the geometry of the reconfiguring regions for low-barrier transitions is different from that of the rearranging regions responsible for the main relaxation. Near the laboratory $T_g$, the primary relaxation process involves reconfiguring a rather compact cluster, but the reconfiguring clusters become more ramified as the temperature is raised, eventually resembling percolation clusters or strings near the dynamical crossover to mode-coupling behavior, which is identified with the onset of non-activated motions[@stevenson.2006]. Reconfiguration events of the more extended type are more susceptible to fluctuations in the local driving force, even away from the crossover. These ramified or “stringy” reconfiguration events thus dominate the low-barrier tail of the activation energy distribution.
When the shape distribution of reconfiguration processes is accounted for, a simple statistical computation shows that a two-peaked distribution of barriers can arise. This calculation motivates a more explicit but approximate theory that gives analytical expressions for the distribution of relaxation times in the tail. In keeping with experiment, the theory predicts that the secondary relaxation motions are actually most numerous near the crossover but, of course, also merge in time scale with the main relaxation peak at that crossover. Furthermore, the relaxation time distribution for secondary relaxations is predicted to be described by an asymptotic power law. The theory is easily extended to the aging regime, where these secondary relaxations can dominate the rearranging motions.
In RFOT theory, above the glass transition temperature, the entropic advantage of exploring phase space, manifested as a driving force for reconfiguration, is balanced by a mismatch energy at the interface between adjacent metastable states. For a flat interface in the deeply supercooled regime the mismatch energy can be described by a surface tension that can be estimated from the entropy cost of localizing a bead[@kirkpatrick.1989; @xia.2000], $\sigma_0 = (3/4) k_B T r_0^{-2} \ln [1/(d_L^2 \pi e)]$, where $d_L$ is the Lindemann length, the magnitude of particle fluctuations necessary to break up a solid structure, which is nearly universally a tenth of the inter-particle spacing ($d_L = 0.1 r_0$). The free energy profile for reconfiguration events resembles that of nucleation theory at first-order transitions but is conceptually quite distinct. Following Stevenson-Schmalian-Wolynes (SSW)[@stevenson.2006], the free energy cost of an $N$ particle cluster with surface area $\Sigma$ making a structural transition to a new metastable state may be written
$$F(N, \Sigma ) = \Sigma \sigma_0 - N k_B T s_c - k_B T
\ln \Omega(N, \Sigma) - \sum_{\textrm{particles}} \!\!\! \delta \! \tilde{f}
\label{eqn:full_profile}$$
A key element of the free energy profile is the shape entropy $k_B \ln \Omega(N, \Sigma)$, which accounts for the number of distinct ways to construct a cluster of $N$ particles having surface area $\Sigma$. At one extreme are compact, nearly spherical objects with shape entropy close to zero; at the other are objects such as percolation clusters or stringy chains, whose surface area and shape entropy both grow linearly with $N$. The last term of equation \[eqn:full\_profile\] accounts for the inherent spatial fluctuations in the disordered glassy system that give fluctuations in the driving force. We presently ignore local fluctuations in the surface mismatch free energy, but their inclusion would not qualitatively alter the results[@biroli.2008; @cammarota.2009; @dzero.2008]. We simplify by assuming uncorrelated disorder, so each particle joining the reconfiguration event is given a random energy, $\delta \tilde{f}$, drawn from a distribution of width ${\delta \! f}$. The r.m.s. magnitude of the driving force fluctuations above $T_g$ follows from the configurational heat capacity through the relation ${\delta \! f}\approx T \sqrt{\Delta C_p k_B}$, a result expected for large enough regions. Correlations in the disorder could be included, but we omit them for simplicity.
For nearly spherical reconfiguring regions forming compact clusters the shape entropy is very small and the mismatch free energy is $\sigma_0 4 \pi (3 N
/(4\pi \rho_0))^{\theta/3}$ with $\theta = 2$ if fluctuations are small. In disordered systems the mismatch free energies grow with exponent $\theta$ generally less than 2 reflecting preferred growth in regions of favorable energetics and the large number of metastable states which can wet the interface and reduce the effective surface tension. A renormalization group treatment of the wetting effect[@kirkpatrick.1989] suggests that $\theta =
3/2$ in the vicinity of an ideal glass transition. Incomplete wetting giving strictly $\theta=2$ only asymptotically would not change the numerics of the present theory much. Whether complete wetting occurs for supercooled liquids under laboratory conditions is still debated[@capaccioli.2008; @stevenson.2008a; @cammarota.2009]. The free energy profile describing reconfiguration events restricted to compact clusters becomes, then, $F_{\textrm{compact}}(N) = \sigma_0 4 \pi (3 N /(4\pi
\rho_0))^{\theta/3} - N T s_c$. The minimum number of particles participating in a reconfiguration event is determined by finding where the free energy profile crosses zero. For $\theta = 3/2$ the activation free energy barrier is inversely proportional to the configurational entropy, leading to the Adam-Gibbs[@adam.1965] relation for the most probable relaxation time ${F^{\ddagger}}_{\alpha} / k_BT \sim \ln \tau_{\alpha} / \tau_0 \sim s_c^{-1}$. Adding fluctuations to the profile of compact reconfiguration events yields an approximate Gaussian distribution of barriers with width scaling as $\sqrt{N^{\ddagger}} {\delta \! f}$. Xia and Wolynes[@xia.2001], and more explicitly Bhattacharyya et al.[@bhattacharyya.2008], have shown that, with the inclusion of facilitation effects, the resulting barrier distribution accounts for the stretching of the main relaxation process and yields good estimates for how the stretching exponent varies with liquid fragility.
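To make the numbers behind this zero-crossing construction concrete, the sketch below evaluates the compact-cluster profile $F_{\textrm{compact}}(N)$ in units $k_BT = 1$, $r_0 = 1$, $\rho_0 = 1$ bead per $r_0^3$ and locates the barrier top and the zero crossing; the chosen value of $s_c$ is an illustrative input, not a result of the paper.

```python
# Minimal numerical sketch of the compact-cluster free energy profile
# F(N) = sigma0*4*pi*(3N/(4*pi*rho0))**(theta/3) - N*T*sc, in units where
# k_B*T = 1, r0 = 1 and rho0 = 1 bead/r0^3.  sc is an assumed value.
import numpy as np

d_L = 0.1                                                # Lindemann ratio
sigma0 = 0.75 * np.log(1.0 / (d_L**2 * np.pi * np.e))    # surface tension, k_BT/r0^2
theta = 1.5                                              # wetted-interface exponent

def F_compact(N, sc):
    return sigma0 * 4.0 * np.pi * (3.0 * N / (4.0 * np.pi))**(theta / 3.0) - N * sc

sc = 1.0                              # assumed configurational entropy, k_B per bead
N = np.linspace(1.0, 400.0, 4000)
F = F_compact(N, sc)

F_barrier = F.max()                   # activation barrier (in k_BT)
N_barrier = N[np.argmax(F)]           # cluster size at the barrier top
N_star = N[np.argmax(F < 0.0)]        # zero crossing (valid if it lies inside the grid)
print(f"barrier ~ {F_barrier:.0f} k_BT at N ~ {N_barrier:.0f}; "
      f"zero crossing N* ~ {N_star:.0f} beads")
```

With this illustrative choice of $s_c$ the barrier comes out at a few tens of $k_BT$ and the reconfiguring region contains on the order of a hundred beads, comparable to the barriers and region sizes commonly quoted near the laboratory glass transition.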
![image](example_rand2d_1.pdf){width="80.00000%"}
![Probability distribution of free energy barriers governing relaxation events in supercooled liquids. Different curves represent different temperatures, increasing from near the glass transition temperature to near the dynamical crossover temperature. The arrows indicate the typical relaxation time predicted from the fuzzy sphere model without fluctuations. The dashed line gives the distribution of free energy barriers for a liquid just above the dynamical crossover temperature, where primary relaxations disappear, leaving only the secondary process. The top panel corresponds to a rather strong liquid with small fluctuations, $\Delta C_P \approx 1 k_B $ per bead (a material similar to GeO$_2$). The bottom panel corresponds to a fragile liquid with larger fluctuations, $\Delta C_P = 3 k_B $ per bead (a material similar to ortho-terphenyl). Secondary relaxations, i.e. relaxation events using string-like rearranging regions, increase in prominence as the temperature is increased, becoming the dominant process near the dynamical crossover temperature, where the two peaks merge. The activation energies are given in units of $k_BT$, which assumes a mismatch penalty primarily entropic in nature, $\sigma_0 \sim k_BT$. An energetic mismatch penalty, $\sigma_0 \sim k_BT_K$, would lead to Arrhenius behavior for the secondary relaxation process, as the distribution is peaked around the minimum free energy to initiate a stringy reconfiguration, ${F_{\mathrm{in}}}$. In this calculation we have used the continuous approximation of ${F_{\mathrm{in}}}$ shown in the text. The facilitation phenomenon, as described by Xia and Wolynes and by Bhattacharyya et al.[@bhattacharyya.2008] but not accounted for here, would shift weight from the largest free energies to the center of the primary peak, raising the overall height of the primary peak relative to the secondary peak. []{data-label="fig:fuzzy_prob"}](example_dsc1_0-1_75.pdf){width="48.00000%"}
Restricting the reconfiguration events to stringy clusters (using percolation clusters gives very similar results) yields a free energy profile linear in the number of particles reconfigured, save for the minimum cost ${F_{\mathrm{in}}}$ to begin to reconfigure a region:
$$F_{\textrm{string}}(N) = -N T (s_c - {s_c^{\mathrm{string}}}) + {F_{\mathrm{in}}}.
\label{eqn:string_profile}$$
The critical “entropy” is given by $T {s_c^{\mathrm{string}}}= v_{int} (z-2) - k_B
T \ln (z-5) \approx 1.13 k_B T$. This is the difference between the surface energy written in terms of the coordination number of the random close packed lattice $z \approx 12$ ($v_{int}$ is the surface tension per nearest neighbor) and the shape entropy including excluded volume effects[@flory.1953]. If a bead can individually reconfigure then the cost to begin to reconfigure is ${F_{\mathrm{in}}}= z v_{int} - T s_c \approx 2.5-2.9k_B T$. If two must be moved then ${F_{\mathrm{in}}}\approx 6.1k_B T$. The continuous form of the surface mismatch energy gives a somewhat higher value when applied to these small regions, giving ${F_{\mathrm{in}}}^{\textrm{continuous}} = r_0^2 \sigma_04 \pi (3/(4 \pi))^{\theta/3} -Ts_c
\approx 10.5k_B T$ for a one-particle reconfiguration. The remarkably simple free energy profile of equation \[eqn:string\_profile\] increases monotonically, so that below ${T_{\mathrm{string}}}$ (defined by $s_c ( {T_{\mathrm{string}}}) = {s_c^{\mathrm{string}}}$) reconfiguration via pure stringy objects is impossible. Above ${T_{\mathrm{string}}}$ the same process can occur with a very small free energy barrier, having only to overcome ${F_{\mathrm{in}}}$. Thus ${T_{\mathrm{string}}}$ signals the crossover from dynamics dominated by activated events to dynamics dominated by non-activated processes. Interestingly, this crossover is mathematically analogous to the Hagedorn transition of particle theory[@hagedorn.1965]. The predicted constant value of ${s_c^{\mathrm{string}}}$ is confirmed experimentally[@stevenson.2006]. In contrast to the situation for compact reconfiguration, driving force fluctuations dramatically alter the picture of stringy relaxation: a lucky sequence of fluctuations can easily push the nominally linearly increasing free energy profile to cross zero, so strings can be active below $T_c$.
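A quick way to see this is to sample the fluctuating string profile directly. The sketch below is our illustration, not the paper's calculation: it draws Gaussian per-particle fluctuations, counts how often the profile is pushed below zero within a cutoff length, and records the corresponding barrier, i.e. the largest excursion before the crossing. The values of $\phi$, ${\delta \! f}$, ${F_{\mathrm{in}}}$ and $N_{\mathrm{max}}$ are illustrative, and only rough agreement with the analytic estimate $\Psi$ given later in the text should be expected, since that estimate involves an additional average over ${F_{\mathrm{in}}}$ and a smoothed cutoff.

```python
# Monte Carlo sketch of the string profile with driving-force fluctuations:
# F(N) = F_in + sum_{i<=N} (phi + df_i), with df_i Gaussian of r.m.s. width
# delta_f (Gaussian statistics are an assumption).  We record the fraction of
# realizations pushed below zero within N_max steps and the barrier, i.e. the
# maximum of F before the first zero crossing.  All parameter values are
# illustrative.
import numpy as np

rng = np.random.default_rng(2)
phi, delta_f, F_in = 0.3, 1.0, 2.7   # string tension, fluctuation width, initiation cost (k_BT)
N_max, n_walks = 100, 50_000

steps = phi + delta_f * rng.standard_normal((n_walks, N_max))
F = F_in + np.cumsum(steps, axis=1)

neg = F < 0.0
crossed = neg.any(axis=1)            # realizations that relax via a string
first = np.argmax(neg, axis=1)       # step index of the first zero crossing

barriers = []
for i in np.flatnonzero(crossed):
    k = first[i]
    barriers.append(F_in if k == 0 else max(F_in, F[i, :k].max()))
barriers = np.array(barriers)        # samples of the low-barrier tail

psi_mc = crossed.mean()
psi_est = np.exp(-2.0 * phi * (F_in - phi) / delta_f**2)
print(f"string fraction: MC {psi_mc:.3f}, analytic estimate {psi_est:.3f}; "
      f"mean barrier {barriers.mean():.2f} k_BT")
```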
![Distribution of free energy barriers for a strong liquid ($\Delta
C_P \approx 1k_B$ per bead) separated into the contribution from secondary relaxations (red curves), corresponding to string-like reconfiguration as in panels b and c of figure \[fig:2d\_profile\], and primary relaxations (black curves), corresponding to compact reconfigurations. The full distributions are given for comparison (blue curves). The separation of the curves makes clear that, as the dynamical crossover temperature is approached, the primary relaxation becomes subordinate to the secondary relaxation.[]{data-label="fig:separate_strong"}](example_dsc1_separate.pdf){width="48.00000%"}
![The same results as in figure \[fig:separate\_strong\] but for a fragile liquid, one with $\Delta C_P \approx 3k_B$ per bead. []{data-label="fig:separate_fragile"}](example_dsc1_75_separate.pdf){width=".48\textwidth"}
![Distribution of free energy barriers for the secondary relaxation process from the statistical sampling of the fuzzy sphere model with fluctuations. The data correspond to a strong liquid ($\Delta C_P \approx
1 k_B$ per bead) and show that at higher temperatures (larger configurational entropies) the distribution decays more slowly. This leads to wider activation energy distributions, matching the expectations of the analytical calculations.[]{data-label="fig:decay"}](example_exp_decay_1.pdf){width="48.00000%"}
SSW[@stevenson.2006] introduced a crude model to estimate the shape entropy and surface area of the range of shapes between the extremes. This “fuzzy sphere” model consists of a compact center of $N_c$ particles with a stringy halo of $N_f$ particles. With this interpolative model, near $T_g$ the preferred shape of a reconfiguring region is largely compact, while the relevant regions become more ramified close to $T_c$. Fluctuations, aggregated cumulatively in the compact core yielding a variance proportional to $N_c$, and cumulatively in the stringy halo yielding a variance proportional to $N_f$, modify the SSW two-dimensional free energy landscape. The local free energy plots for several realizations of such accumulated fluctuations, assumed to have Gaussian statistics, are given in figure \[fig:2d\_profile\]. For some realizations compact reconfiguration will still be required to overcome the free energy barrier, but for other realizations the fluctuations are such that the free energy cost for reconfiguration crosses zero along the $N_f$ axis (shown in the figure in yellow), so the region is able to relax via a string-like reconfiguration event. These stringy rearranging clusters, stabilized by disorder, we argue, are key contributors to the secondary relaxation process. The statistics of the stable reconfiguration paths with lowest free energy barrier are summarized in figure \[fig:fuzzy\_prob\] with barrier distributions for two different values of ${\delta \! f}$ corresponding to a strong and a fragile liquid. We can disaggregate these distributions into the parts due, separately, to the compact events (“primary”) and to the string-like fluctuation-induced events (“secondary”). These distributions are shown in figures \[fig:separate\_strong\] and \[fig:separate\_fragile\]. At temperatures near $T_g$ compact primary relaxations dominate reconfiguration, but as the temperature increases, fluctuations are able to stabilize string-like reconfiguration more easily and the secondary relaxations increase in prominence. At the same time, with increasing temperature, the primary relaxation peak shifts to lower free energy barriers, making it difficult to distinguish between the two contributions. As $T_c$ is approached and crossed, the primary and secondary peaks merge, with string-like reconfiguration clusters becoming the dominant mode of relaxation. For fragile liquids, i.e. liquids with larger configurational entropy fluctuations[@stevenson.2005], the secondary relaxation peak is generally more important than for strong liquids. Both peaks are broader and begin to merge at lower temperatures for the more fragile liquids. Because facilitation effects are not explicitly accounted for (these would mostly affect the higher barriers), it is not easy to compare these predicted distributions quantitatively with experiment. In addition, the number of reconfiguring particles in the two peaks is different, so their contributions to the measured amplitudes are different as well. Nevertheless, the predicted magnitude of the secondary relaxation peak, as compared to the primary peak, seems to be somewhat larger than experiments apparently show. This disparity is more pronounced for fragile materials. The assumptions in the fuzzy sphere model, and especially the assumption of uncorrelated disorder, apparently overestimate the influence of the fluctuations, which are probably (anti-)correlated for the most fragile systems.
The barrier distribution for reconfiguration events that take an ideal stringy form can be calculated explicitly. A similar analysis can be applied to reconfiguration via percolation clusters, but with somewhat different numerical constants in the relation of the free energy profile to $N$. This analytic calculation resembles that of Plotkin and Wolynes for the “buffing” of protein folding energy landscapes[@plotkin.2003]. The key to the calculation of $\Gamma({F^{\ddagger}})$, the distribution of free energy barriers for events with fewer than ${N_{\mathrm{max}}}$ displaced particles, lies in mapping the problem onto a random walk, or a diffusion process in free energy space. Going to the limit of a continuous number of particles we may write a stochastic differential equation for the free energy profile $d F / d N = d F_{\mathrm{string}} / d N +
\delta \tilde{f}$. The principal quantity to compute is $G( N, {F^{\ddagger}}; {F_{\mathrm{in}}})$, the probability that a reconfiguration path of $N$ particles has free energy $F$ if the cost to initiate the reconfiguration event is ${F_{\mathrm{in}}}$. The evolution of $G$, that follows from the stochastic profile, is described by a diffusion equation with drift subject to absorbing boundary conditions at both $F=0$ and $F={F^{\ddagger}}$. These boundary conditions permit the calculation of the distribution of free energy barriers by keeping track of the maximum excursion of the random walk.
$${\frac{\partial G}{\partial N}} + {\phi}{\frac{\partial G}{\partial F}} = \frac{1}{2} {\delta \! f}^2
{\frac{\partial ^2 G}{\partial F^2}},
\label{eqn:diffusion}$$
The slope of the mean free energy profile ${\phi}= T({s_c^{\mathrm{string}}}- s_c)$ depends simply on the proximity to the string transition. ${\phi}$ is a string tension reflecting the free energy cost of lengthening a string. The probability density for the maximum excursion of $F$ is then
$$\Gamma ( {F^{\ddagger}}) = - \frac{\partial }{\partial {F^{\ddagger}}}
\int_0^{N_{max}} \!\!\!\!\!\! d N \left< \frac{{\delta \! f}^2}{2} \left.
\frac{\partial G}{\partial F} \right|_{F=0} \right>_{0 < {F_{\mathrm{in}}}< {F^{\ddagger}}}.
\label{eqn:gamma}$$
The average $\langle \cdot \rangle _{0 < {F_{\mathrm{in}}}< {F^{\ddagger}}}$ is present to integrate over the fluctuations in the free energy cost of initiating a string, thereby capturing the statistics of the smallest possible activation barriers. The derivative with respect to ${F^{\ddagger}}$ converts the cumulative probability that the free energy barrier is below ${F^{\ddagger}}$ into the probability that the free energy barrier lies between ${F^{\ddagger}}$ and ${F^{\ddagger}}+ d {F^{\ddagger}}$. $G$ can be calculated explicitly by solving the diffusion equation using the method of images. The result may be represented in closed form in terms of the Jacobi theta function; however, we leave the sum explicit in order to examine the asymptotics more easily:
$$\begin{split}
G = & \frac{e^{\frac{{\phi}}{ {\delta \! f}^2} (F-{F_{\mathrm{in}}}-{\phi}N/2)}}{\sqrt{2 \pi
{\delta \! f}^2 N}} \\
\times &\sum_{n=-\infty}^{\infty} \bigg[
e^{-\frac{(2 n {F^{\ddagger}}+ F - {F_{\mathrm{in}}})^2}{2 {\delta \! f}^2 N}} - e^{-\frac{(2 n {F^{\ddagger}}+ F + {F_{\mathrm{in}}})^2}{2 {\delta \! f}^2 N}} \bigg]
\end{split}
\label{eqn:greens_final}$$
In the integral of equation \[eqn:gamma\] the cutoff ${N_{\mathrm{max}}}$ reflects the maximum size to which a stringy reconfiguration event would typically grow before compact rearrangements dominate. We estimate this maximum length as ${N_{\mathrm{max}}}\approx {F^{\ddagger}}_{\alpha} / {\phi}$, since certainly by that length the most important reconfiguration events would be compact. We can simplify by smoothing the cutoff so that the finite integral $\int_0^{{N_{\mathrm{max}}}} \!\! dN \cdot
$ is replaced by $\int_0^{\infty} \!\! dN \exp(-N/{N_{\mathrm{max}}}) \cdot $. The smoothed distribution of free energy barriers follows directly from equations \[eqn:gamma\] and \[eqn:greens\_final\]
$$\begin{split}
\Gamma = &{\frac{\partial }{\partial {F^{\ddagger}}}} \Bigg< \exp\left( -\frac{{F_{\mathrm{in}}}{\phi}}{{\delta \! f}^2} - \frac{1}{2} {F_{\mathrm{in}}}q \right) \\
&\times
\left(1- \frac{\exp({F_{\mathrm{in}}}q)-1}{\exp({F^{\ddagger}}q)-1 } \right) \Bigg>_{0<{F_{\mathrm{in}}}<{F^{\ddagger}}}\\
&\textrm{where, } q \equiv \frac{2}{{\delta \! f}} \sqrt{ \frac{2 {\phi}}{F^{\ddagger}_{\alpha} } + \frac{{\phi}^2}{{\delta \! f}^2}}
\end{split}
\label{eqn:gamma_solved}$$
The result involves nothing more complicated than exponentials and error functions.
The total magnitude of the secondary relaxation peak is estimated by calculating the probability that fluctuations can stabilize a stringy reconfiguration for any size barrier less than ${F^{\ddagger}}_{\alpha}$. Integrating $\Gamma$ over ${F^{\ddagger}}$ yields
$$\Psi \approx \exp \left\{ -\frac{2{\phi}\left( {F_{\mathrm{in}}}- {\phi}\right)}{ {\delta \! f}^2}
\right\}
\label{}$$
$\Psi$ increases with temperature as the dynamical crossover at ${T_{\mathrm{string}}}$ is approached, a trend that is validated experimentally[@wiedersich.1999]. At the crossover temperature and above, this secondary relaxation becomes the only remaining mode of activated relaxation. The sharp transition from activated to non-activated motions at $T_c$ that is predicted by the non-fluctuating RFOT theory, as well as by mode coupling theory[@leutheusser.1984; @gotze.1992] and the mean field theory of supercooled liquids[@singh.1985; @franz.2006; @mezard.2000], is therefore smoothed out by the string-like activated events made possible by fluctuations and exhibits no divergent critical behavior at ${s_c^{\mathrm{string}}}$[@bhattacharyya.2008]. In this temperature regime the secondary beta relaxations of mode coupling theory would be present and overlap in frequency with the string-like activated secondary relaxations, perhaps making the differentiation of the processes difficult.
$\Gamma$ can be approximated by the Gumbel extreme value distribution function[@bertin.2005]. For ${F^{\ddagger}}> {F_{\mathrm{in}}}$ the barrier distribution decays exponentially as
$$\Gamma ({F^{\ddagger}}>{F_{\mathrm{in}}}) \sim \exp (- {F^{\ddagger}}q)
\approx \exp \left( -2
\frac{{F^{\ddagger}}{\phi}}{{\delta \! f}^2} \right)
\label{eqn:decay}$$
The results agree with the sampled distribution of barriers for the string-like reconfiguration events alone (shown in figure \[fig:decay\]). Using the fact that $\tau = \tau_0 \exp({F^{\ddagger}}/k_B T)$, equation \[eqn:decay\] gives a power-law distribution of relaxation times $P(\tau) \sim
\tau^{-\gamma}$ where $\gamma \approx 2 ({s_c^{\mathrm{string}}}- s_c)/\Delta C_P + 1$. Well above $T_g$ the high barrier side of the secondary relaxation blends in with the primary relaxation peak. Thus the secondary relaxation from ramified reconfiguration events often appears as only a “wing” on the main distribution[@blochowicz.2003; @blochowicz.2006].
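For completeness, the change of variables behind this power law can be written out; this is simply a restatement of the step quoted above, using the asymptotic decay constant $q \approx 2 {\phi}/ {\delta \! f}^2$ of equation \[eqn:decay\] together with ${\phi}= T({s_c^{\mathrm{string}}}- s_c)$ and ${\delta \! f}^2 \approx k_B T^2 \Delta C_P$:
$$P(\tau)\, d\tau = \Gamma ({F^{\ddagger}})\, d{F^{\ddagger}}, \qquad
{F^{\ddagger}}= k_B T \ln (\tau/\tau_0), \qquad
d{F^{\ddagger}}= k_B T\, \frac{d\tau}{\tau},$$
so that
$$P(\tau) \propto e^{-q\, k_B T \ln (\tau/\tau_0)}\, \frac{1}{\tau}
\propto \tau^{-(q k_B T + 1)}, \qquad
\gamma = q k_B T + 1 \approx \frac{2\, ({s_c^{\mathrm{string}}}- s_c)}{\Delta C_P} + 1 .$$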
In the aging glass, the picture of secondary relaxation is slightly modified. If the liquid fell out of equilibrium at $T_f$ then the frozen-in structure has an average excess energy per particle $\epsilon (T_f) = \epsilon(T_K) +
\int_{T_K}^{T_f} dT \Delta C_P(T)$. At temperatures $T<T_f$ a region of the liquid undergoing reconfiguration would relax to a structure with average energy $\epsilon (T) < \epsilon (T_f)$. Thus the driving force for reconfiguration gains an energetic contribution and the configuration entropy in equation \[eqn:full\_profile\] is replaced by $Ts_c \to (Ts_c + \Delta
\epsilon)$ where $\Delta \epsilon = \epsilon(T_f) - \epsilon(T) =
\int_{T}^{T_f} dT' \Delta C_P(T')$. Lubchenko and Wolynes[@lubchenko.2004] have shown that this additional driving force results in a change in slope of the typical relaxation time as a function of temperature, and a transition to nearly Arrhenius behavior as the system falls out of equilibrium. Correspondingly, falling out of equilibrium causes the string tension to be reduced by an amount $\Delta \epsilon$, giving ${\phi}= T{s_c^{\mathrm{string}}}- Ts_c - \Delta
\epsilon$ and making the system appear closer to the dynamical crossover than an equilibrated system at the same temperature. Furthermore, the driving force fluctuations are frozen in as the aging glass falls out of equilibrium, becoming largely independent of temperature. These changes broaden and flatten the barrier distribution as the temperature is lowered, giving
$$\Gamma ({F^{\ddagger}}>{F_{\mathrm{in}}}) \sim \exp \left( -2 \frac{{F^{\ddagger}}(T{s_c^{\mathrm{string}}}- Ts_c - \Delta
\epsilon) }{T_f^2 k_B \Delta C_P(T_f) } \right)$$
for large ${F^{\ddagger}}$. The secondary relaxation strength in the aging regime becomes
$$\begin{split}
\Psi \approx \exp \bigg\{ &-\frac{2(T{s_c^{\mathrm{string}}}- Ts_c - \Delta \epsilon) }{
T_f^2 k_B \Delta C_P(T_f)} \\
& \times \big( {F_{\mathrm{in}}}- (T{s_c^{\mathrm{string}}}- Ts_c - \Delta \epsilon) \big) \bigg\} .
\end{split}
\label{}$$
In the limit $T \to 0$ the distribution of barriers becomes largely independent of temperature, with $\Gamma({F^{\ddagger}}> {F_{\mathrm{in}}}) \sim \exp(-\alpha {F^{\ddagger}})$ and $\alpha \approx ((z-2) v_{int}(T_f) -\Delta \epsilon)/(T_f k_B \Delta C_P(T_f))$. For a broad enough distribution of barriers the dielectric absorption spectrum is determined through the simple relation $\epsilon''(\omega) \sim P({F^{\ddagger}}= -k_B T \ln \omega / \omega_0) \sim \omega^{\alpha T}$, and becomes flat at low temperatures, resembling the so-called constant loss spectrum. In a rejuvenating glass, an aged system that is heated to a temperature above $T_f$, the energetic contribution to the driving force is negative, $\Delta \epsilon < 0$. In this situation the system appears as if it were further from the dynamical crossover temperature than an equilibrated system at the same temperature, and secondary relaxations are relatively suppressed.
Nearly all glass forming liquids display secondary relaxations, dynamical modes seemingly distinct from the primary alpha relaxations. We have shown that by adding fluctuations to the existing structure of random first order transition theory a tail develops on the low free energy side of the activation barrier distribution which shares many of the observed features of the secondary relaxations. The relaxation process responsible for the tail differs from the primary relaxation mechanism in the geometry of the region undergoing cooperative reconfiguration. While primary relaxation takes place through activated events involving compact regions, secondary relaxation is governed by more ramified, string-like or percolation-like clusters of particles. Although the existence of secondary relaxation is nearly universal, the relevant motions occur on shorter length scales than those for primary relaxation, allowing additional material-dependent effects and, perhaps, a less universal quantitative description than for the main relaxation. The present theory, however, suggests a universal mechanism for secondary relaxation, and it points out some general trends in the way these relaxations vary with temperature and substance which conform to observation.
Support from NSF grant CHE0317017 and NIH grant 5R01GM44557 is gratefully acknowledged. Encouraging discussions on this topic with Vas Lubchenko, Hans Frauenfelder, and Jörg Schmalian are also gratefully acknowledged.
[10]{}
*et al.* . [**](http://dx.doi.org/10.1016/j.jnoncrysol.2007.02.057) ****, ().
, , , & . [**](http://dx.doi.org/10.1016/S0022-2860(98)00871-0) ****, ().
& . [**](http://dx.doi.org/10.1103/PhysRevE.69.031501) ****, ().
& . [**](http://dx.doi.org/10.1103/PhysRevB.76.064201) ****, ().
, , & . [**](http://dx.doi.org/10.1080/001075100181259) ****, ().
*et al.* . [**](http://dx.doi.org/10.1073/pnas.0900336106) ().
, , & [**](http://dx.doi.org/10.1080/14786430802270082) ****, ().
& . [**](http://dx.doi.org/10.1146/annurev.physchem.58.032806.104653) ****, ().
& . [**](http://www.pnas.org/content/97/7/2990.abstract) ****, ().
& . [**](http://dx.doi.org/10.1021/jp052279h) ****, ().
& . [**](http://link.aip.org/link/?JCPSA6/121/2852/1) ****, ().
& . [**](http://link.aps.org/abstract/PRL/v86/p5526) ****, ().
, & . [**](http://arxiv.org/abs/0809.3988) ().
, & . [**](http://dx.doi.org/10.1038/nphys261) ****, ().
, & . [**](http://link.aps.org/abstract/PRA/v40/p1045) ****, ().
, , , & . [**](http://dx.doi.org/10.1038/nphys1050) ****, ().
, , , & . [**](http://arxiv.org/abs/0904.1522) (). .
, & . [**](http://dx.doi.org/10.1021/jp802097u) ****, ().
, , & . [**](http://link.aip.org/link/?JCP/129/194505/1) ****, ().
& . [**](http://dx.doi.org/10.1063/1.1696442) ****, ().
, & . [**](http://dx.doi.org/10.1073/pnas.0808375105) ****, ().
** (, ).
. ** ****, ().
& . [**](http://dx.doi.org/10.1073/pnas.0330720100) ****, ().
*et al.* . [**](http://dx.doi.org/10.1088/0953-8984/11/10A/010) ****, ().
. [**](http://dx.doi.org/10.1103/PhysRevA.29.2765) ****, ().
& . [**](http://dx.doi.org/10.1088/0034-4885/55/3/001) ****, ().
, & . [**](http://link.aps.org/abstract/PRL/v54/p1059) ****, ().
. [**](http://stacks.iop.org/0295-5075/73/492) ****, ().
& . [**](http://dx.doi.org/10.1088/0953-8984/12/29/336) ****, ().
. [**](http://dx.doi.org/10.1103/PhysRevLett.95.170601) ****, ().
, , & . [**](http://dx.doi.org/10.1063/1.1563247) ****, ().
, , , & . [**](http://dx.doi.org/10.1063/1.2178316) ****, ().
---
abstract: 'The Coulomb interaction between the two protons is included in the calculation of proton-deuteron breakup and of three-body electromagnetic disintegration of ${}^3\mathrm{He}$. The hadron dynamics is based on the purely nucleonic charge-dependent (CD) Bonn potential and its realistic extension CD Bonn + $\Delta$ to a coupled-channel two-baryon potential, allowing for single virtual $\Delta$-isobar excitation. Calculations are done using integral equations in momentum space. The screening and renormalization approach is employed for including the Coulomb interaction. Convergence of the procedure is found at moderate screening radii. The reliability of the method is demonstrated. The Coulomb effect on breakup observables is seen at all energies in particular kinematic regimes.'
author:
- 'A. Deltuva'
- 'A. C. Fonseca'
- 'P. U. Sauer'
title: 'Momentum-space description of three-nucleon breakup reactions including the Coulomb interaction'
---
[^1]
Introduction \[sec:intro\]
==========================
The inclusion of the Coulomb interaction in the description of the three-nucleon continuum is one of the most challenging tasks in theoretical few-body nuclear physics [@alt:02a]. Whereas it has already been solved for elastic proton-deuteron $(pd)$ scattering with realistic hadronic interactions using various procedures [@alt:02a; @kievsky:01a; @chen:01a; @ishikawa:03a; @deltuva:05a], there are only very few attempts [@alt:94a; @kievsky:97a; @suslov:04a] to calculate $pd$ breakup, and none of them uses a complete treatment of the Coulomb interaction and realistic hadronic potentials allowing for a stringent comparison with the experimental data.
Recently in [Ref.]{} [@deltuva:05a] we included the Coulomb interaction between the protons in the description of three-nucleon reactions with two-body initial and final states. The description is based on the Alt-Grassberger-Sandhas (AGS) equation [@alt:67a] in momentum space. The Coulomb potential is screened and the resulting scattering amplitudes are corrected by the renormalization technique of [Refs.]{} [@taylor:74a; @alt:78a] to recover the unscreened limit. The treatment is applicable to any two-nucleon potential without separable expansion. Reference [@deltuva:05a] and this paper use the purely nucleonic charge-dependent (CD) Bonn potential [@machleidt:01a] and its coupled-channel extension CD Bonn + $\Delta$ [@deltuva:03c], allowing for a single virtual $\Delta$-isobar excitation and fitted to the experimental data with the same degree of accuracy as CD Bonn itself. In the three-nucleon system the $\Delta$ isobar mediates an effective three-nucleon force and effective two- and three-nucleon currents, both consistent with the underlying two-nucleon force. The treatment of [Ref.]{} [@deltuva:05a] is technically highly successful, but still limited to the description of proton-deuteron $(pd)$ elastic scattering and of electromagnetic (e.m.) reactions involving ${{}^3\mathrm{He}}$ with $pd$ initial or final states only. This paper extends the treatment of Coulomb to breakup in $pd$ scattering and to e.m. three-body disintegration of ${{}^3\mathrm{He}}$. In that extension we follow the ideas of [Refs.]{} [@taylor:74a; @alt:78a; @alt:94a], but avoid approximations on the hadronic potential and in the treatment of screened Coulomb. Thus, our three-particle equations, including the screened Coulomb potential, are completely different from the quasiparticle equations solved in [Ref.]{} [@alt:94a] where the two-nucleon screened Coulomb transition matrix is approximated by the screened Coulomb potential. In [Ref.]{} [@deltuva:05c] we presented for the first time a limited set of results for $pd$ breakup using the same technical developments we explain here in greater detail.
We recall that the screened Coulomb potential $w_R$ we work with has a particular form. It is screened around the separation $r=R$ between two charged baryons and in configuration space is given by $$\begin{gathered}
\label{eq:wr}
w_R(r) = w(r) \; e^{-(r/R)^n},\end{gathered}$$ with the true Coulomb potential $w(r)=\alpha_e/r$, $\alpha_e$ being the fine structure constant and $n$ controlling the smoothness of the screening. We prefer to work with a sharper screening than the Yukawa screening $(n=1)$ of [Ref.]{} [@alt:94a]. We want to ensure that the screened Coulomb potential $w_R$ approximates well the true Coulomb one $w$ for distances $r<R$ and simultaneously vanishes rapidly for $r>R$, providing a comparatively fast convergence of the partial-wave expansion. In contrast, the sharp cutoff $(n \to \infty)$ yields an unpleasant oscillatory behavior in the momentum-space representation, leading to convergence problems. We find the values $3 \le n \le 6$ to provide a sufficiently smooth, but at the same time a sufficiently rapid screening around $r=R$ like in [Ref.]{} [@deltuva:05a]; $n=4$ is our choice for the results of this paper. The screening radius $R$ is chosen much larger than the range of the strong interaction which is of the order of the pion wavelength $\hbar/m_\pi c \approx 1.4{\;\mathrm{fm}}$. Nevertheless, the screened Coulomb potential $w_R$ is of short range in the sense of scattering theory. Standard scattering theory is therefore applicable. A reliable technique [@deltuva:03a] for solving the AGS equation [@alt:67a] with short-range interactions is extended in [Ref.]{} [@deltuva:05a] to include the screened Coulomb potential between the charged baryons. However, the partial-wave expansion of the pair interaction requires much higher angular momenta than the one of the strong two-nucleon potential alone.
The screening radius $R$ will always remain very small compared with nuclear screening distances of atomic scale, i.e., $10^5{\;\mathrm{fm}}$. Thus, the employed screened Coulomb potential $w_R$ is unable to simulate properly the physics of nuclear screening and, even more so, all features of the true Coulomb potential. The approximate breakup calculations with screened Coulomb $w_R$ therefore have to be corrected for their shortcomings in a controlled way. References [@taylor:74a; @alt:78a] give the prescription for the correction procedure, which we follow here for breakup as we did previously for elastic scattering; it involves the renormalization of the on-shell amplitudes in order to recover the proper unscreened Coulomb limit. After the indicated corrections, the predictions for breakup observables have to be independent of the choice of the screening radius $R$, provided it is chosen sufficiently large. That convergence will be the internal criterion for the reliability of our Coulomb treatment.
Configuration space treatments of Coulomb [@kievsky:97a; @suslov:04a] may provide a viable alternative to the integral equation approach in momentum space on which this paper is based. References [@kievsky:97a; @suslov:04a] have provided first results for $pd$ breakup, but they still involve approximations in the treatment of Coulomb and the employed hadronic dynamics is not realistic. Thus, a benchmark comparison between our breakup results and corresponding configuration space results is, in contrast to $pd$ elastic scattering [@deltuva:05b], not possible yet. With respect to the reliability of our Coulomb treatment for breakup, we rely solely on our internal criterion, i.e., the convergence of breakup observables with the screening radius $R$; however, that criterion was absolutely reliable for $pd$ elastic scattering and related e.m. reactions.
Section \[sec:th\] develops the technical apparatus underlying the calculations. Section \[sec:res\] presents some characteristic effects of Coulomb in three-nucleon breakup reactions. Section \[sec:concl\] gives our conclusions.
Treatment of Coulomb interaction between protons in breakup \[sec:th\]
======================================================================
This section carries over the treatment of the Coulomb interaction given in [Ref.]{} [@deltuva:05a] for $pd$ elastic scattering and corresponding e.m. reactions, to $pd$ breakup and to e.m. three-body disintegration of ${{}^3\mathrm{He}}$. It establishes a theoretical procedure leading to a calculational scheme. The discussions of hadronic and e.m. reactions are done separately.
Theoretical framework for the description of proton-deuteron breakup with Coulomb \[sec:thpdb\]
-----------------------------------------------------------------------------------------------
This section focuses on $pd$ breakup. However, the transition matrices for elastic scattering and breakup are so closely connected that certain relations between scattering operators already developed in [Ref.]{} [@deltuva:05a] have to be recalled to make this paper self-contained.
Each pair of nucleons $(\beta \gamma)$ interacts through the strong coupled-channel potential $v_\alpha$ and the Coulomb potential $w_\alpha$. We assume that $w_\alpha$ acts formally between all pairs $(\beta \gamma)$ of particles, but it is nonzero only for states with two charged baryons, i.e., $pp$ and $p\Delta^+$ states. We introduce the full resolvent $G^{(R)}(Z)$ for the auxiliary situation in which the Coulomb potential $w_\alpha$ is screened with a screening radius $R$, $w_\alpha$ being replaced by $w_{\alpha R}$, $$\begin{gathered}
\label{eq:GR1}
G^{(R)}(Z) = (Z - H_0 - \sum_\sigma v_\sigma - \sum_\sigma w_{\sigma R})^{-1},\end{gathered}$$ where $H_0$ is the three-particle kinetic energy operator. The full resolvent yields the full $pd$ scattering state when acting on the initial channel state $ |\phi_\alpha ({{\mathbf{q}}}_i) \nu_{\alpha_i} \rangle $ of relative $pd$ momentum ${{\mathbf{q}}}_i$, energy $E_\alpha(q_i)$ and additional discrete quantum numbers $\nu_{\alpha_i}$ and taking the appropriate limit $Z = E_\alpha(q_i) + i0$. The full $pd$ scattering state has, above breakup threshold, components corresponding to the final breakup channel states $ |\phi_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} \rangle $, $\, {{\mathbf{p}}}_f$ and ${{\mathbf{q}}}_f$ being three-nucleon Jacobi momenta, $E_0 (p_f q_f)$ its energy, and $\nu_{0_f}$ additional discrete quantum numbers. The full resolvent therefore also yields the desired $S$ matrix for breakup. The full resolvent $G^{(R)}(Z)$ depends on the screening radius $R$ for the Coulomb potential and that dependence is notationally indicated; the same will be done for operators related to $G^{(R)}(Z)$. Following standard AGS notation [@alt:67a] of three-particle scattering, the full resolvent $G^{(R)}(Z)$ may be decomposed into channel resolvents and free resolvent
\[eq:GRa\] $$\begin{aligned}
G^{(R)}_\alpha (Z) = {} & (Z - H_0 - v_\alpha - w_{\alpha R})^{-1}, \\
G_0 (Z) = {} & (Z - H_0)^{-1},\end{aligned}$$
together with the full multichannel three-particle transition matrices $U^{(R)}_{\beta \alpha}(Z)$ for elastic scattering and $U^{(R)}_{0 \alpha}(Z)$ for breakup according to
\[eq:GU\] $$\begin{aligned}
\label{eq:GUa}
G^{(R)}(Z) = {} & \delta_{\beta \alpha} G^{(R)}_\alpha (Z) +
G^{(R)}_\beta (Z) U^{(R)}_{\beta \alpha}(Z) G^{(R)}_\alpha (Z), \\
\label{eq:GU0}
G^{(R)}(Z) = {} & G_0(Z) U^{(R)}_{0\alpha}(Z) G^{(R)}_\alpha (Z).
\end{aligned}$$
The full multichannel transition matrices satisfy the AGS equations [@alt:67a]
\[eq:UbaT\] $$\begin{aligned}
\label{eq:Uba}
U^{(R)}_{\beta \alpha}(Z) = {} & \bar{\delta}_{\beta \alpha} G_0^{-1}(Z)
+ \sum_{\sigma} \bar{\delta}_{\beta \sigma} T^{(R)}_\sigma (Z) G_0(Z)
U^{(R)}_{\sigma \alpha}(Z), \\
U^{(R)}_{0 \alpha}(Z) = {} & G_0^{-1}(Z)
+ \sum_{\sigma} T^{(R)}_\sigma (Z) G_0(Z) U^{(R)}_{\sigma \alpha}(Z),
\end{aligned}$$ with $\bar{\delta}_{\beta \alpha} = 1 - {\delta}_{\beta \alpha}$; the two-particle transition matrix $T^{(R)}_\alpha (Z)$ is derived from the full channel interaction $v_\alpha + w_{\alpha R}$ including screened Coulomb, i.e., $$\begin{aligned}
\label{eq:TR}
T^{(R)}_\alpha (Z) = {}& (v_\alpha + w_{\alpha R}) +
(v_\alpha + w_{\alpha R}) G_0(Z) T^{(R)}_\alpha (Z).
\end{aligned}$$
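The two-body equation above is of Lippmann–Schwinger type; numerically, such equations are discretized on a momentum grid and solved as linear systems. The following minimal sketch (not part of the actual coupled-channel calculation) illustrates this strategy for a toy single-partial-wave problem with a Yukawa-screened, Coulomb-like potential at negative energy, where the free resolvent has no pole; the potential strength, screening mass, grid and units are illustrative assumptions only.

```python
import numpy as np

# Toy S-wave Lippmann-Schwinger equation, schematically analogous to the
# two-body equation above but with a single channel and a local potential:
#   T(p,p') = V(p,p') + (2/pi) * int_0^inf dq q^2 V(p,q) g0(q) T(q,p'),
# where g0(q) = 1/(E - q^2) at negative energy E (units hbar = 2m = 1).
# All parameters are illustrative and not those of the actual calculation.

def v_swave(p, pp, alpha=0.1, mu=0.05):
    # S-wave projection of a Yukawa potential alpha*exp(-mu*r)/r, up to an
    # overall convention-dependent constant absorbed into alpha
    return alpha / (2.0 * p * pp) * np.log(((p + pp) ** 2 + mu ** 2) /
                                           ((p - pp) ** 2 + mu ** 2))

def solve_t_matrix(E=-0.5, n=64, qmax=40.0):
    x, w = np.polynomial.legendre.leggauss(n)    # Gauss-Legendre nodes on [-1,1]
    q = 0.5 * qmax * (x + 1.0)                   # map nodes to [0, qmax]
    wq = 0.5 * qmax * w
    V = v_swave(q[:, None], q[None, :])
    g0 = 1.0 / (E - q ** 2)                      # free resolvent, no pole for E < 0
    kernel = (2.0 / np.pi) * V * (wq * q ** 2 * g0)[None, :]
    return q, np.linalg.solve(np.eye(n) - kernel, V)   # (1 - V G0) T = V

if __name__ == "__main__":
    q, T = solve_t_matrix()
    print("T-matrix element at the lowest grid momentum:", T[0, 0])
```

In the actual calculation the corresponding equation involves the coupled-channel interaction $v_\alpha + w_{\alpha R}$ and is treated in a partial-wave basis, and at scattering energies the pole of the free resolvent requires additional care.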
In $pd$ elastic scattering, an alternative decomposition of the full resolvent is found conceptually more revealing. Instead of correlating the plane-wave channel state $ |\phi_\alpha ({{\mathbf{q}}}) \nu_\alpha \rangle $ in a single step to the full scattering state by $G^{(R)}(Z)$, it may be correlated first to a screened Coulomb state of proton and deuteron by the screened Coulomb potential $W^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}$ between a proton and the center of mass (c.m.) of the remaining neutron-proton $(np)$ pair in channel $\alpha$ through
$$\begin{aligned}
G_{\alpha R}(Z) = {}&
(Z - H_0 - v_\alpha - w_{\alpha R} - W^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R})^{-1}, \\
G_{\alpha R}(Z) = {}& G^{(R)}_{\alpha}(Z) +
G^{(R)}_{\alpha}(Z) T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}(Z) G^{(R)}_{\alpha}(Z), \\
T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R} (Z) = {} & W^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R} +
W^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R} G^{(R)}_{\alpha} (Z) T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R} (Z),
\end{aligned}$$
where, in each channel $\alpha$, $w_{\alpha R}$ and $W^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}$ are never simultaneously present: When $\alpha$ corresponds to a $pp$ pair, $w_{\alpha R}$ is present and $W^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R} = 0$; when $\alpha$ denotes an $np$ pair, $w_{\alpha R} = 0$ and $W^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}$ is present. The same Coulomb correlation is done explicitly in both initial and final states. Thus, the full resolvent can be decomposed, in alternative to [Eq.]{} , as $$\begin{gathered}
G^{(R)}(Z) = \delta_{\beta \alpha} G_{\alpha R}(Z) +
G_{\beta R}(Z) \tilde{U}^{(R)}_{\beta\alpha}(Z) G_{\alpha R}(Z),\end{gathered}$$ yielding a new form for the full multichannel transition matrix
$$\begin{gathered}
\label{eq:U-T}
\begin{split}
U^{(R)}_{\beta \alpha}(Z) = {} &
\delta_{\beta\alpha} T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}(Z)
+ [1 + T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\beta R}(Z) G^{(R)}_{\beta}(Z)] \\ & \times
\tilde{U}^{(R)}_{\beta\alpha}(Z)
[1 + G^{(R)}_{\alpha}(Z) T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}(Z)].
\end{split}\end{gathered}$$
The reduced operator $ \tilde{U}^{(R)}_{\beta\alpha}(Z)$ may be calculated through the integral equation $$\begin{gathered}
\begin{split}
\tilde{U}^{(R)}_{\beta\alpha}(Z) = {} &
\bar{\delta}_{\beta \alpha} [G_{\alpha R}^{-1}(Z) + v_{\alpha}] +
{\delta}_{\beta \alpha} \mathcal{W}_{\alpha R} \\ &
+ \sum_\sigma (\bar{\delta}_{\beta \sigma} v_\sigma +
{\delta}_{\beta \sigma} \mathcal{W}_{\beta R})
G_{\sigma R}(Z) \tilde{U}^{(R)}_{\sigma\alpha}(Z),
\label{eq:tU}
\end{split}\end{gathered}$$
which is driven by the strong potential $v_\alpha$ and the potential of three-nucleon nature $\mathcal{W}_{\alpha R} = \sum_{\sigma}
( \bar{\delta}_{\alpha \sigma} w_{\sigma R} -
\delta_{\alpha\sigma} W^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\sigma R} ) $. This potential $\mathcal{W}_{\alpha R}$ accounts for the difference between the direct $pp$ Coulomb interaction and the one that takes place between the proton and the c.m. of the remaining bound as well as unbound $np$ pair. When calculated between on-shell screened $pd$ Coulomb states, $\tilde{U}^{(R)}_{\beta\alpha}(Z)$ is of short-range, even in the infinite $R$ limit.
In the same spirit, the final breakup state to be analyzed need not be reached in a single step; instead it may be correlated first to a screened Coulomb state between the charged particles, whose corresponding Coulomb resolvent keeps only the screened Coulomb interaction,
$$\begin{gathered}
G_R(Z) = (Z - H_0 - \sum_\sigma w_{\sigma R})^{-1}.
\end{gathered}$$
In the system of two protons and one neutron only the channel $\sigma = \rho$, corresponding to a correlated $pp$ pair, contributes to $ G_R(Z)$, $$\begin{aligned}
G_R(Z) = {} & G_0(Z) + G_0(Z) T_{\rho R}(Z) G_0(Z), \\
T_{\rho R}(Z) = {} & w_{\rho R} + w_{\rho R} G_0(Z) T_{\rho R}(Z),\end{aligned}$$
making channel $\rho$ the most convenient choice for the description of the final breakup state. Thus, for the purpose of $pd$ breakup, a decomposition of the full resolvent, alternative to [Eq.]{} is
$$\begin{aligned}
G^{(R)}(Z) = {} &
G_{R}(Z) \tilde{U}^{(R)}_{0\alpha}(Z) G_{\alpha R}(Z), \\
G^{(R)}(Z) = {} & G_{0}(Z) [1 + T_{\rho R}(Z) G_{0}(Z)]
\tilde{U}^{(R)}_{0\alpha}(Z) \nonumber \\ & \times
[1 + G^{(R)}_{\alpha}(Z) T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}(Z)] G^{(R)}_{\alpha}(Z),
\label{eq:GRtU0}
\end{aligned}$$
where the full breakup transition matrix may be written as
$$\begin{gathered}
\label{eq:U0t}
\begin{split}
U^{(R)}_{0\alpha}(Z) = {} & [1 + T_{\rho R}(Z) G_{0}(Z)]
\tilde{U}^{(R)}_{0\alpha}(Z) \\ & \times
[1 + G^{(R)}_{\alpha}(Z) T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}(Z)],
\end{split}
\end{gathered}$$
The reduced operator $ \tilde{U}^{(R)}_{0\alpha}(Z)$ may be calculated through quadrature $$\begin{gathered}
\label{eq:tU0}
\tilde{U}^{(R)}_{0\alpha}(Z) =
G_{\alpha R}^{-1}(Z) + v_{\alpha}
+ \sum_\sigma v_\sigma G_{\sigma R}(Z) \tilde{U}^{(R)}_{\sigma\alpha}(Z)\end{gathered}$$
from the correspondingly reduced operator $\tilde{U}^{(R)}_{\beta\alpha}(Z)$ of elastic scattering. In the form for the full breakup transition matrix the external distortions due to screened Coulomb in the initial and final states are made explicit. On-shell the reduced operator $\tilde{U}^{(R)}_{0\alpha}(Z)$ calculated between screened Coulomb distorted initial and final states is of finite range and has two contributions with slightly different range properties:
\(a) The contribution $G_{\alpha R}^{-1}(Z) + v_{\alpha}$, when calculated on-shell between initial $pd$ and final three-nucleon states, becomes the three-nucleon potential $\mathcal{W}_{\alpha R}$ and is the longest-range part of breakup, since the $np$ pair is correlated by the hadronic interaction only in the initial $pd$ state. The corresponding contribution in [Ref.]{} [@alt:94a] is called the pure Coulomb breakup term.
\(b) The remaining part $\sum_\sigma v_\sigma G_{\sigma R}(Z) \tilde{U}^{(R)}_{\sigma\alpha}(Z)$ is of shorter range, comparable to the one of the reduced operator $ \tilde{U}^{(R)}_{\beta\alpha}(Z)$ for elastic $pd$ scattering.
In the full breakup operator $U^{(R)}_{0 \alpha}(Z)$ the external distortions show up in screened Coulomb waves generated by $[1 + G^{(R)}_{\alpha}(Z) T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}(Z)]$ in the initial state and by $[1 + T_{\rho R}(Z) G_{0}(Z)]$ in the final state; both wave functions do not have proper limits as $R \to \infty$. Therefore $U^{(R)}_{0\alpha}(Z)$ has to get renormalized as the corresponding amplitude for $pd$ elastic scattering [@deltuva:05a; @alt:78a], in order to obtain the results appropriate for the unscreened Coulomb limit. According to [Refs.]{} [@alt:78a; @alt:94a], the full breakup transition amplitude for initial and final states $ |\phi_\alpha ({{\mathbf{q}}}_i) \nu_{\alpha_i} \rangle $ and $ |\phi_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} \rangle $, $E_\alpha(q_i)= E_0(p_f q_f)$, referring to the strong potential $v_\alpha$ and the unscreened Coulomb potential $w_\alpha$, is obtained via the renormalization of the on-shell breakup transition matrix $ U^{(R)}_{0 \alpha}(E_\alpha(q_i) + i0)$ in the infinite $R$ limit $$\begin{gathered}
\label{eq:UC1}
\begin{split}
\langle \phi_0 & ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} | U_{0 \alpha}
|\phi_\alpha ({{\mathbf{q}}}_i) \nu_{\alpha_i} \rangle \\ = {}&
\lim_{R \to \infty} \{ {z_R^{-\frac12}}(p_f)
\langle \phi_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} | \\ & \times
U^{(R)}_{0 \alpha}(E_\alpha(q_i) + i0)
|\phi_\alpha ({{\mathbf{q}}}_i) \nu_{\alpha_i} \rangle {\mathcal{Z}_{R}^{-\frac12}}(q_i) \},
\end{split}
\end{gathered}$$ where ${\mathcal{Z}}_{R}(q_i)$ and $z_R(p_f)$ are $pd$ and $pp$ renormalization factors defined below.
As in [Ref.]{} [@deltuva:05a] we choose an isospin description for the three baryons in which the nucleons are considered identical. The two-baryon transition matrix $T^{(R)}_{\alpha}(Z)$ becomes an operator coupling total isospin $\mathcal{T} = \frac12$ and $\mathcal{T} = \frac32$ states as described in detail in [Ref.]{} [@deltuva:05a]. Instead of the breakup amplitude given by [Eq.]{} we have to use the properly symmetrized form
$$\begin{gathered}
\begin{split} \label{eq:Uasyma}
\langle \phi_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) & \nu_{0_f} |
U_0 |\phi_\alpha ({{\mathbf{q}}}_i) \nu_{\alpha_i} \rangle \\ = {} &
\sum_\sigma
\langle \phi_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} |U_{0 \sigma}
| \phi_\sigma ({{\mathbf{q}}}_i) \nu_{\sigma_i} \rangle,
\end{split}
\end{gathered}$$
$$\begin{gathered}
\begin{split}\label{eq:Uasymb}
\langle \phi_0 & ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} |
U_0 |\phi_\alpha ({{\mathbf{q}}}_i) \nu_{\alpha_i} \rangle \\ = {} &
\lim_{R \to \infty}
\{ {z_R^{-\frac12}}(p_f) \langle \phi_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} | \\
& \times U_0^{(R)}(E_\alpha(q_i) + i0)
|\phi_\alpha ({{\mathbf{q}}}_i) \nu_{\alpha_i} \rangle {\mathcal{Z}_{R}^{-\frac12}}(q_i) \}
\end{split}
\end{gathered}$$
with $U_0^{(R)}(Z) = U^{(R)}_{0 \alpha}(Z) +
U^{(R)}_{0 \beta}(Z) P_{231} + U^{(R)}_{0 \gamma}(Z) P_{312}$ for the calculation of observables, $(\alpha \beta \gamma)$ being cyclic and $P_{231}$ and $P_{312}$ being the two cyclic permutations of $(\alpha \beta \gamma)$. The symmetrized breakup transition matrix $U_0^{(R)}(Z)$ follows by quadrature
\[eq:AGSsym\] $$\begin{gathered}
\label{eq:U0R}
\begin{split}
U_0^{(R)}(Z) = {} & (1+P) G_0^{-1}(Z) \\ & +
(1+P) T^{(R)}_{\alpha}(Z) G_0(Z) U^{(R)}(Z)
\end{split}\end{gathered}$$ from the symmetrized multichannel transition matrix $U^{(R)}(Z) = U^{(R)}_{\alpha \alpha}(Z) +
U^{(R)}_{\alpha \beta}(Z) P_{231} + U^{(R)}_{\alpha \gamma}(Z) P_{312}$ of elastic $pd$ scattering, satisfying the standard symmetrized form of the AGS integral equation , i.e., $$\begin{gathered}
\label{eq:UR}
U^{(R)}(Z) = P G_0^{-1}(Z) + P T^{(R)}_{\alpha}(Z) G_0(Z) U^{(R)}(Z),\end{gathered}$$
with $P = P_{231} + P_{312}$.
The renormalization factors ${\mathcal{Z}}_{R}(q_i)$ and $z_R(p_f)$ in the initial and final channels are diverging phase factors defined in [Ref.]{} [@taylor:74a] for a general screening and calculated in [Refs.]{} [@deltuva:05a; @yamaguchi:03a] for the screened Coulomb potential of [Eq.]{} , i.e.,
\[eq:zrqp\] $$\begin{aligned}
\label{eq:zrq}
{\mathcal{Z}}_{R}(q_i) = {} & e^{-2i \kappa(q_i)[\ln{(2q_i R)} - C/n]}, \\
\label{eq:zrp}
z_{R}(p_f) = {} & e^{-2i \kappa(p_f)[\ln{(2p_f R)} - C/n]},
\end{aligned}$$
$\kappa(q_i) = \alpha_e M/q_i$ and $\kappa(p_f) = \alpha_e \mu /p_f$ being the $pd$ and $pp$ Coulomb parameters, $M$ and $\mu$ the reduced $pd$ and $pp$ masses, $C \approx 0.5772156649$ Euler's constant, and $n$ the exponent in [Eq.]{} . In $pd$ elastic scattering, the renormalization factors were used in a partial-wave dependent form, which yielded a slight advantage in the convergence with $R$ compared to the partial-wave independent form . In breakup, the operator $T^{(R)}_{\alpha}(Z) G_0(Z) U^{(R)}(Z)$ in [Eq.]{} is calculated in a partial-wave basis, but the on-shell elements of the full breakup operator $U_0^{(R)}(Z)$ are calculated in a plane-wave basis. Therefore the renormalization is only applicable in the partial-wave independent form of [Eq.]{} .
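As a purely numerical illustration of these phase factors (this sketch is not part of the calculational scheme), the expressions above can be evaluated directly; the momenta and the exponent $n=4$ used below are illustrative assumptions.

```python
import numpy as np

# Renormalization phases of the factors Z_R(q) and z_R(p) defined above:
#   Z_R(q) = exp(-2i * kappa(q) * [ln(2*q*R) - C/n]),  kappa(q) = alpha_e * m_red / q.
# Momenta in fm^-1, masses in MeV, hbar*c = 197.327 MeV fm; the momenta and
# the screening exponent n = 4 are illustrative choices for this example.

HBARC = 197.327            # MeV fm
ALPHA_E = 1.0 / 137.036    # fine-structure constant
EULER_C = 0.5772156649     # Euler's constant

def kappa(q, m_red):
    # dimensionless Coulomb parameter for relative momentum q
    return ALPHA_E * m_red / (q * HBARC)

def renorm_phase(q, R, m_red, n=4):
    # argument of the renormalization factor; its modulus is always 1
    return -2.0 * kappa(q, m_red) * (np.log(2.0 * q * R) - EULER_C / n)

if __name__ == "__main__":
    m_p, m_d = 938.272, 1875.613        # proton and deuteron masses (MeV)
    m_red_pd = m_p * m_d / (m_p + m_d)  # reduced pd mass M
    m_red_pp = 0.5 * m_p                # reduced pp mass mu
    for R in (10.0, 20.0, 30.0):        # screening radii (fm)
        print(R, renorm_phase(1.0, R, m_red_pd), renorm_phase(0.5, R, m_red_pp))
```

The factors are pure phases whose arguments grow only logarithmically with $R$, which is the divergence removed by the renormalization.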
The limit in [Eq.]{} has to be performed numerically, but, due to the finite-range nature of the breakup operator, the infinite $R$ limit is reached with sufficient accuracy at rather modest screening radii $R$. Furthermore, the longer-range pure Coulomb breakup part which after symmetrization reads $[1 + T_{\rho R}(Z) G_{0}(Z)] P v_{\alpha}
[1 + G^{(R)}_{\alpha}(Z) T^{{\mathrm{c\!\:\!.m\!\:\!.}}}_{\alpha R}(Z)]$ and the remaining shorter-range part can be renormalized with different screening radii, since the limit in [Eq.]{} exists for them separately. The limit for the pure Coulomb breakup part can even be carried out explicitly, since the renormalization of the screened Coulomb waves yields the corresponding unscreened Coulomb waves accessible in configuration space; thus, the integral can be carried out numerically in configuration space, as was indeed done in [Ref.]{} [@alt:94a]. However, we find such a procedure unnecessary when our standard screening function is used. In fact, in most cases there is no need to split the full breakup amplitude into pure Coulomb and Coulomb-modified short-range parts, the only exception being the kinematical situations characterized by small momentum transfer in the $pp$ subsystem, which are sensitive to the Coulomb interaction at larger distances.
The practical implementation of the outlined calculational scheme faces a technical difficulty. We solve [Eq.]{} in a partial-wave basis. The partial-wave expansion of the screened Coulomb potential converges rather slowly. In this context, the perturbation theory for higher two-baryon partial waves developed in [Ref.]{} [@deltuva:03b] is a very efficient and reliable technical tool for treating the screened Coulomb interaction in high partial waves. We vary the dividing line between partial waves included exactly and perturbatively in order to test the convergence and thereby establish the validity of the procedure. Furthermore, the partial-wave convergence becomes slightly faster when replacing lowest-order screened Coulomb contributions in $U_0^{(R)}(Z)$ by the respective plane-wave results, i.e., $$\begin{gathered}
U_0^{(R)}(Z) = [U_0^{(R)}(Z) - (1+P)w_{\alpha R} P] + (1+P)w_{\alpha R} P,\end{gathered}$$ where the first term converges with respect to partial waves faster than $U_0^{(R)}(Z)$ itself and the second term is calculated *without* partial-wave decomposition.
With respect to the partial-wave expansion in the actual calculations of this paper, we obtain fully converged results by taking into account the screened Coulomb interaction in two-baryon partial waves with pair orbital angular momentum $L \le 15$; orbital angular momenta $9 \le L \le 15$ can safely be treated perturbatively. The above values refer to the screening radius $R=30 {\;\mathrm{fm}}$; for smaller screening radii the convergence in orbital angular momentum is faster. The hadronic interaction is taken into account in two-baryon partial waves with total angular momentum $I \le 5$. Both three-baryon total isospin $\mathcal{T} = \frac12$ and $\mathcal{T} = \frac32$ states are included. The maximal three-baryon total angular momentum $\mathcal{J}$ considered is $\frac{61}{2}$.
![\[fig:R13\] Convergence of the $pd$ breakup observables with screening radius $R$. The differential cross section and the proton analyzing power $A_y(N)$ for $pd$ breakup at 13 MeV proton lab energy are shown as functions of the arclength $S$ along the kinematical curve. Results for CD Bonn potential obtained with screening radius $R= 10$ fm (dotted curves), 20 fm (dash-dotted curves), and 30 fm (solid curves) are compared. Results without Coulomb (dashed curves) are given as reference for the size of the Coulomb effect.](R13.eps)
![\[fig:R130d\] Convergence of the $pd$ breakup observables with screening radius $R$. The differential cross section and the deuteron analyzing power $A_{xx}$ for $pd$ breakup at 130 MeV are shown. Curves as in [Fig.]{} \[fig:R13\].](R130d.eps)
![\[fig:R13ppfsi\] Convergence of the $pd$ breakup observables with screening radius $R$. The differential cross section for $pd$ breakup at 13 MeV proton lab energy in the $pp$-FSI configuration is shown as function of the relative $pp$ energy $E_{pp}$. Results obtained with screening radius $R= 10$ fm (dotted curves), 20 fm (dashed-double-dotted curves), 30 fm (dashed-dotted curves), 40 fm (double-dashed-dotted curves), and 60 fm (solid curves) are compared. Results without Coulomb (dashed curves) are given as reference for the size of the Coulomb effect.](R13eppfsi.eps)
Figures \[fig:R13\] - \[fig:R13ppfsi\] study the convergence of our method with increasing screening radius $R$ according to [Eq.]{} . All the calculations of this section are based on CD Bonn as the hadronic interaction. The kinematical final-state configurations are characterized in a standard way by the polar angles of the two protons and by the azimuthal angle between them, $(\theta_1, \theta_2, \varphi_2 - \varphi_1)$. We show several characteristic examples referring to $pd$ breakup at 13 MeV proton lab energy and at 130 MeV deuteron lab energy. The convergence is impressive for the spin-averaged differential cross section as well as for the spin observables in most kinematical situations as demonstrated in [Figs.]{} \[fig:R13\] and \[fig:R130d\]. The screening radius $R= 20 {\;\mathrm{fm}}$ is sufficient; only in the top plot of [Fig.]{} \[fig:R13\] the curves for $R = 20 {\;\mathrm{fm}}$ and $R = 30 {\;\mathrm{fm}}$ are graphically distinguishable. The exception requiring larger screening radii is the differential cross section in kinematical situations characterized by very low $pp$ relative energy $E_{pp}$, i.e., close to the $pp$ final-state interaction ($pp$-FSI) regime, as shown in [Fig.]{} \[fig:R13ppfsi\]. In there, the $pp$ repulsion is responsible for decreasing the cross section, converting the $pp$-FSI peak obtained in the absence of Coulomb into a minimum with zero cross section at $p_f=0$, i.e., for $E_{pp}=0$. A similar convergence problem also takes place in $pp$ scattering at very low energies as discussed in [Ref.]{} [@deltuva:05a]. In fact, screening and renormalization procedure cannot be applied at $p_f=0$, since the renormalization factor $z_R(p_f=0)$ is ill-defined. Therefore an extrapolation has to be used to calculate the observables at $p_f=0$, which works pretty well since the observables vary smoothly with $p_f$. In [Fig.]{} \[fig:R13ppfsi\] the fully converged result would start at zero for $E_{pp}=0$.
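Since $z_R(p_f)$ is ill-defined at $p_f=0$, the observable at the $pp$-FSI point itself is obtained by a smooth extrapolation in $E_{pp}$, as mentioned above. A minimal sketch of such an extrapolation is given below; the sample values are fictitious placeholders and do not correspond to any computed cross section.

```python
import numpy as np

# Low-order polynomial extrapolation of a breakup observable to E_pp = 0,
# where the screening-and-renormalization procedure itself cannot be applied.
# The sample points below are fictitious placeholders, not computed values.

e_pp = np.array([0.05, 0.10, 0.20, 0.40, 0.80])        # relative pp energy (MeV)
sigma = np.array([0.021, 0.060, 0.150, 0.340, 0.700])  # observable (arbitrary units)

coeffs = np.polyfit(e_pp, sigma, deg=2)    # smooth low-order fit
print("extrapolated value at E_pp = 0:", np.polyval(coeffs, 0.0))
```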
The seen Coulomb effects and their physics implications are discussed in [Sec.]{} \[sec:res\].
Three-body e.m. disintegration of ${{}^3\mathrm{He}}$ \[sec:them\]
------------------------------------------------------------------
For the description of the considered e.m. processes the matrix element $\langle \psi^{(-)}_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} |
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle$ of the e.m. current operator between the three-nucleon bound state and the breakup scattering state has to be calculated. The calculation of that matrix element without Coulomb and the meaning of the momenta ${{\mathbf{Q}}}$ and ${{{\mathbf{K}}}_{+}}$ are discussed in great length in [Refs.]{} [@deltuva:04a; @deltuva:04b]. This subsection only discusses the modification which arises due to the inclusion of the Coulomb interaction between the charged baryons. Coulomb is included as a screened potential and the dependence of the bound and scattering states, i.e., $| B^{(R)} \rangle$ and $ |\psi^{(\pm)(R)}_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} \rangle $, on the screening radius $R$ is notationally made explicit. In analogy to $pd$ breakup, the current matrix element referring to the unscreened Coulomb potential is obtained via renormalization of the matrix element referring to the screened Coulomb potential in the infinite $R$ limit $$\begin{gathered}
\label{eq:jR}
\begin{split}
\langle \psi^{(-)}_0 & ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} |
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle \\ = {} &
\lim_{R \to \infty}
\{ {z_R^{-\frac12}}(p_f) \langle \psi^{(-)(R)}_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f} |
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B^{(R)} \rangle \}.
\end{split}\end{gathered}$$ The renormalization factor $z_R(p_f)$ is the same as used in $pd$ breakup for the final state. Due to the short-range nature of $ j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B^{(R)} \rangle$ the limit $R \to \infty$ is reached with sufficient accuracy at finite screening radii $R$. The presence of the bound-state wave function in the matrix element strongly suppresses the contribution of the screened Coulomb interaction in high partial waves, i.e., two-baryon partial waves with orbital angular momentum $L \le 6$ are sufficient for convergence. The other quantum-number related cutoffs in the partial-wave dependence of the matrix element are the same as in [Refs.]{} [@deltuva:04a; @deltuva:04b], i.e., $I \le 4$, $\mathcal{J} \le \frac{15}{2}$ for photoreactions, and $I \le 3$, $\mathcal{J} \le \frac{35}{2}$ for inelastic electron scattering from ${{}^3\mathrm{He}}$. All calculations include both total isospin $\mathcal{T} = \frac12$ and $\mathcal{T} = \frac32$ states.
Figures \[fig:Rg15\] and \[fig:Rg55\] study the convergence of our method with increasing screening radius $R$ for the three-body photodisintegration of ${{}^3\mathrm{He}}$ at 15 MeV and 55 MeV photon lab energy. The calculations are again based on CD Bonn as the hadronic interaction and the currents from [Refs.]{} [@deltuva:04a; @deltuva:04b]. We show the differential cross section and the target analyzing power $A_y$ for selected kinematic configurations. The convergence is again extremely good and quite comparable to $pd$ breakup; the screening radius $R= 20 {\;\mathrm{fm}}$ is fully sufficient in most cases. The only exceptional cases, as in the $pd$ breakup, are $pp$-FSI regimes as shown in [Fig.]{} \[fig:Rg55\]. The convergence with increasing screening radius $R$ is the same for three-body electrodisintegration of ${{}^3\mathrm{He}}$; we therefore omit a corresponding figure.
![\[fig:Rg15\] Convergence of the ${{}^3\mathrm{He}}(\gamma,pp)n$ reaction observables with screening radius $R$. The differential cross section and the target analyzing power $A_y$ at 15 MeV photon lab energy are shown. Curves as in [Fig.]{} \[fig:R13\].](Rg15.eps)
![\[fig:Rg55\] Convergence of the ${{}^3\mathrm{He}}(\gamma,pn)p$ reaction observables with screening radius $R$. The differential cross section at 55 MeV photon lab energy in the $pp$-FSI configuration is shown. Curves as in [Fig.]{} \[fig:R13ppfsi\].](Rg55.eps)
Results \[sec:res\]
===================
We base our calculations on the two-baryon coupled-channel potential CD Bonn + $\Delta$ with and without Coulomb and use the CD Bonn potential with Coulomb as purely nucleonic reference. We use the charge and current operators of [Refs.]{} [@deltuva:04a; @deltuva:04b], appropriate for the underlying dynamics. In contrast to [Ref.]{} [@deltuva:05a] we do not include one-nucleon relativistic charge corrections for photoreactions, since their effect on the considered observables is very small.
Obviously, we have many more predictions than it is possible to show. Therefore we make a selection of the most interesting predictions, which illustrate the message we believe the results convey. Readers are welcome to obtain from us the results for their favorite data.
Proton-deuteron breakup \[sec:pdb\]
-----------------------------------
Figures \[fig:d5sss\] - \[fig:d5sfsi\] give our results for the fivefold differential cross section at 10.5 MeV, 13 MeV, 19 MeV, and 65 MeV proton lab energies in the standard space star, collinear, quasifree scattering (QFS), and $np$ final-state interaction ($np$-FSI) configurations, for which there is available experimental data. Though the inclusion of Coulomb slightly improves the agreement with data in the space star configurations in [Fig.]{} \[fig:d5sss\], the Coulomb effect is far too small to reproduce the difference between $pd$ and $nd$ data and to resolve the so-called *space star anomaly* at 13 MeV. The inclusion of Coulomb clearly improves the description of the data around the collinear points at lower energies, i.e., at the minima in [Fig.]{} \[fig:d5scl\]. The remaining discrepancies around the peaks are probably due to the finite geometry, not taken into account in our calculations owing to the lack of information on experimental details, but may also be due to the underlying hadronic interaction. The inclusion of Coulomb decreases the differential cross section around the QFS peaks, i.e., around the central peaks in [Fig.]{} \[fig:d5sqfs\]; those changes are supported by the data at lower energies. In the $np$-FSI configurations of [Fig.]{} \[fig:d5sfsi\] the Coulomb effect is rather insignificant.
![\[fig:d5sss\] Differential cross section for space star configurations as function of the arclength $S$ along the kinematical curve. Results including $\Delta$-isobar excitation and the Coulomb interaction (solid curves) are compared to results without Coulomb (dashed curves). In order to appreciate the size of the $\Delta$-isobar effect, the purely nucleonic results including Coulomb are also shown (dotted curves). The experimental $pd$ data (circles) are from [Ref.]{} [@grossmann:96] at 10.5 MeV, from [Ref.]{} [@rauprich:91] at 13 MeV, from [Ref.]{} [@patberg:96] at 19 MeV, from [Ref.]{} [@zejma:97a] at 65 MeV, and $nd$ data at 13 MeV (squares) from [Ref.]{} [@strate:89].](d5sss.eps)
![\[fig:d5scl\] Differential cross section for collinear configurations. Curves and experimental data as in [Fig.]{} \[fig:d5sss\], except for 65 MeV data from [Ref.]{} [@allet:94a].](d5scl.eps)
![\[fig:d5sqfs\] Differential cross section for QFS configurations. Curves and experimental data as in [Fig.]{} \[fig:d5sss\], except for 65 MeV data from [Ref.]{} [@allet:96a].](d5sqfs.eps)
![\[fig:d5sfsi\] Differential cross section for $np$-FSI configurations. Curves and experimental data as in [Fig.]{} \[fig:d5sss\].](d5sfsi.eps)
The Coulomb effect on proton analyzing powers in the considered kinematical configurations is usually small on the scale of the experimental error bars. We therefore show in [Fig.]{} \[fig:Aycl\] only a few collinear configurations.
![\[fig:Aycl\] Proton analyzing power for collinear configurations at 13 MeV and 65 MeV proton lab energy. Curves and experimental data as in [Fig.]{} \[fig:d5scl\].](Aycl.eps)
Recently, $pd$ breakup has been measured at 130 MeV deuteron lab energy in a variety of kinematical configurations [@kistryn:05a]. In some of them we find significant Coulomb effects for the differential cross section as well as for the deuteron analyzing powers. Examples are shown in [Figs.]{} \[fig:d5s130d\] and \[fig:A130d\]. By and large the agreement between theoretical predictions and experimental data is improved. The $pp$-FSI repulsion is responsible for lowering the peak of the differential cross section in the configuration $(15^{\circ},15^{\circ},40^{\circ})$ in [Fig.]{} \[fig:d5s130d\] left, where the relative $pp$ energy is rather low at the peak. In contrast, the relative $pp$ energy gets considerably increased as one changes the azimuthal angle to $160^{\circ}$ in [Fig.]{} \[fig:d5s130d\] right, leading to an increase of the differential cross section due to Coulomb. Since the total breakup cross section at this energy, corresponding to 65 MeV proton lab energy, is almost unaffected by Coulomb, as shown in [Fig.]{} \[fig:tot3N\], one may expect in given configurations an increase of the cross section due to Coulomb to compensate for the sharp decrease of the cross section in the vicinity of $pp$-FSI points. Figure \[fig:A130d\] shows deuteron tensor analyzing powers $A_{xx}$ and $A_{yy}$ with moderate Coulomb effect for the same configurations for which experimental data may become available soon [@stephan:05a].
![\[fig:d5s130d\] Differential cross section for $pd$ breakup at 130 MeV deuteron lab energy. Curves as in [Fig.]{} \[fig:d5sss\]. The experimental data are from [Ref.]{} [@kistryn:05a]. ](d5s130d.eps)
![\[fig:A130d\] Deuteron analyzing powers for $pd$ breakup at 130 MeV deuteron lab energy. Curves as in [Fig.]{} \[fig:d5sss\]. ](A130d.eps)
![\[fig:tot3N\] Total cross section for $pd$ breakup as function of the proton lab energy. Curves as in [Fig.]{} \[fig:d5sss\]. The experimental data are from [Ref.]{} [@carlson:73a]. ](sigbtot.eps)
Compared to the results of [Ref.]{} [@alt:94a] based on a simple hadronic $S$-wave potential, we see a rough qualitative agreement in most cases. Quantitatively, the Coulomb effect we observe is smaller than the one of [Ref.]{} [@alt:94a].
Figures \[fig:d5sss\] - \[fig:tot3N\] also recall the $\Delta$-isobar effect on observables, which, in most of the cases we studied, is much smaller than the Coulomb effect. As expected, the $\Delta$-isobar effect on polarization observables is more significant than on the differential cross sections, which confirms previous findings [@deltuva:03c].
Three-body e.m. disintegration of ${{}^3\mathrm{He}}$
-----------------------------------------------------
Experimental data for three-body photodisintegration of ${{}^3\mathrm{He}}$ are very scarce; we therefore show in [Fig.]{} \[fig:d4s\] only two examples referring to the semiinclusive ${{}^3\mathrm{He}}(\gamma,pn)p$ reaction at 55 MeV and 85 MeV photon lab energy. The semiinclusive fourfold differential cross section is obtained from the standard fivefold differential cross section by integrating over the kinematical curve $S$. For scattering angles corresponding to the peak of the fourfold differential cross section the region of the phase space to be integrated over contains $pp$-FSI regime where the $pp$-FSI peak obtained without Coulomb is converted into a minimum as shown in [Fig.]{} \[fig:Rg55\]. Therefore the fourfold differential cross section in [Fig.]{} \[fig:d4s\] is also significantly reduced by the inclusion of Coulomb, clearly improving the agreement with the data. A similar Coulomb effect of the same origin is shown in [Fig.]{} \[fig:d3s\] for the semiinclusive threefold differential cross section for ${{}^3\mathrm{He}}(\vec{\gamma},n)pp$ reaction at 15 MeV photon lab energy. In contrast, the photon analyzing power remains almost unchanged by the inclusion of Coulomb. The experiment measuring this reaction is in progress [@tornow:05a], but the data are not available yet. Again the importance of $\Delta$-isobar degree of freedom is considerably smaller than the effect of Coulomb.
![\[fig:d4s\] The semiinclusive fourfold differential cross section for ${{}^3\mathrm{He}}(\gamma,pn)p$ reaction at 55 MeV and 85 MeV photon lab energy as function of the $np$ opening angle $\theta_p + \theta_n$ with $\theta_p = 81^{\circ}$. Curves as in [Fig.]{} \[fig:d5sss\]. The experimental data are from [Ref.]{} [@kolb:91a]. ](d4skolb.eps)
![\[fig:d3s\] The semiinclusive threefold differential cross section and photon analyzing power $\Sigma$ for ${{}^3\mathrm{He}}(\vec{\gamma},n)pp$ reaction at 15 MeV photon lab energy as function of the neutron energy $E_n$ for the neutron scattering angle $\theta_n = 90^{\circ}$. Curves as in [Fig.]{} \[fig:d5sss\].](d3sAg15.eps)
The available data on three-nucleon electrodisintegration of ${{}^3\mathrm{He}}$ refer to fully inclusive observables. In [Fig.]{} \[fig:RF300\] we show ${{}^3\mathrm{He}}$ inclusive longitudinal and transverse response functions $R_L$ and $R_T$ as examples. Though the Coulomb effect may be large in particular kinematic regions, it is rather insignificant for the total cross section and therefore also for the response functions. Only the transverse response function near threshold is affected more visibly, as shown in [Fig.]{} \[fig:RTt\]; at higher momentum transfer there is also quite a large $\Delta$-isobar effect.
![\[fig:RF300\] ${{}^3\mathrm{He}}$ inclusive longitudinal and transverse response functions $R_L$ and $R_T$ for the momentum transfer $\;|{{\mathbf{Q}}}| = 300\;\mathrm{MeV}$ as functions of the energy transfer $Q_0$. Curves as in [Fig.]{} \[fig:d5sss\]. The experimental data are from [Ref.]{} [@dow:88a] (circles) and from [Ref.]{} [@marchand:85a] (squares).](RF300C.eps)
![\[fig:RTt\] ${{}^3\mathrm{He}}$ inclusive transverse response function $R_T$ near threshold as function of the excitation energy $E_x$, $Q_t$ being the value of three-momentum transfer at threshold. Curves as in [Fig.]{} \[fig:d5sss\]. The experimental data are from [Ref.]{} [@hicks:03a].](RTt.eps)
Summary \[sec:concl\]
=====================
In this paper we show how the Coulomb interaction between the charged baryons can be included into the momentum-space description of proton-deuteron breakup and of three-body e.m. disintegration of ${{}^3\mathrm{He}}$ using the screening and renormalization approach. The theoretical framework is the AGS integral equation [@alt:67a]. The calculations are done on the same level of accuracy and sophistication as for the corresponding neutron-deuteron and ${{}^3\mathrm{H}}$ reactions. The conclusions of the paper refer to the developed technique and to the physics results obtained with that technique.
*Technically*, the idea of screening and renormalization is the one of [Refs.]{} [@taylor:74a; @alt:78a; @alt:94a]. However, our practical realization differs quite significantly from the one of [Ref.]{} [@alt:94a]:
\(1) We use modern hadronic interactions, CD Bonn and CD Bonn + $\Delta$, in contrast to the simple $S$-wave separable potentials of [Ref.]{} [@alt:94a]. Our use of the full potential requires the standard form of the three-particle equations, different from the quasiparticle approach of [Ref.]{} [@alt:94a].
\(2) We do not approximate the screened Coulomb transition matrix by the screened Coulomb potential.
\(3) The quasiparticle approach of [Ref.]{} [@alt:94a] treats the screened Coulomb potential between the protons without partial-wave expansion and therefore has no problems with the slow convergence of that expansion. Our solution of three-nucleon equations proceeds in partial-wave basis and therefore faces the slow partial-wave convergence of the Coulomb interaction between the charged baryons. However, we are able to obtain fully converged results by choosing a special form of the screening function and by using the perturbation theory of [Ref.]{} [@deltuva:03b] for treating the screened Coulomb transition matrix in high partial waves. This would not be possible, if we had used Yukawa screening as in [Ref.]{} [@alt:94a] for two reasons: (a) The convergence with respect to screening would require much larger radii $R$; (b) The larger values of $R$ would necessitate the solution of the AGS equation with much higher angular momentum states.
\(4) Our method for including the Coulomb interaction is efficient. Though the number of isospin-triplet partial waves to be taken into account is considerably higher than in the case without Coulomb, the required computing time increases only by a factor of 3 to 4 for each screening radius $R$, due to the use of perturbation theory for high partial waves.
The obtained results are fully converged with respect to the screening and with respect to the quantum-number cutoffs; they are therefore well checked for validity. The employed technique becomes cumbersome in kinematical regions with very low relative $pp$ energy, i.e., $pp$ c.m. energies below 0.1 MeV, due to the need for quite large screening radii.
*Physicswise*, the Coulomb effect in $pd$ breakup and in three-body e.m. disintegration of ${{}^3\mathrm{He}}$ is extremely important in kinematical regimes close to $pp$-FSI. There the $pp$ repulsion converts the $pp$-FSI peak obtained in the absence of Coulomb into a minimum with zero cross section. This significant change of the cross section behavior has important consequences in nearby configurations where one may observe instead an increase of the cross section due to Coulomb. This phenomenon is independent of the beam energy and depends solely on specific momentum distributions of the three-nucleon final state. Therefore, unlike in $pd$ elastic scattering where the Coulomb contribution decreases with the beam energy until it gets confined to the forward direction, in three-body breakup large Coulomb effects may always be found in specific configurations besides $pp$-FSI, even at high beam energies.
Another important consequence of this work is that we can finally ascertain with greater confidence the quality of two- and three-nucleon force models one uses to describe $pd$ observables; any disagreement with high quality $pd$ data may now be solely attributed to the underlying nuclear interaction. In the framework of the present study we reanalyzed the contribution of $\Delta$-isobar degrees of freedom to three-body breakup observables. The largest $\Delta$ effects take place in analyzing powers for given configurations. Nevertheless the lack of high quality analyzing power data on a broad spectrum of configurations prevents a full evaluation of the $\Delta$ effects in $pd$ breakup. The situation is even worse in three-body photodisintegration of ${{}^3\mathrm{He}}$, where there are neither kinematically complete experiments without polarization nor any analyzing power data available.
The authors thank St. Kistryn and H. Paetz gen. Schieck for providing experimental data. A.D. is supported by the FCT grant SFRH/BPD/14801/2003, A.C.F. in part by the FCT grant POCTI/FNU/37280/2001, and P.U.S. in part by the DFG grant Sa 247/25.
[10]{}
E. O. Alt, A. M. Mukhamedzhanov, M. M. Nishonov, and A. I. Sattarov, Phys. Rev. C [**65**]{}, 064613 (2002).
A. Kievsky, M. Viviani, and S. Rosati, Phys. Rev. C [**64**]{}, 024002 (2001).
C. R. Chen, J. L. Friar, and G. L. Payne, Few-Body Syst. [**31**]{}, 13 (2001).
S. Ishikawa, Few-Body Syst. [**32**]{}, 229 (2003).
A. Deltuva, A. C. Fonseca, and P. U. Sauer, Phys. Rev. C [**71**]{}, 054005 (2005).
E. O. Alt and M. Rauh, Few-Body Syst. [**17**]{}, 121 (1994).
A. Kievsky, M. Viviani, and S. Rosati, Phys. Rev. C [**56**]{}, 2987 (1997).
V. M. Suslov and B. Vlahovic, Phys. Rev. C [**69**]{}, 044003 (2004).
E. O. Alt, P. Grassberger, and W. Sandhas, Nucl. Phys. [**B2**]{}, 167 (1967).
J. R. Taylor, Nuovo Cimento [**B23**]{}, 313 (1974); M. D. Semon and J. R. Taylor, [*ibid.*]{} [**A26**]{}, 48 (1975).
E. O. Alt, W. Sandhas, and H. Ziegelmann, Phys. Rev. C [**17**]{}, 1981 (1978); E. O. Alt and W. Sandhas, [*ibid.*]{} [**21**]{}, 1733 (1980).
R. Machleidt, Phys. Rev. C [**63**]{}, 024001 (2001).
A. Deltuva, R. Machleidt, and P. U. Sauer, Phys. Rev. C [**68**]{}, 024005 (2003).
A. Deltuva, A. C. Fonseca, and P. U. Sauer, Phys. Rev. Lett. [**95**]{}, 092301 (2005).
A. Deltuva, K. Chmielewski, and P. U. Sauer, Phys. Rev. C [**67**]{}, 034001 (2003).
A. Deltuva, A. C. Fonseca, A. Kievsky, S. Rosati, P. U. Sauer, and M. Viviani, Phys. Rev. C [**71**]{}, 064003 (2005).
M. Yamaguchi, H. Kamada, and Y. Koike, nucl-th/0310024.
A. Deltuva, K. Chmielewski, and P. U. Sauer, Phys. Rev. C [**67**]{}, 054004 (2003).
A. Deltuva, L. P. Yuan, J. Adam Jr., A. C. Fonseca, and P. U. Sauer, Phys. Rev. C [**69**]{}, 034004 (2004).
A. Deltuva, L. P. Yuan, J. Adam Jr., and P. U. Sauer, Phys. Rev. C [**70**]{}, 034004 (2004).
R. Großmann [*et al.*]{}, Nucl. Phys. [**A603**]{}, 161 (1996).
G. Rauprich [*et al.*]{}, Nucl. Phys. [**A535**]{}, 313 (1991).
H. Patberg [*et al.*]{}, Phys. Rev. C [**53**]{}, 1497 (1996).
J. Zejma [*et al.*]{}, Phys. Rev. C [**55**]{}, 42 (1997).
J. Strate [*et al.*]{}, Nucl. Phys. [**A501**]{}, 51 (1989).
M. Allet [*et al.*]{}, Phys. Rev. C [**50**]{}, 602 (1994).
M. Allet [*et al.*]{}, Few-Body Syst. [**20**]{}, 27 (1996).
St. Kistryn [*et al.*]{}, Phys. Rev. C [**68**]{}, 054004 (2003); St. Kistryn [*et al.*]{}, Phys. Rev. C, to be published, nucl-th/0508012.
E. Stephan [*et al.*]{}, AIP Conf. Proc. [**768**]{}, 73 (2005).
R. F. Carlson [*et al.*]{}, Lett. Nuovo Cimento [**8**]{}, 319 (1973).
W. Tornow [*et al.*]{}, AIP Conf. Proc. [**768**]{}, 138 (2005).
N. R. Kolb, P. N. Dezendorf, M. K. Brussel, B. B. Ritchie, and J. H. Smith, Phys. Rev. C [**44**]{}, 37 (1991).
K. Dow [*et al.*]{}, Phys. Rev. Lett. [**61**]{}, 1706 (1988).
C. Marchand [*et al.*]{}, Phys. Lett. B [**153**]{}, 29 (1985).
R. S. Hicks [*et al.*]{}, Phys. Rev. C [**67**]{}, 064004 (2003).
[^1]: on leave from Institute of Theoretical Physics and Astronomy, Vilnius University, Vilnius 2600, Lithuania
---
abstract: |
In this article, we prove the existence of bounded solutions of quadratic backward SDEs with jumps, that is to say for which the generator has quadratic growth in the variables $(z,u)$. From a technical point of view, we use a direct fixed point approach as in Tevzadze [@tev], which allows us to obtain existence and uniqueness of a solution when the terminal condition is small enough. Then, thanks to a well-chosen splitting, we recover an existence result for general bounded solution. Under additional assumptions, we can obtain stability results and a comparison theorem, which as usual imply uniqueness.
[**Key words:**]{} BSDEs, quadratic growth, jumps, fixed-point theorem.
[**AMS 2000 subject classifications:**]{} 60H10, 60H30
author:
- 'Nabil [Kazi-Tani]{}[^1]'
- 'Dylan [Possamaï]{}[^2]'
- 'Chao [Zhou]{}[^3]'
title: 'Quadratic BSDEs with jumps: a fixed-point approach[^4] '
---
Introduction
============
Motivated by duality methods and maximum principles for optimal stochastic control, Bismut studied in [@bis] a linear backward stochastic differential equation (BSDE). In their seminal paper [@pardpeng], Pardoux and Peng generalized such equations to the non-linear Lipschitz case and proved existence and uniqueness results in a Brownian framework. Since then, a lot of attention has been given to BSDEs and their applications, not only in stochastic control, but also in theoretical economics, stochastic differential games and financial mathematics. In this context, the generalization of Backward SDEs to a setting with jumps enlarges again the scope of their applications, for instance to insurance modeling, in which jumps are inherent (see for instance Liu and Ma [@liu]). Li and Tang [@li] were the first to obtain a wellposedness result for Lipschitz BSDEs with jumps, using a fixed point approach similar to the one used in [@pardpeng].
Let us now make precise the structure of these equations in a discontinuous setting. Given a filtered probability space $(\Omega,\mathcal F,\left\{\mathcal F_t\right\}_{0\leq t\leq T},\mathbb P)$ generated by an $\mathbb R^d$-valued Brownian motion $B$ and a random measure $\mu$ with compensator $\nu$, solving a BSDEJ with generator $g$ and terminal condition $\xi$ consists in finding a triple of progressively measurable processes $(Y,Z,U)$ such that for all $t \in [0,T]$, $\mathbb P-a.s.$ $$\begin{aligned}
Y_t=\xi +\int_t^T g_s(Y_s,Z_s,U_s)ds-\int_t^T Z_s dB_s -\int_t^T \int_{\R^d\backslash \{0\}} U_s(x)(\mu-\nu)(ds,dx). \label{def_bsdej}\end{aligned}$$ We refer the reader to Section \[notations\_qbsdej\] for more precise definitions and notations. In this paper, $g$ will be supposed to satisfy a Lipschitz-quadratic growth property. More precisely, $g$ will be Lipschitz in $y$, and will satisfy a quadratic growth condition in $(z,u)$ (see Assumption \[assump:hquad\](iii) below). The interest for such a class of quadratic BSDEs has increased a lot in the past few years, mainly due to the fact that they naturally appear in many stochastic control problems, for instance involving utility maximization (see among many others [@ekr] and [@him]).
When the filtration is generated only by a Brownian motion, the existence and uniqueness of quadratic BSDEs with a bounded terminal condition has been first treated by Kobylanski [@kob]. Using an exponential transformation, she managed to fall back into the scope of BSDEs with a coefficient having linear growth. Then the wellposedness result for quadratic BSDEs is obtained by means of an approximation method. The main difficulty lies then in proving that the martingale part of the approximation converges in a strong sense. This result has then been extended in several directions, to a continuous setting by Morlais [@morlais], to unbounded solutions by Briand and Hu [@bh] or more recently by Mocha and Westray [@moc]. In particular cases, several authors managed to obtain further results, to name but a few, see Hu and Schweizer [@hu], Hu, Imkeller and Müller [@him], Mania and Tevzadze [@man] or Delbaen, et al. [@del]. This approach was later totally revisited by Tevzadze [@tev], who gave a direct proof in the Lipschitz-quadratic setting. His methodology is fundamentally different, since he uses a fixed-point argument to obtain existence of a solution for small terminal condition, and then pastes solutions together in the general bounded case. In this regard, there is no longer any need to obtain the difficult strong convergence result needed by Kobylanski [@kob]. More recently, applying yet a completely different approach using now a forward point of view and stability results for a special class of quadratic semimartingales, Barrieu and El Karoui [@elkarbar] generalized the above results. Their approach has the merit of greatly simplifying the problem of strong convergence of the martingale part when using approximation arguments, since they rely on very general semimartingale convergence results. Notice that this approach was, partially, present in an earlier work of Cazanave, Barrieu and El Karoui [@elkarcaz], but limited to a bounded framework.
Nonetheless, when it comes to quadratic BSDEs in a discontinuous setting, the literature is far less abundant. Until very recently, the only existing results concerned particular cases of quadratic BSDEs, which were exactly the ones appearing in utility maximization or indifference pricing problems in a jump setting. Thus, Becherer [@bech] first studied bounded solutions to BSDEs with jumps in a finite-activity setting, and his general results were improved by Morlais [@mor], who proved existence of the solution to a special quadratic BSDE with jumps, which naturally appears in a utility maximization problem, using the same type of techniques as Kobylanski. The first breakthrough in tackling the general case was obtained by Ngoupeyou [@ngou] in his PhD thesis, and in the subsequent papers by El Karoui, Matoussi and Ngoupeyou [@elmatn] and by Jeanblanc, Matoussi and Ngoupeyou [@jmn]. They non-trivially extended the techniques developed in [@elkarbar] to a jump setting, and managed to obtain existence of solutions for quadratic BSDEs with unbounded terminal conditions. We emphasize that some of our arguments were inspired by their techniques and the ones developed in [@elkarbar]. Nonetheless, as explained throughout the paper, our approach follows a completely different direction and allows us in some cases to consider BSDEs which are outside the scope of [@elmatn], even though, unlike them, we are constrained to work with bounded terminal conditions. Moreover, at least for small terminal conditions, our approach allows us to obtain a wellposedness theory for multidimensional quadratic BSDEs with jumps.
After the completion of this paper, we became aware of a very recent result of Laeven and Stadje [@laev] who proved a general existence result for BSDEJs with convex generators, using verification arguments. We emphasize that our approach is very different and does not need any convexity assumption in order to obtain existence of a solution. Nonetheless, their result and ours do not imply each other.
Our aim here is to extend the fixed-point methodology of Tevzadze [@tev] to the case of a discontinuous filtration. We first obtain an existence result for a terminal condition $\xi$ having a ${\left\|\cdot\right\|}_{\infty}$-norm which is small enough. Then the result for any $\xi$ in $\mathbb L^{\infty}$ follows by splitting $\xi$ in pieces having a small enough norm, and then pasting the obtained solutions to a single equation. Since we deal with bounded solutions, the space of BMO martingales will play a particular role in our setting. We will show that it is indeed the natural space for the continuous and the pure jump martingale terms appearing in the BSDE \[def\_bsdej\], when $Y$ is bounded. When it comes to uniqueness of a solution in this framework with jumps, we need additional assumptions on the generator $g$ for a comparison theorem to hold. Namely, we will use on the one hand the Assumption \[assump.roy\], which was first introduced by Royer [@roy] in order to ensure the validity of a comparison theorem for Lipschitz BSDEs with jumps, and on the other hand a convexity assumption which was already considered by Briand and Hu [@bh2] in the continuous case. We extend here these comparison theorems to our setting (Proposition \[prop.comp\]), and then use them to give a uniqueness result.
This wellposedness result for bounded quadratic BSDEs with jumps opens the way to many possible applications. Barrieu and El Karoui [@elkarbar2] used quadratic BSDEs to define time consistent convex risk measures and study their properties. The extension of some of these results to the case with jumps is the object of our accompanying paper [@kpz4].
The rest of this paper is organized as follows. In Section \[section.1\], we give all the notations and present the natural spaces and norms in our framework. Then in Section \[sec.qbsdej\] we provide the definition of a BSDE with jumps, give the main assumptions on our generator and prove several a priori estimates for the corresponding solution. Next, in Sections \[sec.ex1\] and \[sec.ex2\] we prove an existence result for a small enough terminal condition, which we then extend to the general bounded case. Finally, Section \[sec.comp\] is devoted to comparison theorems and stability results for our class of BSDEJs.
Preliminaries {#section.1}
=============
Notations {#notations_qbsdej}
---------
We consider throughout the paper a filtered probability space $\left(\Omega,\mathcal F, \left\lbrace \mathcal F_t\right\rbrace_{0\leq t\leq T},\mathbb P\right)$, whose filtration satisfies the usual hypotheses of completeness and right-continuity. We suppose that this filtration is generated by a $d$-dimensional Brownian motion $B$ and an independent integer-valued random measure $\mu(\omega,dt,dx)$ defined on $\mathbb R^+\times E$, with compensator $\lambda(\omega,dt,dx)$. $\widetilde \Omega:= \Omega \times \mathbb R^+ \times E$ is equipped with the $\sigma$-field $\widetilde{\mathcal P}:= \mathcal P \times \mathcal E$, where $\mathcal P$ denotes the predictable $\sigma$-field on $\Omega \times \mathbb R^+$ and $\Ec$ is the Borel $\sigma$-field on $E$.
To guarantee the existence of the compensator $\lambda(\omega,dt,dx)$, we assume that for each $A$ in $\Bc(E)$ and each $\omega$ in $\Omega$, the process $X_t:= \mu(\omega,A,[0,t]) \in \Ac^+_{loc}$, which means that there exists an increasing sequence of stopping times $(T_n)$ such that $T_n \to + \infty$ a.s. and the stopped processes $X^{T_n}$ are increasing, càdlàg, adapted and satisfy $\E[X^{T_n}_{\infty}]<+\infty$.
We assume throughout the paper that $\lambda$ is absolutely continuous with respect to the Lebesgue measure $dt$, i.e. $\lambda(\omega,dt,dx)=\nu_t(\omega,dx)dt$. Finally, we denote by $\widetilde\mu$ the compensated jump measure $$\widetilde\mu(\omega,dx,dt) = \mu(\omega,dx,dt) - \nu_t(\omega,dx)\, dt.$$
In our setting, we emphasize that we allow the compensator of the jump measure to be a random measure, unlike most of the literature, where it is a classical Lévy measure (see however [@bech] for a similar approach). This will not increase the complexity of our proofs, provided that the martingale representation property of Assumption \[martingale\_representation\] below holds true.
Following Tang and Li [@li] and Barles et al. [@barles], the definition of a BSDE with jumps is then
\[def\_bsdej2\] Let $\xi$ be a $\mathcal F_T$-measurable random variable. A solution to the BSDEJ with terminal condition $\xi$ and generator $g$ is a triple $(Y,Z,U)$ of progressively measurable processes such that $$Y_t=\xi+\int_t^Tg_s(Y_s,Z_s,U_s)ds-\int_t^TZ_sdB_s-\int_t^T\int_{E} U_s(x)\widetilde\mu(dx,ds),\ t\in[0,T],\ \mathbb P-a.s.
\label{eq:bsdej}$$
where $g:\Omega\times[0,T]\times\mathbb R\times\mathbb R^d\times \Ac(E) \rightarrow \mathbb R$ is a given map and $$\Ac(E):=\left\{u: \, E \rightarrow \R,\, \Bc(E)-\text{measurable} \right\}.$$
Then, the processes $Z$ and $U$ are supposed to satisfy the minimal assumptions so that the quantities in are well defined, namely $(Z,U) \in \mathcal Z \x \mathcal U$, where $\mathcal Z$ (resp. $\Uc$) denotes the space of all $\mathbb F$-predictable $\mathbb R^d$-valued processes $Z$ (resp. $\mathbb F$-predictable functions $U$) with $$\int_0^T{\left|Z_t\right|}^2dt <+\infty,\ \left(\text{resp. }\int_0^T\int_{E}{\left|U_t(x)\right|}^2 \nu_t(dx)dt <+\infty\right),\ \mathbb P-a.s.$$
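As a simple illustration of this definition (an elementary and standard observation, recalled only for the reader's convenience), if $g\equiv 0$ and $\xi\in\mathbb L^2$, then taking conditional expectations in the above equation shows that $Y_t=\mathbb E\left[\left.\xi\right|\mathcal F_t\right]$, while $(Z,U)$ is the pair provided by the martingale representation property of Assumption \[martingale\_representation\] below, that is $$\mathbb E\left[\left.\xi\right|\mathcal F_t\right]=\mathbb E\left[\xi\right]+\int_0^tZ_sdB_s+\int_0^t\int_E U_s(x)\widetilde\mu(dx,ds),\ t\in[0,T],\ \mathbb P-a.s.$$ The whole difficulty of the present paper lies in the case where $g$ has quadratic growth in $(z,u)$, for which no such explicit representation is available.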
Notice that this is a particular case of the framework considered earlier by El Karoui and Huang [@elkh] and El Karoui et al. [@elkaroui], where the filtration is more general, and therefore they do not have an explicit form for the martingale part which is orthogonal to the Brownian one. Here, knowing this orthogonal martingale explicitly allows the generator to depend on it, through the predictable function $U$. In this framework, $U$ plays a role analogous to the quadratic variation in the continuous case. However, there are some notable differences, since for each $t$, $U_t$ is a function mapping $E$ to $\R$. This is why the treatment of the dependence on $u$ in the assumptions for the generator is not symmetric to the treatment of the dependence on $z$, and in particular we deal with Fréchet derivatives with respect to $u$ (see Assumption \[lipschitz\_assumption\] for more precise statements).
Standard spaces and norms
-------------------------
We introduce the following norms and spaces for any $p\geq 1$.
$\mathcal S^\infty$ is the space of $\mathbb R$-valued càdlàg and $\mathcal F_t$-progressively measurable processes $Y$ such that $${\left\|Y\right\|}_{\mathcal S^\infty}:=\underset{0\leq t\leq T}{\sup}{\left\|Y_t\right\|}_\infty<+\infty.$$
$\mathbb H^p$ is the space of $\mathbb R^d$-valued and $\mathcal F_t$-progressively measurable processes $Z$ such that $${\left\|Z\right\|}^p_{\mathbb H^p}:=\mathbb E\left[\left(\int_0^T{\left|Z_t\right|}^2dt\right)^{\frac p2}\right]<+\infty.$$
The two spaces above are the classical ones in the BSDE theory in continuous filtrations. We finally introduce a space which is specific to the jump case, and which plays the same role for $U$ as $\mathbb H^p$ does for $Z$. $\mathbb J^p$ is the space of predictable and $\mathcal E$-measurable maps $U:\Omega\times[0,T]\times E\rightarrow\mathbb R$ such that $${\left\|U\right\|}^p_{\mathbb J^p}:=\mathbb E\left[\left(\int_0^T\int_E{\left|U_s(x)\right|}^2\nu_s(dx)ds\right)^{\frac p2}\right]<+\infty.$$
A word on càdlàg BMO martingales
--------------------------------
The recent literature on quadratic BSDEs is very rich in remarks and comments about the deep theory of continuous BMO martingales. However, it is clearly not as well documented when it comes to càdlàg BMO martingales, whose properties are crucial in this paper. Indeed, apart from some remarks in the book by Kazamaki [@kaz], the extension to the càdlàg case of the classical results of BMO theory cannot always be easily found. Our main goal in this short subsection is to give a rapid overview of the existing literature and results concerning BMO martingales with càdlàg trajectories, with an emphasis on where the results differ from the continuous case. Let us start by recalling some notations and definitions.
$\rm{BMO}$ is the space of square integrable càdlàg $\mathbb R^d$-valued martingales $M$ such that $${\left\|M\right\|}_{\rm{BMO}}:=\underset{\tau\in\mathcal T_0^T}{\esup^\mathbb P}{\left\|\mathbb E_\tau\left[\left(M_T-M_{\tau^-}\right)^2\right]\right\|}_{\infty}<+\infty,$$ where for any $t\in[0,T]$, $\mathcal T_t^T$ is the set of $(\mathcal F_s)_{0\leq s\leq T}$-stopping times taking their values in $[t,T]$.
$\mathbb J^2_{\rm{BMO}}$ is the space of predictable and $\mathcal E$-measurable maps $U:\Omega\times[0,T]\times E\rightarrow\mathbb R$ such that $${\left\|U\right\|}^2_{\mathbb J^2_{\rm{BMO}}}:={\left\|\int_0^.\int_EU_s(x)\widetilde\mu(dx,ds)\right\|}_{ \rm{BMO}}<+\infty.$$
$\mathbb H^2_{\rm{BMO}}$ is the space of $\mathbb R^d$-valued and $\mathcal F_t$-progressively measurable processes $Z$ such that $${\left\|Z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}:={\left\|\int_0^.Z_sdB_s\right\|}_{\rm{BMO}}<+\infty.$$
When the process $\langle M\rangle$ is defined for a martingale $M$, which is the case for instance if $M$ is locally square integrable, it is easy to see that $M\in \rm{BMO}$ if the jumps of $M$ are uniformly bounded in $t$ by some positive constant $C$ and $$\underset{\tau\in\mathcal T_0^T}{\esup^\mathbb P}{\left\|\mathbb E_\tau\left[\langle M\rangle_T-\langle M\rangle_{\tau}\right]\right\|}_{\infty}\leq C.$$ Furthermore, the BMO norm of $M$ is then smaller than $2C$. We also recall the so-called energy inequalities (see [@kaz] and the references therein). Let $Z\in\mathbb H^2_{\rm{BMO}}$, $U\in\mathbb J^2_{\rm{BMO}}$ and $p\geq 1$. Then we have $$\begin{aligned}
\label{energy}
\nonumber&\mathbb E\left[\left(\int_0^T{\left|Z_s\right|}^2ds\right)^p\right]\leq 2p!\left(4{\left\|Z\right\|}_{\mathbb H^2_{\rm{BMO}}}^2\right)^p\\
&\mathbb E\left[\left(\int_0^T\int_EU_s^2(x)\nu_s(dx)ds\right)^p\right]\leq 2p!\left(4{\left\|U\right\|}_{\mathbb J^2_{\rm{BMO}}}^2\right)^p.\end{aligned}$$
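To illustrate how these bounds follow from the definition of the BMO norms, here is a minimal sketch in the case $p=2$ for $Z$ (the case of $U$ is identical): $$\mathbb E\left[\left(\int_0^T{\left|Z_s\right|}^2ds\right)^2\right]=2\,\mathbb E\left[\int_0^T{\left|Z_t\right|}^2\,\mathbb E_t\left[\int_t^T{\left|Z_s\right|}^2ds\right]dt\right]\leq 2{\left\|Z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}\,\mathbb E\left[\int_0^T{\left|Z_t\right|}^2dt\right]\leq 2{\left\|Z\right\|}^4_{\mathbb H^2_{\rm{BMO}}},$$ which is consistent with (and even slightly sharper than) the stated bound for $p=2$; iterating the same conditioning argument yields the general case.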
Let us now turn to more precise properties and estimates for BMO martingales. It is a classical result (see [@kaz]) that the Doléans-Dade exponential of a continuous BMO martingale is a uniformly integrable martingale. Things become a bit more complicated in the càdlàg case, and more assumptions are needed. Let us first define the Doléans-Dade exponential of a square integrable martingale $X$, denoted $\mathcal E(X)$. This is as usual the unique solution $Z$ of the SDE $$Z_t=1+\int_0^tZ_{s^-}dX_s, \ \mathbb P-a.s.,$$ and is given by the formula $$\mathcal E(X)_t=e^{X_t-\frac12<X^c>_t}\prod_{0<s\leq t}(1+\Delta X_s)e^{-\Delta X_s},\ \mathbb P-a.s.$$
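As a simple illustration (with objects introduced here only for this purpose), if $N$ is a Poisson process with intensity $\lambda$ and $M_t:=a(N_t-\lambda t)$ for some $a>-1$, then $M^c=0$, $\Delta M_s=a\Delta N_s$, and the above formula reduces to $$\mathcal E(M)_t=(1+a)^{N_t}e^{-a\lambda t},$$ the familiar exponential martingale of the compensated Poisson process.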
One of the first results concerning Doléans-Dade exponential of BMO martingales was proved by Doléans-Dade and Meyer [@dolm]. They showed that
Let $M$ be a càdlàg BMO martingale such that ${\left\|M\right\|}_{\rm{BMO}}<1/8$. Then $\mathcal E(M)$ is a strictly positive uniformly integrable martingale.
The constraint on the norm of the martingale being rather limiting for applications, this result was subsequently improved by Kazamaki [@kaz2], where the constraint is now on the jumps of the martingale
Let $M$ be a càdlàg BMO martingale such that there exists $\delta >0$ with $\Delta M_t\geq -1+\delta$, for all $t\in [0,T]$, $\mathbb P-a.s.$ Then $\mathcal E(M)$ is a strictly positive uniformly integrable martingale.
Furthermore, we emphasize, as recalled in the counter-example of Remark $2.3$ in [@kaz], that a complete generalization to the càdlàg case is not possible. We also refer the reader to Lépingle and Mémin [@lepinmemin1] and [@lepinmemin2] for general sufficient conditions for the uniform integrability of Doléans-Dade exponentials of càdlàg martingales. This also allows us to obtain immediately a Girsanov Theorem in this setting, which will be extremely useful throughout the paper.
\[girsanov\] Let us consider the following càdlàg martingale $M$ $$M_t:=\int_0^t\varphi_sdB_s+\int_0^t\int_E\gamma_s(x)\widetilde\mu(dx,ds),\ \mathbb P-a.s.,$$ where $\gamma$ is bounded and $(\varphi,\gamma)\in\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}$ and where there exists $\delta >0$ with $\gamma_t\geq -1+\delta$, $\mathbb P\times d\nu_t-a.e.$, for all $t\in [0,T]$.
Then, the probability measure $\mathbb Q$ defined by $\frac{d\mathbb Q}{d\mathbb P}=\mathcal E\left(M_.\right)$ is indeed well defined, and, starting from any $\mathbb P$-martingale, we can obtain a $\mathbb Q$-martingale by changing the drift and the jump intensity in the usual way.
We now address the question of the so-called reverse Hölder inequality, which implies in the continuous case that if $M$ is a BMO martingale, there exists some $r>1$ such that $\mathcal E(M)$ is $L^r$-integrable. As for the previous result on uniform integrability, this was extended to the càdlàg case first in [@dolm] and [@kaz3], with the additional assumption that the BMO norm or the jumps of $M$ are sufficiently small. The following generalization is taken from [@izu]
\[prop.expor\] Let $M$ be a càdlàg BMO martingale such that there exists $\delta >0$ with $\Delta M_t\geq -1+\delta$, for all $t\in [0,T]$, $\mathbb P-a.s.$ Then $\mathcal E(M)$ is in $L^r$ for some $r>1$.
Quadratic BSDEs with jumps {#sec.qbsdej}
==========================
The non-linear generator
------------------------
Following the Definition \[def\_bsdej\] of BSDEs with jumps, we now need to specify in more detail the assumptions we make on the generator $g$. The most important one in our setting will be the quadratic growth assumption of Assumption \[assump:hquad\](iii) below. It is the natural generalization to the jump case of the usual quadratic growth assumption in $z$. Before proceeding further, let us define the following function $$j_t(u):=\int_E\left(e^{u(x)}-1-u(x)\right)\nu_t(dx).$$
This function $j(u)$ plays the same role for the variable $u$ as the square function for the variable $z$. In order to understand this, let us consider the following “simplest” quadratic BSDE with jumps $$y_t=\xi+\int_t^T\left(\frac\gamma2{\left|z_s\right|}^2+\frac1\gamma j_s(\gamma u_s)\right)ds-\int_t^Tz_sdB_s-\int_t^T\int_Eu_s(x)\widetilde\mu(dx,ds), \ t\in[0,T],\\ \mathbb P-a.s.$$
Then a simple application of Itô’s formula gives formally $$e^{\gamma y_t}=e^{\gamma \xi}-\gamma\int_t^Te^{\gamma y_s}z_sdB_s-\int_t^T\int_Ee^{\gamma y_{s^-}}\left(e^{\gamma u_s(x)}-1\right)\widetilde\mu(dx,ds),\ t\in[0,T],\ \mathbb P-a.s.$$
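To see, at least formally, where this comes from, recall that Itô’s formula for càdlàg semimartingales yields $$de^{\gamma y_t}=\gamma e^{\gamma y_{t^-}}dy_t+\frac{\gamma^2}{2}e^{\gamma y_t}{\left|z_t\right|}^2dt+\int_Ee^{\gamma y_{t^-}}\left(e^{\gamma u_t(x)}-1-\gamma u_t(x)\right)\mu(dx,dt).$$ Plugging in the dynamics of $y$ and compensating $\mu$ by $\nu_t(dx)dt$, the finite variation terms add up to $$e^{\gamma y_t}\left(-\frac{\gamma^2}{2}{\left|z_t\right|}^2-j_t(\gamma u_t)+\frac{\gamma^2}{2}{\left|z_t\right|}^2+j_t(\gamma u_t)\right)dt=0,$$ so that only the two martingale terms remain.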
Still formally, taking the conditional expectation above gives finally $$y_t=\frac1\gamma\ln\left(\mathbb E_t\left[e^{\gamma \xi}\right]\right),\ t\in[0,T],\ \mathbb P-a.s.,$$ and we recover the so-called entropic risk measure which in the continuous case corresponds to a BSDE with generator $\frac{\gamma}{2}{\left|z\right|}^2$.
Of course, for the above to make sense, the function $j$ must at the very least be well defined. A simple application of Taylor’s inequalities shows that if the function $x\mapsto u(x)$ is bounded $d\nu_t-a.e.$ for every $0\leq t\leq T$, then we have for some constant $C>0$ $$0\leq e^{u(x)}-1-u(x)\leq Cu^2(x), \text{ $d\nu_t-a.e.$ for every $0\leq t\leq T$.}$$
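To obtain an explicit constant (a minimal sketch): by Taylor’s formula with integral remainder, for every $y\in\mathbb R$, $$0\leq e^{y}-1-y=y^2\int_0^1(1-t)e^{ty}dt\leq\frac{y^2}{2}e^{{\left|y\right|}},$$ so that if ${\left|u(x)\right|}\leq K$, $d\nu_t-a.e.$, one may take for instance $C=e^K/2$.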
Hence, if we introduce for $1<p\leq +\infty$ the spaces $$L^p(\nu):=\left\{u,\ \text{$\mathcal E$-measurable, such that $u\in L^p(\nu_t)$ for all $0\leq t\leq T$}\right\},$$ then $j$ is well defined on $L^2(\nu)\cap L^\infty(\nu)$. We now give our quadratic growth assumption on $g$.
\[assump:hquad\]\[Quadratic growth\]
[(i)]{} For fixed $(y,z,u)$, $g$ is $\mathbb{F}$-progressively measurable.
[(ii)]{} For any $p\geq 1$ $$\label{inte}
\underset{\tau\in\mathcal T_0^T}{\esup^\mathbb P}\ \mathbb E_\tau\left[\left(\int_\tau^T{\left|g_t(0,0,0)\right|}dt\right)^p\right]<+\infty, \ \mathbb P-a.s.$$
[(iii)]{} $g$ has the following growth property. There exists $(\beta,\gamma)\in \mathbb R_+\times \mathbb R^*_+$ and a positive predictable process $\alpha$ satisfying the same integrability condition as $g_t(0,0,0)$, such that for all $(\omega,t,y,z,u)$ $$\begin{aligned}
-\alpha_t-\beta{\left|y\right|}-\frac\gamma2{\left|z\right|}^2-\frac{j_t(-\gamma u)}{\gamma} \leq g_t(\omega,y,z,u)-g_t(0,0,0)\leq \alpha_t+\beta{\left|y\right|}+\frac\gamma2{\left|z\right|}^2+\frac{j_t(\gamma u)}{\gamma} .\label{eq_quadratique}\end{aligned}$$
\[remrem\] We emphasize that unlike the usual quadratic growth assumptions for continuous BSDEs, condition is not symmetric. This is mainly due to the fact that, unlike the functions ${\left|.\right|}$ and ${\left|.\right|}^2$, the function $j$ is not even. Moreover, with this non-symmetric condition, it is easily seen that if $Y$ is a solution to equation with a generator satisfying the condition , then $-Y$ is also a solution to a BSDE whose generator satisfies the same condition . More precisely, if $(Y,Z,U)$ solves equation , then $(-Y,-Z,-U)$ solves the BSDEJ with terminal condition $-\xi$ and generator $\widetilde g_t(y,z,u):=-g_t(-y,-z,-u)$, which clearly also satisfies . This will be important for the proof of Lemma \[lemma.bmo\].
We also want to insist on the structure which appears in . Indeed, the constant $\gamma$ in front of the quadratic term in $z$ is the same as the one appearing in the term involving the function $j$. As already seen for the entropic risk measure above, if the constants had been different, say respectively $\gamma_1$ and $\gamma_2$, the exponential transformation would have failed. Moreover, since the function $\gamma\mapsto \gamma^{-1}j_t(\gamma u)$ is not monotone, we cannot simply increase or decrease $\gamma_1$ and $\gamma_2$ to recover the desired estimate . We also emphasize that such a structure already appeared in [@elmatn], [@jmn] and [@ngou], where it was also crucial in order to obtain existence. Notice however that, thanks to our particular context of bounded terminal conditions, we will show that in some cases we are no longer constrained by this structure (see Remark \[rem.assumptions\]).
First a priori estimates for the solution
-----------------------------------------
We first prove a result showing a link between the BMO spaces and quadratic BSDEs with jumps, a property which is very well known in the continuous case since the paper by Hu, Imkeller and Müller [@him], and which also appears in [@mor] and [@ngou]. We emphasize that only Assumption \[assump:hquad\] is necessary to obtain it. Before proceeding, we define for every $x\in\mathbb R$ and every $\eta\neq0$, $h_\eta(x):=(e^{\eta x}-1-\eta x)/\eta$. The function $h_\eta$ already appears in our growth Assumption \[assump:hquad\](iii), and the following trivial property that it satisfies is going to be crucial for us $$h_{2\eta}(x)=\frac{\left(e^{\eta x}-1\right)^2}{2\eta}+ h_\eta(x).
\label{eq:oulala}$$
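For completeness, this identity can be checked directly: $$\frac{\left(e^{\eta x}-1\right)^2}{2\eta}+h_\eta(x)=\frac{e^{2\eta x}-2e^{\eta x}+1}{2\eta}+\frac{2e^{\eta x}-2-2\eta x}{2\eta}=\frac{e^{2\eta x}-1-2\eta x}{2\eta}=h_{2\eta}(x).$$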
We also give the two following inequalities which are of the utmost importance in our jump setting. We emphasize that the first one is trivial, while the second one can be proved using simple but tedious algebra. $$\begin{aligned}
\label{1}
2\leq e^x+e^{-x},\text{ $\forall x\in\mathbb R$},\ \ x^2\leq a\left(e^x-1\right)^2+ \frac{\left(1-e^{-x}\right)^2}{a},\text{ $\forall(a,x)\in\mathbb R^*_+\times\mathbb R$.}\end{aligned}$$
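Since the second one may not be completely standard, let us sketch a possible proof: by the arithmetic-geometric mean inequality, $$a\left(e^x-1\right)^2+\frac{\left(1-e^{-x}\right)^2}{a}\geq2{\left|e^x-1\right|}{\left|1-e^{-x}\right|}=2\left(e^x+e^{-x}-2\right)\geq x^2,$$ the last step following from the expansion $e^x+e^{-x}-2=2\sum_{k\geq1}\frac{x^{2k}}{(2k)!}\geq x^2$.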
We then have the following Lemma (which is closely related to Proposition $8$ in [@ngou]).
\[lemma.bmo\] Let Assumption \[assump:hquad\] hold. Assume that $(Y,Z,U)$ is a solution to the BSDEJ such that $(Z,U)\in \mathcal Z\times \mathcal U$, the jumps of $Y$ are bounded and $$\underset{\tau\in\mathcal T_0^T}{\esup^{\P}}\ \mathbb E_\tau\left[\exp\left(2\gamma\underset{\tau\leq t\leq T}{\sup}\pm Y_t\right)\vee\exp\left(4\gamma\underset{\tau\leq t\leq T}{\sup}\pm Y_t\right)\right]<+\infty, \; \mathbb P-a.s.
\label{eq:3}$$
Then $Z\in\mathbb H^2_{\rm{BMO}}$ and $U\in \mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$.
First of all, since the size of the jumps of $Y$ is bounded, there exists a version of $U$, that is to say that there exists a predictable function $\widetilde U$ such that for all $t\in[0,T]$ $$\int_E{\left|\widetilde U_t(x)-U_t(x)\right|}^2\nu_t(dx)=0,\ \mathbb P-a.s.,$$ and such that $|\widetilde{U}_t(x)|\leq C,\text{ for all $t$, $\mathbb P-a.s$}.$ For the sake of simplicity, we will always consider this version and we still denote it $U$. For the proof of this result, we refer to Morlais [@mor].
Let us consider the following processes $$\int_0^Te^{2\gamma Y_t}Z_tdB_t\text{ and }\int_0^Te^{2\gamma Y_{t^-}}\left(e^{2\gamma U_t(x)}-1\right)\widetilde\mu(dx,dt).$$
We will first show that they are local martingales. Indeed, we have $$\begin{aligned}
\int_0^Te^{4\gamma Y_t}Z_t^2dt\leq \exp\left(4\gamma \underset{0\leq t\leq T}{\sup} Y_t\right)\int_0^TZ_t^2dt<+\infty, \ \mathbb P-a.s., \end{aligned}$$ since $Z\in\mathcal Z$ and holds. Similarly, we have $$\begin{aligned}
\int_0^T\int_Ee^{4\gamma Y_t}U_t^2(x)\nu_t(dx)dt\leq \exp\left(4\gamma \underset{0\leq t\leq T}{\sup} Y_t\right)\int_0^T\int_EU_t^2(x)\nu_t(dx)dt<+\infty, \ \mathbb P-a.s., \end{aligned}$$ since $U\in\mathcal U$ and holds.
Let now $(\tau_n)_{n\geq 1}$ be a localizing sequence for the $\mathbb P$-local martingales above. By Itô’s formula under $\mathbb P$ applied to $e^{2\gamma Y_t}$, we have for every $\tau \in \mathcal T^T_0$ $$\begin{aligned}
&\frac{4\gamma^2}{2}\int_\tau^{\tau_n}e^{2\gamma Y_t}{\left|Z_t\right|}^2dt+2\gamma\int_\tau^{\tau_n}\int_Ee^{2\gamma Y_t}h_{2\gamma}\left(U_t(x)\right)\nu_t(dx)dt\\
&=e^{2\gamma Y_{\tau_n}}-e^{2\gamma Y_{\tau}}+2\gamma\int_\tau^{\tau_n}e^{2\gamma Y_t}g_t(Y_t,Z_t,U_t)dt-2\gamma\int_\tau^{\tau_n}e^{2\gamma Y_{t}}Z_tdB_t\\
&\hspace{0.9em}-2\gamma\int_\tau^{\tau_n}\int_Ee^{2\gamma Y_{t^{-}}}\left(e^{2\gamma U_t(x)}-1\right)\widetilde\mu(dx,dt)\\
&\leq e^{2\gamma Y_{\tau_n}}-e^{2\gamma Y_{\tau}}+2\gamma\int_\tau^{\tau_n}e^{2\gamma Y_t}\left(\alpha_t+{\left|g_t(0,0,0)\right|}+\beta{\left|Y_t\right|}\right)dt\\
&\hspace{0.9em}+2\gamma\int_\tau^{\tau_n}e^{2\gamma Y_t}\left(\frac{\gamma}{2}{\left|Z_t\right|}^2+\int_Eh_\gamma\left(U_t(x)\right)\nu_t(dx)\right)dt-2\gamma\int_\tau^{\tau_n}e^{2\gamma Y_{t}}Z_tdB_t\\
&\hspace{0.9em}-2\gamma\int_\tau^{\tau_n}\int_Ee^{2\gamma Y_{t^{-}}}\left(e^{2\gamma U_t(x)}-1\right)\widetilde\mu(dx,dt),\ \mathbb P-a.s.\end{aligned}$$
Now the situation is going to be different from the continuous case, and the property is going to be important. Indeed, we can take conditional expectation and thus obtain $$\begin{aligned}
&\mathbb E_\tau\left[\gamma^2\int_\tau^{\tau_n}e^{2\gamma Y_t}{\left|Z_t\right|}^2dt+\int_\tau^{\tau_n}\int_Ee^{2\gamma Y_t}\left(e^{\gamma U_t(x)}-1\right)^2\nu_t(dx)dt\right]\\
&\leq C\left(1+\mathbb E_\tau\left[\left(\int_\tau^{\tau_n}\left(\alpha_t+{\left|g_t(0,0,0)\right|}\right)dt\right)^2+\exp\left(2\gamma \underset{\tau\leq t\leq T}{\sup}Y_{t}\right) + \exp\left(4\gamma \underset{\tau\leq t\leq T}{\sup}Y_{t}\right)\right]\right)\\
&\leq C\left(1+\mathbb E_\tau\left[\exp\left(2\gamma \underset{\tau\leq t\leq T}{\sup}Y_{t}\right)\vee \exp\left(4\gamma \underset{\tau\leq t\leq T}{\sup}Y_{t}\right)\right]\right),\end{aligned}$$ where we used the inequality $2ab\leq a^2+b^2$, the fact that for all $x\in\mathbb R$, ${\left|x\right|}e^x\leq C(1+e^{2x})$ for some constant $C>0$ (which as usual can change value from line to line) and the fact that Assumption \[assump:hquad\](ii) and (iii) hold.
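To be more precise about the role of the identity satisfied by $h_\eta$: after moving the terms $2\gamma\int_\tau^{\tau_n}e^{2\gamma Y_t}\frac\gamma2{\left|Z_t\right|}^2dt$ and $2\gamma\int_\tau^{\tau_n}\int_Ee^{2\gamma Y_t}h_\gamma\left(U_t(x)\right)\nu_t(dx)dt$ to the left-hand side, the quadratic terms in $Z$ combine as $2\gamma^2-\gamma^2=\gamma^2$, while for the jump terms we use $$2\gamma\left(h_{2\gamma}(x)-h_\gamma(x)\right)=\left(e^{\gamma x}-1\right)^2,$$ which is exactly the identity recalled before the statement of the Lemma (applied with $\eta=\gamma$). This is how the term $\left(e^{\gamma U_t(x)}-1\right)^2$ appears on the left-hand side above.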
Using Fatou’s lemma and the monotone convergence theorem, we obtain $$\begin{aligned}
\label{eq:bmo1}
\nonumber&\mathbb E_\tau\left[\gamma^2\int_\tau^{T}e^{2\gamma Y_t}{\left|Z_t\right|}^2dt+\int_\tau^{T}\int_Ee^{2\gamma Y_t}\left(e^{\gamma U_t(x)}-1\right)^2\nu_t(dx)dt\right]\\
&\leq C\left(1+\underset{\tau\in\mathcal T_0^T}{\esup^\mathbb P}\ \mathbb E_\tau\left[ \exp\left(2\gamma \underset{\tau\leq t\leq T}{\sup}Y_{t}\right)\vee \exp\left(4\gamma \underset{\tau\leq t\leq T}{\sup}Y_{t}\right)\right]\right).\end{aligned}$$
Now, we apply the above estimate for the solution $(-Y,-Z,-U)$ of the BSDEJ with terminal condition $-\xi$ and generator $\widetilde g_t(y,z,u):=-g_t(-y,-z,-u)$, which still satisfies Assumption \[assump:hquad\] (see Remark \[remrem\]) $$\begin{aligned}
\label{eq:bmo2}
\nonumber&\mathbb E_\tau\left[\gamma^2\int_\tau^{T}e^{-2\gamma Y_t}{\left|Z_t\right|}^2dt+\int_\tau^{T}\int_Ee^{-2\gamma Y_t}\left(e^{-\gamma U_t(x)}-1\right)^2\nu_t(dx)dt\right]\\
&\leq C\left(1+\underset{\tau\in\mathcal T_0^T}{\esup^\mathbb P}\ \mathbb E_\tau\left[ \exp\left(2\gamma \underset{\tau\leq t\leq T}{\sup}\left(-Y_{t}\right)\right)\vee \exp\left(4\gamma \underset{\tau\leq t\leq T}{\sup}\left(-Y_{t}\right)\right)\right]\right).\end{aligned}$$
Let us now sum the inequalities and . We obtain $$\begin{aligned}
\nonumber&\mathbb E_\tau\left[\int_\tau^{T}\left(e^{2\gamma Y_t}+e^{-2\gamma Y_t}\right){\left|Z_t\right|}^2+\int_Ee^{2\gamma Y_t}\left(e^{\gamma U_t(x)}-1\right)^2+e^{-2\gamma Y_t}\left(e^{-\gamma U_t(x)}-1\right)^2\nu_t(dx)dt\right]\\
&\leq C\left(1+\underset{\tau\in\mathcal T_0^T}{\esup^\mathbb P}\ \mathbb E_\tau\left[ \underset{\tau\leq t\leq T}{\sup}\left\{e^{2\gamma Y_{t}}\vee e^{4\gamma Y_{t}}+e^{2\gamma \left(-Y_{t}\right)}\vee e^{4\gamma \left(-Y_{t}\right)}\right\}\right]\right).\end{aligned}$$
Finally, combining this with the inequalities in , we obtain the desired result.
In the above Lemma, if we only assume that $$\mathbb E\left[\exp\left(2\gamma\underset{0\leq t\leq T}{\sup}\pm Y_t\right)\vee \exp\left(4\gamma\underset{0\leq t\leq T}{\sup}\pm Y_t\right)\right]<+\infty,$$ then the exact same proof would show that $(Z,U)\in\mathbb H^2\times\mathbb J^2$. Moreover, using the Neveu-Garsia Lemma in the same spirit as [@elkarbar], we could also show that $(Z,U)\in\mathbb H^p\times\mathbb J^p$ for all $p>1$.
We emphasize that the results of this Lemma highlight the fact that we do not necessarily need to consider solutions with a bounded $Y$ in the quadratic case to obtain [*a priori*]{} estimates. It is enough to assume the existence of some exponential moments. This is exactly the framework developed in [@bh] and [@elkarbar] in the continuous case and in [@elmatn] and [@ngou] in the jump case. It implies furthermore that it is not necessary to let the BMO spaces play a particular role in the general theory. Nonetheless, our proof of existence will rely heavily on BMO properties of the solution, and the simplest condition to obtain the estimate is to assume that $Y$ is indeed bounded. The aim of the following Proposition is to show that we can control the $\mathcal S^\infty$ norm of $Y$ by the $L^\infty$ norm of $\xi$. Since the proof is very similar to the proof of Lemma $1$ in [@bh], we will omit it.
\[prop.estim\] Let $\xi\in\mathbb L^\infty$. Let Assumption \[assump:hquad\] hold and assume that $${\left|g(0,0,0)\right|}+\alpha\leq M,$$ for some constant $M>0$. Let $(Y,Z,U)\in \mathcal S^{\infty}\times\mathbb H^{2}\times \mathbb J^2$ be a solution to the BSDEJ . Then we have $${\left|Y_t\right|}\leq \gamma M\frac{e^{\beta(T-t)}-1}{\beta}+\gamma e^{\beta(T-t)}{\left\|\xi\right\|}_{\mathbb L^\infty},\ \mathbb P-a.s.$$
Existence and uniqueness for a small terminal condition {#sec.ex1}
=======================================================
The aim of this Section is to obtain an existence and uniqueness result for BSDEJs with quadratic growth when the terminal condition is small enough. However, we will need more assumptions for our proof to work. First, we assume from now on that we have the following martingale representation property, since we will rely on the existence results of [@barles] and [@li], which require it.
\[martingale\_representation\] Any local martingale $M$ with respect to the filtration $(\mathcal F_t)_{0\leq t\leq T}$ has the predictable representation property, that is to say that there exist a unique predictable process $H$ and a unique predictable function $U$ such that $(H,U)\in\mathcal Z\times\mathcal U$ and $$M_t=M_0+\int_0^tH_sdB_s+\int_0^t\int_EU_s(x)\widetilde\mu(dx,ds), \; \mathbb P-a.s.$$
This martingale representation property holds for instance when the compensator $\nu$ does not depend on $\omega$, i.e when $\nu$ is the compensator of the counting measure of an additive process in the sense of Sato [@sato]. It also holds when $\nu$ has the particular form described in [@kpz1], in which case $\nu$ depends on $\omega$.
Of course, we also need to assume more properties for our generator $g$. Before stating them, let us describe the underlying intuitions. We want to obtain existence through a fixed point argument, therefore we have to assume some kind of control in $(y,z,u)$ of our generator. In the classical setting of [@pardpeng] and [@elkaroui], the required contraction is obtained by using the Lipschitz property of the generator $g$ and by considering well-chosen weighted norms. More precisely, and abusing notations, they consider for some constant $\upsilon$ the spaces $\mathbb H^2_\upsilon$ consisting of progressively measurable processes $X$ such that $${\left\|X\right\|}^2_{\mathbb H^2_\upsilon}:=\mathbb E\left[\int_0^Te^{\upsilon s}{\left|X_s\right|}^2ds\right]<+\infty.$$
Then, by choosing $\upsilon$ large enough, they can obtain a contraction in these spaces. In our context, as in that of [@tev], the Lipschitz assumption on the generator, which would imply linear growth, is replaced by some kind of local Lipschitz assumption with quadratic growth. In return, it becomes generally impossible to recover a contraction. Indeed, as we will see later on, the map for which we want to find a fixed point is no longer Lipschitz but only locally Lipschitz. In this regard, it is useless for us to use weighted norms, since they can only diminish the constants appearing in our estimates. The idea is then to localize the procedure in a ball, so that the map becomes Lipschitz, and then to choose the radius of this ball sufficiently small so that we actually recover a contraction. The crucial contribution of Tevzadze [@tev] to this problem is to show that such controls can be obtained by taking a terminal condition which is small enough.
We now state our assumptions and refer the reader to Remark \[rem.assumptions\] for more discussions.
\[lipschitz\_assumption\]\[Lipschitz assumption\]
Let Assumption \[assump:hquad\](i),(ii) hold and assume furthermore that
[(i)]{} $g$ is uniformly Lipschitz in $y$. $${\left|g_t(\omega,y,z,u)-g_t(\omega,y',z,u)\right|}\leq C{\left|y-y'\right|}\text{ for all }(\omega,t,y,y',z,u).$$
[(ii)]{} $\exists$ $\mu>0$ and $\phi\in\mathbb H^2_{\rm{BMO}}$ such that for all $(t,y,z,z',u)$ $${\left| g_t(\omega,y,z,u)- g_t(\omega,y,z',u)-\phi_t.(z-z')\right|}\leq \mu {\left|z-z'\right|}\left({\left|z\right|}+{\left|z'\right|}\right).$$
[(iii)]{} $\exists$ $\mu>0$ and $\psi\in\mathbb J^2_{\rm{BMO}}$ such that for all $(t,x)$ $$C_1(1\wedge{\left|x\right|})\leq\psi_t(x)\leq C_2(1\wedge{\left|x\right|}),$$ where $C_2>0$, $C_1\geq-1+\delta$ where $\delta>0$. Moreover, for all $(\omega,t,y,z,u,u')$ $${\left| g_t(\omega,y,z,u)- g_t(\omega,y,z,u')-\langle\psi_t,u-u'\rangle_t\right|}\leq \mu {\left\|u-u'\right\|}_{L^2(\nu_t)}\left({\left\|u\right\|}_{L^2(\nu_t)}+{\left\|u'\right\|}_{L^2(\nu_t)}\right),$$ where $\langle u_1,u_2\rangle_t:=\int_Eu_1(x)u_2(x)\nu_t(dx)$ is the scalar product in $L^2(\nu_t)$.
\[rem.assumptions\] Let us comment on the above assumptions. The first one concerning Lipschitz continuity in the variable $y$ is classical in the BSDE theory. The two others may seem a bit complicated, but as already mentioned above, they are almost equivalent to saying that the function $g$ is locally Lipschitz in $z$ and $u$. In the case of the variable $z$ for instance, those two properties would be equivalent if the process $\phi$ were bounded. Here we allow something a bit more general by letting $\phi$ be unbounded but in $\mathbb H^2_{\rm{BMO}}$. Once again, since these assumptions allow us to apply the Girsanov property of Proposition \[girsanov\], we do not need to bound the processes and BMO type conditions are sufficient. Moreover, Assumption \[lipschitz\_assumption\] also implies a weaker version of Assumption \[assump:hquad\]. Indeed, it implies clearly that $${\left|g_t(y,z,u)-g_t(0,0,0)-\phi_t.z-\langle \psi_t,u\rangle_t\right|}\leq C{\left|y\right|}+\mu\left({\left|z\right|}^2+{\left\|u\right\|}^2_{L^2(\nu_t)}\right).$$
Then, for any $u\in L^2(\nu)\cap L^\infty(\nu)$ and for any $\gamma>0$, we have using the mean value Theorem $$\frac\gamma2 e^{-\gamma{\left\|u\right\|}_{L^\infty(\nu)}}{\left\|u\right\|}^2_{L^2(\nu_t)}\leq \frac1\gamma j_t(\pm\gamma u)\leq \frac\gamma2 e^{\gamma{\left\|u\right\|}_{L^\infty(\nu)}}{\left\|u\right\|}^2_{L^2(\nu_t)}.$$
Denote $\delta g_t:=g_t(y,z,u)-g_t(0,0,0)$. We deduce using the Cauchy-Schwarz inequality and the trivial inequality $2ab\leq a^2+b^2$ $$\begin{aligned}
&\delta g_t\leq\frac{{\left|\phi_t\right|}^2}{2}+\frac{{\left\|\psi_t\right\|}^2_{L^2(\nu_t)}}{2}+C{\left|y\right|}+\left(\mu+\frac12\right)\left({\left|z\right|}^2+\frac{2e^{\gamma{\left\|u\right\|}_{L^\infty(\nu)}}}{\gamma^2}j_t(\gamma u)\right)\\
&\delta g_t\geq-\frac{{\left|\phi_t\right|}^2}{2}-\frac{{\left\|\psi_t\right\|}^2_{L^2(\nu_t)}}{2}-C{\left|y\right|}-\left(\mu+\frac12\right)\left({\left|z\right|}^2+\frac{2e^{\gamma{\left\|u\right\|}_{L^\infty(\nu)}}}{\gamma^2}j_t(-\gamma u)\right).\end{aligned}$$
It is easy to check, using the energy inequalities and the definition of the essential supremum, that the term ${\left|\phi_t\right|}^2+{\left\|\psi_t\right\|}^2_{L^2(\nu_t)}$ above satisfies the integrability condition . Hence, we have obtained a growth property which is similar to , the only difference being that the constants appearing in the quadratic term in $z$ and the term involving the function $j$ are not the same. We thus are no longer constrained by the structure already mentioned in Remark \[remrem\].
We now show that if we can solve the BSDEJ for a generator $g$ satisfying Assumption \[lipschitz\_assumption\] with $\phi=0$ and $\psi=0$, we can immediately obtain the existence for general $\phi$ and $\psi$. This will simplify our subsequent proof of existence. Notice that the result relies essentially on the Girsanov Theorem of Proposition \[girsanov\].
\[lemma.phipsi\] Define $\overline{g}_t(\omega,y,z,u):=g_t(\omega,y,z,u)-\phi_t(\omega).z-\langle\psi_t(\omega),u\rangle_t.$ Then $(Y,Z,U)$ is a solution of the BSDEJ with generator $g$ and terminal condition $\xi$ under $\mathbb P$ if and only if $( Y,Z,U)$ is a solution of the BSDEJ with generator $\overline g$ and terminal condition $\xi$ under $\mathbb Q$ where $$\frac{d\mathbb Q}{d\mathbb P}=\mathcal E\left(\int_0^T\phi_sdB_s+\int_0^T\int_E\psi_s(x)\widetilde\mu(dx,ds)\right).$$
We have clearly $$\begin{aligned}
Y_t=\xi+\int_t^T\overline g_s(Y_s,Z_s,U_s)ds-\int_t^TZ_s(dB_s-\phi_sds)-\int_t^T\int_EU_s(x)(\widetilde\mu(dx,ds)-\psi_s(x)\nu_s(dx)ds).\end{aligned}$$
Now, by our BMO assumptions on $\phi$ and $\psi$ and the fact that we assumed that $\psi\geq-1+\delta$, we can apply Proposition \[girsanov\] and $\mathbb Q$ is well defined. Then by Girsanov Theorem, we know that $dB_s-\phi_sds$ and $\widetilde\mu(dx,ds)-\psi_s(x)\nu_s(dx)ds$ are martingales under $\mathbb Q$. Hence the desired result.
It is clear that if $g$ satisfies Assumption \[lipschitz\_assumption\], then $\overline g$ defined above satisfies Assumption \[lipschitz\_assumption\] with $\phi=\psi=0$.
Following Lemma \[lemma.phipsi\] we assume for the time being that $g(0,0,0)=\phi=\psi=0$. Our first result is the following
\[th.small\] Assume that $${\left\|\xi\right\|}_\infty\leq\frac{1}{2\sqrt{15}\sqrt{2670}\mu e^{\frac32CT}},$$ where $C$ is the Lipschitz constant of $g$ in $y$, and $\mu$ is the constant appearing in Assumption \[lipschitz\_assumption\]. Then under Assumption \[lipschitz\_assumption\] with $\phi=0$, $\psi=0$ and $g(0,0,0)=0$, there exists a unique solution $(Y,Z,U)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$ of the BSDEJ .
Notice that in the above Theorem, we do not need Assumption \[assump:hquad\](iii) to hold. This is linked to the fact that, as discussed in Remark \[rem.assumptions\], Assumption \[lipschitz\_assumption\] implies a weak version of Assumption \[assump:hquad\](iii), which is sufficient for our purpose here.
We first recall that we have with Assumption \[lipschitz\_assumption\] when $g(0,0,0)=\phi=\psi=0$ $$\label{es}
{\left|g_t(y,z,u)\right|}\leq C{\left|y\right|}+\mu{\left|z\right|}^2+\mu{\left\|u\right\|}^2_{L^2(\nu_t)}.$$
Consider now the map $\Phi:(y,z,u)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)\rightarrow (Y,Z,U)$ defined by $$Y_t=\xi+\int_t^Tg_s(Y_s,z_s,u_s)ds-\int_t^TZ_sdB_s-\int_t^T\int_EU_s(x)\widetilde\mu(dx,ds).
\label{eq:5}$$
The above is nothing more than a BSDE with jumps whose generator depends only on $Y$ and is Lipschitz. Besides, since $(z,u)\in\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$, using , and the energy inequalities we clearly have $$\mathbb E\left[\left(\int_0^T{\left|g_s(0,z_s,u_s)\right|}ds\right)^2\right]<+\infty.$$
Hence, the existence of $(Y,Z,U)\in\mathcal S^2\times\mathbb H^2\times\mathbb J^2$ is ensured by the results of Barles, Buckdahn and Pardoux [@barles] or Li and Tang [@li] for Lipschitz BSDEJs with jumps. Of course, we could have let the generator in depend on $(y_s,z_s,u_s)$ instead. The existence of $(Y,Z,U)$ would then have been a consequence of the predictable martingale representation Theorem. However, the form that we have chosen will simplify some of the following estimates.
Step $1$: We first show that $(Y,Z,U)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$.
Recall that by the Lipschitz hypothesis in $y$, there exists a bounded process $\lambda$ such that $$g_s(Y_s,z_s,u_s)=\lambda_sY_s+g_s(0,z_s,u_s).$$
Let us now apply Itô’s formula to $e^{\int_t^s\lambda_udu}Y_s$. We obtain easily from Assumption \[lipschitz\_assumption\] $$\begin{aligned}
Y_t&=\mathbb E_t\left[e^{\int_t^T\lambda_sds}\xi+\int_t^Te^{\int_t^s\lambda_udu}(\lambda_s Y_s+g_s(0,z_s,u_s))ds-\int_t^T\lambda_se^{\int_t^s\lambda_udu}Y_sds\right]\\
&\leq \mathbb E_t\left[e^{\int_t^T\lambda_sds}\xi+\mu\int_t^Te^{\int_t^s\lambda_udu}\left({\left|z_s\right|}^2+\int_Eu_s^2(x)\nu_s(dx)\right)ds\right]\\
&\leq {\left\|\xi\right\|}_\infty+C\left({\left\|z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|u\right\|}^2_{\mathbb J^2_{\rm{BMO}}}\right).\end{aligned}$$
Therefore $Y$ is bounded and consequently, since its jumps are also bounded, we know that there is a version of $U$ such that $${\left\|U\right\|}_{L^\infty(\nu)}\leq 2{\left\|Y\right\|}_{\mathcal S^\infty}.$$
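Let us briefly recall where this bound comes from (see also the beginning of the proof of Lemma \[lemma.bmo\]): the Brownian and Lebesgue parts of the above BSDEJ being continuous, the jumps of $Y$ come only from the integral with respect to $\widetilde\mu$, so that at a jump time $(t,x)$ of $\mu$ we have $\Delta Y_t=U_t(x)$. Hence ${\left|U_t(x)\right|}\leq{\left|\Delta Y_t\right|}\leq 2{\left\|Y\right\|}_{\mathcal S^\infty}$ for $\mu$-almost every $(t,x)$, and the version of $U$ can be chosen so that the same bound holds $\nu_t(dx)dt$-a.e.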
Let us now prove that $(Z,U)\in \mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}$. Applying Itô’s formula to $e^{\eta t}{\left|Y_t\right|}^2$ for some $\eta>0$, we obtain for any stopping time $\tau\in\mathcal T_0^T$ $$\begin{aligned}
&e^{\eta \tau}{\left|Y_\tau\right|}^2+\mathbb E_\tau\left[\int_\tau^Te^{\eta s}{\left|Z_s\right|}^2ds+\int_\tau^T\int_Ee^{\eta s}U_s^2(x)\nu_s(dx)ds\right]\\
&=\mathbb E_\tau\left[e^{\eta T}\xi^2+2\int_\tau^Te^{\eta s}Y_sg_s(Y_s,z_s,u_s)ds-\eta\int_\tau^Te^{\eta s}{\left|Y_s\right|}^2ds\right]\\
&\leq \mathbb E_\tau\left[e^{\eta T}\xi^2+(2C-\eta)\int_\tau^Te^{\eta s}{\left|Y_s\right|}^2ds+2{\left\|Y\right\|}_{\mathcal S^\infty}\int_\tau^Te^{\eta s}{\left|g_s(0,z_s,u_s)\right|}ds\right].\end{aligned}$$
Choosing $\eta= 2C$, and using the elementary inequality $2ab\leq \frac{a^2}{\eps}+\eps b^2$, we obtain $$\begin{aligned}
&{\left|Y_\tau\right|}^2+\mathbb E_\tau\left[\int_\tau^T{\left|Z_s\right|}^2ds+\int_\tau^T\int_EU_s^2(x)\nu_s(dx)ds\right]\\
&\leq \mathbb E_\tau\left[e^{\eta T}\xi^2+\eps{\left\|Y\right\|}_{\mathcal S^\infty}^2+\frac{e^{2\eta T}}{\eps}\left(\int_\tau^T{\left|g_s(0,z_s,u_s)\right|}ds\right)^2\right].\end{aligned}$$
Hence, $$\begin{aligned}
&(1-\eps){\left\|Y\right\|}_{\mathcal S^\infty}^2+{\left\|Z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|U\right\|}^2_{\mathbb J^2_{\rm{BMO}}}\leq e^{\eta T}{\left\|\xi\right\|}_\infty^2+64\mu^2\frac{e^{2\eta T}}{\eps}\left({\left\|z\right\|}_{\mathbb H^2_{\rm{BMO}}}^4+{\left\|u\right\|}_{\mathbb J^2_{\rm{BMO}}}^4\right).\end{aligned}$$
And finally, choosing $\eps=1/2$ $${\left\|Y\right\|}_{\mathcal S^\infty}^2+{\left\|Z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|U\right\|}^2_{\mathbb J^2_{\rm{BMO}}}\leq 2e^{\eta T}{\left\|\xi\right\|}_\infty^2+256\mu^2e^{2\eta T}\left({\left\|z\right\|}_{\mathbb H^2_{\rm{BMO}}}^4+{\left\|u\right\|}_{\mathbb J^2_{\rm{BMO}}}^4\right).$$
Our problem now is that the norms for $Z$ and $U$ in the left-hand side above are to the power $2$, while they are to the power $4$ on the right-hand side. Therefore, it will clearly be impossible for us to prevent an explosion if we do not first start by restricting ourselves in some ball with a well chosen radius. This is exactly the mathematical manifestation of the phenomenon discussed at the beginning of this section. Define therefore $R=\frac{1}{2\sqrt{2670}\mu e^{\eta T}}$, and assume that ${\left\|\xi\right\|}_\infty\leq \frac{R}{\sqrt{15}e^{\frac12\eta T}}$ and that $${\left\|y\right\|}^2_{\mathcal S^\infty}+{\left\|z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|u\right\|}^2_{\mathbb J^2_{\rm{BMO}}}+{\left\|u\right\|}^2_{L^\infty(\nu)}\leq R^2.$$
Denote $\Lambda:={\left\|Y\right\|}^2_{\mathcal S^\infty}+{\left\|Z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|U\right\|}^2_{\mathbb J^2_{\rm{BMO}}}+{\left\|U\right\|}^2_{L^\infty(\nu)}$. We have, since ${\left\|U\right\|}^2_{L^\infty(\nu)}\leq 4{\left\|Y\right\|}^2_{\mathcal S^\infty}$ $$\begin{aligned}
\Lambda\leq 5{\left\|Y\right\|}^2_{\mathcal S^\infty}+{\left\|Z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|U\right\|}^2_{\mathbb J^2_{\rm{BMO}}}&\leq 10e^{\eta T}{\left\|\xi\right\|}_\infty^2+1280\mu^2e^{2\eta T}\left({\left\|z\right\|}_{\mathbb H^2_{\rm{BMO}}}^4+{\left\|u\right\|}_{\mathbb J^2_{\rm{BMO}}}^4\right)\\
&\leq \frac{2R^2}{3}+3560\mu^2e^{2\eta T}R^4=\frac{2R^2}{3}+ \frac{R^2}{3}=R^2.\end{aligned}$$
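For the reader’s convenience, let us detail the arithmetic behind the last line: by the definition of $R$ we have $R^2=\frac{1}{4\times2670\,\mu^2e^{2\eta T}}$, so that $10e^{\eta T}{\left\|\xi\right\|}_\infty^2\leq\frac{10}{15}R^2=\frac{2R^2}{3}$, while ${\left\|z\right\|}_{\mathbb H^2_{\rm{BMO}}}^4+{\left\|u\right\|}_{\mathbb J^2_{\rm{BMO}}}^4\leq R^4$ and $1280\mu^2e^{2\eta T}R^4\leq3560\mu^2e^{2\eta T}R^4=\frac{3560}{10680}R^2=\frac{R^2}{3}$.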
Hence if $\mathcal B_R$ is the ball of radius $R$ in $\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$, we have shown that $\Phi(\mathcal B_R)\subset \mathcal B_R$.
Step $2$: We now show that $\Phi$ is a contraction on this ball of radius $R$.
For $i=1,2$ and $(y^i,z^i,u^i)\in\mathcal B_R$, we denote $(Y^i,Z^i,U^i):=\Phi(y^i,z^i,u^i)$ and $$\begin{aligned}
&\delta y:=y^1-y^2, \ \delta z:=z^1-z^2,\ \delta u:=u^1-u^2,\ \delta Y:=Y^1-Y^2\\
&\delta Z:=Z^1-Z^2,\ \delta U:=U^1-U^2,\ \delta g:=g(Y^2,z^1,u^1)-g(Y^2,z^2,u^2).\end{aligned}$$
Arguing as above, we obtain easily $${\left\|\delta Y\right\|}_{\mathcal S^\infty}^2+{\left\|\delta Z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|\delta U\right\|}^2_{\mathbb J^2_{\rm{BMO}}}\leq 4e^{2\eta T}\underset{\tau\in\mathcal T_0^T}{\sup}\left(\mathbb E_\tau\left[\int_\tau^T{\left|\delta g_s\right|}ds\right]\right)^2.$$
We next estimate that $$\begin{aligned}
\left(\mathbb E_\tau\left[\int_\tau^T{\left|\delta g_s\right|}ds\right]\right)^2&\leq 2\mu^2\left(\mathbb E_\tau\left[\int_\tau^T{\left|\delta z_s\right|}\left({\left|z^1_s\right|}+{\left|z^2_s\right|}\right)ds\right]\right)^2\\
&\hspace{0.9em}+2\mu^2\left(\mathbb E_\tau\left[\int_\tau^T{\left\|\delta u_s\right\|}_{L^2(\nu_s)}\left({\left\|u_s^1\right\|}_{L^2(\nu_s)}+{\left\|u_s^2\right\|}_{L^2(\nu_s)}\right)ds\right]\right)^2\\
&\leq 2\mu^2\left(\mathbb E_\tau\left[\int_\tau^T{\left|\delta z_s\right|}^2ds\right]\mathbb E_\tau\left[\int_\tau^T({\left|z^1_s\right|}+{\left|z^2_s\right|})^2ds\right]\right.\\
&\hspace{0.9em}\left.+\mathbb E_\tau\left[\int_\tau^T{\left\|\delta u_s\right\|}^2_{L^2(\nu_s)}ds\right]\mathbb E_\tau\left[\int_\tau^T\left({\left\|u_s^1\right\|}_{L^2(\nu_s)}+{\left\|u_s^2\right\|}_{L^2(\nu_s)}\right)^2ds\right]\right)\\
&\leq 4R^2\mu^2\left(\mathbb E_\tau\left[\int_\tau^T{\left|\delta z_s\right|}^2ds\right]+\mathbb E_\tau\left[\int_\tau^T\int_E\delta u_s^2(x)\nu(dx)ds\right]\right)\\
&\leq 32R^2\mu^2\left({\left\|\delta z\right\|}_{\mathbb H^2_{\rm{BMO}}}^2+{\left\|\delta u\right\|}_{\mathbb J^2_{\rm{BMO}}}^2\right)\end{aligned}$$
From these estimates, we obtain, using again the fact that ${\left\|\delta U\right\|}^2_{L^\infty(\nu)}\leq 4{\left\|\delta Y\right\|}_{\mathcal S^\infty}^2$ $$\begin{aligned}
{\left\|\delta Y\right\|}_{\mathcal S^\infty}^2+{\left\|\delta Z\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|\delta U\right\|}^2_{\mathbb J^2_{\rm{BMO}}}+{\left\|\delta U\right\|}^2_{L^\infty(\nu)}&\leq 640R^2\mu^2e^{2\eta T}\left({\left\|\delta z\right\|}_{\mathbb H^2_{\rm{BMO}}}^2+{\left\|\delta u\right\|}_{\mathbb J^2_{\rm{BMO}}}^2\right)\\
&=\frac{16}{267}\left({\left\|\delta z\right\|}_{\mathbb H^2_{\rm{BMO}}}^2+{\left\|\delta u\right\|}_{\mathbb J^2_{\rm{BMO}}}^2\right).\end{aligned}$$
Therefore $\Phi$ is a contraction on $\mathcal B_R$ and thus admits a unique fixed point, which provides the desired solution.
Then, from Lemma \[lemma.phipsi\], we have immediately the following corollary
\[corcor\] Assume that $${\left\|\xi\right\|}_\infty\leq\frac{1}{2\sqrt{15}\sqrt{2670}\mu e^{\frac32CT}},$$ where $C$ is the Lipschitz constant of $g$ in $y$, and $\mu$ is the constant appearing in Assumption \[lipschitz\_assumption\]. Then under Assumption \[lipschitz\_assumption\] with $g(0,0,0)=0$, there exists a unique solution $(Y,Z,U)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$ of the BSDEJ .
We now show how we can get rid of the assumption that $g_t(0,0,0)=0$.
\[corcor2\] Assume that $${\left\|\xi\right\|}_\infty+D{\left\|\int_0^T{\left|g_t(0,0,0)\right|}dt\right\|}_\infty\leq\frac{1}{2\sqrt{15}\sqrt{2670}\mu e^{\frac32CT}},$$ where $C$ is the Lipschitz constant of $g$ in $y$, $\mu$ is the constant appearing in Assumption \[lipschitz\_assumption\] and $D$ is a large enough positive constant. Then under Assumption \[lipschitz\_assumption\], there exists a solution $(Y,Z,U)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$ of the BSDEJ .
By Corollary \[corcor\], we can show the existence of a solution to the BSDEJ with generator $\widetilde g_t(y,z,u):=g_t(y-\int_0^tg_s(0,0,0)ds,z,u)-g_t(0,0,0)$ and terminal condition $\overline \xi:=\xi+\int_0^Tg_t(0,0,0)dt$. Indeed, even though $\widetilde g$ is not null at $(0,0,0)$, it is not difficult to show with the same proof as in Theorem \[th.small\] that a solution $(\overline Y,\overline Z,\overline U)$ exists (the same type of arguments are used in [@tev]). More precisely, $\widetilde g$ still satisfies Assumption \[lipschitz\_assumption\](i) and when $\phi$ and $\psi$ in Assumption \[lipschitz\_assumption\] are equal to $0$, we have the estimate $${\left|\widetilde g_t(y,z,u)\right|}\leq C{\left\|\int_0^T{\left|g_s(0,0,0)\right|}ds\right\|}_\infty +C{\left|y\right|}+\mu{\left|z\right|}^2+\mu{\left\|u\right\|}^2_{L^2(\nu_t)},$$ which is the counterpart of . Thus, since the constant term in the above estimate is assumed to be small enough, it will play the same role as ${\left\|\xi\right\|}_\infty$ in the first Step of the proof of Theorem \[th.small\].
For Step $2$, everything still works thanks to the following estimate $$\begin{aligned}
{\left|\widetilde g_t(Y^2,z^1,u^1)-\widetilde g_t(Y^2,z^2,u^2)\right|}\leq& \ \mu{\left|z^1-z^2\right|}\left({\left|z^1\right|}+{\left|z^2\right|}\right)\\
&+\mu{\left\|u^1-u^2\right\|}_{L^2(\nu_t)}\left({\left\|u^1\right\|}_{L^2(\nu_t)}+{\left\|u^2\right\|}_{L^2(\nu_t)}\right).\end{aligned}$$
Then, if we define $(Y_t,Z_t,U_t):=(\overline Y_t-\int_0^tg_s(0,0,0)ds,\overline Z_t,\overline U_t),$ it is clear that it is a solution to the BSDEJ with generator $g$ and terminal condition $\xi$.
We emphasize that the above proof of existence extends readily to an $\mathbb R^n$-valued terminal condition, for any $n\geq 2$.
Existence for a bounded terminal condition {#sec.ex2}
==========================================
We now show that we can still prove existence of a solution for any bounded terminal condition. In return, we will have to strengthen our assumptions on the generator once more. Intuitively speaking, the Lipschitz and local Lipschitz assumptions of Assumption \[lipschitz\_assumption\] are no longer sufficient, and they are replaced by stronger regularity assumptions.
\[assump:hh\]
- $g$ is uniformly Lipschitz in $y$. $${\left|g_t(\omega,y,z,u)-g_t(\omega,y',z,u)\right|}\leq C{\left|y-y'\right|}\text{ for all }(\omega,t,y,y',z,u).$$
- $g$ is $C^2$ in $z$ and there are $\theta>0$ and $(r_t)_{0\leq t\leq T}\in\mathbb H^2_{\rm{BMO}}$, such that for all $(t,\omega,y,z,u)$, $$\lvert D_z g_t(\omega,y,z,u)\rvert\leq r_t + \theta{\left|z\right|}, \ \lvert D^2_{zz} g_t(\omega,y,z,u)\rvert\leq \theta.$$
- $g$ is twice Fréchet differentiable in the Banach space $L^2(\nu)$ and there are constants $\theta$, $\delta>0$, $C_1\geq-1+\delta$, $C_2\geq 0$ and a predictable function $m\in\mathbb J^2_{\rm{BMO}}$ s.t. for all $(t,\omega,y,z,u,x)$, $${\left|D_u g_t(\omega,y,z,u)\right|}\leq m_t+\theta{\left|u\right|},\ C_1(1\wedge{\left|x\right|})\leq D_ug_t(\omega,y,z,u)(x)\leq C_2(1\wedge{\left|x\right|})$$ $${\left\|D^2_u g_t(\omega,y,z,u)\right\|}_{L^2(\nu_t)}\leq \theta.$$
The assumptions (ii) and (iii) above are generalizations to the jump case of the assumptions considered by Tevzadze [@tev]. They will only be useful in our proof of existence and are tailor-made to allow us to apply the Girsanov transformation of Proposition \[girsanov\]. Notice also that since the space $L^2(\nu)$ is clearly a Banach space, there is no difficulty in defining the Fréchet derivative.
We emphasize here that Assumption \[assump:hh\] is stronger than Assumption \[lipschitz\_assumption\]. Indeed, we have the following result
\[lemma.assump\] If Assumption \[assump:hh\](ii) and (iii) hold, then so do Assumption \[lipschitz\_assumption\](ii) and (iii).
We will only show that if Assumption \[assump:hh\]$\rm{(iii)}$ holds, so does Assumption \[lipschitz\_assumption\]$\rm{(iii)}$, the proof being similar for Assumption \[assump:hh\]$\rm{(ii)}$. Since $g$ is twice Fréchet differentiable in $u$, we introduce the process $\psi_t:=D_ug_t(y,z,0)$ which is bounded from above by $m$ and from below by $C_1\geq-1+\delta$ by assumption. Thus, $\psi\in\mathbb J^2_{\rm{BMO}}$. By the mean value theorem, we compute that for some $\lambda\in[0,1]$ and with $u_\lambda:=\lambda u+(1-\lambda)u'$ $$\begin{aligned}
{\left|g_t(y,z,u)-g_t(y,z,u')-\langle\psi_t,u-u'\rangle_t\right|}&\leq{\left\|D_ug_t(y,z,u_\lambda)-\psi_t\right\|}{\left\|u-u'\right\|}_{L^2(\nu_t)}\\
&\leq \theta{\left\|\lambda u+(1-\lambda)u'\right\|}_{L^2(\nu_t)}{\left\|u-u'\right\|}_{L^2(\nu_t)},\end{aligned}$$ by the bound on $D^2_ug$. The result now follows easily.
We can now state our main existence result.
Let $\xi\in\mathbb L^\infty$. Under Assumptions \[assump:hquad\] and \[assump:hh\], there exists a solution $(Y,Z,U)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$ of the BSDEJ .
Of course, our existence results are, in a sense, less general than the ones obtained in [@elmatn] and [@ngou], since these papers consider generators which only satisfy Assumption \[assump:hquad\] and are continuous. Moreover, their terminal conditions are not necessarily bounded. However, we emphasize that in the case of a small terminal condition, our result allows us to no longer assume the structure condition of Assumption \[assump:hquad\], which can be restrictive from the point of view of applications. Notwithstanding this, we would also like to remind the reader that our approach is fundamentally different from theirs, and allows us to obtain solutions through Picard iterations. This property could be useful for numerical simulations.
The idea of the proof is to find a “good” splitting of the BSDEJ into a sum of BSDEJs for which the terminal condition is small and existence holds. Then we paste everything together. It is during this pasting step that the regularity of the generator in $z$ and $u$ required in Assumption \[assump:hh\] is going to be important.
$\rm{(i)}$ We first assume that $g_t(0,0,0)=0$. Consider an arbitrary decomposition of $\xi$ $$\xi=\sum_{i=1}^n\xi_i\text{ such that }{\left\|\xi_i\right\|}_{\infty}\leq \frac{1}{2\sqrt{15}\sqrt{2670}\mu e^{\frac32CT}},\text{ for all $i$.}$$
We will now construct a solution recursively.
We define $g^1:=g$ and $(Y^1,Z^1,U^1)$ as the unique solution of $$Y_t^1=\xi_1+\int_t^Tg_s^1(Y_s^1,Z_s^1,U_s^1)ds-\int_t^TZ_s^1dB_s-\int_t^T\int_EU^1_s(x)\widetilde{\mu}(ds,dx),\ \mathbb P-a.s.
\label{eq:bsdej1}$$
Let us show why this solution exists. Since $g^1$ satisfies Assumption \[assump:hh\], we know by Lemma \[lemma.assump\] that it satisfies Assumption \[lipschitz\_assumption\] with $\phi_t:=D_zg_t(y,0,u)$ and $\psi_t:=D_ug_t(y,z,0)$, these processes being respectively in $\mathbb H^2_{\rm{BMO}}$ and $\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$ by assumption. Furthermore, we have $\psi_t(x)\geq C_1(1\wedge{\left|x\right|})$ with $C_1\geq -1+\delta$. Thanks to Theorem \[th.small\] and with the notations of Lemma \[lemma.phipsi\], we can then define the solution to the BSDEJ with driver $\overline g^1$ (which still satisfies $\overline g^1(0,0,0)=0$) and terminal condition $ \xi_1$ under the probability measure $\mathbb Q^1$ defined by $$\frac{d\mathbb Q^1}{d\mathbb P}=\mathcal E\left(\int_0^T\phi_sdB_s+\int_0^T\int_E\psi_s(x)\widetilde\mu(dx,ds)\right).$$
Thanks to Lemma \[lemma.phipsi\], this gives us a solution $(Y^1,Z^1,U^1)$ to with $Y^1$ bounded, which in turn implies with Lemma \[lemma.bmo\] that $(Z^1,U^1)\in\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$.
We assume that we have constructed similarly $(Y^j,Z^j,U^j)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$ for $j\leq i-1$. We then define the generator $$g_t^i(y,z,u):=g_t\left(\overline{Y}^{i-1}_t+y,\overline{Z}^{i-1}_t+z,\overline{U}^{i-1}_t+u\right)-g_t\left(\overline{Y}^{i-1}_t,\overline{Z}^{i-1}_t,\overline{U}^{i-1}_t\right),$$ where $$\overline{Y}^{i-1}_t:=\sum_{j=1}^{i-1}Y_t^j,\ \overline{Z}^{i-1}_t:=\sum_{j=1}^{i-1}Z_t^j,\ \overline{U}^{i-1}_t:=\sum_{j=1}^{i-1}U_t^j.$$
Notice that $g^i(0,0,0)=0$ and since $g$ satisfies Assumption \[assump:hquad\](iii), we have the estimate $$\begin{aligned}
&g^i_t(y,z,u)\leq 2\alpha_t+\beta{\left|y+\overline{Y}_t^{i-1}\right|}+\beta{\left|\overline{Y}_t^{i-1}\right|}+\gamma{\left|z+\overline{Z}^{i-1}_t\right|}^2+\gamma{\left|\overline{Z}^{i-1}_t\right|}^2\\
&\hspace{5.1em} +\frac1\gamma j_t\left(\gamma\left(u+\overline{U}^{i-1}_t\right)\right)+\frac1\gamma j_t\left(\gamma\overline{U}^{i-1}_t\right)\\
&\leq2\alpha_t+2\beta{\left|\overline{Y}_t^{i-1}\right|}+3\gamma{\left|\overline{Z}^{i-1}_t\right|}^2+\frac{1}{\gamma} j_t\left(\gamma\overline{U}^{i-1}_t\right)+\frac{1}{2\gamma} j_t\left(2\gamma\overline{U}^{i-1}_t\right)+\beta{\left|y\right|}+2\gamma{\left|z\right|}^2+\frac{1}{2\gamma} j_t\left(2\gamma u\right),\end{aligned}$$ where we used the inequalities $(a+b)^2\leq 2(a^2+b^2)$ and the fact that for all $(\gamma_1,\gamma_2)\in\mathbb R^2$ (see [@kpz4] for a proof) $$\label{gammma}
(\gamma_1+\gamma_2)\left(e^{\frac{x+y}{\gamma_1+\gamma_2}}-1- \frac{x+y}{\gamma_1+\gamma_2}\right)\leq \gamma_1\left(e^{\frac{x}{\gamma_1}}-1- \frac{x}{\gamma_1}\right) + \gamma_2\left(e^{\frac{y}{\gamma_2}}-1- \frac{y}{\gamma_2}\right).$$
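Let us detail how the last inequality is applied here: applying it pointwise with $\gamma_1=\gamma_2=\frac{1}{2\gamma}$, $x=u(\cdot)$ and $y=\overline{U}^{i-1}_t(\cdot)$, and then integrating with respect to $\nu_t$, we obtain $$\frac1\gamma j_t\left(\gamma\left(u+\overline{U}^{i-1}_t\right)\right)\leq\frac{1}{2\gamma}j_t\left(2\gamma u\right)+\frac{1}{2\gamma}j_t\left(2\gamma\overline{U}^{i-1}_t\right),$$ which explains the appearance of the terms $\frac{1}{2\gamma}j_t\left(2\gamma u\right)$ and $\frac{1}{2\gamma}j_t\left(2\gamma\overline{U}^{i-1}_t\right)$ in the last estimate.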
Then, since $(\overline{Y}^{i-1},\overline{Z}^{i-1},\overline{U}^{i-1})\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$ , we know that the term which does not depend on $(y,z,u)$ above satisfies the same integrability condition as $g_t(0,0,0)$ in (see also the arguments we used in Remark \[rem.assumptions\]). Therefore, since $g^i(0,0,0)=0$, we have one side of the inequality in Assumption \[assump:hquad\](iii), and the other one can be proved similarly. This yields that $g^i$ satisfies Assumption \[assump:hquad\].
Similarly as in Step $1$, we will now show that there exists a solution $(Y^i,Z^i,U^i)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$ to the BSDEJ $$Y_t^i=\xi_i+\int_t^Tg_s^i(Y_s^i,Z_s^i,U_s^i)ds-\int_t^TZ_s^idB_s-\int_t^T\int_EU_s^i(x)\widetilde\mu(dx,ds),\ \mathbb P-a.s.
\label{eq:bsdeji}$$
Since $g$ satisfies Assumption \[assump:hh\], we can define $$\begin{aligned}
\phi^i_t:=D_zg^i_t(y,0,u)=D_zg_t(\overline{Y}^{i-1}_t+y,\overline{Z}^{i-1}_t,\overline{U}^{i-1}_t+u),\\
\psi^i_t:=D_ug^i_t(y,z,0)=D_ug_t(\overline{Y}^{i-1}_t+y,\overline{Z}^{i-1}_t+z,\overline{U}^{i-1}_t).\end{aligned}$$
We then know that $${\left|\phi^i_t\right|}\leq r_t+\theta{\left|\overline{Z}^{i-1}_t\right|},\ {\left|\psi^i_t\right|}\leq m_t+\theta{\left|\overline{U}^{i-1}_t\right|}, \ \psi_t^i(x)\geq C_1(1\wedge{\left|x\right|})\geq-1+\delta.$$
Since by hypothesis $(\overline{Z}^{i-1}, \overline{U}^{i-1})\in\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$, we can define $\mathbb Q^i$ by $$\frac{d\mathbb Q^i}{d\mathbb P}=\mathcal E\left(\int_0^T\phi_s^idB_s+\int_0^T\int_E\psi_s^i(x)\widetilde\mu(dx,ds)\right).$$
Now, using the notations of Lemma \[lemma.phipsi\], we define a generator $\bar g^i$ from $g^i$ (which still satisfies $\overline g^i(0,0,0)=0$). It is then easy to check that $\bar g^i$ satisfies Assumption \[lipschitz\_assumption\]. Therefore, by Theorem \[th.small\], we obtain the existence of a solution to the BSDEJ with generator $\bar g^i$ and terminal condition $\xi_i$ under $\mathbb Q^i$. Using Lemma \[lemma.phipsi\], this provides a solution $(Y^i,Z^i,U^i)$ with $Y^i$ bounded to the BSDEJ . By Lemma \[lemma.bmo\] and since $g^i$ satisfies Assumption \[assump:hquad\], the boundedness of $Y^i$ implies that $(Z^i,U^i)\in\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$ and therefore that $(\overline{Y}^i,\overline{Z}^i,\overline{U}^i)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$.
Finally, by summing the BSDEJs , we obtain $$\overline{Y}^n_t=\xi+\int_t^Tg_s(\overline{Y}^n_s,\overline{Z}^n_s,\overline{U}^n_s)ds-\int_t^T\overline{Z}^n_sdB_s-\int_t^T\int_E\overline{U}^n_s(x)\widetilde\mu(dx,ds).$$
Since $\overline{Y}^n$ is bounded (because the $Y^i$ are all bounded), Lemma \[lemma.bmo\] implies that $(\overline{Z}^n,\overline{U}^n)\in\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}\cap L^\infty(\nu)$, which ends the proof.
$\rm{(ii)}$ In the general case $g_t(0,0,0)\neq 0$, we can argue exactly as in Corollary \[corcor2\] (see also Proposition $2$ in [@tev]) to obtain the result.
Comparison and stability {#sec.comp}
========================
A uniqueness result
-------------------
We emphasize that the above theorems provide an existence result for every bounded terminal condition, but we only have uniqueness when the $\mathbb L^\infty$-norm of $\xi$ is small enough. In order to have a general uniqueness result, we add the following assumptions, which were first introduced by Royer [@roy] and Briand and Hu [@bh2]. Notice that [@ngou] also considers Assumption \[assump.roy\] to recover uniqueness.
\[assump.roy\] For every $(y,z,u,u')$ there exists a predictable and $\mathcal E$-measurable process $(\gamma_t)$ such that $$g_t(y,z,u)-g_t(y,z,u')\leq\int_E\gamma_t(x)(u-u')(x)\nu_t(dx),$$ where there exist constants $C_2>0$ and $C_1\geq-1+\delta$ for some $\delta>0$ such that $$C_1(1\wedge{\left|x\right|})\leq\gamma_t(x)\leq C_2(1\wedge{\left|x\right|}).$$
\[assump.bh\] $g$ is jointly convex in $(z,u)$.
We then have the following result
\[th.unique\] Assume that $\xi\in\mathbb L^\infty$, and that the generator $g$ satisfies either
1. Assumptions \[assump:hquad\], \[assump:hh\](i),(ii) and \[assump.roy\].
2. Assumptions \[assump:hquad\], \[assump:hh\] and \[assump.bh\], and that $g(0,0,0)$ and the process $\alpha$ appearing in Assumption \[assump:hquad\](iii) are bounded by some constant $M>0$.
Then there exists a unique solution to the BSDEJ .
In order to prove this Theorem, we will use the following comparison Theorem for BSDEJs
\[prop.comp\] Let $\xi^1$ and $\xi^2$ be two $\mathcal F_T$-measurable random variables. Let $g^1$ be a function satisfying either of the following
1. Assumptions \[assump:hquad\], \[lipschitz\_assumption\](i),(ii) and \[assump.roy\].
2. Assumptions \[assump:hquad\], \[lipschitz\_assumption\](i) and \[assump.bh\], and that ${\left|g^1(0,0,0)\right|}+\alpha\leq M$ where $\alpha$ is the process appearing in Assumption \[assump:hquad\](iii) and $M$ is a positive constant.
Let $g^2$ be another function and for $i=1,2$, let $(Y^i,Z^i,U^i)$ be the solution of the BSDEJ with terminal condition $\xi^i$ and generator $g^i$ (we assume that existence holds in our spaces), that is to say for every $t\in[0,T]$ $$Y^i_t=\xi^i+\int_t^Tg^i_s(Y^i_s,Z^i_s,U^i_s)ds-\int_t^TZ^i_sdB_s-\int_t^T\int_EU^i_s(x)\widetilde\mu(dx,ds),\ \mathbb P-a.s.$$ Assume further that $\xi^1\leq\xi^2,\ \mathbb P-a.s.$ and $g_t^1(Y_t^2,Z_t^2,U_t^2)\leq g_t^2(Y_t^2,Z_t^2,U_t^2),\ \mathbb P-a.s.$ Then $Y_t^1\leq Y_t^2$, $\mathbb P-a.s.$ Moreover, in case (i), if in addition we have $Y^1_0=Y^2_0$, then for all $t$, $Y^1_t=Y^2_t$, $Z_t^1=Z_t^2$ and $U_t^1=U_t^2$, $\mathbb P-a.s.$
Of course, we can replace the convexity property in Assumption \[assump.bh\] by concavity without changing the results of Proposition \[prop.comp\]. Indeed, if $Y$ is a solution to the BSDEJ with concave generator $g$ and terminal condition $\xi$, then $-Y$ is a solution to the BSDEJ with convex generator $\widetilde g(y,z,u):=-g(-y,-z,-u)$ and terminal condition $-\xi$, to which we can apply the results of Proposition \[prop.comp\].
In order to prove $(i)$, let us denote $$\begin{aligned}
&\delta Y:=Y^1-Y^2,\ \delta Z:=Z^1-Z^2,\ \delta U:=U^1-U^2,\ \delta \xi:=\xi^1-\xi^2\\
&\delta g_t:=g_t^1(Y_t^2,Z_t^2,U_t^2)- g_t^2(Y_t^2,Z_t^2,U_t^2).\end{aligned}$$
Using Assumption \[lipschitz\_assumption\]$\rm{(i)},\rm{(ii)}$, we know that there exist a bounded $\lambda$ and a process $\eta$ with $${\left|\eta_s\right|}\leq\mu\left({\left|Z_s^1\right|}+{\left|Z_s^2\right|}\right),
\label{eq:machin}$$ such that $$\begin{aligned}
\label{ew}
\nonumber\delta Y_t&=\delta\xi+\int_t^T\delta g_sds+\int_t^T\lambda_s\delta Y_sds+\int_t^T(\eta_s+\phi_s)\delta Z_sds\\
\nonumber&\hspace{0.9em}+\int_t^Tg_s^1(Y_s^1,Z_s^1,U_s^1)-g_s^1(Y_s^1,Z_s^1,U_s^2)ds-\int_t^T\int_E\delta U_s(x)\gamma_s(x)\nu_s(dx)ds\\
&\hspace{0.9em}+\int_t^T\int_E\delta U_s(x)\gamma_s(x)\nu_s(dx)ds-\int_t^T\delta Z_sdB_s-\int_t^T\int_E\delta U_s(x)\widetilde\mu(dx,ds),\end{aligned}$$ where $\gamma$ is the predictable process appearing in the right hand side of Assumption \[assump.roy\].
Define for $s\geq t$, $e^{\Lambda_s}:=e^{\int_t^s\lambda_udu},$ and $$\frac{d\mathbb Q}{d\mathbb P}:=\mathcal E\left(\int_t^s(\eta_s+\phi_s)dB_s+\int_t^s\int_E\gamma_s(x)\widetilde\mu(dx,ds)\right).$$
Since the $Z^i$ are in $\mathbb H^2_{\rm{BMO}}$, so is $\eta$ and by our assumption on $\gamma_s$ the above stochastic exponential defines a true strictly positive uniformly integrable martingale (see Kazamaki [@kaz2]). Then applying Itô’s formula and taking conditional expectation under the probability measure $\mathbb Q$, we obtain $$\begin{aligned}
\label{eq.eq}
\nonumber\delta Y_t&=\mathbb E_t^\mathbb Q\left[e^{\Lambda_T} \delta\xi+\int_t^T e^{\Lambda_s}\delta g_sds\right]\\
&\hspace{0.9em}+\mathbb E_t\left[\int_t^T e^{\Lambda_s}\left(g_s^1(Y_s^1,Z_s^1,U_s^1)-g_s^1(Y_s^1,Z_s^1,U_s^2)-\int_E\gamma_s(x)\delta U_s(x)\nu_s(dx)\right)ds\right]\leq 0,\end{aligned}$$ using Assumption \[assump.roy\].
The proof of the comparison result when (ii) holds is a generalization of Theorem $5$ in [@bh2]. However, due to the presence of jumps our proof is slightly different. For the convenience of the reader, we will highlight the main differences during the proof.
For any $\theta\in(0,1)$ let us denote $$\delta Y_t:=Y_t^1-\theta Y_t^2,\ \delta Z_t:=Z_t^1-\theta Z_t^2,\ \delta U_t:=U_t^1-\theta U_t^2,\ \delta\xi:=\xi^1-\theta\xi^2.$$
First of all, we have for all $t\in[0,T]$ $$\delta Y_t=\delta\xi+\int_t^TG_sds-\int_t^T\delta Z_sdB_s-\int_t^T\int_E\delta U_s(x)\widetilde\mu(dx,ds),\ \mathbb P-a.s.,$$ where $$G_t:=g_t^1(Y_t^1,Z_t^1,U_t^1)-\theta g_t^2(Y_t^2,Z_t^2,U_t^2).$$
We emphasize that unlike in [@bh2], we have not linearized the generator in $y$ using Assumption \[lipschitz\_assumption\]$\rm{(i)}$. The reason for this will become clear later on.
We will now bound $G_t$ from above. First, we rewrite it as $$G_t=G_t^1+G_t^2+G_t^3,$$ where $$\begin{aligned}
&G_t^1:=g_t^1(Y_t^1,Z_t^1,U_t^1)-g_t^1(Y_t^2,Z_t^1,U_t^1), \ G_t^2:=g_t^1(Y_t^2,Z_t^1,U_t^1)-\theta g_t^1(Y_t^2,Z_t^2,U_t^2)\\ &G_t^3:=\theta\left(g_t^1(Y_t^2,Z_t^2,U_t^2)-g_t^2(Y_t^2,Z_t^2,U_t^2)\right).\end{aligned}$$
Then, we have using Assumption \[lipschitz\_assumption\](i) $$\begin{aligned}
\label{G1}
\nonumber G_t^1&=g_t^1(Y_t^1,Z_t^1,U_t^1)-g_t^1(\theta Y_t^2,Z_t^1,U_t^1)+g_t^1(\theta Y_t^2,Z_t^1,U_t^1)-g_t^1(Y_t^2,Z_t^1,U_t^1)\\
&\leq C\left({\left|\delta Y_t\right|}+(1-\theta){\left|Y_t^2\right|}\right).\end{aligned}$$
Next, we estimate $G^2$ using Assumption \[assump:hquad\] and the convexity in $(z,u)$ of $g^1$ $$\begin{aligned}
\nonumber g_t^1(Y_t^2,Z_t^1,U_t^1)&=g_t^1\left(Y_t^2,\theta Z_t^2+(1-\theta)\frac{Z_t^1-\theta Z_t^2}{1-\theta},\theta U_t^2+(1-\theta)\frac{U_t^1-\theta U_t^2}{1-\theta}\right)\\
\nonumber &\leq \theta g_t^1(Y_t^2,Z_t^2,U_t^2)+(1-\theta)g_t^1\left(Y_t^2,\frac{\delta Z_t}{1-\theta},\frac{\delta U_t}{1-\theta}\right)\\
&\leq \theta g_t^1(Y_t^2,Z_t^2,U_t^2)+(1-\theta)\left(M+\beta{\left|Y_t^2\right|}\right)+\frac{\gamma}{2(1-\theta)}{\left|\delta Z_t\right|}^2\\
&\hspace{0.9em}+\frac{1-\theta}{\gamma}j_t\left(\frac{\gamma}{1-\theta}\delta U_t\right).\end{aligned}$$
Hence $$G_t^2\leq (1-\theta)\left(M+\beta{\left|Y_t^2\right|}\right)+\frac{\gamma}{2(1-\theta)}{\left|\delta Z_t\right|}^2+\frac{1-\theta}{\gamma}j_t\left(\frac{\gamma}{1-\theta}\delta U_t\right).
\label{eq:G2}$$
Finally, $G^3$ is negative by assumption. Therefore, using \[G1\] and \[eq:G2\], we obtain $$G_t\leq C{\left|\delta Y_t\right|}+(1-\theta)\left(M+\widetilde\beta{\left|Y_t^2\right|}\right)+\frac{\gamma}{2(1-\theta)}{\left|\delta Z_t\right|}^2+\frac{1-\theta}{\gamma}j_t\left(\frac{\gamma}{1-\theta}\delta U_t\right),
\label{eq:G3}$$ where $\widetilde\beta:=\beta +C.$
Now we will get rid of the quadratic and exponential terms in $z$ and $u$ using a classical exponential change. Let us then denote for some $\nu>0$ $$\begin{aligned}
&P_t:=e^{\nu\delta Y_t},\ Q_t:=\nu e^{\nu \delta Y_t}\delta Z_t, \ R_t(x):=e^{\nu \delta Y_{t^-}}\left(e^{\nu\delta U_t(x)}-1\right).\end{aligned}$$
By Itô’s formula we obtain for every $t\in[0,T]$, $\mathbb P-a.s.$ $$P_t=P_T+\int_t^T\nu P_s\left(G_s-\frac\nu2{\left|\delta Z_s\right|}^2-\frac1\nu j_s(\nu\delta U_s)\right)ds-\int_t^TQ_sdB_s-\int_t^T\int_ER_s(x)\widetilde\mu(dx,ds).$$
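For the reader's convenience, here is a sketch of the Itô computation behind the previous display. It assumes, as the displayed drift suggests, that $j_s(u)=\int_E\left(e^{u(x)}-1-u(x)\right)\nu_s(dx)$; this is our reading of the notation rather than a definition quoted from the text. Since $\delta Y$ satisfies $d\delta Y_s=-G_s\,ds+\delta Z_s\,dB_s+\int_E\delta U_s(x)\widetilde\mu(dx,ds)$, Itô's formula applied to $P_s=e^{\nu\delta Y_s}$ gives $$dP_s=\nu P_{s^-}\,d\delta Y_s+\frac{\nu^2}{2}P_s{\left|\delta Z_s\right|}^2ds+\int_E P_{s^-}\left(e^{\nu\delta U_s(x)}-1-\nu\delta U_s(x)\right)\mu(dx,ds).$$ Writing $\mu(dx,ds)=\widetilde\mu(dx,ds)+\nu_s(dx)ds$ in the last term and collecting the drift, Brownian and compensated-jump parts yields $$dP_s=-\nu P_s\left(G_s-\frac\nu2{\left|\delta Z_s\right|}^2-\frac1\nu j_s(\nu\delta U_s)\right)ds+Q_sdB_s+\int_ER_s(x)\widetilde\mu(dx,ds),$$ which is the backward (integrated) form stated above.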
Now choose $\nu=\gamma/(1-\theta)$. We emphasize that it is here that the presence of jumps forces us to change our proof in comparison with the one in [@bh2]. Indeed, if we had immediately linearized in $y$, then we could not have chosen a constant $\nu$ such that the quadratic and exponential terms in \[eq:G3\] would disappear. This is not a problem in [@bh2], since they can choose $\nu$ of the form $M/(1-\theta)$ with $M$ large enough and still make the quadratic term in $z$ disappear. However, in the jump case, the application $\gamma \mapsto \gamma^{-1}j_t(\gamma u)$ is not always increasing, and this trick does not work. Nonetheless, we now define the strictly positive and continuous process $$D_t:=\exp\left(\gamma\int_0^t\left(M+\widetilde\beta{\left|Y_s^2\right|}+\frac{C}{1-\theta}{\left|\delta Y_s\right|}\right)ds\right).$$
Applying Itô’s formula to $D_tP_t$, we obtain $$\begin{aligned}
d(D_sP_s)=&-\nu D_sP_s\left(G_s-\frac\nu2{\left|\delta Z_s\right|}^2-\frac{j_s(\nu\delta U_s)}{\nu} -C{\left|\delta Y_s\right|}-(1-\theta)\left(M+\widetilde\beta{\left|Y_s^2\right|}\right)\right)ds\\
&+D_sQ_sdB_s+\int_ED_{s^-}R_s(x)\widetilde\mu(dx,ds).\end{aligned}$$
Hence, using the inequality \[eq:G3\], we deduce $$D_tP_t\leq \mathbb E_t\left[D_TP_T\right], \ \mathbb P-a.s.,$$ which can be rewritten $$\delta Y_t\leq \frac{1-\theta}{\gamma}\ln\left(\mathbb E_t\left[\exp\left(\gamma\int_t^T\left(M+\widetilde\beta{\left|Y_s^2\right|}+\frac{C}{1-\theta}{\left|\delta Y_s\right|}\right)ds+\frac{\gamma}{1-\theta}\delta\xi\right)\right]\right),\ \mathbb P-a.s.$$
Next, we have $$\delta\xi=(1-\theta)\xi^1+\theta\left(\xi^1-\xi^2\right)\leq (1-\theta){\left|\xi^1\right|}.$$
Consequently, we have for some constant $C_0>0$, independent of $\theta$, using the fact that $Y^2$ and $\xi^1$ are bounded $\mathbb P-a.s.$ $$\begin{aligned}
\delta Y_t\leq& \frac{1-\theta}{\gamma}\left(\ln(C_0)+\ln\left(\mathbb E_t\left[\exp\left(\frac{C}{1-\theta}\int_t^T{\left|\delta Y_s\right|}ds\right)\right]\right)\right),\ \mathbb P-a.s.
\label{eq:G5}\end{aligned}$$
We finally argue by contradiction. More precisely, let $$\mathcal A:=\left\{\omega\in \Omega, Y_t^1(\omega)>Y_t^2(\omega)\right\},$$ and assume that $\mathbb P(\mathcal A)>0.$ Let us then call $\mathcal N$ the $\mathbb P$-negligible set outside of which \[eq:G5\] holds. Since $\mathcal A$ has a strictly positive probability, $\mathcal B:=\mathcal A\cap \left(\Omega\backslash\mathcal N\right)$ is not empty and also has a strictly positive probability. Then, we would have from \[eq:G5\] that for every $\omega \in \mathcal B$ $$\delta Y_t(\omega)\leq \frac{1-\theta}{\gamma}\ln(C_0)+\frac{C}{\gamma}\int_t^T{\left\|\delta Y_s\right\|}_{\infty,\mathcal B}ds,
\label{eq:G6}$$ where ${\left\|\cdot\right\|}_{\infty,\mathcal B}$ is the usual infinite norm restricted to $\mathcal B$.
Now, using the dominated convergence theorem, we can let $\theta\uparrow 1^-$ in \[eq:G6\] to obtain that for any $\omega \in\mathcal B$ $$Y_t^1(\omega)-Y_t^2(\omega)\leq \frac{C}{\gamma}\int_t^T{\left\| Y_s^1-Y_s^2\right\|}_{\infty,\mathcal B}ds,$$ which in turn implies, since $\mathcal B\subset\mathcal A$, $${\left\|Y_t^1-Y_t^2\right\|}_{\infty,\mathcal B}\leq \frac{C}{\gamma}\int_t^T{\left\| Y_s^1-Y_s^2\right\|}_{\infty,\mathcal B}ds.$$
But by Gronwall’s lemma this implies that ${\left\|Y_t^1-Y_t^2\right\|}_{\infty,\mathcal B}=0$, which yields the desired contradiction. Hence the result.
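For completeness, the backward form of Gronwall's lemma used here follows from a simple iteration; the following reminder is not taken from the text. Set $\varphi(t):={\left\|Y_t^1-Y_t^2\right\|}_{\infty,\mathcal B}$ and $K:=C/\gamma$, so that $\varphi(t)\leq K\int_t^T\varphi(s)ds$. Iterating this inequality $n$ times gives $$\varphi(t)\leq K^n\int_t^T\int_{s_1}^T\cdots\int_{s_{n-1}}^T\varphi(s_n)\,ds_n\cdots ds_1\leq \frac{K^n(T-t)^n}{n!}\sup_{s\in[t,T]}\varphi(s),$$ and since $\varphi$ is bounded ($Y^1$ and $Y^2$ are bounded), letting $n\rightarrow\infty$ yields $\varphi\equiv 0$.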
Let us now assume that $Y_0^1=Y_0^2$ and that we are in the same framework as in Step $1$. Using this in \[eq.eq\] above when $t=0$, we obtain $$\begin{aligned}
\label{eq.eq2}
\nonumber0&=\mathbb E^\mathbb Q\left[e^{\Lambda_T}\delta\xi+\int_0^T e^{\Lambda_s}\delta g_sds\right]\\
&\hspace{0.9em}+\mathbb E^\mathbb Q\left[\int_0^T e^{\Lambda_s}\left(g_s^1(Y_s^1,Z_s^1,U_s^1)-g_s^1(Y_s^1,Z_s^1,U_s^2)-\int_E\delta U_s(x)\gamma_s(x)\nu_s(dx)\right)ds\right]\leq 0.\end{aligned}$$
Hence, since all the above quantities have the same sign, this implies in particular that $$e^{\Lambda_T}\delta\xi+\int_0^T e^{\Lambda_s}\delta g_sds=0,\ \mathbb P-a.s.$$
Moreover, we also have $\mathbb P-a.s.$ $$\int_0^T e^{\Lambda_s}\left(g_s^1(Y_s^1,Z_s^1,U_s^1)-g_s^1(Y_s^1,Z_s^1,U_s^2)\right)ds=\int_0^T e^{\Lambda_s}\left(\int_E\delta U_s(x)\gamma_s(x)\nu_s(dx)\right)ds.$$
Using this result in \[ew\], we obtain with Itô’s formula $$\begin{aligned}
\label{eq.eq3}
\nonumber\delta Y_t&=\int_0^T e^{\Lambda_s}\left(\int_E\delta U_s(x)\gamma_s(x)\nu_s(dx)\right)ds-\int_t^T e^{\Lambda_s}\delta Z_s(dB_s-(\eta_s+\phi_s)ds)\\
&\hspace{0.9em}-\int_t^T\int_E e^{\Lambda_{s^-}}\delta U_s(x)\widetilde\mu(dx,ds).\end{aligned}$$
The right-hand side is a martingale under $\mathbb Q$ with null expectation. Thus, since $\delta Y_t\leq 0$, this implies that $Y_t^1=Y_t^2,$ $\mathbb P-a.s.$ Using this in \[eq.eq3\], we obtain that the martingale part must be equal to $0$, which implies that $\delta Z_t=0$ and $\delta U_t=0$.
\[rem.imp\] In the above proof of the comparison theorem in case (i), we emphasize that it is actually sufficient that, instead of Assumption \[assump.roy\], the generator $g$ satisfies $$g_s^1(Y_s^1,Z_s^1,U_s^1)-g_s^1(Y_s^1,Z_s^1,U_s^2)\leq\int_E\gamma_s(x)\delta U_s(x)\nu_s(dx),$$ for some $\gamma_s$ such that $$C_1(1\wedge{\left|x\right|})\leq\gamma_s(x)\leq C_2(1\wedge{\left|x\right|}).$$
Besides, this also holds true for the comparison theorem for Lipschitz BSDEs with jumps proved by Royer (see Theorem $2.5$ in [@roy]).
We can now prove Theorem \[th.unique\].
First let us deal with the question of existence.
1. If $g$ satisfies Assumptions \[assump:hquad\], \[assump:hh\](i),(ii) and \[assump.roy\], the existence part can be obtained exactly as in the previous proof, starting from a small terminal condition, and using the fact that Assumption \[assump.roy\] implies that $g$ is Lipschitz in $u$. Thus we omit it.
2. If $g$ satisfies Assumptions \[assump:hquad\], \[assump:hh\] and \[assump.bh\], then we already proved existence for bounded terminal conditions.
The uniqueness is then a simple consequence of the above comparison theorem.
In [@kpz4], we prove a non-linear Doob-Meyer decomposition and obtain as a consequence a reverse comparison Theorem.
A priori estimates and stability
--------------------------------
In this subsection, we show that under our hypotheses, we can obtain [*a priori*]{} estimates for quadratic BSDEs with jumps. We have the following results.
\[apriori\] Let $(\xi^1,\xi^2)\in\mathbb L^\infty\times\mathbb L^\infty$ and let $g$ be a function satisfying Assumptions \[assump:hquad\], \[lipschitz\_assumption\](i),(ii) and \[assump.roy\]. Let us consider for $i=1,2$ the solutions $(Y^i,Z^i,U^i)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}$ of the BSDEJs with generator $g$ and terminal condition $\xi^i$ (once again existence is assumed). Then we have for some constant $C>0$ $$\begin{aligned}
&{\left\|Y^1-Y^2\right\|}_{\mathcal S^\infty}+{\left\|U^1-U^2\right\|}_{L^\infty(\nu)} \leq C{\left\|\xi^1-\xi^2\right\|}_\infty\\
&{\left\|Z^1-Z^2\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|U^1-U^2\right\|}^2_{\mathbb J^2_{\rm{BMO}}}\leq C{\left\|\xi^1-\xi^2\right\|}_{\infty}.\end{aligned}$$
Following exactly the same arguments as in Step $1$ of the proof of Proposition \[prop.comp\], we obtain with the same notations $$Y_t^1-Y_t^2=\mathbb E^\mathbb Q_t\left[e^{\Lambda_T}(\xi^1-\xi^2)\right]\leq C{\left\|\xi^1-\xi^2\right\|}_\infty,\ \mathbb P-a.s.$$ Notice then that this implies as usual that there is a version of $(U^1-U^2)$ (still denoted $(U^1-U^2)$ for simplicity) which is bounded by $2{\left\|Y^1-Y^2\right\|}_{\mathcal S^\infty}$. This easily gives the first estimate.
Let now $\tau\in\mathcal T_0^T$ be a stopping time. Denote also $$\delta Z_s:=Z_s^1-Z_s^2,\ \delta U_s:=U_s^1-U_s^2,\ \delta g_s:=g_s(Y_s^1,Z_s^1,U_s^1)-g_s(Y_s^2,Z_s^2,U_s^2).$$
By Itô’s formula, we have using standard calculations $$\begin{aligned}
\label{eq:G7}
\nonumber\mathbb E_\tau\left[\int_\tau^T{\left|\delta Z_s\right|}^2ds+\int_\tau^T{\left\|\delta U_s\right\|}^2_{L^2(\nu_s)}ds\right]\leq&\ \mathbb E_\tau\left[{\left|\xi^1-\xi^2\right|}^2+2\int_\tau^T(Y_s^1-Y_s^2)\delta g_sds\right]\\
\leq& {\left\|\xi^1-\xi^2\right\|}_\infty^2+2{\left\|Y^1-Y^2\right\|}_{\mathcal S^\infty}\mathbb E_\tau\left[\int_\tau^T{\left|\delta g_s\right|}ds\right]. \end{aligned}$$
Then, using Assumption \[assump:hquad\], we estimate $$\begin{aligned}
{\left|\delta g_t\right|}&\leq C\left({\left|g_t(0,0,0)\right|}+\alpha_t+\sum_{i=1,2}{\left|Y_t^i\right|}+{\left|Z_t^i\right|}^2+j_t\left(\gamma U^i\right)+j_t\left(-\gamma U^i\right)\right)\\
&\leq C\left({\left|g_t(0,0,0)\right|}+\alpha_t+\sum_{i=1,2}{\left|Y_t^i\right|}+{\left|Z_t^i\right|}^2+{\left\|U^i_t\right\|}_{L^2(\nu)}^2\right),\end{aligned}$$ where we used the fact that for every $x$ in a compact subset of $\mathbb R$, $0\leq e^x-1-x\leq Cx^2$. Plugging this estimate into \[eq:G7\] and using the integrability assumed on $g_t(0,0,0)$ and $\alpha_t$ entails $$\begin{aligned}
\nonumber&\mathbb E_\tau\left[\int_\tau^T{\left|\delta Z_s\right|}^2ds+\int_\tau^T{\left\|\delta U_s\right\|}^2_{L^2(\nu_s)}ds\right]\\
&\leq {\left\|\xi^1-\xi^2\right\|}_\infty^2+C{\left\|\xi^1-\xi^2\right\|}_{\infty}\left(1+\sum_{i=1,2}{\left\|Y^i\right\|}_{\mathcal S^\infty}+{\left\|Z^i\right\|}_{\mathbb H^2_{\rm{BMO}}}+{\left\|U^i\right\|}_{\mathbb J^2_{\rm{BMO}}}\right)\leq C{\left\|\xi^1-\xi^2\right\|}_\infty, \end{aligned}$$ which ends the proof.
\[apriori2\] Let $(\xi^1,\xi^2)\in\mathbb L^\infty\times\mathbb L^\infty$ and let $g$ be a function satisfying Assumptions \[assump:hquad\], \[lipschitz\_assumption\](i) and \[assump.bh\] and such that ${\left|g(0,0,0)\right|}+\alpha\leq M$ where $\alpha$ is the process appearing in Assumption \[assump:hquad\](iii) and $M$ is a positive constant. Let us consider for $i=1,2$ the solutions $(Y^i,Z^i,U^i)\in\mathcal S^\infty\times\mathbb H^2_{\rm{BMO}}\times\mathbb J^2_{\rm{BMO}}$ of the BSDEJs with generator $g$ and terminal condition $\xi^i$ (once again existence is assumed). Then we have for some constant $C>0$ $$\begin{aligned}
&{\left\|Y^1-Y^2\right\|}_{\mathcal S^\infty}+{\left\|U^1-U^2\right\|}_{L^\infty(\nu)} \leq C{\left\|\xi^1-\xi^2\right\|}_\infty\\
&{\left\|Z^1-Z^2\right\|}^2_{\mathbb H^2_{\rm{BMO}}}+{\left\|U^1-U^2\right\|}^2_{\mathbb J^2_{\rm{BMO}}}\leq C{\left\|\xi^1-\xi^2\right\|}_{\infty}.\end{aligned}$$
Following Step $2$ of the proof of Proposition \[prop.comp\], we obtain for any $\theta\in(0,1)$ $$\frac{Y_t^1-\theta Y_t^2}{1-\theta}\leq\frac{1}{\gamma}\ln\left(\mathbb E_t\left[\exp\left(\gamma\int_t^T\left(M+\widetilde\beta{\left|Y_s^2\right|}+\frac{C{\left|Y_s^1-\theta Y_s^2\right|}}{1-\theta}\right)ds+\frac{\gamma(\xi^1-\theta\xi^2)}{1-\theta}\right)\right]\right),$$ and of course by symmetry, the same holds if we interchange the roles of the superscripts $1$ and $2$. Since all the quantities above are bounded, after some calculations, letting $\theta \uparrow 1^-$ and using the symmetry, we easily obtain $${\left|Y_t^1-Y_t^2\right|}\leq C\left({\left\|\xi^1-\xi^2\right\|}_\infty+\int_t^T{\left\|Y_s^1-Y_s^2\right\|}_\infty ds\right),\ \mathbb P-a.s.$$ Hence, we can use Gronwall’s lemma to obtain ${\left\|Y^1-Y^2\right\|}_{\mathcal S^\infty}\leq C{\left\|\xi^1-\xi^2\right\|}_\infty.$ All the other estimates can then be obtained as in the proof of Proposition \[apriori\].
#### Acknowledgements:
The authors would like to thank Nicole El Karoui and Anis Matoussi for their precious advice, which greatly helped to improve a previous version of this paper.
[aa12]{} Barles, G., Buckdahn, R., Pardoux, E. (1997). Backward stochastic differential equations and integral-partial differential equations, [*Stoch. Stoch. Rep.*]{}, 60(1-2):57–83. Barrieu, P., Cazanave, N., El Karoui, N. (2008). Closedness results for BMO semi-martingales and application to quadratic BSDEs, [*C. R. Acad. SCI. Paris, Ser. I*]{}, 346:881–886. Barrieu, P., El Karoui, N. (2008). Pricing, hedging and optimally designing derivatives via minimization of risk measures. [*Indifference pricing: theory and applications, Princeton University Press, Princeton, USA.*]{} Barrieu, P., El Karoui, N. (2011). Monotone stability of quadratic semimartinga-les with applications to unbounded general quadratic BSDEs, [*Ann. Prob*]{}, to appear. Becherer, D. (2006). Bounded solutions to backward SDEs with jumps for utility optimization and indifference hedging, [*Ann. of App. Prob*]{}, 16(4):2027–2054. Bismut, J.M. (1973). Conjugate convex functions in optimal stochastic control, [*J. Math. Anal. Appl.*]{}, 44:384–404. Briand, Ph., Hu, Y. (2006). BSDE with quadratic growth and unbounded terminal value, [*Probab. Theory Relat. Fields*]{}, 136:604–618. Briand, Ph., Hu, Y. (2008). Quadratic BSDEs with convex generators and unbounded terminal conditions, [*Probab. Theory Relat. Fields*]{}, 141:543–567. Delbaen, F., Hu, Y., Richou, A. (2011). On the uniqueness of solutions to quadratic BSDEs with convex generators and unbounded terminal conditions, [*Ann. Inst. Henri Poincaré*]{}, 47:559–574. Doléans-Dade, C., Meyer, P.A. (1977). Une caractérisation de BMO, [*Séminaire de Probabilités XI, Univ. de Strasbourg*]{}, Lecture Notes in Math., 581:383–389. El Karoui, N., Peng, S. and Quenez, M.C. (1994). Backward stochastic differential equations in finance, [*Mathematical Finance*]{}, 7(1):1–71. El Karoui, N., Huang, S. (1997). A general result of existence and uniqueness of backward stochastic differential equations, [*in El Karoui, N., Mazliak, L. (Eds.), Backward Stochastic Differential Equations*]{}, Pitman Research Notes Mathematical Series, 364:141–159. El Karoui, N., Rouge, R. (2000). Pricing via utility maximization and entropy, [*Mathematical Finance*]{}, 10:259–276. El Karoui, N., Matoussi, A., Ngoupeyou, A. (2012). Quadratic BSDE with jumps and unbounded terminal condition, preprint. Hu, Y., Imkeller, P., and M[ü]{}ller, M. (2005). Utility maximization in incomplete markets, [*Ann. Appl. Proba.*]{}, 15(3):1691–1712. Hu, Y., Schweizer, M. (2011). Some new results for an infinite-horizon stochastic control problem, [*Advanced Mathematical Methods for Finance*]{}, Eds. Di Nunno, Oksendal, Springer, 36–39. Izumisawa, M., Sekiguchi, T., Shiota, Y. (1979). Remark on a characterization of BMO-martingales, [*Tôhoku Math. Journ.*]{}, 31:281–284. Jeanblanc, M., Matoussi, A., Ngoupeyou, A.B. (2010). Robust utility maximization in a discontinuous filtration, preprint, [*arXiv:1201.2690*]{}. Kazamaki, N. (1979). A sufficient condition for the uniform integrability of exponential martingales, [*Math. Rep. Toyama University*]{}, 2:1–11. Kazamaki, N. (1979). On transforming the class of BMO-martingales by a change of law, [*Tôhoku Math. J.*]{}, 31:117–125. Kazamaki, N. (1994). Continuous exponential martingales and BMO. [*Springer-Verlag*]{}. Kazi-Tani, N., Possamaï, D., Zhou, C. (2012). Second-order BSDEs with jumps, part I: aggregation and uniqueness, preprint, [*arXiv:1208.0757*]{}. Kazi-Tani, N., Possamaï, D., Zhou, C. (2012). 
Non-linear expectations and dynamic risk-measures related to quadratic BSDEs with jumps, preprint. Kobylanski, M. (2000). Backward stochastic differential equations and partial differential equations with quadratic growth, [*Ann. Prob.*]{} 28:259–276. Laeven, R.J.A., Stadje, M.A. (2012). Robust portfolio choice and indifference valuation, preprint, [*http://repository.tue.nl/733411*]{}. Lépingle, D. and Mémin, J. (1978). Sur l’intégrabilité uniforme des martingales exponentielles, [*Z. Wahrscheinlichkeitstheorie verw. Gebiete*]{}, 42:175–203. Lépingle, D. and Mémin, J. (1978). Intégrabilité uniforme et dans $L^r$ des martingales exponentielles, [*Sém. Prob. de Rennes*]{}. Liu, Y., Ma, J. (2009). Optimal reinsurance/investment problems for general insurance models, [*Ann. App. Prob.*]{}, 19(4):1495–1528. Mania, M., Tevzadze, R. (2006). An exponential martingale equation, [*Elec. Com. Prob.*]{}, 11:306–316. Mocha, M., Westray, N. (2012). Quadratic semimartingale BSDEs under an exponential moments condition, [*Séminaire de Probabilités XLIV*]{}, Springer, 105–139. Morlais, M.-A. (2009). Quadratic BSDEs driven by a continuous martingale and applications to the utility maximization problem, [*Finance Stoc.*]{}, 13(1):121–150. Morlais, M.-A. (2010). A new existence result for quadratic BSDEs with jumps with application to the utility maximization problem, [*Stoch. Proc. App.*]{}, 10:1966-1995. Ngoupeyou, A.B. (2010). Optimisation des portefeuilles d’actifs soumis au risque de défaut, [*Université d’Evry*]{}, PhD Thesis. Pardoux, E. and Peng, S (1990). Adapted solution of a backward stochastic differential equation, [*Systems Control Lett.*]{}, 14:55–61. Royer, M. (2006). Backward stochastic differential equations with jumps and related non-linear expectations, [*Stoch. Proc. and their App.*]{}, 116:1358–1376. Sato, K.-I. (1999). Lévy processes and infinitely divisible distributions, [*Cambridge University Press*]{}. Tang, S., Li, X. (1994). Necessary conditions for optimal control of stochastic systems with random jumps, [*SIAM J. Control Optim.*]{}, 32(5):1447–1475. Tevzadze, R. (2008). Solvability of backward stochastic differential equations with quadratic growth, [*Stoch. Proc. and their App.*]{}, 118:503–515.
[^1]: CMAP, Ecole Polytechnique, Paris, nabil.kazitani@polytechnique.edu.
[^2]: CEREMADE, Université Paris Dauphine, possamai@ceremade.dauphine.fr.
[^3]: Department of Mathematics, National University of Singapore, Singapore, matzc@nus.edu.sg. Part of this work was carried out while the author was working at CMAP, Ecole Polytechnique, whose financial support is kindly acknowledged.
[^4]: Research partly supported by the Chair [*Financial Risks*]{} of the [*Risk Foundation*]{} sponsored by Société Générale, the Chair [*Derivatives of the Future*]{} sponsored by the [Fédération Bancaire Française]{}, and the Chair [*Finance and Sustainable Development*]{} sponsored by EDF and Calyon.
|
---
author:
- Wenjun Liao
- Chenghua Lin
bibliography:
- 'sample.bib'
title: Deep Ensemble Learning for News Stance Detection
---
***Keywords: Stance detection, Fake News, Neural Network, Deep ensemble learning, NLP***
Extended Abstract {#extended-abstract .unnumbered}
=================
Detecting stance in news is important for news veracity assessment because it helps fact-checking by predicting a stance with respect to a central claim from different information sources. Initiated in 2017, the Fake News Challenge Stage One[^1] (FNC-1) proposed the task of detecting the stance of a news article body relative to a given headline, as a first step towards fake news detection. The body text may agree or disagree with the headline, discuss the same claim as the headline without taking a position or is unrelated to the headline.
Several state-of-the-art algorithms [@hanselowski2018retrospective; @riedel2017simple] have been implemented based on the training dataset provided by FNC-1. We conducted error analysis for the top three performing systems in FNC-1. Team1, *‘SOLAT in the SWEN’* from Talos Intelligence[^2], won the competition by using a 50/50 weighted average ensemble of a convolutional neural network and gradient boosted decision trees. Team2, *‘Athene’* from TU Darmstadt, achieved second place by using hard-voting over results generated by five randomly initialized Multilayer Perceptron (MLP) structures, where each MLP is constructed with seven hidden layers [@hanselowski2018retrospective]. The two approaches use semantic analysis and bag-of-words features as well as baseline features defined by FNC-1, which include word/ngram overlap features and indicator features for polarity and refutation. Team3, *‘UCL Machine Reading’*, uses a simple end-to-end MLP model with a 10000-dimension Term Frequency (TF) vector (5000 extracted from headlines and 5000 from text bodies) and a one-dimension TF-IDF cosine similarity vector as input features [@riedel2017simple]. The MLP architecture has one hidden layer with 100 units, and its output layer has four units corresponding to the four possible classes. A rectified linear unit activation is applied to the hidden layer and Softmax is applied to the output layer. The loss function is the sum of the $l_2$ regularization of the MLP weights and the cross entropy between outputs and true labels. The result is decided by the argmax function over the output layer. Several techniques, such as mini-batch training and dropout, are adopted to optimize the model training process. According to our error analysis, UCL’s system is simple but tough to beat; therefore, it is chosen as the new baseline.
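To make the UCL-style baseline concrete, the following is a minimal sketch in Python (using scikit-learn and PyTorch), not the authors' code: the feature sizes and network shape follow the description above, while names such as `headlines`, `bodies` and `stances`, and hyperparameters such as the dropout rate, learning rate and number of epochs, are illustrative placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

def build_features(headlines, bodies):
    # 5000-dim term-frequency vectors for headlines and bodies (separate vocabularies),
    # plus a one-dimensional TF-IDF cosine similarity between each headline/body pair.
    tf_head = CountVectorizer(max_features=5000).fit(headlines)
    tf_body = CountVectorizer(max_features=5000).fit(bodies)
    tfidf = TfidfVectorizer(max_features=5000).fit(list(headlines) + list(bodies))
    h_tf = tf_head.transform(headlines).toarray()
    b_tf = tf_body.transform(bodies).toarray()
    h_ti = tfidf.transform(headlines).toarray()
    b_ti = tfidf.transform(bodies).toarray()
    num = (h_ti * b_ti).sum(axis=1)
    den = np.linalg.norm(h_ti, axis=1) * np.linalg.norm(b_ti, axis=1) + 1e-9
    cos = (num / den)[:, None]
    return np.hstack([h_tf, b_tf, cos]).astype(np.float32)  # 10001-dim input

class BaselineMLP(nn.Module):
    """One hidden layer with 100 ReLU units, dropout, and a 4-way output layer."""
    def __init__(self, in_dim=10001, hidden=100, n_classes=4, p_drop=0.4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Dropout(p_drop), nn.Linear(hidden, n_classes))

    def forward(self, x):
        return self.net(x)  # logits; softmax/argmax are applied outside

def train(model, X, y, epochs=50, lr=1e-3, l2=1e-4, batch=128):
    # Cross-entropy loss plus l2 regularization of the weights (via weight_decay),
    # optimized with mini-batch training.
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=l2)
    loss_fn = nn.CrossEntropyLoss()
    X, y = torch.from_numpy(X), torch.from_numpy(y).long()
    for _ in range(epochs):
        perm = torch.randperm(len(X))
        for i in range(0, len(X), batch):
            idx = perm[i:i + batch]
            opt.zero_grad()
            loss_fn(model(X[idx]), y[idx]).backward()
            opt.step()
    return model
```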
**Method.** In this work, we developed five new models by extending the system of UCL. They can be divided into two categories. The first category encodes additional keyword features during model training, where the keywords are represented as indicator vectors and are concatenated to the baseline features. The keywords consist of manually selected refutation words based on error analysis. To make this selection process automatic, three algorithms are created based on the Mutual Information (MI) theory. The keywords generator based on MI customized class (MICC) gave the best performance. Figure 1(a) illustrates the work-flow of the MICC algorithm. The second category adopts article body-title similarity as part of the model training input, where word2vec is introduced and two document similarity calculation algorithms are implemented: word2vec cosine similarity and Word Mover’s Distance.
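As an illustration of the first category of models, the sketch below selects keywords by mutual information between word occurrence and the stance label, and turns them into indicator features to be concatenated to the baseline input. It is a simplified, hypothetical stand-in for the MICC generator described above (MICC additionally groups documents by customized theme words, as in Figure 1(a)); function and variable names are placeholders.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def select_keywords(texts, labels, k=20):
    """Rank vocabulary words by mutual information with the stance label, keep the top k."""
    vec = CountVectorizer(binary=True, max_features=20000)
    X = vec.fit_transform(texts)                      # word-occurrence indicators
    mi = mutual_info_classif(X, labels, discrete_features=True)
    vocab = np.array(vec.get_feature_names_out())
    return list(vocab[np.argsort(mi)[::-1][:k]])

def keyword_indicator_features(texts, keywords):
    """One 0/1 indicator column per selected keyword."""
    feats = np.zeros((len(texts), len(keywords)), dtype=np.float32)
    for i, text in enumerate(texts):
        tokens = set(text.lower().split())
        for j, word in enumerate(keywords):
            if word in tokens:
                feats[i, j] = 1.0
    return feats

# Usage sketch: keywords = select_keywords(bodies, stances)
# X_aug = np.hstack([X_baseline, keyword_indicator_features(bodies, keywords)])
```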
**Results.** Outputs generated from the different aforementioned methods are combined following two rules, *concatenation* and *summation*. Next, single models as well as ensembles of two or three randomly selected models go through 10-fold cross validation. The output layer becomes $4\cdot\textit{N}$-dimensional when adopting the concatenation rule, where *N* is the number of models selected for the ensemble. We considered the evaluation metric defined by FNC, where the correct classification of relatedness contributes 0.25 points and correctly classifying related pairs as agree, disagree or discuss contributes 0.75 points. Experimental results show that an ensemble of three neural network models trained from simple bag-of-words features gives the best performance. These three models are: the baseline MLP; a model from category one where manually selected keyword features are added; and a model from category one where the added keyword features are selected by the MICC algorithm.
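The snippet below spells out the scoring rule as described in this paragraph (0.25 for getting relatedness right, a further 0.75 for the exact stance of a related pair) together with the *summation* ensemble rule, here read as summing the class probabilities of the selected models before taking the argmax; this is our reading of the description, not the official FNC scorer.

```python
import numpy as np

LABELS = ["agree", "disagree", "discuss", "unrelated"]
RELATED = {"agree", "disagree", "discuss"}

def fnc_score(gold, pred):
    """0.25 for a correct related/unrelated decision, +0.75 for the exact related stance."""
    score = 0.0
    for g, p in zip(gold, pred):
        if (g in RELATED) == (p in RELATED):
            score += 0.25
        if g in RELATED and g == p:
            score += 0.75
    return score

def summation_ensemble(prob_list):
    """Sum the per-model class probabilities (shape (n, 4) each), then take the argmax."""
    summed = np.sum(np.stack(prob_list, axis=0), axis=0)
    return [LABELS[i] for i in summed.argmax(axis=1)]

# Usage sketch, with three models' softmax outputs p1, p2, p3:
# pred = summation_ensemble([p1, p2, p3])
# relative_score = fnc_score(gold, pred) / fnc_score(gold, gold)
```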
After hyperparameter tuning on the validation set, the ensemble of the three selected models has shown strong performance on the test dataset. As shown in Table 1, our system beats the FNC-1 winner team Talos by 34.25 marks, which is remarkable considering our system’s relatively simple architecture. Figure 1(b) demonstrates the performance of our system. Our deep ensemble model does not stand out in any single one of the four stance detection categories. However, it reflects the averaged outcome of the best results from the three individual models, and it is the ensemble effect that brings the best result in the end. The evaluation has demonstrated that our proposed ensemble-based system can outperform the state-of-the-art algorithms on the news stance detection task with a relatively simple implementation.
![(a) The illustration of the customized-class based MI algorithm: the input is the customized theme word, documents are then classified according to the themes, and the output is groups of keywords under different classes.](Mutual_Information.jpg){width="0.45\linewidth"}
![(b) The heat map of the detection results.[]{data-label="fig:fig"}](Heatmap_of_confusion_matrix.png){width="0.55\linewidth"}
Team & & & &\
UCL & 0.44 & 0.066 & 0.814 & 0.979 & 0.404 & &\
Athene & 0.447 & 0.095 & 0.809 & 0.992 &0.416 & &\
Talos & 0.585 & 0.019 & 0.762 &0.987 &0.409& &\
This work & 0.391 & 0.067 & 0.855 &0.980 &0.403& &\
Appendix[^3] {#appendix .unnumbered}
============
The code of this work is available at Github: <https://github.com/amazingclaude/Fake_News_Stance_Detection>.\
\
The full thesis regarding this work is available at ResearchGate: <https://www.researchgate.net/publication/327634447_Stance_Detection_in_Fake_News_An_Approach_based_on_Deep_Ensemble_Learning>.
[^1]: <http://www.fakenewschallenge.org>
[^2]: <https://github.com/Cisco-Talos/fnc-1>
[^3]: This page is not included in the submitted camera-ready version.
|
cond-mat/0602009
[Comments on the Superconductivity Solution\
of an Ideal Charged Boson System$^*$]{}
R. Friedberg$^1$ and T. D. Lee$^{1,~2}$\
[\
[*New York, NY 10027, U.S.A.*]{}\
[*2. China Center of Advanced Science and Technology (CCAST/World Lab.)*]{}\
[*P.O. Box 8730, Beijing 100080, China*]{}\
]{}
[Abstract]{}
We review the present status of the superconductivity solution
for an ideal charged boson system, with suggestions for possible
improvement.
———————————-
A dedication in celebration of the 90th birthday of Professor V. L. Ginzburg
This research was supported in part by the U. S. Department of Energy Grant
DE-FG02-92ER-40699
[**1. Introduction**]{}
An ideal charged boson system is of interest because of the simplicity in its formulation and yet the complexity of its manifestations. The astonishingly complicated behavior of this idealized system may provide some insight to the still not fully understood properties of high $T_c$ superconductivity. As is well known, R. Schafroth\[1\] first studied the superconductivity of this model fifty years ago. In this classic paper he concluded that at zero temperature $T=0$ and in an external constant magnetic field $H$, there is a critical field $$(H_c)_{Sch}=e\rho/2m\eqno(1.1)$$ with $\rho$ denoting the overall number density of the charged bosons and $m$, $e$ their mass and electric charge respectively; the system is in the super phase when $H<H_c$, and in the normal phase when $H>H_c$. Due to an oversight, Schafroth neglected the exchange part of the electrostatic energy, which invalidates his conclusion as was pointed out in a 1990 paper \[2\] by Friedberg, Lee and Ren (FLR). This oversight when corrected makes the ideal charged boson model even more interesting. Some aspects of this simple model are still not well understood.
In what follows we first review the Schafroth solution and then the FLR corrections. Our discussions are confined only to $T=0$.
[**2. Hamiltonian and Schafroth Solution**]{}
Let $\phi({\bf r})$ be the charged boson field operator and $\phi^\dag({\bf r})$ its hermitian conjugate, with their equal-time commutator given by $$[\phi({\bf r}),~\phi^\dag({\bf r}')]=\delta^3({\bf r}-{\bf
r}').\eqno(2.1)$$ These bosons are non-relativistic, enclosed in a large cubic volume $\Omega=L^3$ and with an external constant background charge density $-e\rho_{ext}~$ so that the integral of the total charge density $$eJ_0\equiv e \phi^\dag\phi-e \rho_{ext}\eqno(2.2)$$ is zero. The Coulomb energy operator is given by $$H_{Coul}=\frac{e^2}{8\pi} \int~|~{\bf r}-{\bf r}'|^{-1}~:J_0({\bf
r})J_0({\bf r}'):~ d^3rd^3r'\eqno(2.3)$$ where $:~:$ denotes the normal product in Wick’s notation\[3\] so as to exclude the Coulomb self-energy.
Expand the field operator $\phi({\bf r})$ in terms of a complete orthonormal set of $c$-number function $\{f_i({\bf r})\}$: $$\phi({\bf r})=\sum\limits_i a_if_i({\bf r})\eqno(2.4)$$ with $a_i$ and its hermitian conjugate $a_i^\dag$ obeying the commutation relation $[a_i,~a_j^\dag]=\delta_{ij}$, in accordance with (2.1). Take a normalized state vector $|>$ which is also an eigenstate of all $a_i^\dag a_i$ with $$a_i^\dag a_i |>=n_i|>.\eqno(2.5)$$ For such a state, the expectation value of the Coulomb energy $E_{Coul}$ can be written as a sum of three terms: $$<|H_{Coul}|>=E_{ex}+E_{dir}+E_{dir}'\eqno(2.6)$$ where $$E_{ex}=\sum\limits_{i\neq j}\frac{e^2}{8\pi}\int d^3rd^3r' |~{\bf
r}-{\bf r}'|^{-1}n_i n_j f_i^*({\bf r}) f_j^*({\bf r}')f_i({\bf
r}')f_j({\bf r})$$ $$E_{dir}=\frac{e^2}{8\pi}\int d^3rd^3r' |~{\bf r}-{\bf
r}'|^{-1}<|J_0({\bf r})|><|J_0({\bf r}')|>\eqno(2.7)$$ and $$E_{dir}'=-\sum\limits_{i}\frac{e^2}{8\pi}\int d^3rd^3r'|~{\bf
r}-{\bf r}'|^{-1}n_i |f_i({\bf r})|^2 |f_i({\bf r}')|^2.$$ The last term $E_{dir}'$ is the subtraction, recognizing that in Wick’s normal product each particle does not interact with itself.
In the Schafroth solution, for the super phase at $T=0$ all particles are in the zero momentum state; therefore, on account of (2.2) the ensemble average of $J_0$ is zero and so is the Coulomb energy. For the normal phase, take the magnetic field ${\bf
B}=B\hat{z}$ with $B$ uniform and pick its gauge field ${\bf
A}=Bx\hat{y}$. At $T=0$, let $$f_i({\bf r})=e^{ip_iy}\psi_i(x).\eqno(2.8)$$ Schafroth assumed $p_i=eBx_i$ with $x_i$ spaced at regular intervals $\lambda=2\pi/eBL$, which approaches zero as $L\rightarrow \infty$. This makes the boson density uniform and therefore $E_{dir}=0$. In the same infinite volume limit, one can show readily that $\Omega^{-1}E_{dir}'\rightarrow 0$. Since Schafroth omitted $E_{ex}$, his energy consists only of $$E_{field}=\int d^3r \frac{1}{2}~B^2,\eqno(2.9)$$ $$E_{mech}=\sum\limits_{i}n_i \int d^3r
\frac{1}{2m}~\bigg(\frac{d\psi_i}{dx}\bigg)^2\eqno(2.10)$$ and $$E_{dia}=\sum\limits_{i}n_i \int d^3r \frac{1}{2m}~(p_i-eA_y(x))^2\,\psi_i^2(x)$$ $$~~~~~~~~=\sum\limits_{i}n_i \int d^3r
\frac{e^2B^2}{2m}~(x-x_i)^2\,\psi_i^2(x).\eqno(2.11)$$ The sum of (2.10) and (2.11) gives the usual cyclotron energy $$E_{mech}+E_{dia}=\sum\limits_{i}n_i ~\frac{eB}{2m}.\eqno(2.12)$$ Combining with (2.9), Schafroth derived the total Helmholtz free energy density in the normal phase at zero temperature to be $$F_n=\frac{1}{2}~B^2+\frac{e\rho}{2m}~B\eqno(2.13)$$ (Throughout the paper, we take $e$ and $B$ to be positive, since all energies are even in these parameters.)
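To see how (2.10) and (2.11) combine into (2.12), note that for each particle they are the kinetic and potential energies of a one-dimensional harmonic oscillator (in the units with $\hbar=1$ used throughout, as in the cyclotron radius $a=(eB)^{-\frac{1}{2}}$ below): $$\frac{p_x^2}{2m}+\frac{e^2B^2}{2m}~(x-x_i)^2=\frac{p_x^2}{2m}+\frac{1}{2}~m\omega_c^2(x-x_i)^2,~~~~\omega_c=\frac{eB}{m},$$ so that, with $\psi_i$ the oscillator ground state, each particle contributes the zero-point energy $\omega_c/2=eB/2m$, which is just (2.12).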
The derivation of (2.13) is, however, flawed by the omission of $E_{ex}$. It turns out that for the above particle wave function (2.8), when $x_i-x_j$ is $<<$ the cyclotron radius $a=(eB)^{-\frac{1}{2}}$, the coefficient of $n_in_j$ in $E_{ex}$ is proportional to $|x_i-x_j|^{-1}$. Hence $\Omega^{-1}E_{ex}$ becomes $\infty$ logarithmically as the spacing $\lambda
\rightarrow 0$.
[**3. Corrected Normal State at High Density**]{}
In this and the next section, we review the FLR analysis for the high density case, when $\rho > r_b^{-3}$ where $r_b=$ Bohr radius $=4\pi/me^2$.
a\. . We discuss first the case when $B$ is $>>(m\rho)^{\frac{1}{2}}$, so that the Coulomb correction to the magnetic energy (2.13) can be treated as a perturbation. To find the groundstate energy, we shall continue to assume (2.8) with $p_i=eBx_i$ and $x_i$ equally spaced at interval $\lambda$, but keeping $\lambda \neq 0$. Now as $\Omega
\rightarrow \infty$, $\Omega^{-1}E_{dir}'$ remains zero, but $\Omega^{-1}E_{dir}$ in fact increases as $\lambda^2$ for $\lambda>>a$, the cyclotron radius. Both $\Omega^{-1}E_{dir}$ and $\Omega^{-1}E_{ex}$ are complicated in this range. The minimization can be done exactly, yielding $$\lambda=\pi a \lambda_0\eqno(3.1)$$ where $$1-\bigg(\frac{\pi}{2}\bigg)^{\frac{1}{2}} \lambda_0 = \lambda_0^2
\sum\limits_{\mu=1}^{\infty}e^{-2\mu^2/\lambda_0}\eqno(3.2)$$ The sum of $E_{dir}$ and $E_{ex}$ is found to be proportional to $1/B$. Hence, (2.13) is replaced by $$F_n=\frac{1}{2}~B^2+\frac{e\rho}{2m}~B+\frac{e\rho^2}{B}~\gamma_n\eqno(3.3)$$ with $$\gamma_n=\gamma_{dir}+\gamma_{ex}=0.00567+0.00715=0.0128.\eqno(3.4)$$
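Equation (3.2) is a one-dimensional transcendental equation for $\lambda_0$ and is easy to solve numerically; the short Python sketch below is illustrative (it is not part of the original paper) and simply brackets the root of (3.2) by bisection, truncating the rapidly convergent sum.

```python
import math

def residual(lam0, nterms=200):
    """lam0^2 * sum_{mu>=1} exp(-2 mu^2 / lam0) - (1 - sqrt(pi/2) * lam0), i.e. rhs - lhs of (3.2)."""
    s = sum(math.exp(-2.0 * mu * mu / lam0) for mu in range(1, nterms + 1))
    return lam0 * lam0 * s - (1.0 - math.sqrt(math.pi / 2.0) * lam0)

def solve_lambda0(lo=1e-6, hi=2.0, tol=1e-12):
    # The residual is increasing in lam0: negative near 0, positive at lam0 = 2, so bisection applies.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lambda0 = solve_lambda0()
print(lambda0)   # the optimal spacing in (3.1) is then lambda = pi * a * lambda0
```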
b\. . Clearly (3.3) cannot be extended to $B\rightarrow 0$, as the last term would diverge. In its derivation the $\psi_i(x)$ in (2.8) is taken to be the usual simple harmonic oscillator wave function determined by the magnetic field $B$ only, without regard to $E_{Coul}$. This is valid only when $B>>(m\rho )^{\frac{1}{2}}$. For much smaller $B$, we may consider a configuration in which the function $\psi_i(x)$ is spread out flat over a width $\lambda-l$, then drops to zero sinusoidally over a smaller width $l$ on each side. Neighboring $\psi_i$ overlap only in the strips of width $l$. Thus, it can be arranged that $\sum\limits_i |\psi_i(x)|^2$ is uniform and hence $$E_{dir}=0.\eqno(3.5)$$
For sufficiently weak $B$, we find $\lambda>>$ the London length $\lambda_L=(e^2\rho/m)^{-\frac{1}{2}}$. We must then drop the assumption that $B$ is uniform; it is largest in the overlap region and drops to zero over the length $\lambda_L$ in either side. Let $\overline{B}$ be the average of $B$ over $\Omega$ and $\overline{a} \equiv (e\overline{B})^{-\frac{1}{2}}$ the corresponding “cyclotron radius”. It is then found that $$\lambda =\frac{\pi \overline{a}^2}{\epsilon \lambda_L},\eqno(3.6)$$ $$l=\lambda_L \epsilon^2 <<\lambda_L\eqno(3.7)$$ where $$\epsilon=\bigg(\frac{\pi^5}{8m^2\lambda_L^2}\bigg)^{1/7}<<1
\eqno(3.8)$$ is independent of $\overline{B}$. The energies $E_{mech}$, $E_{ex}$, $E_{dia}$ are all proportional to $\overline{B}$, and one obtains for the free energy density $$F_n=\frac{1}{2}~\overline{B}^2+\frac{e\rho}{m}~\eta
\overline{B}\eqno(3.9)$$ with $$\eta=\frac{7\pi}{16\epsilon}>>1,\eqno(3.10)$$ much bigger than the Schafroth result.
c.. Let $$B_1=e\rho/m,~~~B_2=(m\rho)^{\frac{1}{2}}\eqno(3.11)$$ Between the above strong field case $B>>B_2$ and the weak field case when $\overline{B}<<\eta B_1$, we have the regime when $\psi$ remains flat as in the previous section, but with $\lambda_L >>
\lambda \geq l$. Hence, $E_{dir}=0$ as in (b), but $B$ is uniform as in (a). One obtains an estimate $$F_n=\frac{1}{2}~B^2+\frac{5}{32}~B_0^{7/5}B^{3/5}\eqno(3.12)$$ where $$B_0=\bigg(\frac{16}{3}\pi m \lambda_L\bigg)^{2/7}B_1.\eqno(3.13)$$
The formulas (3.3-4), (3.9-10) and (3.12-13) are strictly upper bounds, which might be improved with better wave functions. We hope these to be good estimates.
[**4. Super State at High Density**]{}
a\. . The coherence length $\xi$ governing the disappearance of the normal phase outside a vortex is found from the Ginzburg-Landau (G.-L.) equation\[4\] to be $$\xi=(2\lambda_L/m)^{\frac{1}{2}},\eqno(4.1)$$ and hence (taking $m\lambda_L>>1$) $\lambda_L>>\xi$, so that we should have a Type [II]{} superconductor, with the critical field for vortex penetration $$H_{c_1} \cong \frac{1}{2}B_1[~\ln
(\lambda_L/\xi)+1.623~]\eqno(4.2)$$ in which the constant differs from that given by the G.-L. equation because of the long range Coulomb field.
However, in Schafroth’s solution because of (2.13), his normal phase would begin to exist at $$(H_c)_{Sch}=\frac{1}{2}~B_1<<H_{c_1},\eqno(4.3)$$ above which $(F_n-BH)_{Sch}$ would also be lower than that for the super phase, making it a Type [I]{} superconductor. But Schafroth’s solution is invalid; (2.13) must be replaced by (3.9) at low $B$, giving $$H_c=\eta B_1\eqno(4.4)$$ with $\eta>>\frac{1}{2}$, as shown in (3.10).
Next, we compare the above corrected $H_c$ with $H_{c_1}$. Using (4.1), we write (4.2) as $$H_{c_1}=\bigg(\frac{7}{8}~\ln\zeta\bigg)B_1\eqno(4.5)$$ where $$\ln \zeta =\frac{4}{7}~\bigg(\frac{1}{2}\ln
\frac{m\lambda_L}{2}+1.623\bigg).\eqno(4.6)$$ Likewise, because of (3.8) and (3.10), the parameter $\eta$ in (4.4) can also be expressed in terms of the same $\zeta$: $$\eta=\kappa~\zeta\eqno(4.7)$$ with the constant $\kappa$ given by $$\kappa=\frac{7}{8}~\bigg(\frac{\pi}{2}~e^{-3.246}\bigg)^{2/7}.\eqno(4.8)$$ Thus, $$\frac{H_c}{H_{c_1}}=\frac{8\kappa}{7}~\frac{\zeta}{\ln
\zeta}.\eqno(4.9)$$ Now $\zeta/\ln \zeta$ has a minimum$=e$ when $\zeta=e$. Thus, $$\frac{H_c}{H_{c_1}}>\bigg(\frac{\pi}{2}\bigg)^{2/7}e^{0.07}>1,\eqno(4.10)$$ and the system is indeed a Type [II]{} superconductor.
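A quick numerical check of the bound (4.10), using only the quantities (4.8)–(4.9) quoted above (the snippet is illustrative and not part of the original paper):

```python
import math

kappa = (7.0 / 8.0) * (0.5 * math.pi * math.exp(-3.246)) ** (2.0 / 7.0)   # eq. (4.8)
min_zeta_over_log = math.e                                                # minimum of z / ln z, at z = e
bound = (8.0 * kappa / 7.0) * min_zeta_over_log                           # lower bound in eq. (4.9)

print(round(kappa, 4), round(bound, 4))
# bound = (pi/2)^(2/7) * exp(1 - 2*3.246/7) ~ 1.22 > 1, consistent with (4.10).
```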
b\. . Once $H=H_{c_1}+$, the vortices appear and soon become so numerous that their typical separation is of the order of $\lambda_L$. This gives an average $B$ of the order of $B_1=e\rho/m$.
c\. . To increase $B$ further, it is necessary to increase $H$ on account of the interaction energy between vortices. The vortex separation distance is of the order of the cyclotron radius $a=(eB)^{-\frac{1}{2}}$. In the regime $\xi<<a<<\lambda_L$ (correspondingly, $B_2>>B>>B_1$), the vortices naturally form a lattice to minimize their interaction energy. An involved calculation gives the Helmholtz free energy density at $T=0$ to be $$F_s=\frac{1}{2}B^2+\frac{1}{4}B_1B(\ln \frac{B_2}{B}+{\sf
const})\eqno(4.11)$$ where $B_2=(m\rho)^{\frac{1}{2}}$ and the constant is $$\begin{aligned}
\label{4.12}
~~~~~~~~~~-\ln 4\pi+\left\{
\begin{array}{ll}
4.068~,~~~{\sf square~~lattice}\\
4.048~,~~~{\sf triangular~~lattice}.
\end{array}
\right.~~~~~~~~~~~~~~~~~~~~~~~~~~(4.12)\end{aligned}$$
d\. . In this case of very strong magnetic field, the cyclotron radius and the separation distance are both much less than $\xi$ (but we assume that the system remains non-relativistic). The free energy density is dominated by the RHS of (2.13). However, there is still a super phase, whose wave function is assumed to be given by the Abrikosov solution, i.e. by Eq.(8) of Ref.\[5\], and its Coulomb energy is calculated as a perturbation. The result for the Helmholtz energy density in the super phase is $$F_s=\frac{1}{2}B^2+\frac{e\rho}{2m}B+\frac{e\rho^2}{B}\gamma_s\eqno(4.13)$$ with $$\begin{aligned}
\label{4.14}
~~~~~~~~~~~~\gamma_s=\left\{
\begin{array}{ll}
0.01405~,~~~{\sf square~~lattice}\\
0.01099~,~~~{\sf triangular~~lattice}.
\end{array}
\right.~~~~~~~~~~~~~~~~~~~~~~~~~~~~(4.14)\end{aligned}$$
e\. . The super phase regimes of Sections 4a-b, 4c, 4d correspond (with respect to the value of $B$ or its average $\overline{B}$) to those of the normal phase regimes discussed in 3b, 3c and 3a respectively. Since, for the triangular lattice, a comparison of (4.14) with (3.4) gives $\gamma_s=0.01099<\gamma_n=0.0128$, we see that $F_s<F_n$ for the same $B>>B_2$. Similarly, $F_s<F_n$ for the same $B$ in the regimes $B_2>>B>>B_1$ and $B<<B_1$. From these results and that $H=dF/dB$ is monotonic in $B$, one can readily deduce that the Legendre transform $$\tilde{F}=F-BH\eqno(4.15)$$ satisfies $\tilde{F}_s<\tilde{F}_n$ for the same $H$ in all these regions. (See Figure 1)
From this it seems possible that $$H_{c_2}=\infty;\eqno(4.16)$$ the super phase may persist at high density for all values of the magnetic field.
f\. . In the problem discussed in Ref.\[5\], the Ginzburg-Landau function $\Psi$ is an order parameter, whereas our $f_i(r)$ are single particle wave functions. Nevertheless, except for the constant in (4.2), the two problems have the same physics content at high $\rho$ when $H<<B_2$. For higher fields, when $H\geq B_2$, the Ginzburg-Landau $\Psi$ should vanish; however, this is not true in our problem. At $T=0$, we place all the particles in the coherent state, making the charge density vary greatly within a unit lattice cell. Our result (4.14) favoring a triangular lattice is unrelated to that of \[5\], because the ratio parameter $<|\Psi|^4>/<|\Psi|^2>^2$ in \[5\] does not appear in our problem. The lattice dependence in our problem is electrostatic in origin.
[**5. Low Density at Zero Field**]{}
a\. . At very low density and with zero magnetic field, $E_{dir}'$ of (2.7) becomes important. The lowest energy is now achieved by placing the individual charges in separate cells forming a lattice, with little or no overlap. Hence $E_{ex}$ can be disregarded, and a trial wave function leads in the limit $\rho\rightarrow 0$ to $$N^{-1}(E_{Coul}+E_{mech})=-\alpha_n\frac{e^2}{4\pi R}\eqno(5.1)$$ where $$(4\pi/3) R^3=\rho^{-1}~~~{\sf and}~~~\alpha_n\cong 0.9\eqno(5.2)$$ very closely. The above formula (5.1) is valid for $\rho<<r_b^{-3}$, with $r_b$ the Bohr radius.
b\. . In the same limit, the super phase energy also becomes negative, as shown by a Bogolubov-type transformation\[6-8\]. This leads to $$N^{-1}(E_{Coul}+E_{mech})=-\alpha_s\frac{e^2}{4\pi R}\eqno(5.3)$$ with $$0.316<\alpha_s<0.558.\eqno(5.4)$$ Thus, $\alpha_s<\alpha_n$ and the normal phase holds at $\rho<<r^{-3}_b$.
c\. . As $\rho$ increases, (5.1-2) serves only as a lower bound; i.e., $$N^{-1}(E_{Coul}+E_{mech})>-(0.9)\frac{e^2}{4\pi R}.\eqno(5.5)$$ For the normal phase, when $\rho$ approaches $r^{-3}_b$, the single particle wave function leading to (5.1-2) can no longer fit without overlap. We confine each particle within a cube, give it a $r^{-1}\sin qr$ wave function as a trial function, just avoiding overlap so that $E_{ex}=0$. With approximation neglecting the distinction between sphere and cube, we find $$N^{-1}E_{mech}=\frac{\pi^2}{2mR^2},~~~N^{-1}E_{Coul}=-\frac{e^2}{4\pi
R}K_n\eqno(5.6)$$ where $$K_n\approx 0.76~.\eqno(5.7)$$ Equating the above $N^{-1}(E_{Coul}+E_{mech})$ for the normal phase with the corresponding expression (5.3) for the super phase, we find the critical density $\rho_c$ given by $$r_b^{3}~\rho_c=\frac{6}{\pi^7}~(K_n-\alpha_s)^3.\eqno(5.8)$$ The system is in the normal state when $\rho<\rho_c$, and in the super state when $\rho>\rho_c$. (Eq.(4.12) in the FLR paper is equivalent to (5.8), but without the subtraction of $K_n$ by $\alpha_s$.)
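Plugging the quoted numbers into (5.8) gives a feeling for the size of the critical density; the snippet below is illustrative only and evaluates $r_b^3\rho_c$ for the two extreme values of $\alpha_s$ allowed by (5.4), with $K_n\approx 0.76$.

```python
import math

K_n = 0.76                         # eq. (5.7)
for alpha_s in (0.316, 0.558):     # bounds from eq. (5.4)
    rho_c_rb3 = 6.0 / math.pi ** 7 * (K_n - alpha_s) ** 3    # eq. (5.8)
    print(alpha_s, f"{rho_c_rb3:.2e}")
# r_b^3 * rho_c ranges roughly from 2e-5 (alpha_s = 0.558) to 2e-4 (alpha_s = 0.316),
# i.e. the critical density is a small fraction of one boson per Bohr volume.
```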
[**6. Further Improvement**]{}
Although the FLR paper (66 pages in the Annals of Phys.) is quite lengthy, several important questions remain open.
a\. The energies in the above Sections 3 and 4 are all upper bounds obtained from trial functions. Perhaps a better trial function, like changing slabs into cylinders, might lower these bounds and call into question some of the FLR conclusions. Also, a numerical calculation exploring the transition regions would be valuable in case there are surprises, particularly when $B\sim
B_2$. In this connection we note that, e.g., in (4.9) the relevant factor in $\zeta/\ln \zeta$ is $(m\lambda_L)^{2/7}/\ln
(m\lambda_L)$, which becomes large when $m\lambda_L \rightarrow
\infty$; yet, it is $<1$ when $m\lambda_L=100$ and only near but still less than $2$ when $m\lambda_L$ is $2000$.
b\. The calculation of the above (5.6-7), i.e., Section 4.2 in the FLR paper, can be improved in several ways. First, consider the integral $\frac{1}{2}\int J_0Vd^3r$, with $V$ the potential due to $J_0$. Because the spatial integral of $J_0$ is zero, and since each particle does not interact with itself we have $$N^{-1}(E_{dir}+E_{dir}')=e\int\psi^2_0({\bf r})\overline{V}({\bf
r})d^3r\eqno(6.1)$$ where $\psi_0({\bf r})$ is located inside a sphere, centered at zero, and $\overline{V}({\bf r})$ is due to all of $J_0$ except the term due to $\psi^2_0$. Second, there is no need to ignore the distinction between sphere and cube. Using theorems from electrostatics, one can reduce (6.1) to the solution of a Madelung problem with like charges at lattice points and a background charge filling space, plus a correction $\frac{1}{6}e^2\rho\int\psi^2_0({\bf r})r^2d^3r$. This correction can be combined with $E_{mech}$ to optimize $\psi_0$, and the Madelung problem can be done by known methods. Third, the energy can probably be reduced by placing the centers of the particle wave functions on a body-centered cubic lattice, as the cell available to each particle would then be more nearly spherical than a cube.
c\. Both FLR and the present paper have left open the question of what happens at low density and high field. It would be surprising if the boundary between super and normal phases were independent of the magnetic field strength. In the $H$ versus $\rho$ phase diagram at $T=0$ and high $H$, does the boundary between normal and super phases bend towards lower $\rho$, or towards higher?
[**7. Comment**]{}
The two most striking results in our paper are $H_{c_1}<H_c$, making the superconductor Type [II]{} instead of Type [I]{}, and that $H_{c_2}$ might be infinite. An improvement in the weak field normal trial function (the above Section 3b) might invalidate the first conclusion by lowering $\eta$ in (3.9). An improvement in the strong field normal trial function (Section 3a) could invalidate the second conclusion by lowering $\gamma_n$ in (3.3).
The field of condensed matter physics has received from its very beginning many deep and beautiful contributions from Russian physicists and masters L. D. Landau, V. L. Ginzburg, N. N. Bogolubov, A. A. Abrikosov, A. M. Polyakov and others. It is our privilege to add this small piece to honor this great and strong tradition and to celebrate the 90th birthday of V. L. Ginzburg.
[ ]{}
|
---
abstract: 'We consider revenue maximization in online auction/pricing problems. A seller sells an identical item in each period to a new buyer, or a new set of buyers. For the online pricing problem, we show regret bounds that scale with the *best fixed price*, rather than the range of the values. We also show regret bounds that are *almost scale free*, and match the offline sample complexity, when comparing to a benchmark that requires a *lower bound on the market share*. These results are obtained by generalizing the classical learning from experts and multi-armed bandit problems to their *multi-scale* versions. In this version, the reward of each action is in a *different range*, and the regret with respect to a given action scales with its *own range*, rather than the maximum range.'
author:
- |
Sébastien Bubeck sebubeck@microsoft.com\
Microsoft Research,\
1 Microsoft Way,\
Redmond, WA 98052, USA. Nikhil Devanur nikdev@microsoft.com\
Microsoft Research,\
1 Microsoft Way,\
Redmond, WA 98052, USA. Zhiyi Huang zhiyi@cs.hku.hk\
Department of Computer Science,\
The University of Hong Kong,\
Pokfulam, Hong Kong. Rad Niazadeh rad@cs.stanford.edu\
Department of Computer Science,\
Stanford University,\
Stanford, CA 94305, USA.
bibliography:
- 'bibliography.bib'
title: 'Multi-scale Online Learning and its Applications to Online Auctions'
---
online learning, multi-scale learning, auction theory, bandit information, sample complexity
|
---
author:
- 'L. Sbordone'
- 'L. Monaco'
- 'C. Moni Bidin'
- 'P. Bonifacio'
- 'S. Villanova'
- 'M. Bellazzini'
- 'R. Ibata'
- 'M. Chiba'
- 'D. Geisler'
- 'E. Caffau'
- 'S. Duffau'
date: 'Received September 15, 1996; accepted March 16, 1997'
title: 'Chemical abundances of giant stars in NGC 5053 and NGC 5634, two globular clusters associated with the Sagittarius dwarf Spheroidal galaxy?'
---
[The tidal disruption of the Sagittarius dwarf Spheroidal galaxy (Sgr dSph) is producing the most prominent substructure in the Milky Way (MW) halo, the Sagittarius Stream. Aside from field stars, it is suspected that the Sgr dSph has lost a number of globular clusters (GC). Many Galactic GCs are thought to have originated in the Sgr dSph. While for some candidates an origin in the Sgr dSph has been confirmed owing to chemical similarities, others exist whose chemical composition has never been investigated.]{} [NGC 5053 and NGC 5634 are two of these scarcely studied Sgr dSph candidate-member clusters. To characterize their composition we analyzed one giant star in NGC 5053, and two in NGC 5634.]{} [We analyze high-resolution and high signal-to-noise spectra by means of the MyGIsFOS code, determining atmospheric parameters and abundances for up to 21 species between O and Eu. The abundances are compared with those of MW halo field stars, of unassociated MW halo globulars, and of the metal-poor Sgr dSph main body population.]{} [We derive a metallicity of \[Fe/H\]=$-2.26\pm$0.10 for NGC 5053, and of \[Fe/H\]=$-1.99\pm0.075$ and $-1.97\pm0.076$ for the two stars in NGC 5634. This makes NGC 5053 one of the most metal-poor globular clusters in the MW. Both clusters display an $\alpha$ enhancement similar to that of the halo at comparable metallicity. The two stars in NGC 5634 clearly display the Na-O anticorrelation widespread among MW globulars. Most other abundances are in good agreement with standard MW halo trends.]{} [The chemistry of the Sgr dSph main body populations is similar to that of the halo at low metallicity. It is thus difficult to discriminate between an origin of NGC 5053 and NGC 5634 in the Sgr dSph, and one in the MW. However, the abundances of these clusters do appear closer to those of the Sgr dSph than of the halo, favoring an origin in the Sgr dSph system. ]{}
Introduction {#c_intro}
============
It is a fundamental prediction of models of galaxy formation based on the cold dark matter (CDM) scenario, that dark matter haloes of the size of that of the Milky Way grow through the accretion of smaller subsystems [see e.g. @Font11 and references therein]. These resemble very much the “protogalactic fragments” invoked by @searle78. The merging of minor systems is supposed to be a common event in the early stages of the galactic history, playing a role even in the formation of the stellar disk [e.g. @lake89; @abadi03]. Despite this general agreement, the processes governing galaxy formation still present many obscure aspects, and understanding them is one of the greatest challenges of modern astrophysics. For example, it has been noticed that the chemical abundance patterns of present-day dwarf spheroidal (dSph) galaxies in the Local Group are very different from those observed among stars in the Galactic halo [see @vladilo03; @venn04 and references therein]. Most noticeably, dSphs typically show a disappearance of $\alpha$ enhancement at lower metallicity than stars in the Milky Way, which is considered evidence of a slow, or bursting, star formation history. This is at variance with the properties of the stars belonging to the old, spheroidal Galactic component. This clearly excludes that the known dSph can represent the typical building blocks of larger structures such as the Galactic halo [@geisler07]. The observed differences are not unexpected, however, because dSphs represent a very different environment for star formation [@lanfranchi03; @lanfranchi04]. In any case, the observed dSphs are evolved structures that survived merging, while the models suggest that, although accretion events take place even today, the majority of the merging processes occurred very early in the history of our Galaxy. The chemical peculiarities of the present-day small satellite galaxies could have appeared later in their evolution, and the genuine building blocks could therefore have been chemically very different from what is observed today, but more similar to the resulting merged structures. The model of @Font06 implies that the satellites that formed the halo were accreted eight to nine Gyr ago, while the presently observed satellites were accreted only four to five Gyr ago, or are still being accreted.
The Sagittarius (Sgr dSph) galaxy is one of the most studied systems in the Local Group because it is the nearest known dSph [@monaco04], currently merging with the Milky Way [@ibata94]. It thus represents a unique opportunity to study in detail both the stellar population of a dSph and the merging process of a minor satellite into a larger structure. Among Local Group galaxies the Sgr dSph is certainly exceptional, first because of the high metallicity of its dominant population (\[Fe/H\]$\sim -0.5$) compared to its relatively low luminosity ($M_V=-$13.4, @Mateo). While the other galaxies of the Local Group follow a well-defined metallicity-luminosity relation, the Sgr dSph is clearly underluminous by almost three magnitudes with respect to this relation [see figure 5 of @Bonifacio05]. The chemical composition of the Sgr dSph is also very exotic because, aside from the aforementioned underabundance of $\alpha$-elements typical of small galaxies, all the other chemical elements studied so far present very peculiar patterns, clearly distinct from the Milky Way [@sbordone07]. However, this behavior is observed only for stars with $[\mathrm{Fe/H}]\geq -$1. No full chemical analysis has been performed to date on field Sgr stars of lower metallicity, but the measured abundances of $\alpha$-elements suggest that the chemical differences with the Galactic halo should be much lower for $[\mathrm{Fe/H}]\leq -$1 [@monaco05]. At $[\mathrm{Fe/H}]\leq -$1.5 the Sgr dSph stellar population could be chemically indistinguishable from the halo, at variance with other dSphs in the Local Group [@shetrone01; @shetrone03], although even this difference tends to disappear at lower metallicities [@tolstoy09].
Decades of Galactic studies have shown that crucial information about the properties and the history of a galaxy can be unveiled by the study of its globular clusters (GCs). For many aspects they can still be approximated as simple, coeval, and chemically homogeneous stellar populations, although it has been known for a while that this is not strictly true [see @gratton12 for a review]. They thus represent a snapshot of the chemical composition of the host galaxy at the time of their formation. The family of GCs associated with the Sgr dSph today counts five confirmed members. However eighteen more clusters have been proposed as belonging to the Sgr dSph (see @bellazzini02 [@bellazzini03a; @bellazzini03] and Table 1 of @law10, hereafter L10, for a complete census). Nevertheless, the probability of a chance alignment with the Sgr streams is not negligible, and many objects in this large list of candidates are most probably not real members. In their recent analysis based on new models of the Sgr tidal disruption, L10 found that only fifteen of the candidates proposed in the literature have a non-negligible probability of belonging to the Sgr dSph. However, calculating the expected quantity of false associations in the sample, they proposed that only the nine GCs with high confidence levels most likely originate from the Sgr galaxy (in good quantitative agreement with the previous analysis by @bellazzini03). This sample of objects with very high membership probability includes all five of the confirmed clusters (M54, , , , and ), plus [@carraro09], [@carraro07], , and [@bellazzini03]. [The large list of GC candidate members is particularly interesting because the estimated total luminosity of the Sgr galaxy is comparable to that of Fornax [@vandenBergh00; @majewski03] which, with its five confirmed GCs, is known for its anomalously high GC specific frequency [@vandenBergh98]. Hence, if more than five GCs were confirmed members of the Sgr family, the parent dSph would be even more anomalous than Fornax, unless its total luminosity has been largely underestimated. Estimating Sgr dSph mass is, however, difficult because the the galaxy is being tidally destroyed, and its relatively fast chemical evolution and presence of young, metal-rich populations hint at a very massive progenitor [@bonifacio04; @sbordone07; @siegel07; @tolstoy09; @deboer14].]{}
Stimulated by the results of L10, we performed a chemical analysis of NGC 5053 and NGC 5634, since no high-resolution study of the clusters' abundances exists to date. These objects are particularly interesting because of their very low metallicity [[$[\mathrm{Fe/H}]\approx-2$,]{} @harris96 2010 web version], which means that they can be used to trace the early stages of the chemical evolution of the host galaxy. In fact, NGC5053 could be one of the most metal-poor GCs in the Sgr dSph family, and is also regarded as one of the most metal-poor GCs known in the Milky Way. @law10 associate both clusters with the primary wrap of the trailing arm of the galaxy, in a section of the tail probably lost by the Sgr dSph between 3 and 5 Gyr ago. Their calculations, based on their Sgr dSph merging model and the cluster position, distance, and radial velocity, indicate that one of the two clusters has a very high probability of originating from the Sgr dSph (99.6%), while for the other this value is lower but still very significant (96%).
Observations and data reduction
===============================
A high-resolution spectrum of one red giant star in NGC 5053 was retrieved with its calibration files from the Keck Observatory archive[^1]. It was collected with one 1800s integration with HIRES [@vogt94] on 2003 June 23 (program ID: C02H, PI: I. Ivans). The spectrum covered the range 438-678 nm at a resolution of R=48000, resulting from the use of a $0\farcs 86$-wide slit. The target corresponds to object 69 in the list of standard stars compiled by @stetson00, and its coordinates and photometric data are given in Table \[pos\_param\_table\]. The spectrum was reduced with HIRES Redux, the IDL-based data reduction pipeline written by Jason X. Prochaska[^2], for a typical S/N ratio [ per pixel]{} of about 80. The radial velocity (RV) of the target was measured with the [*fxcor*]{} IRAF[^3] task, cross-correlating [@tonry79] its spectrum with a synthetic template of a metal-poor red giant drawn from the library of @coelho05. The observed velocity, reduced to the heliocentric value, was 42.2$\pm$0.7 km s$^{-1}$. This value matches the systemic cluster RV proposed by @pryor93 and @geisler95, who measured 42.8$\pm$0.3 km s$^{-1}$ and 42.4$\pm$1.2 km s$^{-1}$, respectively. [The simultaneous coincidence of the RV, metallicity (see Sect. \[results\_5053\]), and photometry (i.e. distance, Fig. \[f\_cmd\]) with the cluster values confirms that the target is a cluster member.]{}
![Color-magnitude diagrams of the target clusters (small open circles). The large full dots indicate the position of the program stars.[]{data-label="f_cmd"}](cmd.pdf){width="\hsize"}
Two bright red giant stars in NGC5634 were spectroscopically observed at the SUBARU telescope on 2009 March 3 with the High Dispersion Spectrograph [HDS, @noguchi02] in Echelle mode (program ID: S09A-026). The targets were selected from the photometry of @bellazzini02 [hereafter B02], and their ID numbers, coordinates, and magnitudes are given in Table \[pos\_param\_table\]. Four exposures were collected for star \#3 and two for star \#2, for a total integration time of three and two hours, respectively. The standard StdYb setup, 2x2 CCD binning, and the $1\farcs 2$ slit produced high-resolution spectra (R=30000) in the range 410-685 nm, secured on the blue and red CCDs simultaneously. Data were reduced as in @monaco11 through a combination of standard IRAF tasks and dedicated scripts available from the HDS website[^4]. [ The spectra of each star were then shifted to laboratory wavelengths and merged. The final combined spectrum of NGC5634-2 had S/N$\approx$120 [ per pixel]{} at 600nm, while the results were of lesser quality for NGC5634-3 (S/N$\approx$80 [ per pixel]{}) as a consequence of shorter exposure times. ]{}
The RV of the two stars was measured on each extracted spectrum with the same procedure used for NGC5053-069, separately for the blue and the red CCD. We thus obtained eight measurements for NGC5634-3 and four for NGC5634-2. For each star, velocities differ by no more than 0.9 km s$^{-1}$ and have a dispersion of 0.4 km s$^{-1}$. The latter value will be assumed as the internal error associated with our estimates, given by the average of the single measurements: $-12.8\pm$0.4 and $-20.6\pm$0.4 km s$^{-1}$ for NGC5634-2 and NGC5634-3, respectively. This velocity difference is compatible with both stars being cluster members. The velocity dispersion of this cluster is not known. If we assume the velocities of the two stars are consistent within 1 $\sigma$, this implies an estimate of $\sigma = 3.9$ km s$^{-1}$, which is compatible with what is found in several globular clusters of similar mass. The two stars could be compatible to within less than 1 $\sigma$, in which case the velocity dispersion would be even higher. The estimates of the cluster RV in the literature are scarce, and affected by large errors. Early investigations by @mayall46 and @hesser86 obtained large negative values that do not agree with our results ($-63\pm$12 and $-41\pm$9 km s$^{-1}$, respectively). On the contrary, our measurements agree better with @peterson85, who measured the RV of five cluster stars with an uncertainty of 25 km s$^{-1}$: their average value is $-$26.0 km s$^{-1}$ with an rms of 29.1 km s$^{-1}$, and the resulting statistical error on the mean is 13.0 km s$^{-1}$. Our targets lie very close on the cluster isochrone, have identical parameters (i.e. the same distance) and metallicity, and similar RV, hence their cluster membership is extremely likely. We conclude that the cluster RV was probably underestimated in the literature, and the average value of $-45.1$ km s$^{-1}$ quoted by @harris96 should be revised upward. This RV was assumed by to assess the probability of association with the Sgr galaxy, and it matched very well the expected value for the Sgr trailing arm. As a consequence, the association of NGC5634 with this stellar stream would be less likely after the revision of the cluster RV. However, Fig. 6 of shows that this would still fall in the range predicted by the model even if increased by $\sim$25 km s$^{-1}$, hence this correction is still compatible with their assignment.
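The summary statistics quoted above follow from simple arithmetic; a minimal check using only the numbers given in the text (the five individual @peterson85 velocities are not reproduced here), under the reading that two stars consistent within 1 $\sigma$ must each lie within $\sigma$ of the systemic velocity, i.e. $\sigma \gtrsim \Delta v/2$:

```python
import math

# Statistical error on the Peterson et al. (1985) mean: rms over sqrt(N)
rms, n_stars = 29.1, 5
print(round(rms / math.sqrt(n_stars), 1))   # -> 13.0 km/s

# Velocity difference between the two NGC5634 targets and the minimum
# dispersion implied if both stars lie within 1 sigma of the mean
v2, v3 = -12.8, -20.6
delta_v = abs(v2 - v3)                      # 7.8 km/s
print(round(delta_v / 2.0, 1))              # -> 3.9 km/s
```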
![A sample of the spectra for the three targets around 560 nm. Spectra have been normalized and then vertically shifted for legibility.[]{data-label="f_spectra"}](the_spectra.pdf){width="\hsize"}
The position of the program stars in the color-magnitude diagram (CMD) of the parent cluster is shown in Fig. \[f\_cmd\], where the photometric catalogs of @stetson00 and were used for NGC5053 and NGC5634, respectively. In Fig. \[f\_spectra\] we show a portion of the resulting spectra of the three stars. Target coordinates, photometry, radial velocities, and determined atmospheric parameters are listed in Tab. \[pos\_param\_table\].
| Star | RA (J2000) | Dec (J2000) | $V$ (mag) | $(V-I)$ (mag) | [V$_\mathrm{helio}$]{} (km s$^{-1}$) | [$T_\mathrm{eff}$]{} (K) | [$\log{\ensuremath{g}}$]{} | [V$_\mathrm{turb}$]{} (km s$^{-1}$) | [\[Fe/H\]]{} (dex) |
|------|------------|-------------|-----------|---------------|--------------------------------------|--------------------------|----------------------------|-------------------------------------|--------------------|
| NGC5053-69 | 13:16:35.96 | +17:41:12.8 | 14.565 | 1.163 | 42.2 | 4450 | 1.15 | 1.85 | $-$2.26 |
| NGC5634-2 | 14:29:30.06 | $-$05:58:39.4 | 14.776 | 1.481 | $-$12.8 | 4085 | 0.22 | 1.72 | $-$1.99 |
| NGC5634-3 | 14:29:40.50 | $-$05:57:09.7 | 14.761 | 1.432 | $-$20.6 | 4097 | 0.45 | 1.61 | $-$1.92 |

  : Coordinates, photometry, heliocentric radial velocities, and adopted atmospheric parameters of the program stars.[]{data-label="pos_param_table"}
Stellar parameters and abundance analysis {#stelpar}
=========================================
Spectroscopically determined parameters
---------------------------------------
The chemical analysis was performed by means of [MyGIsFOS]{}[^5] [@sbordone14]. For this purpose, a grid of synthetic spectra covering the range between 480 nm and 690nm was computed with the following characteristics (start value, end value, step, unit): [$T_\mathrm{eff}$]{} (4000, 5200, 200, K), [$\log{\ensuremath{g}}$]{} (0.5, 3, 0.5, ); [V$_\mathrm{turb}$]{} (1.0, 3.0, 1.0, ); [\[Fe/H\]]{} ($-$4.0, $-$0.5, 0.5, dex); [\[$\alpha$/Fe\]]{}($-$0.4, 0.4, 0.4, dex). This corresponds to a grid of 3024 atmosphere models, the majority of which belonged to the MPG grid described in @sbordone14, with the exception of the ones with [$T_\mathrm{eff}$]{}=4000 K and [$\log{\ensuremath{g}}$]{}=0.5, which were computed for this work. The models were computed assuming mono-dimensional, plane-parallel, and local thermodynamical equilibrium (LTE) approximations, using ATLAS 12 [@kurucz05; @sbordone04; @sbordone05]. Synthetic spectra were then computed by means of SYNTHE [@castelli05]. Atomic and molecular line data were retrieved from the R. L. Kurucz website, but the [$\log{\ensuremath{gf}}$]{} for the lines used in the analysis were updated according to the values provided in the second version of the Gaia-ESO (GES) “clean” line list. In the grid, $\alpha$ enhancement is modeled by varying in lockstep even atomic number elements between O and Ti, inclusive.
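As a quick, purely illustrative check, the quoted number of models follows from the ranges and steps listed above:

```python
import numpy as np
from itertools import product

teff  = np.arange(4000.0, 5200.0 + 1, 200.0)   # 7 values
logg  = np.arange(0.5, 3.0 + 0.01, 0.5)        # 6 values
vturb = np.arange(1.0, 3.0 + 0.01, 1.0)        # 3 values
feh   = np.arange(-4.0, -0.5 + 0.01, 0.5)      # 8 values
alpha = np.arange(-0.4, 0.4 + 0.01, 0.4)       # 3 values

models = list(product(teff, logg, vturb, feh, alpha))
print(len(models))   # -> 3024 atmosphere models
```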
![[ On the left, ]{} the 568.2 nm and 568.4 nm lines in the spectra of stars (black solid line) and (red dash-dotted line). [ The [MyGIsFOS]{} fit to this specific feature is \[Na/H\]=-2.22 for , and $-$1.72 for . On the right, the 568.8 nm line in the same stars. Abundances here are \[Na/H\]=-2.15 and -1.76.]{} The remarkable difference in Na abundance between the two stars is quite evident.[]{data-label="na_line_fig"}](Na_Figures_united.pdf){width="\hsize"}
The good quality and extensive coverage of the spectra provided enough lines for a fully spectroscopic parameter determination. As discussed in more detail in @sbordone14, [MyGIsFOS]{} determines the effective temperature by searching the zero of the [$T_\mathrm{eff}$]{}-LEAS[^6] relation, the microturbulent velocity by eliminating the dependence of abundance on line equivalent width, and the surface gravity by imposing - ionization equilibrium. Since [\[$\alpha$/Fe\]]{} may have a significant influence on the atmospheric structure and affect line formation, a “global” [\[$\alpha$/Fe\]]{} is one of the dimensions of the synthetic grid. [ The synthetic grid needs to be broadened to match the combination of instrumental and macroturbulent / rotational broadening for each star. After checking the fit profiles on a number of unblended, metallic lines, we applied a Gaussian broadening to the grid of FWHM=9 for all three stars.]{}
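Schematically, the three diagnostics driven to zero during the parameter fit can be summarized as in the sketch below; this is a conceptual illustration only (input structures and helper names are hypothetical), not the actual [MyGIsFOS]{} implementation.

```python
import numpy as np

def parameter_conditions(fe1_lines, fe2_lines):
    """Return the three diagnostics driven to zero by the procedure.

    fe1_lines, fe2_lines: lists of (abundance, lower_excitation_energy, ew)
    tuples for neutral and ionized Fe features (hypothetical inputs).
    """
    a1  = np.array([l[0] for l in fe1_lines])
    chi = np.array([l[1] for l in fe1_lines])
    ew  = np.array([l[2] for l in fe1_lines])
    a2  = np.array([l[0] for l in fe2_lines])

    leas_slope  = np.polyfit(chi, a1, 1)[0]           # zero -> Teff is right
    ew_slope    = np.polyfit(np.log10(ew), a1, 1)[0]  # zero -> Vturb is right
    ion_balance = a1.mean() - a2.mean()               # zero -> log g is right
    return leas_slope, ew_slope, ion_balance
```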
![[ \[/Fe\], \[/Fe\], \[/Fe\], and \[/Fe\] ratios plotted against \[Fe/H\] for the studied stars in (filled black squares) and (filled black circles), compared with Sgr dSph main body populations (open black circles, @sbordone15; open black diamonds, @sbordone07), and mean values for five globular clusters related to Sgr dSph: (red asterisk), (open red triangle), (open red diamond), (open star), and (open red square). Small open gray triangles are Milky Way stars [@venn04; @reddy06]. Large open cyan symbols are MW globular clusters (square) and (triangle). The large open black square connected by a dashed line with the filled black square represents the result for when plotting \[$\alpha$/\] vs. \[/H\], with the latter represented by the open symbol. ]{}[]{data-label="alpha_fe_fig"}](all_alphas.pdf){width="\hsize"}
Photometric parameter estimates
--------------------------------
In addition to the fully spectroscopic parameter estimates, we also estimated [$T_\mathrm{eff}$]{} from the de-reddened color $(V-I)$ through the equations of @alonso99 [@alonso01], assuming $E(V-I)$=0.08 mag for [@bellazzini02], and $E(B-V)$=0.017 mag for [@nemec04], with $E(V-I)=1.34\times E(B-V)$ [@dean78]. The adopted color–temperature relation is independent of metallicity.
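For reference, the de-reddened colors that enter the @alonso99 [@alonso01] calibration follow directly from the numbers above (the calibration polynomial itself is not reproduced here):

```python
# E(V-I) = 1.34 x E(B-V) (Dean et al. 1978)
ebv_ngc5053 = 0.017
evi_ngc5053 = 1.34 * ebv_ngc5053          # ~0.023 mag
evi_ngc5634 = 0.08                        # adopted directly

vi0_ngc5053_69 = 1.163 - evi_ngc5053      # ~1.140
vi0_ngc5634_2  = 1.481 - evi_ngc5634      # 1.401
vi0_ngc5634_3  = 1.432 - evi_ngc5634      # 1.352
# These (V-I)_0 values feed the metallicity-independent Alonso
# color-temperature relation to obtain the photometric Teff.
```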
The surface gravity was calculated through the equation $$\log{\mathrm{g}}=\log{\mathrm{M}}+4\log{\mathrm{T_{eff}}}+0.4(M_\mathrm{V}+BC)-12.503,
\label{e_logg}$$ obtained from basic relations, where $M$ is the mass, BC is the bolometric correction, and the solar values $T_{\sun}$=5777 K, $\log{g_{\sun}}$=4.44, $M_\mathrm{bol,\sun}$=4.75 are assumed in the calculation of the constant term. The stellar mass was fixed to 0.80$\pm$0.05 M$_{\sun}$ for all the stars, and BC was deduced from the temperature interpolating the tables of @alonso99. The absolute magnitude in the $V$ band was estimated from the cluster reddening, defined as before, and distance modulus $(m-M)_V$=16.23 [@harris96] and 17.36 magnitudes [@bellazzini02] for and , respectively. Starting from these values for [$T_\mathrm{eff}$]{} and [$\log{\ensuremath{g}}$]{}, we re-derived [V$_\mathrm{turb}$]{} and abundances as we did for the fully spectroscopic case: for brevity, we will refer to this set of parameters as “photometric” parameters from now on.
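As an illustration, the relation above, applied to NGC5634-2 with the adopted distance modulus and mass, gives a photometric gravity about 0.3 dex above the spectroscopic value, as discussed in Sect. \[choap\]; the bolometric correction used here is an assumed, illustrative number, since the interpolated @alonso99 values are not tabulated in the paper.

```python
import math

def photometric_logg(mass, teff, m_v, bc):
    """log g from the relation above; the solar constants are already
    folded into the -12.503 term."""
    return (math.log10(mass) + 4.0 * math.log10(teff)
            + 0.4 * (m_v + bc) - 12.503)

# NGC5634-2: V = 14.776, (m-M)_V = 17.36  ->  M_V = -2.584
m_v = 14.776 - 17.36
bc = -0.63   # assumed illustrative BC for a ~4100 K metal-poor giant
# Spectroscopic Teff from Tab. [pos_param_table]; the photometric value
# differs by less than 40 K and changes the result only marginally.
print(round(photometric_logg(0.80, 4085.0, m_v, bc), 2))   # ~0.56
```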
The temperatures of the two stars in are very similar to the spectroscopic values ($\Delta T_\mathrm{eff}<40$ K), and the gravities are higher but still compatible within uncertainties. The photometric gravity of is, on the contrary, 0.9 dex higher than the spectroscopic value. In particular, the spectroscopic parameters of this star cannot satisfy the basic relation above, unless one or more of the other input quantities (cluster parameters, stellar mass, BC) are revised to unrealistic values.
Choice of adopted atmospheric parameters {#choap}
----------------------------------------
The discrepancy between photometric and spectroscopic atmospheric parameters is a well-documented fact for giant stars around or below \[Fe/H\]=$-$2. In two recent studies, for instance, @mashonkina11 and @bergemann12 study the case of the metal-poor giant , whose parameters can be reliably derived from photometry and Hipparcos-based parallaxes. Said parameters ([$T_\mathrm{eff}$]{}=4665 K, [$\log{\ensuremath{g}}$]{}=1.64, [V$_\mathrm{turb}$]{}=1.61 , [\[Fe/H\]]{}=-2.60) are quite close to the ones photometrically derived for , and, when using them in a 1D-LTE analysis, shows a – imbalance and residual LEAS quite close to the ones displays in the present study. @mashonkina11 manages to recover a satisfactory ionization balance for when treating line formation taking into account departures from LTE (NLTE), but is left with an unacceptable LEAS, albeit a reduced one with respect to the LTE case. @bergemann12 also employs NLTE line formation for , but in association with horizontally averaged 3D hydrodynamical models, which preserve the vertical temperature structure of 3D models. Despite this further refinement, the LEAS problem of remains unsolved, likely indicating that the horizontal temperature variations (that are averaged out in the @bergemann12 models) must be accounted for [*together*]{} with NLTE to properly describe line formation in these cool, metal-poor giants. It is thus evident that NLTE affects line formation in a relevant way in stars like , and likely, to a lower extent, and .
In the case of , the LTE treatment significantly underestimates Fe ionization and upper-level populations [@mashonkina11], leading to an overestimate of line strength, and a consequent underestimate of the derived abundance. We thus decided to adopt the photometric set of parameters for , employ for the determination of [\[Fe/H\]]{}, and derive all the \[X/Fe\] ratios with respect to \[/H\].
The situation is less straightforward in the case of the two stars in . The two stars show quasi-identical spectra, which the spectroscopic parameter estimate underscores by providing practically equal parameters. Despite this agreement, they show a significant $(V-I)$ difference, which translates to an 86 K photometric [$T_\mathrm{eff}$]{} difference. Most likely, this is a consequence of being a second-generation star, as clearly indicated by its Na abundance (see Sect. \[results\] and Fig. \[na\_line\_fig\]). The photometric effect of abundance variations in globular clusters was studied in detail in @sbordone11, where it can be seen that second-generation stars in the upper RGB become slightly bluer than first-generation ones, essentially because of the flux lost in the UV due to stronger NH absorption being transferred more in the $V$ band than in the $I$. An inspection of @sbordone11 “reference” and “CNONa2” isochrones around the spectroscopic temperature of shows an effect of about 0.04 magnitudes in $(V-I)$, strikingly similar to the difference in color between and . This cannot be taken strictly at face value: the @sbordone11 calculations were performed at a slightly higher metallicity (\[Fe/H\]=-1.62), and the assumed variations of C, N, and O abundances were likely larger than the ones encountered in . However, it is a clear indication that, in the absence of calibrations computed taking into account abundance variations such as the ones encountered in globular clusters, photometric temperature estimates might be significantly skewed in second-generation stars [ in the upper part of the RGB]{}. It is interesting to note how star , on the other hand, shows an excellent agreement between photometric and spectroscopic temperature, and a photometric gravity that is only 0.33 dex higher than the spectroscopic. [ Although we are considering one single star here, this hints at the fact that NLTE effects on [$T_\mathrm{eff}$]{} and gravity are strongly reduced at the metallicity of ]{}. For this reason, we feel confident that spectroscopic parameters can be employed in this case, and we consider them preferable owing to their insensitivity to the effects of CNO abundance variations.
![\[Ni/Fe\] vs. [\[Fe/H\]]{} for the same samples, and using the same symbols as in Fig. \[alpha\_fe\_fig\].[]{data-label="ni_fe_fig"}](nife_all_wide.pdf){width="\hsize"}
In Tab. \[alternate\_param\_table\] we list the variations in \[X/H\] in each star that would have originated from employing the “rejected” set of parameters for the star, i.e. the photometric parameters for the two stars in , and the spectroscopic ones in . [ Here, the strongest variations are, as expected, encountered in the ionized species in . Since the rejected parameter set would force gravity to recover – ionization equilibrium, abundances for and other ionized species decrease consistently by 0.25-0.35 dex. Most neutral species decrease by about 0.05 dex (likely due to the roughly 100 K lower [$T_\mathrm{eff}$]{} in the rejected set), with the exception of Zn, which shows a stronger effect. The effect is the opposite in , where employing the rejected photometric set would break Fe ionization equilibrium, raising the abundance of ionized species, although, in this case, varies more than most of the other ionized elements. Also affected is , since the \[\] 630 nm line is, naturally, sensitive to pressure. Finally, has very close photometric and spectroscopic parameters, leading to almost identical abundances. As usually observed in these cases, \[X/Fe\] abundance ratios produced taking care of matching ionization stages are quite robust against variations in atmospheric parameters.]{}
-- --------------------------------- --------------------------------- ---------------------------------
   [**NGC5053-69**]{}                [**NGC5634-2**]{}                 [**NGC5634-3**]{}
[$T_\mathrm{eff}$]{}=4343 [$T_\mathrm{eff}$]{}=4071 [$T_\mathrm{eff}$]{}=4135
[$\log{\ensuremath{g}}$]{}=0.26 [$\log{\ensuremath{g}}$]{}=0.55 [$\log{\ensuremath{g}}$]{}=0.55
[V$_\mathrm{turb}$]{}=1.7 [V$_\mathrm{turb}$]{}=1.7 [V$_\mathrm{turb}$]{}=1.6
$\Delta$\[X/H\] $\Delta$\[X/H\] $\Delta$\[X/H\]
– 0.19 0.04
$-$0.01 $-$0.05 0.02
– $-$0.01 0.00
– – 0.02
– 0.06 0.01
0.02 $-$0.06 $-$0.01
$-$0.37 0.16 0.03
$-$0.07 $-$0.05 0.04
$-$0.26 0.07 $-$0.02
– 0.04 0.08
$-$0.03 $-$0.05 0.05
0.01 $-$0.04 0.05
$-$0.04 $-$0.02 0.03
$-$0.27 0.21 0.06
– 0.02 0.05
$-$0.07 0.00 0.04
– – 0.04
$-$0.14 0.10 $-$0.02
$-$0.29 0.07 0.02
$-$0.26 0.12 0.04
– 0.17 0.04
-- --------------------------------- --------------------------------- ---------------------------------
: Variation in \[X/H\] when using for each star the alternate parameter set with respect to the one chosen. Variations are computed as rejected – chosen.[]{data-label="alternate_param_table"}
Detailed abundances
-------------------
After parameter determination, [MyGIsFOS]{} derived detailed abundances for all the elements for which regions had been provided and viable lines were found. All the employed regions, as well as the observed and best-fitting synthetic profiles, can be obtained online via Vizier (see Appendix \[mygi\_appendix\]). Hyperfine splitting (HFS) values for the used lines were instead derived from the fourth version of the GES line list [@heiterprep], and used for the lines of , , , , , , and . The isotopic mixture for Eu was derived from @anders89.
Abundances for all elements were measured within [MyGIsFOS]{}, with the exception of oxygen, whose abundance was determined in by measuring the \[\] 630.0304 nm line. The line could be measured by [MyGIsFOS]{} in , but not in , since the line was contaminated by telluric absorptions. In the latter case, the region around the \[\] 630.0304 nm line was decontaminated from telluric lines according to the following procedure. First of all, we calculated a synthetic telluric spectrum whose parameters were varied in order to match the depth and width of the unblended telluric features. Then we [ divided the spectral region surrounding the \[\] 630.0304 nm line by this spectrum]{}. We then proceeded to fit the oxygen line in both stars as follows: for each star, an ATLAS 12 model at the final parameters and abundances was computed, from which two small grids of synthetic spectra were computed in the region surrounding the oxygen line at varying O abundances. The [fitprofile]{} line-fitting code [@thygesen15] was then employed to derive the best-fitting O abundance by $\chi^2$ minimization. [ For consistency, [fitprofile]{} was employed to derive O abundances in both and ]{}.
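The last step is a one-parameter $\chi^2$ minimization over a small grid of syntheses; a minimal sketch of the idea is given below (the grid construction with ATLAS 12 / SYNTHE and the actual [fitprofile]{} code are not reproduced, and all array names are hypothetical).

```python
import numpy as np

def best_abundance(wave, flux, sigma, grid_abund, grid_flux):
    """Pick the O abundance whose synthetic profile minimizes chi^2.

    wave, flux, sigma : observed (telluric-corrected) pixels around [O I] 630 nm
    grid_abund        : trial abundances, shape (n,)
    grid_flux         : synthetic profiles resampled on `wave`, shape (n, len(wave))
    """
    chi2 = np.sum(((flux - grid_flux) / sigma) ** 2, axis=1)
    i = int(np.argmin(chi2))
    # Refine below the grid spacing with a parabolic fit around the minimum
    if 0 < i < len(grid_abund) - 1:
        a, b, c = np.polyfit(grid_abund[i - 1:i + 2], chi2[i - 1:i + 2], 2)
        return -b / (2.0 * a)
    return grid_abund[i]
```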
Results
=======
Derived abundances are listed in Tab. \[abund\_table\]. The assumed solar abundances used in the grid are also listed in this table, and were taken from the compilation of @lodders09, except for O, Fe, and Eu, which were taken from @caffau11, while Tab. \[abund\_variation\_table\] lists the impact on the derived abundances for star of altering each atmospheric parameter by an amount roughly equivalent to the estimated error.
In Figs. \[alpha\_fe\_fig\] to \[nieu\_fig\] we plot various chemical abundance ratios in the three studied stars compared with values in different Sgr dSph populations, other globular clusters, and stars in the Milky Way disk and halo. [ In all these figures, typical error bars for the present analysis are overplotted: they correspond to line-to-line scatter as indicated in Tab. \[abund\_table\]. Where only one line was measured, we show a default 0.15 dex error bar for \[X/Fe\], and the corresponding bar is traced in gray]{}. For the globular clusters associated with Sgr dSph (red symbols), average cluster values have been retrieved from @carretta14, and come from @carretta14 and @mottini08 for , @mottini08 for , @carretta10 for , @cohen04 for , and @sbordone05b for . Milky Way stars (gray symbols) are taken from the @venn04 compilation, while the two MW comparison clusters and (cyan symbols) come from @lind11 and @koch14, respectively. The data points referring to the Sgr dSph metal-poor main body populations come from a companion work to the present one, based on UVES [@dekker00] spectra. The full analysis of these stars will be presented in @sbordone15; here we just point out that these stars are similar (albeit slightly less evolved) to the ones studied here, and have been analyzed by means of [MyGIsFOS]{}, using the same grid and the same region list employed in this paper. Metal-rich ([\[Fe/H\]]{}$>$-1.0) Sgr dSph data points are taken from @sbordone07.
NGC 5634 {#results_5634}
------------------------
The two stars examined are very high-luminosity red giants, as made evident by their very low gravity, which required a slight extrapolation of the [MyGIsFOS]{} grid in the gravity dimension (Table \[pos\_param\_table\]). In fact, both stars show a clear emission component on H$\alpha$ wings (see Fig. \[5634\_halpha\_fig\]), apparently asymmetric with respect to the absorption line, which could be interpreted as evidence of mass loss from the star. The emission appears to be more blueshifted (or more blue-asymmetric) in star , leading to a slightly different center of the absorption component.
The two stars are extremely similar, with atmospheric parameters whose differences are well within the observational uncertainties. Chemical abundances are also, in most cases, extremely similar between the two stars. The [\[Fe/H\]]{} ($-1.94\pm$0.08 and $-1.93\pm$0.08 for ) indicate a cluster metallicity in excellent agreement with existing photometric estimates [[\[Fe/H\]]{}=$-$1.94, see @bellazzini02 and references therein].
[lrrrrrrrrrrrr]{} Variation & & & & & & & & & & &\
[$T_\mathrm{eff}$]{} $+$50 K & 0.00 & 0.04 & 0.03 & 0.03 & $-$0.01 & 0.08 & $-$0.01 & 0.09 & 0.00 & 0.10 & 0.09\
[$T_\mathrm{eff}$]{} $-$50 K & $-$0.01&$-$0.04 & $-$0.04 & $-$0.03 & 0.00 & $-$0.06 & $-$0.01 & $-$0.09 & 0.00 & $-$0.10 & $-$0.09\
[$\log{\ensuremath{g}}$]{} $+$0.3 & 0.10 &$-$0.03 & $-$0.01 & 0.01 & 0.05 & $-$0.03 & 0.13 & $-$0.02 & 0.06 & $-$0.01 & $-$0.02\
[$\log{\ensuremath{g}}$]{} $-$0.3 & $-$0.12 & 0.03 & 0.01 & $-$0.01 & $-$0.05 & 0.05 & $-$0.14 & 0.02 & $-$0.05 & 0.01 & 0.02\
[V$_\mathrm{turb}$]{} $+$0.2 & $-$0.02&$-$0.02 & $-$0.03 & $-$0.01 & $-$0.01 & $-$0.07 & $-$0.06 & $-$0.03 & $-$0.07 & $-$0.01 & $-$0.08\
[V$_\mathrm{turb}$]{} $-$0.2 & 0.00 & 0.02 & 0.03 & 0.01 & 0.01 & 0.11 & 0.06 & 0.07 & 0.08 & 0.01 & 0.11\
[\[$\alpha$/Fe\]]{} $+$0.2 & 0.01&$-$0.01 & 0.00 & $-$0.01 & 0.04 & 0.02 & 0.04 & 0.05 & 0.05 & 0.00 & 0.00\
[\[$\alpha$/Fe\]]{} $-$0.2 & $-$0.07& 0.02 & $-$0.01 & 0.01 & $-$0.04 & $-$0.03 & $-$0.05 & $-$0.05 & $-$0.05 & 0.00 & 0.00\
\
Variation & & & & & & & & & & & &\
[$T_\mathrm{eff}$]{} $+$50 K & 0.07 & 0.04 & $-$0.08 & 0.06 & 0.05 & 0.07 & $-$0.04 & 0.01 & 0.02 & $-$0.02 &\
[$T_\mathrm{eff}$]{} $-$50 K & $-$0.07 & $-$0.04 & 0.08 & $-$0.06 & $-$0.05 & $-$0.08 & 0.04 & $-$0.02 & $-$0.02 & 0.02 &\
[$\log{\ensuremath{g}}$]{} $+$0.3 & $-$0.01 & 0.00 & 0.19 & 0.03 & 0.02 & $-$0.03 & 0.07 & 0.08 & 0.14 & 0.19 &\
[$\log{\ensuremath{g}}$]{} $-$0.3 & 0.01 & 0.00 & $-$0.16 & $-$0.03 & $-$0.02 & 0.02 & $-$0.07 & $-$0.07 & $-$0.13 & $-$0.15 &\
[V$_\mathrm{turb}$]{} $+$0.2 & $-$0.03 & $-$0.06 & $-$0.06 & $-$0.01 & $-$0.03 & $-$0.04 & $-$0.05 & $-$0.08 & $-$0.23 & $-$0.01 &\
[V$_\mathrm{turb}$]{} $-$0.2 & 0.03 & 0.07 & 0.06 & 0.01 & 0.04 & 0.05 & 0.06 & 0.09 & 0.30 & 0.01 &\
[\[$\alpha$/Fe\]]{} $+$0.2 & 0.00 & 0.00 & 0.07 & 0.01 & 0.00 & $-$0.02 & 0.10 & 0.03 & 0.06 & 0.10 &\
[\[$\alpha$/Fe\]]{} $-$0.2 & 0.00 & $-$0.01 & $-$0.07 & $-$0.01 & 0.00 & 0.02 & $-$0.09 & $-$0.03 & $-$0.06 & $-$0.09 &\
Oxygen appears enhanced with respect to the solar ratio in a fashion compatible with typical halo values at this metallicity in , less so in . The \[O/Fe\] ratio differs by 0.27 dex between the two stars, anticorrelating with the difference in Na abundance.
Sodium is the element that displays the most strikingly different abundance between the two stars. They differ by 0.34 dex, well beyond the line-to-line dispersion, with star being the more Na-rich. The difference is readily visible in the spectrum, as shown in Fig. \[na\_line\_fig\], where features appear much stronger in , despite almost identical parameters. The clear Na abundance difference, together with the opposite O abundance difference, makes it highly probable that displays a significant Na abundance spread, and possibly a Na-O abundance anticorrelation, as is almost universally observed in globular clusters. It should be remarked that the observed Na abundance difference is about half of the typical full extent of the Na abundance spread as observed in most GCs [see for instance Fig. 2 in @gratton12]. A sample of only two stars, obviously, does not allow one to infer the full extent of the spread, nor the numerical significance of the second generation in .
NLTE corrections are available for two of the lines we used (568.2nm and 568.8nm) as derived from the calculations of @lind11b, and made available on the web through the [INSPECT]{}[^7] interface. The available calculations have a lower [$\log{\ensuremath{g}}$]{} limit of 1.0, so we could not test the values for the atmospheric parameters of our stars. However, corrections appear to be very small in this parameter domain (0.03 dex for the 568.2nm line, and 0.05 for the 568.8nm line with the parameters and line strength of star ).
[ The $\alpha$ elements Mg, Si, and Ca are all enhanced with respect to iron by $\sim 0.3$ to 0.5dex. Titanium is also enhanced by about 0.3 dex; one should note, however, that nucleosynthetically it is not a pure $\alpha$ element, since it may be synthesized in nuclear statistical equilibrium, together with iron-peak elements. The odd light element Al is only detected in star NGC 5634-3 and it is strongly enhanced over iron (0.6dex). If on the one hand this is not too surprising, since this star is also enhanced in Na, it is surprising that this does not appear to be accompanied by a decrease in Mg abundance as is usually observed [see e.g. @g01]. The iron-peak elements V, Cr, Co, and Ni (Fig. \[ni\_fe\_fig\]) follow the iron abundance, while Sc seems to be slightly [**enhanced with respect to**]{} iron in both stars, and Mn and Cu slightly underabundant with respect to iron. In fact Mn is found to be underabundant both in Sgr dSph stars [@McW03; @McW03b; @sbordone07] and in Milky Way stars [@g89; @C04] at low metallicity. @berg08 presented NLTE computations that increase the Mn abundances so that \[Mn/Fe\] $\sim 0$. Copper is measured in NGC 5634-3 only, and by a single line. The fact that is underabundant with respect to iron is coherent with what is observed in Galactic stars [@Mishenina; @Bihain]. One should note, however, that @Bonifacio10 warned against the possible effects of granulation and NLTE that may affect the determination of Cu abundances, based on the differences found between the abundances in dwarfs and giants in the Globular Cluster NGC6397, which has a metallicity similar to NGC5634. Zinc is slightly underabundant with respect to iron in both stars, not inconsistent with the Galactic trend at this metallicity [@Mishenina; @Bihain], but also not inconsistent with what is observed in Sgr dSph stars at higher metallicity [@sbordone07]. The neutron capture element Y (Fig. \[y\_fe\_fig\]) is underabundant with respect to iron, consistent with what is observed in the Globular Cluster NGC6397 [@James; @lind11] and in the field stars of similar metallicity [@Burris; @Fulbright; @Mashonkina01], but again, not inconsistent with what is observed in the higher metallicity stars of Sgr dSph [@sbordone07]. The average \[Ba/Fe\] of the two stars is nearly zero (Fig. \[ba\_fe\_fig\]), which is at variance with what is observed in NGC6397 [@James; @lind11], where \[Ba/Fe\] is $\sim -0.2$, although it should be stressed that in Galactic stars we see a large scatter in \[Ba/Fe\] at this metallicity [@Burris; @Fulbright; @Mashonkina03]. Europium is strongly enhanced over iron in both stars, even more than what is found in NGC6397 [@James; @lind11], and we also observe a large scatter in \[Eu/Fe\] among Galactic stars at this metallicity [@Burris; @Fulbright; @Mashonkina03], while it has been found essentially at the solar value in the two solar metallicity Sgr dSph stars analyzed by @Bonifacio00. ]{}
![\[Y/Fe\] plotted against \[Fe/H\]. Symbols are the same as in Fig. \[alpha\_fe\_fig\] except for the large blue asterisk, which represents the average Y abundance in as measured by @mottini08. Here, in Fig. \[ba\_fe\_fig\], and in Fig. \[bay\_fe\_fig\] only the filled point appears for since both Ba and Y are always compared to .[]{data-label="y_fe_fig"}](yfe_all_wide.pdf){width="\hsize"}
![[ \[Ba/Fe\] plotted against \[Fe/H\], symbols as in Fig. \[y\_fe\_fig\].]{}[]{data-label="ba_fe_fig"}](bafe_vs_fe.pdf){width="\hsize"}
![\[Ba/Y\] plotted against \[Fe/H\], see Fig. \[y\_fe\_fig\] for the symbol legend.[]{data-label="bay_fe_fig"}](bay_all_wide.pdf){width="\hsize"}
NGC 5053 {#results_5053}
------------------------
The single star analyzed in shows a metallicity of \[/H\]=$-2.26\pm$0.01, in excellent agreement with the value of $-$2.27 in @harris96, based in large part on the Ca triplet analysis of 11 stars by @geisler95.
The choice of employing as the reference iron abundance and of assuming photometric atmosphere parameters has relevant effects on the \[X/Fe\] abundance ratios listed in Tab. \[abund\_table\]; since \[/H\] is 0.22 dex lower than \[/H\], if \[/H\] were used as reference, as is usually done for neutral species, all the neutral \[X/Fe\] ratios would be 0.22 dex [*higher*]{}. The choice of using as reference was motivated by the belief that is significantly affected by NLTE and 3D effects in this star. However, at the present time NLTE / 3D corrections are only available for a handful of species, so that we generally do not know whether the same is true for other neutral species, and to what extent. If we assumed, for instance, that a given neutral species was affected in the same way as , for that species the ratio against would be the appropriate one, while the ratio against is the correct one if one assumes the species to be unaffected by NLTE/3D. As such, star is difficult to plot in Figs. \[alpha\_fe\_fig\] and \[ni\_fe\_fig\]: to make the issue visible, we plotted the results for this star with a double symbol, the filled one indicating the abundance ratio as listed in Table \[abund\_table\], the open one plotting instead \[X/\] vs. \[/H\].
Sodium is measured in only through the 568.8nm line and delivers a fairly high abundance (\[Na/\]=0.21), about 0.4 dex higher than expected for a first-generation star. The NLTE correction computed through [INSPECT]{} for this line (assuming [$\log{\ensuremath{g}}$]{}=1.0) is of -0.04 dex (in the sense of the NLTE-corrected abundance being lower). This would hint at the presence of multiple stellar generations in as well.
While the present paper was undergoing the refereeing process, an independent analysis of NGC5053 based on medium resolution spectra (R$\sim 13\,000$) appeared as a preprint [@boberg]. They observed star NGC5053-69 (star 6 in their list). In spite of the lower resolution and limited spectral range, their derived atmospheric parameters are in remarkably good agreement with ours (130 K difference in Teff, -0.05 in log g, and -0.07 dex in \[Fe/H\]). They manage to measure \[O/Fe\] from the \[OI\] 630nm line and find \[O/Fe\]=-0.2, which is consistent with their enhanced sodium (\[Na/Fe\]=0.6). Sodium is one of the most discrepant elements with respect to our analysis. We did not use the same lines: they used the 616.2nm line, while we used the 568.2nm and 568.8nm lines. We also adopted different NLTE corrections; @boberg used those of @gratton99. The other element that is distinctly different between the two analyses is Ca. @boberg derive \[Ca/Fe\]=+0.49, while we derive +0.10. Considering that our Ca abundance relies on nine lines and \[Ca/Fe\] is in excellent agreement with \[Si/Fe\] in this star (also derived from nine lines), we believe that our result is very robust. Given that no details on the lines used are given in the preprint, we cannot make a hypothesis about the reason for this discrepancy. With the two exceptions of Na and Ca, the agreement on the abundance ratios for the other elements in common (Ti, Ni, Ba) is excellent.
Strong underabundances of Cr and Mn, with respect to iron, are observed in this star, in line with what is observed in Milky Way stars; in both cases this is likely due to the neglect of NLTE effects [@Bonifacio09; @BC10; @berg08].
Both the neutron capture elements Y and Ba are found to be underabundant with respect to iron. For Y this is similar to what is observed in the more metal-rich stars of Sgr dSph [@sbordone07]. For Ba instead, all the giant stars observed by [@sbordone07] are enhanced in Ba. Again, we would like to stress that for both elements a large scatter is observed in Galactic stars at these metallicities [@Burris; @Fulbright; @Mashonkina01; @Mashonkina03].
[ Assessing the proposed association with Sgr dSph]{}
-----------------------------------------------------
The association of and with Sgr dSph [@bellazzini03; @law10] has so far been exclusively based on their position and kinematic properties, and is thus probabilistic. One of the obvious reasons for exploring the chemistry of these clusters is thus to look for characteristics that may set them apart from the typical MW behavior at similar metallicities, but that are shared with known Sgr dSph populations. Such a chemical signature is rather dramatic at higher metallicity [@cohen04; @sbordone07], but it becomes more and more difficult to discern as metallicity decreases, since Sgr dSph chemical evolution at low metallicities appears to closely match the values observed in the MW halo [e.g. @mcwilliam13]. [ We will now look in greater detail at some chemical markers that might be used to infer an association of and with Sgr dSph.]{}
In Fig. \[alpha\_fe\_fig\], [ the \[X/Fe\] ratios for the three $\alpha$-elements Mg, Si, and Ca, and the mixed-$\alpha$-Fe-peak Ti in and are compared ]{} with Sgr dSph main body populations, the MW halo, globular clusters associated with Sgr dSph, and globular clusters thought to have originally formed in the MW. Since the [\[$\alpha$/Fe\]]{} ratio in Sgr dSph closely follows the MW value below \[Fe/H\]$\sim-$1.2, we do not expect to see any odd behavior here. In fact, behaves as expected. The one star analyzed in , on the other hand, displays strikingly low [ \[Ca/Fe\] and \[Ti/Fe\]]{} when is employed as reference, but it [ would]{} fall in line with all the other displayed populations if were used as reference.
![\[Eu/Fe\] plotted against \[Fe/H\], see Fig. \[alpha\_fe\_fig\] for the symbol legend.[]{data-label="eu_fe_fig"}](eufe_all_wide.pdf){width="\hsize"}
[Nickel]{} (Fig. \[ni\_fe\_fig\]) is another element whose ratio to iron is characteristically low in metal-rich Sgr dSph populations with respect to the MW. However, the Galactic distribution of \[Ni/Fe\] becomes more dispersed at lower metallicity. While and , together with confirmed Sgr dSph system member clusters, remain toward the lower end of the MW field stars \[Ni/Fe\] values, they are fully compatible with the abundances in and .
![\[Eu/Fe\] plotted against \[Ni/Fe\], see Fig. \[alpha\_fe\_fig\] for the symbol legend.[]{data-label="nieu_fig"}](nife_eufe.pdf){width="\hsize"}
Yttrium [ and barium abundances (see Figs. \[y\_fe\_fig\] and \[ba\_fe\_fig\]) as well as their ratio (Fig. \[bay\_fe\_fig\])]{} are also rather peculiar in the more metal-rich Sgr dSph populations [see e.g. Fig. 12 in @carretta14]. Yttrium in and, in particular, in , appears lower than in the MW halo population, and is coherent with both metal-rich and metal-poor Sgr dSph main body populations. Both and actually appear a little more yttrium-poor than the other known Sgr dSph clusters, with the exception of . A somewhat puzzling case here is whose \[Y/Fe\] ratio is derived in @mottini08 and @carretta14, but is reported on average as 0.4 dex lower in the former. The difference persists in the one star the two studies have in common where Y was measured. Since the @mottini08 abundance is in better agreement with the general trend of Sgr dSph populations, we plot it as well (as a blue large asterisk) in Figs. \[y\_fe\_fig\] and \[bay\_fe\_fig\]. At any rate, again, and are not significantly removed from what is found in .
[ Barium is generally enhanced in the more metal-rich Sgr dSph populations, but it has been known for a while that a subpopulation exists of Ba-poor Sgr dSph stars [Fig. 5 in @sbordone07 and Fig. \[ba\_fe\_fig\] here], whose origin is unclear. The high \[Ba/Fe\] ratio, and the connected high \[Ba/Y\] ratio, are among the Sgr dSph chemical signatures that persist to the lowest metallicity, as shown in Fig. \[bay\_fe\_fig\]. However, even this chemical peculiarity disappears around \[Fe/H\]=-2.0, most likely as an effect of the diminishing relevance of s-process at low metallicities. Seen in this context, and nicely follow a general trend toward low \[Ba/Fe\] at low metallicity, and higher-than-MW values at high metallicity, which is discernible among the Sgr dSph core populations (with the notable exception of the aforementioned low-Ba population), and the associated globular clusters. However, as said above, and Ba abundances are also compatible with MW values, and a similar picture is also drawn by the \[Ba/Y\] ratio.]{}
![[**The H$\alpha$ line in the spectra of the**]{} stars (black) and (red), normalized and shifted to zero velocity. The absorption line on the blue side is the 655.959 nm.[]{data-label="5634_halpha_fig"}](Halpha.pdf){width="\hsize"}
Finally, \[Eu/Fe\] [ in ]{} appears to be somewhat higher than typical for MW stars of comparable metallicity and for and . Europium also shows a higher abundance here than in any other Sgr dSph cluster with the exception of the much more metal-rich .
It is interesting to couple the results for Ni and Eu (as we do in Fig. \[nieu\_fig\]). Here, the contemporary low Ni and high Eu abundance observed both in the Sgr dSph metal-poor population and in sets them apart from the MW field and, to a lesser degree, from and . Among Sgr dSph clusters, the same locus is shared by only, which is much more metal-rich than , while more metal-poor Sgr dSph clusters appear to agree more with the abundances found in MW globulars. [ Although this result is intriguing, its significance is limited by the rather large error bar on Ni abundances.]{}
Conclusions
===========
We present detailed chemical abundance studies of the distant halo globular clusters and based on three luminous giant stars. The star has a metallicity of \[/H\]=$-2.26\pm$0.10, while the two stars analyzed in have metallicities of \[/H\]=$-1.99\pm$0.07 and $-1.92\pm$0.08 for and , respectively (uncertainties representing internal line-to-line scatter).
Star appears to be O-rich and Na-poor, while is 0.4 dex richer in Na, and 0.3 dex poorer in O, thus indicating the presence in of the Na/O anticorrelation, almost universally observed in globular clusters. The high Na abundance in star is also not consistent with the value expected in a first-generation star, [ indicating – in agreement with @boberg –]{} that hosts multiple stellar populations as well.
On the basis of their kinematics, both clusters are strongly suspected to have formed in the Sagittarius dwarf spheroidal galaxy, and to have been subsequently stripped by tidal interaction with the Milky Way. Although the Sgr dSph has a very characteristic set of chemical abundances in its metal-rich population, its more metal-poor stars closely resemble a typical halo composition, making it hard to use chemical abundances to assess the origin of these two clusters in the Sgr dSph system. Hints in this sense exist: has a remarkably low yttrium abundance and a higher-than-usual europium content, a finding which, when coupled with its nickel abundance, makes it strikingly similar to metal-poor Sgr dSph populations, and more distinct from MW field and GC stars. More generally, both clusters appear to fall, chemically, closer to Sgr dSph populations than to halo populations. [ All these conclusions combined make an origin for both clusters ( in particular) in the Sgr dSph system an appealing possibility. However, none of them appears strong enough to firmly confirm or reject this attribution.]{}
Support for L. S. and S. D. is provided by Chile’s Ministry of Economy, Development, and Tourism’s Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics, MAS. C.M.B. gratefully acknowledges the support provided by Fondecyt reg. n.1150060. S.V. gratefully acknowledges the support provided by Fondecyt reg. n. 1130721. M. B. acknowledges financial support from PRIN MIUR 2010-2011, project “The Chemical and Dynamical Evolution of the Milky Way and Local Group Galaxies”, prot. 2010LY5N2T. D.G. gratefully acknowledges support from the Chilean BASAL Centro de Excelencia en Astrofísica y Tecnologías Afines (CATA) grant PFB-06/2007. E.C. is grateful to the FONDATION MERAC for funding her fellowship. Based in part on data collected at Subaru Telescope and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan. This research has made use of NASA’s Astrophysics Data System, and of the VizieR catalogue access tool, CDS, Strasbourg, France
Abadi, M. G., Navarro, J. F., Steinmetz, M., & Eke, V. R. 2003, , 597, 21
Alonso, A., Arribas, S., & Mart[í]{}nez-Roger, C. 2001, , 376, 1039
Alonso, A., Arribas, S., & Mart[í]{}nez-Roger, C. 1999, , 140, 261
Anders, E., & Grevesse, N. 1989, , 53, 197
Bellazzini, M., Ibata, R., Ferraro, F. R., & Testa, V. 2003A, , 405, 577
Bellazzini, M., Ferraro, F. R., & Ibata, R. 2003B, , 125, 188
Bellazzini, M., Ferraro, F. R., & Ibata, R. 2002, , 124, 915
Bergemann, M., Lind, K., Collet, R., Magic, Z., & Asplund, M. 2012, , 427, 27
Bergemann, M., & Cescutti, G. 2010, , 522, AA9
Bergemann, M., & Gehren, T. 2008, , 492, 823
Bihain, G., Israelian, G., Rebolo, R., Bonifacio, P., & Molaro, P. 2004, , 423, 777
Boberg, O. M., Friel, E. D., & Vesperini, E. 2015, arXiv:1504.01791
Bonifacio, P., Caffau, E., & Ludwig, H.-G. 2010, , 524, AA96
Bonifacio, P., Spite, M., Cayrel, R., et al. 2009, , 501, 519
Bonifacio, P. 2005, 13th Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, 560, 3
Bonifacio, P., Sbordone, L., Marconi, G., Pasquini, L., & Hill, V. 2004, , 414, 503
Bonifacio, P., Hill, V., Molaro, P., et al. 2000, , 359, 663
Burris, D. L., Pilachowski, C. A., Armandroff, T. E., et al. 2000, , 544, 302
Caffau, E., Ludwig, H.-G., Steffen, M., Freytag, B., & Bonifacio, P. 2011a, , 268, 255
Carraro, G., & Bensby, T. 2009, , 397, L106
Carraro, G., Zinn, R., & Moni Bidin, C. 2007, , 466, 181
Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2014, , 561, A87
Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2010, , 520, A95
Castelli, F. 2005, Memorie della Società Astronomica Italiana Supplementi, 8, 25
Cayrel, R., Depagne, E., Spite, M., et al. 2004, , 416, 1117
Coelho, P., Barbuy, B., Mel[é]{}ndez, J., Schiavon, R. P., & Castilho, B. V. 2005, , 443, 735
Cohen, J. G. 2004, , 127, 1545
Dean, J. F., Warren, P. R., & Cousins, A. W. J. 1978, , 183, 569
de Boer, T. J. L., Belokurov, V., Beers, T. C., & Lee, Y. S. 2014, , 443, 658
Dekker, H., D’Odorico, S., Kaufer, A., Delabre, B., & Kotzlowski, H. 2000, , 4008, 534
Font, A. S., Johnston, K. V., Bullock, J. S., & Robertson, B. E. 2006, , 638, 585
Font, A. S., Benson, A. J., Bower, R. G., et al. 2011, , 417, 1260
Fulbright, J. P. 2000, , 120, 1841
Geisler, D., Wallerstein, G., Smith, V. V., & Casetti-Dinescu, D. I. 2007, , 119, 939
Geisler, D., Piatti, A. E., Claria, J. J., & Minniti, D. 1995, , 109, 605
Gratton, R. G., Carretta, E., & Bragaglia, A. 2012, , 20, 50
Gratton, R. G., Bonifacio, P., Bragaglia, A., et al. 2001, , 369, 87
Gratton, R. G., Carretta, E., Eriksson, K., & Gustafsson, B. 1999, , 350, 955
Gratton, R. G. 1989, , 208, 171
Harris, W. E. 1996, , 112, 1487 (2010 edition)
Heiter, U., et al., in preparation
Hesser, J. E., Shawl, S. J., & Meyer, J. E. 1986, , 98, 403
Ibata, R. A., Gilmore, G., & Irwin, M. J. 1994, , 370, 194
James, G., Fran[ç]{}ois, P., Bonifacio, P., et al. 2004, , 427, 825
Koch, A., & McWilliam, A. 2014, , 565, A23
Kurucz, R. L. 2005, Memorie della Societa Astronomica Italiana Supplement, 8, 14
Lake, G. 1989, , 98, 1554
Lanfranchi, G. A., & Matteucci, F. 2004, , 351, 1338
Lanfranchi, G. A., & Matteucci, F. 2003, , 345, 71
Law, D. R., & Majewski, S. R. 2010, , 718, 1128
Lind, K., Charbonnel, C., Decressin, T., et al. 2011, , 527, A148
Lind, K., Asplund, M., Barklem, P. S., & Belyaev, A. K. 2011b, , 528, A103
Lodders, K., Palme, H., & Gail, H.-P. 2009, Landolt-B[ö]{}rnstein - Group VI Astronomy and Astrophysics Numerical Data and Functional Relationships in Science and Technology Volume 4B: Solar System. Edited by J.E. Tr[ü]{}mper, 2009, 4.4., 44
Majewski, S. R., Skrutskie, M. F., Weinberg, M. D., & Ostheimer, J. C. 2003, ApJ, 599, 1082
Mashonkina, L., Gehren, T., Shi, J.-R., Korn, A. J., & Grupp, F. 2011, , 528, A87
Mashonkina, L., Gehren, T., Travaglio, C., & Borkova, T. 2003, , 397, 275
Mashonkina, L., & Gehren, T. 2001, , 376, 232
Mateo, M. L. 1998, , 36, 435
Mayall, N. U. 1946, , 104, 290
McWilliam, A., Wallerstein, G., & Mottini, M. 2013, , 778, 149
McWilliam, A., Rich, R. M., & Smecker-Hane, T. A. 2003, , 593, L145
McWilliam, A., Rich, R. M., & Smecker-Hane, T. A. 2003, , 592, L21
Mishenina, T. V., Kovtyukh, V. V., Soubiran, C., Travaglio, C., & Busso, M. 2002, , 396, 189
Monaco, L., Saviane, I., Correnti, M., Bonifacio, P., & Geisler, D. 2011, , 525, A124
Monaco, L., Bellazzini, M., Bonifacio, P., et al. 2005, , 441, 141
Monaco, L., Bellazzini, M., Ferraro, F. R., & Pancino, E. 2004, , 353, 874
Mottini, M., Wallerstein, G., & McWilliam, A. 2008, , 136, 614
Nemec, J. M. 2004, , 127, 2185
Noguchi, K., Aoki, W., Kawanomoto, S., et al. 2002, , 54, 855
Peterson, R. C. 1985, , 297, 309
Pryor, C., & Meylan, G. 1993, Structure and Dynamics of Globular Clusters, 50, 357
Reddy, B. E., Lambert, D. L., & Allende Prieto, C. 2006, , 367, 1329
Stetson, P. B. 2000, , 112, 925
Sbordone, L., et al., 2015, in preparation.
Sbordone, L., Caffau, E., Bonifacio, P., & Duffau, S. 2014, , 564, A109
Sbordone, L., Salaris, M., Weiss, A., & Cassisi, S. 2011, , 534, A9
Sbordone, L., Bonifacio, P., Buonanno, R., et al. 2007, , 465, 815
Sbordone, L. 2005a, Memorie della Società Astronomica Italiana Supplementi, 8, 61
Sbordone, L., Bonifacio, P., Marconi, G., Buonanno, R., & Zaggia, S. 2005b, , 437, 905
Sbordone, L., Bonifacio, P., Castelli, F., & Kurucz, R. L. 2004, Memorie della Società Astronomica Italiana Supplementi, 5, 93
Searle, L., & Zinn, R. 1978, , 225, 357
Shetrone, M., Venn, K. A., Tolstoy, E., et al. 2003, , 125, 684
Shetrone, M. D., C[ô]{}t[é]{}, P., & Sargent, W. L. W. 2001, , 548, 592
Siegel, M. H., Dotter, A., Majewski, S. R., et al. 2007, , 667, L57
Thygesen, A., et al., 2015, in preparation.
Tolstoy, E., Hill, V., & Tosi, M. 2009, , 47, 371
Tonry, J., & Davis, M. 1979, , 84, 1511
van den Bergh, S., 1998, ApJ, 505, L127
van den Bergh, S., 2000, PASP, 112, 529
Venn, K. A., Irwin, M., Shetrone, M. D., et al. 2004, , 128, 1177
Vladilo, G., Sbordone, L., & Bonifacio, P. 2003, The Local Group as an Astrophysical Laboratory, 107
Vogt, S. S., Allen, S. L., Bigelow, B. C., et al. 1994, , 2198, 362
Line-by-line [MyGIsFOS]{} fits {#mygi_appendix}
==============================
[MyGIsFOS]{} operates by fitting a region around each relevant spectral line against a grid of synthetic spectra, and deriving the best-fitting abundance directly, rather than using the line equivalent width (EW), which [**[MyGIsFOS]{} computes, but mainly to estimate [V$_\mathrm{turb}$]{}**]{}. While this makes [MyGIsFOS]{} arguably more robust against the effect of line blends, it makes it rather pointless to provide a line-by-line table of EWs, atomic data, and derived abundances,[ unless atomic data for all the features included in the fitted region are provided, which is quite cumbersome.]{}
To allow verification and comparison of our results, we thus decided to provide the per-feature abundances as well as the actual observed and synthetic best-fitting profile for each fitted region. We describe here in detail how these results will be delivered, since we plan to maintain the same format for any future [MyGIsFOS]{}-based analysis.
The ultimate purpose of these comparisons is to ascertain whether abundance differences between works stem from any aspect of the line modeling, and assess the amount of the effect. We believe that providing the observed and synthetic profiles is particularly effective in this sense for a number of reasons:
- The reader can apply whatever abundance analysis technique he/she prefers to either the observed or the synthetic spectra. For instance, if EWs are employed, he/she can determine the EW of either profile, and derive the abundance with his/her choice of atomic data, atmosphere models, and so on. This would allow him/her to directly determine what abundance the chosen method would assign to our best-fitting synthetic, and to compare it with the one we derive.
- The reader can evaluate whether our choice of continuum placement corresponds or differs from the one he/she applied, and assess the broadening we applied to the syntheses, both looking at the feature of interest, and looking for any other useful feature (e.g. unblended lines, which are often used to set the broadening for lines needing synthesis).
- The reader can directly assess the goodness of our fit with whatever estimator he/she prefers.
- The reader gains access to the actual observed data we employed for every spectral region we used. Not every spectrum is available in public archives, and even fewer are available in reduced, and (possibly) coadded form.
An example of [MyGIsFOS]{} fit results is presented in Fig. \[mygi\_fit\_figure\]. To make the results easy to handle by Vizier, they were split into two tables:
- The features table contains basic data for each region successfully fitted in every star. This includes a code identifying the star (e.g. [N5053\_69]{}), the ion the feature was used to measure, the starting and ending wavelength, the derived abundance, the EWs for the observed and best-fitting synthetic (determined by integration under the pseudo-continuum), the local S/N ratio, the small Doppler and continuum shift that MyGIsFOS allows in the fitting of each feature, flags to identify the features that were used in the [$T_\mathrm{eff}$]{} and [V$_\mathrm{turb}$]{} fitting process (the flags are not relevant for features not measuring ), and finally a feature code formed of the element, ion, and central wavelength of the fitted range (e.g. [3000\_481047]{}). This last code is unique to each feature, and it can be used to retrieve the profile of the fit from the next table.
- The fits table contains all the fitted profiles for all the features for all the stars. Each line of the table corresponds to a specific pixel of a specific fit of a specific star, and contains the star identifying code, then the feature identifying code, followed by the wavelength, and the (pseudonormalized) synthetic and observed flux in that pixel. The user interested, for instance, in looking at the fit of the aforementioned feature in star , has simply to select from the table all the lines beginning with “[N5053\_69 3000\_481047]{}”.
[ Both tables]{} are available through CDS. To reproduce the fit as plotted in Fig. \[mygi\_fit\_figure\], the synthetic flux must be [*divided*]{} by the continuum value provided [ in the features table]{}.
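For instance, retrieving one fitted profile from the fits table could look like the sketch below, assuming the table has been downloaded from CDS as a whitespace-separated file with the columns described above (the file name and column labels are illustrative).

```python
import pandas as pd

# Hypothetical column layout, following the description above
cols = ["star", "feature", "wavelength", "synthetic_flux", "observed_flux"]
fits = pd.read_csv("mygisfos_fits.dat", sep=r"\s+", names=cols, comment="#")

# All pixels of the fit of feature 3000_481047 in star NGC5053-69
profile = fits[(fits["star"] == "N5053_69") &
               (fits["feature"] == "3000_481047")]
print(profile[["wavelength", "observed_flux", "synthetic_flux"]].head())
```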
![An example of [MyGIsFOS]{} fit for two features of star . The blue box corresponds to a feature, while the red box is a feature. Gray boxes are pseudo-continuum estimation intervals. Observed pseudo-normalized spectrum is in black; magenta profiles are best-fitting profiles for each region. The black dotted horizontal line is the pseudo-continuum level, while the continuous thin magenta horizontal line represents the best-fit continuum for the feature. Around it, dashed and dotted horizontal magenta lines represent 1-$\sigma$ and 3-$\sigma$ intervals of the local noise (S/N=88 in this area). Vertical dashed and continuous lines mark the theoretical feature center, and the actual center after the best-fit, per-feature Doppler shift has been applied. []{data-label="mygi_fit_figure"}](mygi_fit_example.pdf){width="\hsize"}
[^1]: https://koa.ipac.caltech.edu/cgi-bin/KOA/nph-KOAlogin
[^2]: http://www.ucolick.org/ xavier/HIRedux/index.html
[^3]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
[^4]: http://www.naoj.org/Observing/Instruments/HDS/hdsql-e.html
[^5]: [mygisfos.obspm.fr](mygisfos.obspm.fr) will be available soon. Meanwhile contact L.S. directly
[^6]: Lower Energy Abundance Slope, see @sbordone14.
[^7]: <http://www.inspect-stars.com>
---
abstract: 'We prove that $\Ext^{\bullet}_A(k,k)$ is a Gerstenhaber algebra, where $A$ is a Hopf algebra. In case $A=D(H)$ is the Drinfeld double of a finite dimensional Hopf algebra $H$, our result implies the existence of a Gerstenhaber bracket on $H^{\bullet}_{GS}(H,H)$. This fact was conjectured by R. Taillefer in [@T3]. The method consists in identifying $\Ext^{\bullet}_A(k,k)$ as a Gerstenhaber subalgebra of $H^{\bullet}(A,A)$ (the Hochschild cohomology of $A$).'
author:
- 'Marco A. Farinati $^{1}$ - Andrea Solotar ${}^{1}$'
title: 'G-structure on the cohomology of Hopf algebras'
---
[Dto. de Matemática Facultad de Cs. Exactas y Naturales. Universidad de Buenos Aires. Ciudad Universitaria Pab I. 1428 - Buenos Aires - Argentina. e-mail: asolotar@dm.uba.ar, mfarinat@dm.uba.ar\
Research partially supported by UBACYT X062 and Fundación Antorchas (proyecto 14022-47).\
Both authors are research members of CONICET (Argentina).]{}
Introduction {#introduction .unnumbered}
============
The motivation of this paper is to prove that $H^{\bullet}_{GS}(H,H)$ has a structure of a G-algebra. We prove this result when $H$ is a finite dimensional Hopf algebra (see Theorem \[teo3\] and Corollary \[coroimportante\]). $H^{\bullet}_{GS}$ is the cohomology theory for Hopf algebras defined by Gerstenhaber and Schack in [@GS1].
In order to obtain commutativity of the cup product we prove a general statement on $\Ext$ groups over Hopf algebras (without any finiteness assumption). When $H$ is finite dimensional, the category of Hopf bimodules is isomorphic to a module category, over an algebra $X$ (also finite dimensional) defined by C. Cibils and M. Rosso (see [@CR]), and this category is also equivalent to the category of Yetter-Drinfeld modules, which is isomorphic to the category of modules over the Hopf algebra $D(H)$ (the Drinfeld double of $H$). In [@T2], R. Taillefer has defined a natural cup product in $H^{\bullet}_{GS}(H,H)=H^{\bullet}_b(H,H)$ (see [@GS2] for the definition of $H^{\bullet}_b$). When $H$ is finite dimensional she proved that $H^{\bullet}_b(H,H)\cong\Ext^{\bullet}_{X}(H,H)$, and using this isomorphism she showed that it is (graded) commutative. In a later work [@T3] she extended the result of commutativity of the cup product to arbitrary dimensional Hopf algebras and she conjectured the existence (and a formula) of a Gerstenhaber bracket.
Our method for giving a Gerstenhaber bracket is the following: under the equivalence of categories ${}_X$-$\mod\cong {}_{D(H)}$-$\mod$, the object $H$ corresponds to $H^{coH}=k$, so $\Ext^{\bullet}_{X}(H,H)\cong
\Ext^{\bullet}_{D(H)}(k,k)$ (isomorphism of graded algebras); after D. Ştefan [@St] one knows that $\Ext^{\bullet}_{D(H)}(k,k)\cong H^{\bullet}(D(H),k)$. In Theorem \[teo2\] we prove that, if $A$ is an arbitrary Hopf algebra, then $H^{\bullet}(A,k)$ is isomorphic to a subalgebra of $H^{\bullet}(A,A)$ (in particular it is graded commutative) and in Theorem \[teo3\] we prove that the image of $H^{\bullet}(A,k)$ in $H^{\bullet}(A,A)$ is stable under the brace operation, in particular it is closed under the Gerstenhaber bracket of $H^{\bullet}(A,A)$. So, the existence of the Gerstenhaber bracket on $H^{\bullet}_{GS}(H,H)$ follows, at least in the finite dimensional case, taking $A=D(H)$. We don’t know if this bracket coincides with the formula proposed in [@T3].
We also provide a proof that the algebra $\Ext^{\bullet}_{\C}(k,k)$ is graded commutative when $\C$ is a braided monoidal category satisfying certain homological hypotheses (see Theorem \[teo1\]). This gives an alternative proof of the commutativity result in the arbitrary dimensional case, taking $\C={}_H^H\Y\D$, the category of Yetter-Drinfeld modules.
In this paper, the letter $A$ will denote a Hopf algebra over a field $k$.
Cup products
============
This section has two parts. First we prove a generalization of the fact that the cup product on $H^{\bullet}(G,k)$ is graded commutative. The general abstract setting is that of a braided (abelian) category with enough injectives satisfying a Künneth formula (see definitions below). The other part will concern the relation between self extensions of $k$ and Hochschild cohomology of $A$ with coefficients in $k$.
Let us recall the definition of a braided category:
The data $(\C,\ot,k,c)$ is called a [**braided**]{} category with unit element $k$ if
1. $\C$ is an abelian category.
2. $-\ot -$ is a bifunctor, bilinear, associative, and there are natural isomorphisms $k\ot X\cong X\cong X\ot k$ for all objects $X$ in $\C$.
3. For every pair of objects $X$ and $Y$, $c_{X,Y}:X\ot Y\to Y\ot X$ is a natural isomorphism. The isomorphisms $c_{X,k}:X\ot k\cong k\ot X$ agree with the isomorphisms of the unit axiom, and for every triple $X$, $Y$, $Z$ of objects in $\C$ the Yang-Baxter equation is satisfied: $$(\id_Z\ot c_{X,Y})\circ (c_{X,Z}\ot \id_Y)\circ(\id_X\ot c_{Y,Z})=
(\id_Y\ot c_{X,Z})\circ (c_{X,Y}\ot \id_Z)$$
If one doesn’t have the data $c$, and axioms 1 and 2 are satisfied, we say that $(\C,\ot,k)$ is a [**monoidal**]{} category.
We will say that a monoidal category $(\C,\ot,k)$ satisfies the [**Künneth formula**]{} if and only if there are natural isomorphisms $H_*(X_*,d_X)\ot H_*(Y_*,d_Y)\cong H_*(X_*\ot Y_*,d_{X\ot Y})$ for every pair of complexes in $\C$.
\[teo1\] Let $(\C, \ot,k,c)$ be a braided category with enough injectives satisfying the Künneth formula. Then $\Ext^{\bullet}_{\C}(k,k)$ is graded commutative.
We proceed as in the proof that $H^{\bullet}(G,k)$ is graded commutative (see for example [@B], page 51, Vol I). The proof is based on two points: firstly a definition of a cup product using $\ot$, secondly a Lemma relating this construction and the Yoneda product of extensions.
Let $0\to M\to X_p\to \dots X_1\to N\to 0$ and $0\to M'\to X'_q\to \dots X'_1\to N'\to 0$ be two extensions in $\C$. Then $N_*:=(0\to M\to X_p\to \dots X_1\to 0)$ and $N'_*:=(0\to M'\to X'_q\to \dots X'_1\to 0)$ are two complexes, quasi-isomorphic to $N$ and $N'$ respectively. By the Künneth formula $N_*\ot N'_*$ is a complex quasi-isomorphic to $N\ot N'$, so “completing” this complex with $N\ot N'$ (more precisely considering the mapping cone of the chain map $N_*\ot N'_*\to N\ot N'$) one has an extension in $\C$, beginning with $M\ot M'$ and ending with $N\ot N'$.
So, we have defined a cup product: $$\Ext^p_{\C}(N,M)\times\Ext_{\C}^q(N', M')\to
\Ext_{\C}^{p+q}(N\ot N',M\ot M')$$ We will denote this product by a dot, and the Yoneda product by $\smile$. The Lemma relating this product and the Yoneda one is the following:
If $f\in\Ext^p_{\C}(M,N)$ and $g\in\Ext^q_{\C}(M',N')$, then $$f.g=(f\ot \id_{N'})\smile (\id_M\ot g)$$
[*Proof of the Lemma:*]{} Interpreting the elements $f$ and $g$ as extensions, it is clear how to define a morphism of complexes $(f\ot \id_{N'})\smile (\id_M\ot g)\to f.g$, and by the Künneth formula, it is a quasi-isomorphism.
In the particular case $M=M'=N=N'=k$, the Lemma implies that $f.g=f\smile g$ for all $f$ and $g$ in $\Ext^{\bullet}_{\C}(k,k)$. Now the theorem is a consequence of the isomorphism $(X_*\ot Y_*,d_{X\ot Y})\cong
(Y_*\ot X_*,d_{Y\ot X})$, valid for every pair of complexes in $\C$, defined by: $$(-1)^{pq}c_{X,Y}:X_p\ot Y_q\to Y_q\ot X_p$$
\[teo2\] If $A$ is a Hopf algebra then $\Ext^{\bullet}_{A}(k,k)\cong H^{\bullet}(A,k)$. Moreover $H^{\bullet}(A,k)$ is isomorphic to a subalgebra of $H^{\bullet}(A,A)$.
By a result of D. Ştefan [@St], since $A$ is an $A$-Hopf Galois extension of $k$, $H^{\bullet}(A, M)\cong \Ext^{\bullet}_A(k,M^{\ad})$ for every $A$-bimodule $M$. In particular, $H^{\bullet}(A,k)=\Ext^{\bullet}_A(k,k)$. But one can give, for this particular case, an explicit morphism at the complex level. In order to do this, we will choose a particular resolution of $k$ as left $A$-module.
Let $C_*(A,b')$ be the standard resolution of $A$ as $A$-bimodule, namely $C_n(A,b')=A\ot A^{\ot n}\ot A$ and $b'(a_0\ot\dots \ot a_{n+1})=
\sum_{i=0}^n(-1)^{i}a_0\ot\dots \ot a_i.a_{i+1}\ot
\dots \ot a_{n+1}$ ($a_i\in A$). This resolution splits on the right, so $(C_*(A)\ot_A k,b'\ot \id_k)$ is a resolution of $A\ot_A k=k$ as left $A$-module. Using this resolution, $\Ext^{\bullet}_A(k,k)$ is the homology of the complex $(\Hom_A(C_*(A)\ot_Ak,k), (b'\ot_A\id_k)^{*})\cong
(\Hom(A^{\ot *},k),\partial)$. Under this isomorphism, the differential $\partial$ is given by $$(\partial f)(a_1\ot\dots\ot a_n)=\epsilon(a_1)f(a_2\ot\dots \ot a_n)+$$ $$+
\sum_{i=1}^{n-1}(-1)^if(a_1\ot\dots\ot a_i.a_{i+1}\ot\dots\ot a_n)+
(-1)^nf(a_1\ot\dots\ot a_{n-1})\epsilon(a_n)$$ And this is precisely the formula of the differential of the standard Hochschild complex computing $H^{\bullet}(A,k)$.
One can easily check that the cup product on $\Ext^{\bullet}_A(k,k)$ (which equals the Yoneda product in this case) corresponds to the cup product on $H^{\bullet}(A,k)$, so this isomorphism is an algebra isomorphism.
Now we will give two multiplicative maps $H^{\bullet}(A,k)\to H^{\bullet}(A,A)$ and $H^{\bullet}(A,A)\to H^{\bullet}(A,k)$. Consider the counit $\epsilon:A\to k$; it is an algebra map, so the induced map $\epsilon_*:H^{\bullet}(A,A)\to H^{\bullet}(A,k)$ is multiplicative. We will define a multiplicative section of this map.
Let $f:A^{\ot p}\to k$ be a Hochschild cocycle, define $F:A^{\ot p}\to A$ by the formula: $$F(a^1\ot\dots\ot a^p):=
a^1_1 \dots a^p_1.
f(a^1_2\ot\dots\ot a^p_2)$$ where we have used the Sweedler-type notation with the summation symbol omitted: $a^i_1\ot a^i_2=\Delta(a^i)$, for $a^i\in A$.
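To illustrate the definition, in the lowest degree $p=1$ a cocycle $f:A\to k$ is sent to $$F(a)= a_1\, f(a_2)\,,\qquad a\in A \quad,$$ that is, the convolution of $\id_A$ with $f$ (viewed as a map $A\to A$ via the unit of $A$).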
Let us check that $F$ is a Hochschild cocycle with values in $A$. $$\partial(F)(a^0\ot\dots \ot a^p)=
a^0F(a^1\ot\dots \ot a^p)+$$ $$+
\sum_{i=0}^{p-1}(-1)^{i+1}F(a^0\ot\dots\ot a^i.a^{i+1}\ot\dots \ot a^p)
+(-1)^{p+1}F(a^0\ot\dots \ot a^{p-1})a^p=$$ $$=a^0.a^1_1\dots a^p_1.f(a^1_2\ot\dots \ot a^p_2)
+(-1)^{p+1}a^0_1\dots a^{p-1}_1.f(a^0_2\ot\dots \ot a^{p-1}_2)a^p+$$ $$+ \sum_{i=0}^{p-1}(-1)^{i+1}a^0_1\dots a^i_1a^{i+1}_1\dots a^p_1.
f(a^0_2\ot\dots\ot a^i_2.a^{i+1}_2\ot\dots \ot a^p_2)$$ Using that $f$ is a Hochschild cocycle with values in $k$, we know that $$0=
\epsilon(a^0)f(a^1\ot\dots \ot a^p)+
\sum_{i=0}^{p-1}(-1)^{i+1}f(a^0\ot\dots\ot a^i.a^{i+1}\ot\dots \ot a^p)
+(-1)^{p+1}f(a^0\ot\dots \ot a^{p-1})\epsilon(a^p)$$ So, the summation term in $\partial(F)$ can be replaced using the equality $$\sum_{i=0}^{p-1}(-1)^{i+1}a^0_1\dots a^i_1a^{i+1}_1\dots a^p_1.
f(a^0_2\ot\dots\ot a^i_2.a^{i+1}_2\ot\dots \ot a^p_2)=$$ $$= - a^0_1\dots a^i_1a^{i+1}_1\dots a^p_1.\left(
\epsilon(a^0_2)f(a^1_2\ot\dots \ot a^p_2)+
(-1)^{p+1}f(a^0_2\ot\dots \ot a^{p-1}_2)\epsilon(a^p_2) \right)=$$ $$= - \left(a^0.a^1_1\dots a^p_1. f(a^1_2\ot\dots \ot a^p_2)+
(-1)^{p+1}a^0_1\dots a^{p-1}_1.a^pf(a^0_2\ot\dots \ot a^{p-1}_2)\right)$$ and this finishes the computation of $\partial F$.
Clearly $\epsilon F=f$, so $\epsilon_*$ is a split epimorphism. It is straightforward to check that $f\mapsto F$ is multiplicative:
Let us denote $\wh{f}:=F$, and if $g:A^{\ot q}\to k$, $\wh{g}:A^{\ot q}\to
A$ the cocycle corresponding to $g$. $$\begin{array}{rcl}
\wh{f\smile g}(a^1\ot\dots \ot a^{p+q})&=&
a^1_1\dots a^{p+q}_1.(f\smile g)(a^1_2\ot\dots \ot a^{p+q}_2)\\
&=&
a^1_1\dots a^{p+q}_1.f(a^1_2\ot\dots \ot a^p_2)g(a^{p+1}_2\ot
\dots\ot a^{p+q}_2)\\
&=&(\wh{f}\smile\wh{g})(a^1\ot\dots \ot a^{p+q})
\end{array}$$
Brace operations
================
In this section we prove our main theorem, stating that the map $H^{\bullet}(A,k)\to H^{\bullet}(A,A)$ is “compatible” with the brace operations, and as a consequence with the Gerstenhaber bracket.
\[teo3\] The image of $H^{\bullet}(A,k)\to H^{\bullet}(A,A)$ is stable under the brace operations. Moreover, if $\wh{f}$ and $\wh{g}$ are the images in $H^{\bullet}(A,A)$ of $f$ and $g$ in $H^{\bullet}(A,k)$, then $\wh{f}\circ_i\wh{g}=\wh{f\circ_i\wh{g}}$.
Let us recall the definition of the brace operations (see [@G]). If $F:A^{\ot p}\to A$ and $G:A^{\ot q}\to A$ and $1\leq i\leq p$, then $F\circ_iG:A^{\ot p+q-1}\to A$ is defined by $$(F\circ_iG)(a^1\ot \dots\ot a^{i}\ot b^1\ot\dots\ot b^{q}
\ot a^{i+1}\ot\dots\ot a^{p})=
F(a^1\ot\dots\ot a^{i}\ot G(b^1\ot\dots\ot b^{q})
\ot a^{i+1}\ot\dots\ot a^{p})$$ Assume now that $f:A^{\ot p}\to k$, $g:A^{\ot q}\to k$ and $F=\wh{f}$ and $G=\wh{g}$, namely $$F(a^1\ot \dots\ot a^{p})=
a^{1}_1\dots a^{p}_1.
f(a^{1}_2\ot \dots\ot a^{p}_2)$$ and similarly for $G$ and $g$. Then $$\begin{array}{l}
(F\circ_iG)(a^1\ot \dots\ot a^{i}\ot b^1\ot\dots\ot b^{q}\ot a^{i+1}\ot
\dots\ot a^{p})\\
=F\left(a^1\ot\dots\ot a^{i}\ot G(b^1\ot \dots\ot b^{q})
\ot a^{i+1}\ot \dots\ot a^{p}\right)\\
=F\left(a^1\ot \dots\ot a^{i}\ot b^1_1\dots b^{q}_1.
g(b^1_2\ot \dots\ot b^{q}_2) \ot a^{i+1}\ot\dots\ot a^{p}\right)\\
= a^{1}_1\dots a^{i}_1.b^{1}_1\dots b^{q}_1.
a^{i+1}_1 \dots a^{p}_1.
f\left( a^{1}_2\ot\dots \ot a^{i}_2\ot b^{1}_2\dots
b^{q}_2.g(b^{1}_3\ot \dots \ot b^{q}_3)\ot
a^{i+1}_2\ot \dots \ot a^{p}_2\right)\\
=\wh{f\circ_i G}(a^1\ot \dots\ot a^{i}\ot b^1\ot \dots\ot b^{q}
\ot a^{i+1}\ot\dots\ot a^{p})
\end{array}$$
Recall that the brace operations define a “composition” operation $F\circ G=\sum_{i=1}^{p}(-1)^{q(i-1)}F\circ_iG$, where $F\in H^p(A,A)$ and $G\in H^q(A,A)$. The Gerstenhaber bracket is defined as the graded commutator of this composition.
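Explicitly, with the sign conventions of [@G], the bracket of $F\in H^p(A,A)$ and $G\in H^q(A,A)$ is $$[F,G]= F\circ G-(-1)^{(p-1)(q-1)}\,G\circ F \quad,$$ so we have the desired corollary: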
If $A$ is a Hopf algebra, then $H^{\bullet}(A,k)$ is a Gerstenhaber subalgebra of $H^{\bullet}(A,A)$.
Consider $H$ a finite dimensional Hopf algebra and $X=X(H)$ the algebra defined by C. Cibils and M. Rosso (see [@CR]). We can prove, at least in the finite dimensional case, the conjecture of [@T3] that $H^{\bullet}_{GS}(H,H)$ is a Gerstenhaber algebra:
\[coroimportante\] Let $H$ be a finite dimensional Hopf algebra, then $H^{\bullet}_{GS}(H,H)$ $(\cong H_{A4}^{\bullet}(H,H)\cong
\Ext^{\bullet}_X(H,H))$ is a Gerstenhaber algebra.
The isomorphism $H^{\bullet}_{GS}(H,H)\cong H_{A4}^{\bullet}(H,H)
\cong \Ext^{\bullet}_X(H,H)$ was proved in [@T2].
Let $A$ denote $D(H)$, the Drinfeld double of $H$. One knows that ${}_X$-$\mod \cong {}_A$-$\mod$, so $\Ext^{\bullet}_X(H,H)\cong \Ext_A^{\bullet}(H^{coH},H^{coH})=
\Ext_A^{\bullet}(k,k)$, and this is a Gerstenhaber subalgebra of $H^{\bullet}(A,A)$.
Let $H$ be a Hopf algebra and assume that $H$ is a Koszul algebra (i.e. $H$ is graded, $H_0=k$, and $E(E(H))=H$, where $E(\Lambda)
=\Ext^{\bullet}_{\Lambda}(k,k)$, for an augmented algebra $\Lambda$). Then $E(H)$ is graded commutative.
[30]{} D. Benson, Representations and cohomology, Vol I. Cambridge Studies in Advanced Mathematics. 30. Cambridge University Press (1998).
Claude Cibils and Marc Rosso, [*Hopf bimodules are modules*]{}. J. Pure Appl. Algebra 128, No.3, 225-231 (1998).
Murray Gerstenhaber, [*The cohomology structure of an associative ring*]{}. Ann. Math. (2) 78, 267-288 (1963).
Murray Gerstenhaber and Samuel D. Schack, [*Bialgebra cohomology, deformations, and quantum groups*]{}. Proc. Natl. Acad. Sci. USA 87, No.1, 478-481 (1990).
Murray Gerstenhaber and Samuel D. Schack, [*Algebras, bialgebras, quantum groups, and algebraic deformations*]{}. Deformation theory and quantum groups with applications to mathematical physics, Proc. AMS-IMS-SIAM Jt. Summer Res. Conf., Amherst/MA (USA) 1990, Contemp. Math. 134, 51-92 (1992).
Dragos Ştefan, [*Hochschild cohomology on Hopf-Galois extensions*]{}. [ J. Pure Appl. Alg.]{}, [**103**]{}-2 (1995), pp. 221-233.
Rachel Taillefer, [*Cohomology theories of Hopf bimodules and cup-product*]{}. C. R. Acad. Sci., Paris, Sér. I, Math. 332, No.3, 189-194 (2001).
Rachel Taillefer, thesis Université Montpellier 2 (2001).
Rachel Taillefer, [*Injective Hopf bimodules, cohomologies of infinite dimensional Hopf algebras and graded-commutativity of the Yoneda product*]{}. arXiv:math.KT/0207154.
|
---
abstract: 'In addition to constraining bilateral exposures of financial institutions, there are essentially two options for future financial regulation of systemic risk (SR): First, financial regulation could attempt to reduce the financial fragility of global or domestic systemically important financial institutions (G-SIBs or D-SIBs), as for instance proposed in Basel III. Second, future financial regulation could attempt strengthening the financial system as a whole. This can be achieved by re-shaping the topology of financial networks. We use an agent-based model (ABM) of a financial system and the real economy to study and compare the consequences of these two options. By conducting three “computer experiments” with the ABM we find that re-shaping financial networks is more effective and efficient than reducing leverage. Capital surcharges for G-SIBs can reduce SR, but must be larger than those specified in Basel III in order to have a measurable impact. This can cause a loss of efficiency. Basel III capital surcharges for G-SIBs can have pro-cyclical side effects.'
author:
- 'Sebastian Poledna$^{1,2}$'
- 'Olaf Bochmann$^{4,5}$'
- 'Stefan Thurner$^{1,2,3}$'
title: 'Basel III capital surcharges for G-SIBs fail to control systemic risk and can cause pro-cyclical side effects'
---
Introduction {#intro}
============
Six years after the financial crisis of 2007-2008, millions of households worldwide are still struggling to recover from the aftermath of those traumatic events. The majority of losses are indirect, such as people losing homes or jobs, and for many, income levels have dropped substantially. For the economy as a whole, for households, and for public budgets, the miseries of the market meltdown of 2007-2008 are not yet over. As a consequence, a consensus for the need for new financial regulation is emerging [@Aikman:2013aa]. Future financial regulation should be designed to mitigate risks within the financial system as a whole, and should specifically address the issue of systemic risk (SR).
SR is the risk that the financial system as a whole, or a large fraction thereof, can no longer perform its function as a credit provider, and as a result collapses. In a narrow sense, it is the notion of contagion or impact from the failure of a financial institution or group of institutions on the financial system and the wider economy [@De-Bandt:2000aa; @BIS:2010aa]. Generally, it emerges through one of two mechanisms, either through interconnectedness or through the synchronization of behavior of agents (fire sales, margin calls, herding). The latter can be measured by a potential capital shortfall during periods of synchronized behavior where many institutions are simultaneously distressed [@Adrian:2011aa; @Acharya:2010aa; @Brownlees:2012aa; @Huang:2012aa]. Measures for a potential capital shortfall are closely related to the leverage of financial institutions [@Acharya:2010aa; @Brownlees:2012aa]. Interconnectedness is a consequence of the network nature of financial claims and liabilities [@Eisenberg:2001aa]. Several studies indicate that financial network measures could potentially serve as early warning indicators for crises [@Caballero:2012aa; @Billio:2012aa; @Minoiu:2013aa].
In addition to constraining the (potentially harmful) bilateral exposures of financial institutions, there are essentially two options for future financial regulation to address the problem [@Haldane:2011aa; @Markose:2012aa]: First, financial regulation could attempt to reduce the financial fragility of “super-spreaders” or *systemically important financial institutions* (SIFIs), i.e. limiting a potential capital shortfall. This can be achieved by reducing the leverage or increasing the capital requirements for SIFIs. “Super-spreaders” are institutions that are either too big, too connected or otherwise too important to fail. However, a reduction of leverage simultaneously reduces efficiency and can lead to pro-cyclical effects [@Minsky:1992aa; @Fostel:2008aa; @Geanakoplos:2010aa; @Adrian:2008aa; @Brunnermeier:2009aa; @Thurner:2012aa; @Caccioli:2012aa; @Poledna:2014ab; @Aymanns:2014aa; @Caccioli:2015aa]. Second, future financial regulation could attempt strengthening the financial system as a whole. It has been noted that different financial network topologies have different probabilities for systemic collapse [@Roukny:2013aa]. In this sense the management of SR is reduced to the technical problem of re-shaping the topology of financial networks [@Haldane:2011aa].
The Basel Committee on Banking Supervision (BCBS) recommendation for future financial regulation for SIFIs is an example of the first option. The Basel III framework recognizes SIFIs and, in particular, global and domestic systemically important banks (G-SIBs or D-SIBs). The BCBS recommends increased capital requirements for SIFIs – the so called “SIFI surcharges” [@BIS:2010aa; @Georg:2011aa]. They propose that SR should be measured in terms of the impact that a bank’s failure can have on the global financial system and the wider economy, rather than just the risk that a failure could occur. Therefore they understand SR as a global, system-wide, loss-given-default (LGD) concept, as opposed to a probability of default (PD) concept. Instead of using quantitative models to estimate SR, Basel III proposes an indicator-based approach that includes the size of banks, their interconnectedness, and other quantitative and qualitative aspects of systemic importance.
There is not much literature on the problem of dynamically re-shaping network topology so that networks adapt over time to function optimally in terms of stability and efficiency. A major problem in network-based SR management is to provide agents with incentives to re-arrange their local contracts so that global (system-wide) SR is reduced. Recently, it has been noted empirically that individual transactions in the interbank market alter the SR in the total financial system in a measurable way [@Poledna:2014aa; @Poledna:2015aa]. This allows an estimation of the marginal SR associated with financial transactions, a fact that has been used to propose a tax on systemically relevant transactions [@Poledna:2014aa]. It was demonstrated with an agent-based model (ABM) that such a tax – the [*systemic risk tax*]{} (SRT) – leads to a dynamical re-structuring of financial networks, so that overall SR is substantially reduced [@Poledna:2014aa].
In this paper we study and compare the consequences of two different options for the regulation of SR with an ABM. As an example for the first option we study Basel III with capital surcharges for G-SIBs and compare it with an example for the second option – the SRT that leads to a self-organized re-structuring of financial networks. A number of ABMs have been used recently to study interactions between the financial system and the real economy, focusing on destabilizing feedback loops between the two sectors [@Delli-Gatti:2009aa; @Battiston:2012ab; @Tedeschi:2012aa; @Porter:2014aa; @Thurner:2013aa; @Poledna:2014aa; @Klimek:2014aa]. We study the different options for the regulation of SR within the framework of the CRISIS macro-financial model[^1]. In this ABM, we implement both the Basel III indicator-based measurement approach, and the increased capital requirements for G-SIBs. We compare both to an implementation of the SRT developed in [@Poledna:2014aa]. We conduct three “computer experiments” with the different regulation schemes. First, we investigate which of the two options to regulate SR is superior. Second, we study the effect of increased capital requirements, the “surcharges”, on G-SIBs and the real economy. Third, we clarify to what extent the Basel III indicator-based measurement approach really quantifies SR, as intended by the BCBS.
Basel III indicator-based measurement approach and capital surcharges for G-SIBs
================================================================================
Basel III indicator-based measurement approach
----------------------------------------------
The Basel III indicator-based measurement approach consists of five broad categories: size, interconnectedness, lack of readily available substitutes or financial institution infrastructure, global (cross-jurisdictional) activity and “complexity”. As shown in \[indicator\], the measure gives equal weight to each of the five categories. Each category may again contain individual indicators, which are equally weighted within the category.
| Category (and weighting) | Individual indicator | Indicator weighting |
|---|---|---|
| Cross-jurisdictional activity (20%) | Cross-jurisdictional claims | 10% |
| | Cross-jurisdictional liabilities | 10% |
| Size (20%) | Total exposures as defined for use in the Basel III leverage ratio | 20% |
| Interconnectedness (20%) | Intra-financial system assets | 6.67% |
| | Intra-financial system liabilities | 6.67% |
| | Securities outstanding | 6.67% |
| Substitutability/financial institution infrastructure (20%) | Assets under custody | 6.67% |
| | Payments activity | 6.67% |
| | Underwritten transactions in debt and equity markets | 6.67% |
| Complexity (20%) | Notional amount of over-the-counter (OTC) derivatives | 6.67% |
| | Level 3 assets | 6.67% |
| | Trading and available-for-sale securities | 6.67% |
Below we describe each of the categories in more detail.
This indicator captures the global “footprint” of banks. The motivation is to reflect the coordination difficulties associated with the resolution of international spillover effects. It measures a bank’s cross-jurisdictional activity relative to that of other banks. Here it differentiates between

(i) cross-jurisdictional claims, and
(ii) cross-jurisdictional liabilities.

This indicator reflects the idea that size matters. Larger banks are more difficult to replace and the failure of a large bank is more likely to damage confidence in the system. The indicator is a measure of the normalized total exposure used in the Basel III leverage ratio.

A bank’s systemic impact is likely to be related to its interconnectedness to other institutions via a network of contractual obligations. The indicator differentiates between

(i) in-degree on the network of contracts,
(ii) out-degree on the network of contracts, and
(iii) outstanding securities.

The indicator for this category captures the increased difficulty in replacing banks that provide unique services or infrastructure. The indicator differentiates between

(i) assets under custody,
(ii) payment activity, and
(iii) underwritten transactions in debt and equity markets.

Banks are complex in terms of business structure and operational “complexity”. The costs of resolving a complex bank are considered to be greater. This is reflected in the indicator as

(i) the notional amount of over-the-counter derivatives,
(ii) level 3 assets, and
(iii) trading and available-for-sale securities.
The score $S_{j}$ of the Basel III indicator-based measurement approach is calculated, for each bank $j$ and each indicator $D^{i}$ (e.g. cross-jurisdictional claims), as the bank’s share of the system-wide total over all $B$ banks, weighted by the indicator weight $\beta_i$. The score is given in basis points (factor $10000$) $$S_{j} = \sum_{i \in I} \beta_i \frac{D^{i}_{j}}{\sum_{l=1}^{B} D^{i}_{l}}\,10000 \quad, \label{score}$$ where $I$ is the set of indicators $D^{i}$ and $\beta_i$ are the weights from \[indicator\].
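The following short Python sketch illustrates how \[score\] can be evaluated; the function name, dictionary keys and array layout are illustrative assumptions and not part of the model specification.

```python
import numpy as np

def basel_scores(indicators, weights):
    """Scores S_j (in basis points) of the indicator-based measurement approach.

    indicators: dict mapping an indicator name to an array of length B holding
                the value D^i_j of that indicator for every bank j.
    weights:    dict mapping the same names to the weights beta_i (summing to 1).
    """
    B = len(next(iter(indicators.values())))
    S = np.zeros(B)
    for name, values in indicators.items():
        share = values / values.sum()      # bank's share of the system-wide total
        S += weights[name] * share
    return 10000.0 * S                     # express the score in basis points

# illustrative usage with two indicators only
scores = basel_scores(
    {"interbank_assets": np.array([10.0, 40.0, 50.0]),
     "interbank_liabilities": np.array([20.0, 30.0, 50.0])},
    {"interbank_assets": 0.5, "interbank_liabilities": 0.5},
)
```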
Basel III capital surcharges for G-SIBs
---------------------------------------
| Bucket | Score range | Bucket thresholds | Higher loss absorbency requirement (common equity as a percentage of risk-weighted assets) |
|---|---|---|---|
| 5 | D-E | 530-629 | 3.50% |
| 4 | C-D | 430-529 | 2.50% |
| 3 | B-C | 330-429 | 2.00% |
| 2 | A-B | 230-329 | 1.50% |
| 1 | Cutoff point-A | 130-229 | 1.00% |
In the Basel III “bucketing approach”, based on the scores from \[score\], banks are divided into four equally sized classes (buckets) of systemic importance, seen here in \[buckets\]. The cutoff score and bucket thresholds have been calibrated by the BCBS in such a way that the magnitude of the higher loss absorbency requirements for the highest populated bucket is 2.5% of risk-weighted assets, with an initially empty bucket of 3.5% of risk-weighted assets. The loss absorbency requirements for the lowest bucket is 1% of risk-weighted assets. The loss absorbency requirement is to be met with common equity [@BIS:2010aa]. Bucket five will initially be empty. As soon as the bucket becomes populated, a new bucket will be added in such a way that it is equal in size (scores) to each of the other populated buckets and the minimum higher loss absorbency requirement is increased by 1% of risk-weighted assets.
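A minimal sketch of this bucketing rule is given below; it uses only the static thresholds of \[buckets\] and ignores the dynamic addition of new buckets described above (the function name is ours).

```python
def gsib_surcharge(score):
    """Map a score (in basis points) to the higher loss absorbency requirement,
    expressed as a fraction of risk-weighted assets (see the bucket table)."""
    thresholds = [(530, 0.035), (430, 0.025), (330, 0.020), (230, 0.015), (130, 0.010)]
    for cutoff, surcharge in thresholds:
        if score >= cutoff:
            return surcharge
    return 0.0  # below the cutoff point: no G-SIB surcharge

assert gsib_surcharge(250) == 0.015   # bucket 2
assert gsib_surcharge(90) == 0.0      # not a G-SIB
```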
The model to test the efficiency of SR regulation {#model}
=================================================
We use an ABM linking the financial system and the real economy. The model consists of banks, firms and households. One pillar of the model is a well-studied macroeconomic model [@Gaffeo:2008aa; @Delli-Gatti:2008aa; @Delli-Gatti:2011aa]; the second pillar is an implementation of an interbank market. In particular, we extend the model used in [@Poledna:2014aa] with an implementation of the Basel III indicator-based measurement approach and capital surcharges for SIBs. For a comprehensive description of the model, see [@Delli-Gatti:2011aa; @Gualdi:2013aa; @Poledna:2014aa].
The agents in the model interact on four different markets.
(i) Firms and banks interact on the credit market, generating flows of loan (re)payments.
(ii) Banks interact with other banks on the interbank market, generating flows of interbank loan (re)payments.
(iii) Households and firms interact on the labour market, generating flows of wage payments.
(iv) Households and firms interact on the consumption-goods market, generating flows of goods.
Banks hold all of the firms’ and households’ cash as deposits. Households are randomly assigned as owners of firms and banks (share-holders).
Agents repeat the following sequence of decisions at each time step:
(i) firms define labour and capital demand,
(ii) banks raise liquidity for loans,
(iii) firms allocate capital for production (labour),
(iv) households receive wages, decide on consumption and savings,
(v) firms and banks pay dividends, firms with negative cash go bankrupt,
(vi) banks and firms repay loans,
(vii) illiquid banks try to raise liquidity, and if unsuccessful, go bankrupt.
Households which own firms or banks use dividends as income, all other households use wages. Banks and firms pay $20\%$ of their profits as dividends. The agents are described in more detail below.
### Households
There are $H$ households in the model. Households can either be workers or investors that own firms or banks. Each household $j$ has a personal account $A_{j,b}(t)$ at one of the $B$ banks, where index $j$ represents the household and index $b$ the bank. Workers apply for jobs at the $F$ different firms. If hired, they receive a fixed wage $w$ per time step, and supply a fixed labour productivity $\alpha$. A household spends a fixed percentage $c$ of its current account on the consumption goods market. They compare prices of goods from $z$ randomly chosen firms and buy the cheapest.
### Firms
There are $F$ firms in the model. They produce perfectly substitutable goods. At each time step firm $i$ computes its own expected demand $d_i(t)$ and price $p_i(t)$. The estimation is based on a rule that takes into account both excess demand/supply and the deviation of the price $p_i(t-1)$ from the average price in the previous time step [@Delli-Gatti:2011aa]. Each firm computes the required labour to meet the expected demand. If the wages for the respective workforce exceed the firm’s current liquidity, the firm applies for a loan. Firms approach $n$ randomly chosen banks and choose the loan with the most favorable rate. If this rate exceeds a threshold rate $r^{\rm max}$, the firm only asks for $\phi$ percent of the originally desired loan volume. Based on the outcome of this loan request, firms re-evaluate the necessary workforce, and hire or fire accordingly. Firms sell the produced goods on the consumption goods market. Firms go bankrupt if they run into negative liquidity. Each of the bankrupted firm’s creditors (banks) incurs a capital loss in proportion to its investment in loans to the company. Firm owners of bankrupted firms are personally liable. Their account is divided among the creditors *pro rata*. They immediately (next time step) start a new company, which initially has zero equity. Their initial estimates for demand $d_i(t)$ and price $p_i(t)$ are set to the respective averages in the goods market.
### Banks
The model involves $B$ banks. Banks offer firm loans at rates that take into account the individual specificity of banks (modeled by a uniformly distributed random variable), and the firms’ creditworthiness. Firms pay a credit risk premium according to their creditworthiness that is modeled by a monotonically increasing function of their financial fragility [@Delli-Gatti:2011aa]. Banks try to grant these requests for firm loans, providing they have enough liquid resources. If they do not have enough cash, they approach other banks in the interbank market to obtain the necessary amount. If a bank does not have enough cash and can not raise the total amount requested on the interbank market, it does not pay out the loan. Interbank and firm loans have the same duration. Additional refinancing costs of banks remain with the firms. Each time step firms and banks repay $\tau$ percent of their outstanding debt (principal plus interest). If banks have excess liquidity they offer it on the interbank market for a nominal interest rate. The interbank relation network is modeled as a fully connected network and banks choose the interbank offer with the most favorable rate. Interbank rates $r_{ij}(t)$ offered by bank $i$ to bank $j$ take into account the specificity of bank $i$, and the creditworthiness of bank $j$. If a firm goes bankrupt the respective creditor bank writes off the respective outstanding loans as defaulted credits. If the bank does not have enough equity capital to cover these losses it defaults. Following a bank default an iterative default-event unfolds for all interbank creditors. This may trigger a cascade of bank defaults. For simplicity’s sake, we assume a recovery rate set to zero for interbank loans. This assumption is reasonable in practice for short term liquidity [@Cont:2013aa]. A cascade of bankruptcies happens within one time step. After the last bankruptcy is resolved the simulation is stopped.
Implementation of the Basel III indicator-based measurement approach
--------------------------------------------------------------------
In the ABM, we implement the size indicator by calculating the total exposures of banks. Total exposure includes all assets of banks excluding cash, i.e. loans to firms and loans to other banks. Interconnectedness is measured in the model by interbank assets (loans) and interbank liabilities (deposits) of banks. As a measure for substitutability we use the payment activity of banks. The payment activity is measured by the sum of all outgoing payments by banks. In the model we do not have cross-jurisdictional activity and banks are not engaged in selling complex financial products including derivatives and level 3 assets. We therefore set the weights for global (cross-jurisdictional) activity and complexity to zero.
Banks in the model have to observe loss absorbency (capital) requirements according to Basel III. They are required to hold common equity equal to 4.5% of risk-weighted assets (RWAs), up from 2% in Basel II. RWAs are calculated according to the standardized approach, i.e. with fixed weights for all asset classes. As fixed weights we use 100% for interbank loans and commercial loans. We define equity capital of banks in the model as common equity.
In the model banks are allocated to the five buckets shown in \[buckets\] based on the scores obtained by \[score\]. Banks have to meet additional loss absorbency requirements as shown in \[buckets\]. The score is calculated at every time step $t$ and capital requirements must be observed before providing new loans.
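In the ABM this amounts to a simple lending-capacity check before each new loan; the helper below is our own sketch, building on the `gsib_surcharge` sketch above, and assumes that common equity and RWAs are given as plain numbers.

```python
def lending_capacity(equity, rwa, score, base_ratio=0.045):
    """Additional risk-weighted assets a bank may still take on while observing
    the Basel III common-equity ratio plus its G-SIB surcharge."""
    required_ratio = base_ratio + gsib_surcharge(score)
    return max(0.0, equity / required_ratio - rwa)
```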
Implementation of the SRT in the model
--------------------------------------
The SRT is implemented in the model as described in [@Poledna:2014aa], for details see \[srt\] and [@Poledna:2014aa]. The SRT is calculated according to \[srteq\_simple\] and is imposed on all interbank transactions. All other transactions are exempted from SRT. Before entering a desired loan contract, the loan-requesting bank $i$ obtains quotes for the SRT rates from the Central Bank, for various potential lenders $j$. Bank $i$ chooses the interbank offer from bank $j$ with the smallest total rate. The SRT is collected into a bailout fund.
Results
=======
Below we present the results of three experiments. The first experiment focuses on the performance of different options for regulation of SR, the second experiment on the consequences of different capital surcharges for G-SIBs, and the third experiment on the effects of different weight distributions for the Basel III indicator-based measurement approach.
Performance of different options for regulation of SR
-----------------------------------------------------
![image](hist_losses_sib_srt.eps){width=".49\textwidth"} ![image](hist_cascades_sib_srt.eps){width=".49\textwidth"} ![image](hist_volume_sib_srt.eps){width=".49\textwidth"} ![image](dr_100_sib_srt.eps){width=".49\textwidth"}
In order to understand the effects of Basel III capital surcharges for G-SIBs on the financial system and the real economy, and to compare it with the SRT, we conduct an experiment where we consider three cases:
(i) financial system regulated with Basel II,
(ii) financial system regulated with Basel III, with capital surcharges for G-SIBs,
(iii) financial system with the SRT.
With Basel II we require banks to hold common equity equal to 2% of their RWAs. RWAs are calculated according to the standardized approach.
The effects of the different regulation policies on total losses to banks ${\cal L}$ (see \[risk\_measures\]) are shown in \[fig:regulation\_type\](a). Clearly, with Basel II (red) fat tails in the loss distributions of the banking sector are visible. Basel III with capital surcharges for G-SIBs slightly reduces the losses (almost not visible). The SRT gets almost completely rid of big losses in the system (green). This reduction in major losses in the financial system is due to the fact that with the SRT the possibility of cascading failure is drastically reduced. This is shown in \[fig:regulation\_type\](b), where the distributions of cascade sizes $\mathcal{C}$ (see \[risk\_measures\]) for the three modes are compared. With the Basel regulation policies we observe cascade sizes of up to $20$ banks, whereas the SRT drastically reduces cascade sizes. Clearly, the total transaction volume $\mathcal{V}$ (see \[risk\_measures\]) in the interbank market is not affected by the different regulation policies. This is seen in \[fig:regulation\_type\](c), where we show the distributions of transaction volumes.
In \[fig:regulation\_type\](d) we show the [*SR-profile*]{} of the financial system given by the rank-ordered DebtRanks $R_i$ of all banks [@Poledna:2015aa]. The SR-profile shows the distribution of systemic impact across banks in the financial system. The bank with the highest systemic impact is to the very left. Obviously, the SRT drastically reduces the systemic impact of individual banks and leads to a more homogeneous spreading of SR across banks. Basel III with capital surcharges for G-SIBs sees a slight reduction in the systemic impact of individual banks.
Consequences of different capital surcharges for G-SIBs
-------------------------------------------------------
![image](hist_losses_sib_rw.eps){width=".49\textwidth"} ![image](hist_cascades_firms_rw.eps){width=".49\textwidth"} ![image](hist_volume_sib_rw.eps){width=".49\textwidth"} ![image](dr_100_sib_rw.eps){width=".49\textwidth"}
Next, we study the effect of different levels of capital surcharges for G-SIBs. Here we consider three different settings:
(i) capital surcharges for G-SIBs as specified in Basel III (\[buckets\]),
(ii) double capital surcharges for G-SIBs,
(iii) threefold capital surcharges for G-SIBs.
With larger capital surcharges for G-SIBs, we observe a stronger effect of the Basel III regulation policy on the financial system. Clearly, the shape of the distribution of losses ${\cal L}$ is similar for all three settings (\[fig:surcharges\](a)). The tail of the distributions is only reduced due to a decrease in efficiency (transaction volume), as is seen in \[fig:surcharges\](c). Evidently, average losses ${\cal L}$ are reduced at the cost of a loss of efficiency by roughly the same factor. This means that Basel III must reduce efficiency in order to show any effect in terms of SR, see \[discussion\].
In \[fig:surcharges\](d) we show the SR-profile for different capital surcharges for G-SIBs. Clearly, the systemic impact of individual banks is also reduced at the cost of a loss of efficiency by roughly the same factor.
In \[fig:surcharges\](b) we show the cascade size of synchronized firm defaults. Here we see an increase in cascading failure of firms with increasing capital surcharges for G-SIBs. This means larger capital surcharges for G-SIBs can have pro-cyclical side effects, see \[discussion\].
Does the Basel III indicator-based measurement approach measure SR?
-------------------------------------------------------------------
![image](hist_losses_sib_sr_cr.eps){width=".49\textwidth"} ![image](hist_volume_sib_sr_cr.eps){width=".49\textwidth"} ![image](dr_100_sib_sr_cr.eps){width=".49\textwidth"} ![image](vul_100_sib_sr_cr.eps){width=".49\textwidth"}
Finally, we study the effects of different weight distributions for the Basel III indicator-based measurement approach. In particular, we are interested in whether it is more effective to have capital surcharges for “super-spreaders” or for “super-vulnerable” financial institutions. A vulnerable financial institution in this context is an institution that is particularly exposed to failures in the financial system, i.e. has large credit or counterparty risk (CR). Holding assets is generally associated with the risk of losing the value of an investment, whereas a liability is an obligation towards a counterparty that can have an impact if not fulfilled. To define an indicator that reflects SR and identifies “super-spreaders” we set the weight on intra-financial system liabilities to 100%. For an indicator that reflects the CR of interbank assets and identifies vulnerable financial institutions, we set the weight on intra-financial system assets to 100%. Weights on all other individual indicators are set to zero. Specifically, we consider three different weight distributions.
(i) weights as specified in Basel III (\[indicator\]),
(ii) indicator weight on interbank liabilities set to 100%,
(iii) indicator weight on interbank assets set to 100%.
To illustrate the effects of different weight distributions, we once again use capital surcharges for G-SIBs that are larger than those specified in Basel III. To allow a comparison between the different weight distributions, we set the level of capital surcharges for each weight distribution to result in a similar level of efficiency (credit volume). Specifically, for the weight distribution as specified in Basel III (\[indicator\]) we multiply the capital surcharges for G-SIBs as specified in Basel III (\[buckets\]) by a factor $2.75$, for the indicator weight on interbank liabilities by $3$, and for the indicator weight on interbank assets by $4.5$. In \[fig:weights\](b) we show that efficiency, as measured by transaction volume in the interbank market, is indeed similar for all weight distributions.
In \[fig:weights\](a) we show the losses to banks ${\cal L}$ for the different weight distributions. Clearly, imposing capital surcharges on “super-spreaders” (yellow) does indeed reduce the tail of the distribution, but does not completely eliminate large losses in the system. In contrast, imposing capital surcharges on “super-vulnerable” financial institutions (red) shifts the mode of the loss distribution without reducing the tail, thus making medium losses more likely without getting rid of large losses.
To illustrate the effect of the different weight distributions, we show the risk profiles for SR and CR. Risk profiles for SR show the systemic impact as given by the DebtRank $R_{i}$ (banks are ordered by $R_{i}$, the most systemically important being to the very left). Risk profiles for CR show the distribution of *average vulnerability*, i.e. a measure for CR in a financial network, for details see \[vulnerability\_section\]. The corresponding SR- and CR-profiles are shown for the different weight distributions in \[fig:weights\](c) and \[fig:weights\](d). Imposing capital surcharges on “super-spreaders” leads to a more homogeneous spreading of SR across all agents, as shown in \[fig:weights\](c) (yellow), whereas capital surcharges for “super-vulnerable” financial institutions lead to a more homogeneous spreading of CR (\[fig:weights\](d), red). Interestingly, homogeneous spreading of CR (SR) leads to a more unevenly distributed SR (CR), as seen by comparing \[fig:weights\](c) and \[fig:weights\](d).
Discussion
==========
We use an ABM to study and compare the consequences of the two different options for the regulation of SR. In particular we compare financial regulation that attempts to reduce the financial fragility of SIFIs (Basel III capital surcharges for G-SIBs) with a regulation policy that aims directly at reshaping the topology of financial networks (SRT).
SR emerges in the ABM through two mechanisms, either through interconnectedness of the financial system or through synchronization of the behavior of agents. Cascading failure of banks in the ABM can be explained as follows. Triggered by a default of a firm on a loan, a bank may suffer losses exceeding its loss-absorbing capacity. The bank fails and, due to its exposure on the interbank market, other banks may fail as well. Cascading failure can be seen in \[fig:regulation\_type\](b).
Basel III capital surcharges for G-SIBs work in the ABM as follows. Demand for commercial loans from firms depends mainly on the expected demand for goods. Firms approach different banks for loan requests. If banks have different capital requirements, banks with lower capital requirements may have a higher leverage and provide more loans. Effectively, different capital requirements produce inhomogeneous leverage levels in the banking system. Imposing capital surcharges for G-SIBs means that banks with a potentially large impact on others must have sizable capital buffers. With capital surcharges for G-SIBs, non-important banks can use higher leverage until they become systemically important themselves. This leads to a more homogeneous spreading of SR, albeit the reduction and mitigation of SR is achieved only “indirectly”.
The SRT leads to a self-organized reduction of SR in the following way [@Poledna:2014aa]: Banks looking for credit will try to avoid this tax by looking for credit opportunities that do not increase SR and are thus tax-free. As a result, the financial network rearranges itself toward a topology that, in combination with the financial conditions of individual institutions, will lead to a new topology where cascading failures can no longer occur. The reduction and mitigation of SR is achieved “directly” by re-shaping the network.
From the experiments we conclude that Basel III capital surcharges for G-SIBs reduce and mitigate SR at the cost of a loss of efficiency of the financial system (\[fig:surcharges\](a) and \[fig:surcharges\](c)). This is because overall leverage is reduced by capital surcharges. On the other hand, the SRT keeps the efficiency comparable to the unregulated financial system. However, it avoids cascades due to the emergent network structure. This means that – potentially much higher – capital requirements for “super-spreaders” do address SR, but at the cost of efficiency.
SR through synchronization emerges in the ABM in the following way: Similarly to the root cause of cascading failure in the banking system, an initial shock from losses from commercial loans can lead to a synchronization of firm defaults. If banks are able to absorb initial losses, the losses still result in an implicit increase in leverage. In order to meet capital requirements (Basel III), the bank needs to reduce the volume of commercial loans. This creates a feedback effect of firms suffering from reduced liquidity, which in turn increases the probability of defaults and closes the cycle. This feedback effect is inherent to the system and can be reinforced by Basel III capital surcharges for G-SIBs. In the experiments we see that this feedback effect gets more pronounced with larger capital surcharges for G-SIBs (\[fig:surcharges\](b)). Here we see an increase in cascading failure of firms with increasing capital surcharges for G-SIBs. This means capital surcharges for G-SIBs can have pro-cyclical side effects.
Basel III measures systemic importance with an indicator-based measurement approach. The indicator approach is based on several individual indicators consisting of banks’ assets and liabilities. By studying the effect of different weight distributions for the individual indicators, we find that Basel III capital surcharges for G-SIBs are more effective with higher weights on liabilities (\[fig:weights\](a)). This means – quite intuitively – that asset-based indicators are more suitable for controlling credit risk and liability-based indicators are more suitable for controlling systemic risk. Since the indicators in the Basel proposal are predominantly based on assets, the Basel III indicator-based measurement approach captures predominantly credit risk and therefore does not meet its declared objective.
To conclude with a high-level summary, the policy implications obtained with the ABM are:
(i) re-shaping financial networks is more effective and efficient than reducing leverage;
(ii) capital surcharges for G-SIBs (“super-spreaders”) can reduce SR, but must be larger than specified in Basel III to have a measurable impact, and thus cause a loss of efficiency;
(iii) Basel III capital surcharges for G-SIBs can have pro-cyclical side effects.
Methods
=======
DebtRank {#debtrank_section}
--------
DebtRank is a recursive method suggested in @Battiston:2012aa to determine the systemic relevance of nodes in financial networks. It is a number measuring the fraction of the total economic value in the network that is potentially affected by a node or a set of nodes. $L_{ij}$ denotes the interbank liability network at any given moment (loans of bank $j$ to bank $i$), and $C_{i}$ is the capital of bank $i$. If bank $i$ defaults and cannot repay its loans, bank $j$ loses the loans $L_{ij}$. If $j$ does not have enough capital available to cover the loss, $j$ also defaults. The impact of bank $i$ on bank $j$ (in case of a default of $i$) is therefore defined as $$\label{impact} W_{ij} = \min \left[1,\frac{L_{ij}}{C_{j}} \right] \quad.$$ The value of the impact of bank $i$ on its neighbors is $I_{i} = \sum_{j} W_{ij} v_{j}$. The impact is measured by the [*economic value*]{} $v_{i}$ of bank $i$. For the economic value we use two different proxies. Given the total outstanding interbank exposures of bank $i$, $L_{i}=\sum_{j}L_{ji}$, its economic value is defined as $$\label{ecovalue} v_{i}=L_{i}/\sum_{j}L_{j} \quad.$$ To take into account the impact of nodes at distance two and higher, this has to be computed recursively. If the network $W_{ij}$ contains cycles, the impact can exceed one. To avoid this problem an alternative was suggested in @Battiston:2012aa, where two state variables, $h_{\rm i}(t)$ and $s_{\rm i}(t)$, are assigned to each node. $h_{\rm i}$ is a continuous variable between zero and one; $s_{\rm i}$ is a discrete state variable for three possible states, undistressed, distressed, and inactive, $s_{\rm i} \in \{U, D, I\}$. The initial conditions are $h_{i}(1) = \Psi \, , \forall i \in S ;\; h_{i}(1)=0 \, , \forall i \not \in S$, and $s_{i}(1) = D \, , \forall i \in S ;\; s_{i}(1) = U \, , \forall i \not \in S$ (parameter $\Psi$ quantifies the initial level of distress: $\Psi \in [0, 1]$, with $\Psi = 1$ meaning default). The dynamics of $h_i$ is then specified by $$h_{i}(t) = \min\left[1,h_{i}(t-1)+\sum_{j\mid s_{j}(t-1) = D} W_{ ji}h_{j}(t-1) \right] \quad.$$ The sum extends over these $j$, for which $s_{j}(t-1) = D$, $$s_{i}(t) =
\begin{cases}
D & \text{if } h_{i}(t) > 0; s_{i}(t-1) \neq I ,\\
I & \text{if } s_{i}(t-1) = D , \\
s_{i}(t-1) & \text{otherwise} \quad.
\end{cases}$$ The DebtRank of the set $S$ (set of nodes in distress at time $1$), is $R^{\prime}_S = \sum_{j} h_{j}(T)v_{j} - \sum_{j} h_{j}(1)v_{j}$, and measures the distress in the system, excluding the initial distress. If $S$ is a single node, the DebtRank measures its systemic impact on the network. The DebtRank of $S$ containing only the single node $i$ is $$\label{debtrank} R^{\prime}_{i} = \sum_{j} h_{j}(T)v_{j} - h_{i}(1)v_{i} \quad.$$ The DebtRank, as defined in \[debtrank\], excludes the loss generated directly by the default of the node itself and measures only the impact on the rest of the system through default contagion. For some purposes, however, it is useful to include the direct loss of a default of $i$ as well. The total loss caused by the set of nodes $S$ in distress at time $1$, including the initial distress is $$\label{debtrank_self} R_S = \sum_{j} h_{j}(T)v_{j} \quad.$$
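The recursion above can be summarized in a compact NumPy sketch; it treats a single initially distressed bank $i$ with $\Psi=1$ and returns both $R^{\prime}_{i}$ and the final health vector $h(T)$ (the array conventions and the function name are our own).

```python
import numpy as np

def debtrank(L, C, i, psi=1.0):
    """DebtRank of a single bank i.
    L[a, b]: liability of bank a towards bank b (loan of b to a); C[b]: capital of bank b."""
    B = len(C)
    # impact matrix W[a, b] = min(1, L[a, b] / C[b])
    W = np.minimum(1.0, L / np.where(C > 0, C, np.inf))
    # economic value v_a, proportional to the total outstanding interbank exposure of a
    exposure = L.sum(axis=0)
    v = exposure / exposure.sum()
    h = np.zeros(B); h[i] = psi                 # initial distress
    s = np.array(['U'] * B); s[i] = 'D'
    while (s == 'D').any():
        distressed = (s == 'D')
        # propagate distress from the currently distressed nodes only
        h_new = np.minimum(1.0, h + W[distressed].T @ h[distressed])
        s_new = s.copy()
        s_new[(h_new > 0) & (s != 'I')] = 'D'   # newly distressed nodes
        s_new[distressed] = 'I'                 # nodes that propagated become inactive
        h, s = h_new, s_new
    R = float(v @ h) - psi * v[i]               # exclude the initial distress of i
    return R, h
```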
Expected systemic loss
----------------------
The precise meaning of the DebtRank allows us to define the *expected systemic loss* for the entire economy [@Poledna:2015aa; @Poledna:2014aa]. Assuming that we have $B$ banks in the system, the expected systemic loss can be approximated by $$EL^{\rm syst}(t) = V(t) \sum_{i=1}^{B} p_i(t) R_i(t) \quad, \label{EL}$$ with $p_i(t)$ the probability of default of node $i$, and $V(t)$ the combined economic value of all nodes at time $t$. For details and the derivation, see [@Poledna:2015aa].
To calculate the marginal contributions to the expected systemic loss, we start by defining the *net liability network* $L^{\rm net}_{ij}(t)=\max[0,L_{ij}(t)-L_{ji}(t)]$. After we add a specific liability $L_{mn}(t)$, we denote the liability network by $$L^{(+mn)}_{ij}(t) = L^{\rm net}_{ij}(t) + \sum_{m,n}\delta_{im}\delta_{jn} L_{mn}(t) \quad,$$ where $\delta_{ij}$ is the Kronecker symbol. The marginal contribution of the specific liability $L_{mn}(t)$ on the expected systemic loss is $$\begin{gathered}
\Delta^{(+mn)} EL^{\rm syst}(t) = \\
= \sum_{i=1}^{B} p_i(t) \left(V^{(+mn)}(t)R^{(+mn)}_i(t) - V(t)R_i(t) \right) \quad , \label{marginal_effect}\end{gathered}$$ where $R^{(+mn)}_i(t)=R_i(L^{(+mn)}_{ij}(t),C_i(t))$ is the DebtRank of the liability network and $V^{(+mn)}(t)$ the total economic value with the added liability $L_{mn}(t)$. Clearly, a positive $\Delta^{(+mn)} EL^{\rm syst}(t)$ means that $L_{mn}(t)$ increases the total SR.
Finally, the marginal contribution of a single loan (or a transaction leading to that loan) can be calculated. We denote a loan of bank $i$ to bank $j$ by $l_{ijk}$. The liability network changes to $$L^{(+k)}_{ij}(t) = L^{\rm net}_{ij}(t) + \sum_{m,n} \delta_{im}\delta_{jn}\, l_{mnk}(t) \quad. \label{Lwithoutloan}$$ Since $i$ and $j$ can have a number of loans at a given time $t$, the index $k$ numbers a specific loan between $i$ and $j$. The marginal contribution of a single loan (transaction) $\Delta^{(+k)} EL^{\rm syst}(t)$ is obtained by substituting $L^{(+mn)}_{ij}(t)$ by $L^{(+k)}_{ij}(t)$ in \[marginal\_effect\]. In this way every existing loan in the financial system, as well as every hypothetical one, can be evaluated with respect to its marginal contribution to overall SR.
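Building on the `debtrank` sketch above, \[EL\] and the marginal contribution of an additional liability can be written as follows; as a simplification we approximate $V(t)$ by the total interbank exposure and treat the default probabilities $p_i$ as given inputs.

```python
import numpy as np

def expected_systemic_loss(L, C, p):
    """EL^syst = V * sum_i p_i R_i, with V approximated by the total interbank exposure."""
    V = L.sum()
    R = np.array([debtrank(L, C, i)[0] for i in range(len(C))])
    return V * float(p @ R)

def marginal_expected_systemic_loss(L, C, p, m, n, amount):
    """Marginal contribution of an extra liability of bank m towards bank n (a loan of n to m)."""
    L_net = np.maximum(0.0, L - L.T)     # net liability network
    L_plus = L_net.copy()
    L_plus[m, n] += amount
    return expected_systemic_loss(L_plus, C, p) - expected_systemic_loss(L_net, C, p)
```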
Systemic risk tax {#srt}
-----------------
The central idea of the SRT is to tax every transaction between any two counterparties that increases SR in the system [@Poledna:2014aa]. The size of the tax is proportional to the increase of the expected systemic loss that this transaction adds to the system as seen at time $t$. The SRT for a transaction $l_{ijk}(t)$ between two banks $i$ and $j$ is given by $$\begin{gathered}
SRT_{ij}^{(+k)}(t) = \\
= \zeta \max \left[0, \sum_i p_i(t) \left(V^{(+k)}(t)R^{(+k)}_i(t) - V(t)R_i(t) \right) \right] \quad. \label{srteq_simple} \end{gathered}$$ $\zeta$ is a proportionality constant that specifies how much of the generated expected systemic loss is taxed. $\zeta=1$ means that 100% of the expected systemic loss will be charged. $\zeta<1$ means that only a fraction of the true SR increase is added on to the tax due from the institution responsible. For details, see [@Poledna:2014aa].
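Combined with the marginal expected systemic loss of the previous sketch, a tax quote for a prospective interbank loan can then be computed as follows (the function name and argument conventions are ours).

```python
def systemic_risk_tax(L, C, p, lender, borrower, amount, zeta=1.0):
    """SRT for a prospective loan of `lender` to `borrower`: zeta times the positive
    part of the marginal expected systemic loss generated by the transaction."""
    # the new loan appears as a liability of the borrower towards the lender
    delta = marginal_expected_systemic_loss(L, C, p, borrower, lender, amount)
    return zeta * max(0.0, delta)
```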
Average vulnerability {#vulnerability_section}
---------------------
Using DebtRank we can also define an *average vulnerability* of a node in the liability network $L_{ij}$ [@Aoyama:2013aa]. Recall that $h_{i}(T)$ is the state variable which describes the health of a node in terms of equity capital after $T$ time steps ($s_{\rm i} \in \{U, I\}$, i.e. all nodes are either undistressed or inactive). When calculating the healths $h_{j}(T)$ for a set $S$ containing only the single node $i$, we can define $h_{ij}(T)$ as the health of bank $j$ in case of default of bank $i$, i.e. the $i$-th row of the matrix $$\label{hmatrix_eq} h_{i\cdot}(T) =
\begin{bmatrix}
h_{1}(T) & \dots & h_{B}(T)
\end{bmatrix}
\quad.$$ Note that it is necessary to simulate the default of every single node of the liability network $L_{ij}$ separately to obtain $h_{ij}$. With $h_{ij}$ we can define the average vulnerability of a node in the liability network as the average impact of other nodes on node $i$ $$V_i = \frac{1}{B}\sum_j h_{ji} \quad,$$ where $B$ is the number of nodes in the liability network $L_{ij}$.
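Since the `debtrank` sketch above already returns the health vector $h(T)$ for the default of a single bank, the average vulnerability can be obtained as follows.

```python
import numpy as np

def average_vulnerability(L, C):
    """V_i = (1/B) * sum_j h_{ji}(T): average impact of the default of the other banks on bank i."""
    B = len(C)
    # row j holds the health vector after simulating the default of bank j (psi = 1)
    h = np.vstack([debtrank(L, C, j)[1] for j in range(B)])
    return h.mean(axis=0)
```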
Measures for losses, default cascades and transaction volume {#risk_measures}
------------------------------------------------------------
We use the following three observables: (1) the size of the cascade, ${\cal C}$ as the number of defaulting banks triggered by an initial bank default ($1\leq {\cal C} \leq B$), (2) the total losses to banks following a default or cascade of defaults, ${\cal L}= \sum_{i \in I}\sum_{j=1}^B L_{ ij}(t)$, where $I$ is the set of defaulting banks, and (3) the average transaction volume in the interbank market in simulation runs longer than $100$ time steps, $$\mathcal{V}=\frac{1}{T}\sum_{t=1}^T\sum_{j=1}^B\sum_{i=1}^B \sum_{k \in K}l_{jik}(t) \quad,$$ where $K$ represents new interbank loans at time step $t$.
D. Aikman, A. G. Haldane, and S. Kapadia. Operationalising a macroprudential regime: Goals, tools and open issues. *Financial Stability Journal of the Bank of Spain*, 24, 2013.
Olivier De Bandt and Philipp Hartmann. Systemic risk: A survey. Technical report, CEPR Discussion Papers, 2000.
. *[Basel III]{}: A global regulatory framework for more resilient banks and banking systems*. Bank for International Settlements, 2010.
Tobias Adrian and Markus Brunnermeier. Covar. Technical report, National Bureau of Economic Research, 2011.
Viral Acharya, Lasse Pedersen, Thomas Philippon, and Matthew Richardson. Measuring systemic risk. Technical report, CEPR Discussion Papers, 2012. Available at SSRN: http://ssrn.com/abstract=1573171.
Christian T Brownlees and Robert F Engle. Volatility, correlation and tails for systemic risk measurement. *Available at SSRN 1611229*, 2012.
Xin Huang, Hao Zhou, and Haibin Zhu. Systemic risk contributions. *Journal of financial services research*, 420 (1-2):0 55–83, 2012.
Larry Eisenberg and Thomas H. Noe. Systemic risk in financial systems. *Management Science*, 470 (2):0 236–249, 2001.
J. Caballero. Banking crises and financial integration. *IDB Working Paper Series No. IDB-WP-364*, 2012.
Monica Billio, Mila Getmansky, Andrew W Lo, and Loriana Pelizzon. Econometric measures of connectedness and systemic risk in the finance and insurance sectors. *Journal of Financial Economics*, 1040 (3):0 535–559, 2012.
Camelia Minoiu, Chanhyun Kang, V. S. Subrahmanian, and Anamaria Berea. Does financial connectedness predict crises? Technical Report 13/267, International Monetary Fund, Dec 2013.
Andrew G. Haldane and Robert M. May. Systemic risk in banking ecosystems. *Nature*, 469(7330):351–355, 2011.
Sheri Markose, Simone Giansante, and Ali Rais Shaghaghi. ‘Too interconnected to fail’ financial network of [US CDS]{} market: Topological fragility and systemic risk. *Journal of Economic Behavior & Organization*, 83(3):627–646, 2012.
Hyman P. Minsky. The financial instability hypothesis. *The Jerome Levy Economics Institute Working Paper*, 74, 1992.
Ana Fostel and John Geanakoplos. Leverage cycles and the anxious economy. *American Economic Review*, 98(4):1211–44, 2008.
John Geanakoplos. The leverage cycle. In D. Acemoglu, K. Rogoff, and M. Woodford, editors, *NBER Macro-economics Annual 2009*, volume 24, page 165. University of Chicago Press, 2010.
Tobias Adrian and Hyun S. Shin. Liquidity and leverage. Tech. Rep. 328, Federal Reserve Bank of New York, 2008.
Markus Brunnermeier and Lasse Pedersen. Market liquidity and funding liquidity. *Review of Financial Studies*, 22(6):2201–2238, 2009.
S. Thurner, J.D. Farmer, and J. Geanakoplos. Leverage causes fat tails and clustered volatility. *Quantitative Finance*, 12(5):695–707, 2012.
Fabio Caccioli, Jean-Philippe Bouchaud, and J. Doyne Farmer. Impact-adjusted valuation and the criticality of leverage. in review, http://arxiv.org/abs/1204.0922, 2012.
Sebastian Poledna, Stefan Thurner, J. Doyne Farmer, and John Geanakoplos. Leverage-induced systemic risk under [Basle II]{} and other credit risk policies. *Journal of Banking & Finance*, 42:199–212, 2014.
Christoph Aymanns and Doyne Farmer. The dynamics of the leverage cycle. *Journal of Economic Dynamics and Control*, 50:155–179, 2015.
Fabio Caccioli, J Doyne Farmer, Nick Foti, and Daniel Rockmore. Overlapping portfolios, contagion, and financial stability. *Journal of Economic Dynamics and Control*, 51:50–63, 2015.
Tarik Roukny, Hugues Bersini, Hugues Pirotte, Guido Caldarelli, and Stefano Battiston. Default cascades in complex networks: Topology and systemic risk. *Scientific Reports*, 3:2759, 2013.
Co-Pierre Georg. [Basel III]{} and systemic risk regulation - what way forward? Technical Report 17, Working Papers on Global Financial Markets, 2011.
Sebastian Poledna and Stefan Thurner. Elimination of systemic risk in financial networks by means of a systemic risk transaction tax. in review, 2014.
Sebastian Poledna, Jos[é]{} Luis Molina-Borboa, Seraf[í]{}n Mart[í]{}nez-Jaramillo, Marco van der Leij, and Stefan Thurner. The multi-layer network nature of systemic risk and its implications for the costs of financial crises. *Journal of Financial Stability*, 20:70–81, 2015.
Domenico [Delli Gatti]{}, Mauro Gallegati, Bruce C Greenwald, Alberto Russo, and Joseph E Stiglitz. Business fluctuations and bankruptcy avalanches in an evolving network economy. *Journal of Economic Interaction and Coordination*, 4(2):195–212, 2009.
Stefano Battiston, Domenico Delli Gatti, Mauro Gallegati, Bruce Greenwald, and Joseph E Stiglitz. Liaisons dangereuses: Increasing connectivity, risk sharing, and systemic risk. *Journal of Economic Dynamics and Control*, 36(8):1121–1141, 2012.
Gabriele Tedeschi, Amin Mazloumian, Mauro Gallegati, and Dirk Helbing. Bankruptcy cascades in interbank markets. *PLoS ONE*, 7(12):e52749, 2012.
Giampaolo Gabbi, Giulia Iori, Saqib Jafarey, and James Porter. Financial regulations and bank credit to the real economy. *Journal of Economic Dynamics and Control*, 50:117–143, 2015.
Stefan Thurner and Sebastian Poledna. DebtRank-transparency: Controlling systemic risk in financial networks. *Scientific Reports*, 3:1888, 2013.
P. Klimek, S. Poledna, J.D. Farmer, and S. Thurner. To bail-out or to bail-in? Answers from an agent-based model. *Journal of Economic Dynamics and Control*, 50:144–154, 2015.
Edoardo Gaffeo, Domenico Delli Gatti, Saul Desiderio, and Mauro Gallegati. Adaptive microfoundations for emergent macroeconomics. *Eastern Economic Journal*, 34(4):441–463, 2008.
D. [Delli Gatti]{}, E. Gaffeo, M. Gallegati, G. Giulioni, and A. Palestrini. *Emergent Macroeconomics: An Agent-Based Approach to Business Fluctuations*. New economic windows. Springer, 2008. ISBN 9788847007253.
Domenico [Delli Gatti]{}, Saul Desiderio, Edoardo Gaffeo, Pasquale Cirillo, and Mauro Gallegati. *Macroeconomics from the Bottom-up*. Springer Milan, 2011.
Stanislao Gualdi, Marco Tarzia, Francesco Zamponi, and Jean-Philippe Bouchaud. Tipping points in macroeconomic agent-based models. *Journal of Economic Dynamics and Control*, 50:29–61, 2015.
Rama Cont, Amal Moussa, and Edson Santos. Network structure and systemic risk in banking systems. In Jean-Pierre Fouque and Joseph A Langsam, editors, *Handbook of Systemic Risk*, pages 327–368. Cambridge University Press, 2013.
Stefano Battiston, Michelangelo Puliga, Rahul Kaushik, Paolo Tasca, and Guido Caldarelli. DebtRank: Too central to fail? Financial networks, the [FED]{} and systemic risk. *Scientific Reports*, 2:541, 2012.
Hideaki Aoyama, Stefano Battiston, and Yoshi Fujiwara. DebtRank analysis of the Japanese credit network. Discussion papers 13087, Research Institute of Economy, Trade and Industry (RIETI), October 2013.
[^1]: <http://www.crisis-economics.eu>
---
abstract: 'In the first part of the paper we introduce some geometric tools needed to describe slow-fast Hamiltonian systems on smooth manifolds. We start with a smooth Poisson bundle $p: M\to B$ of a regular (i.e. of constant rank) Poisson manifold $(M,\omega)$ over a smooth symplectic manifold $(B,\lambda)$, such that the foliation into leaves of the bundle coincides with the symplectic foliation generated by the Poisson structure on $M$. This defines a singular symplectic structure $\Omega_{\varepsilon}=$ $\omega + \varepsilon^{-1}p^*\lambda$ on $M$ for any small positive $\varepsilon$, where $p^*\lambda$ is the pullback of the 2-form $\lambda$ to $M$. Given a smooth Hamiltonian $H$ on $M$ one gets a slow-fast Hamiltonian system w.r.t. $\Omega_{\varepsilon}$. We define a slow manifold $SM$ of this system and, assuming $SM$ to be a smooth submanifold, a slow Hamiltonian flow on $SM$. The second part of the paper deals with singularities of the restriction of $p$ to $SM$ and their relation to the behavior of the system near them. It turns out that if $\dim M = 4,$ $\dim B = 2$ and the Hamilton function $H$ is generic, then the behavior of the system near a singularity of fold type is described, in the principal approximation, by the Painlevé-I equation, while if the singular point is a cusp, the related equation is Painlevé-II. For particular types of Hamiltonian systems with one and a half degrees of freedom this fact was discovered earlier by R. Haberman.'
author:
- |
L.M. Lerman, E.I. Yakovlev\
Lobachevsky State University of Nizhny Novgorod, Russia
title: |
Geometry of slow-fast Hamiltonian systems\
and Painlevé equations
---
Introduction
============
Slow-fast Hamiltonian systems are ubiquitous in applications in different fields of science, ranging from astrophysics, plasma physics and ocean hydrodynamics to molecular dynamics. Usually these problems are given in coordinate form and, moreover, in a form where the symplectic structure in the phase space is standard (in Darboux coordinates). But there are cases where this form is nonstandard, or where the system under study is of a kind for which its symplectic form first has to be found, in particular when we deal with a system on a manifold.
It is our aim in this paper to present basic geometric tools to describe slow-fast Hamiltonian systems on manifolds, that is, in a coordinate-free way. For the general case this was done by V.I. Arnold [@Arn]. Recall that a customary slow-fast dynamical system is defined by a system of differential equations $$\label{sf}
{\varepsilon}\dot x = f(x,y,{\varepsilon}),\;\dot y = g(x,y,{\varepsilon}),\;(x,y)\in \mathbb R^m\times \mathbb R^n,$$ depending on a small positive parameter ${\varepsilon}$ (its positivity is needed to fix the positive direction of the time $t$). It is evident that the $x$-variables, in the region of the phase space where $f\ne 0$, change with speed $\sim 1/{\varepsilon}$, that is, fast. In comparison with them the change of the $y$-variables is slow. Therefore the variables $x$ are called fast and the variables $y$ slow.
Two limiting systems are usually associated with such a system, and their properties influence the dynamics of the slow-fast system for small ${\varepsilon}.$ One of the limiting systems is called the fast, or layer, system and is derived in the following way. One introduces the so-called fast time $\tau = t/{\varepsilon}$; after that the system, rewritten with differentiation in $\tau$, gains the parameter ${\varepsilon}$ in the right-hand side of the second equation and loses it in the first one, that is, the right-hand sides now depend on ${\varepsilon}$ in a regular way, $$\label{fs}
\frac{d x}{d\tau} = f(x,y,{\varepsilon}),\;\frac{d y}{d\tau} = {\varepsilon}g(x,y,{\varepsilon}),\;(x,y)\in \mathbb R^m\times \mathbb R^n.$$ Setting then ${\varepsilon}=0$ we get a system in which $y$ is a constant, $y=y_0$, and can be considered as a parameter for the equations in $x$. Sometimes these equations are called the layer equations. Because this system depends on parameters, it may pass through many bifurcations as the parameters change, and this can be useful for finding some special motions in the full system for small ${\varepsilon}> 0$.
The slow equations are derived as follows. Let us formally set ${\varepsilon}= 0$ in the system (\[sf\]) and solve the equation $f=0$ with respect to $x$ (where this is possible). The most natural case when this can be done is when the matrix $f_x$ is invertible at a solution point in some domain where solutions of $f=0$ exist. Denote the related branch of solutions by $x=h(y)$ and insert it into the second equation instead of $x$. Then one gets a differential system in the $y$ variables, $$\dot y = g(h(y),y,0),$$ which is called the slow system. The idea behind this construction is as follows: if the fast motions are attracted to the slow manifold, then in a small enough neighborhood of this manifold the motion of the full system stays near this manifold and is described, in the first approximation, by the slow system.
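As an illustration of this construction (a hypothetical toy example, not taken from the references), one can compare the full system with its slow reduction numerically; here $f=-(x-y)$, $g=-y+x^2$, so that $x=h(y)=y$ and the slow system is $\dot y=-y+y^2$:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-3   # small parameter

def full(t, z):
    # Toy slow-fast system: eps*x' = -(x - y), y' = -y + x**2.
    x, y = z
    return [-(x - y) / eps, -y + x**2]

def slow(t, y):
    # Slow system obtained by substituting x = h(y) = y: y' = -y + y**2.
    return [-y[0] + y[0]**2]

T = 5.0
full_sol = solve_ivp(full, (0.0, T), [1.5, 0.5], method='Radau', rtol=1e-8, atol=1e-10)
slow_sol = solve_ivp(slow, (0.0, T), [0.5], rtol=1e-8, atol=1e-10)

# After a short initial layer the y-component of the full system
# follows the slow system up to an O(eps) error.
print(full_sol.y[1, -1], slow_sol.y[0, -1])
```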
Now the primary problem for slow-fast systems is formulated as follows. Suppose we know something about the dynamics of both (slow and fast) systems, for instance some structure in the phase space composed of pieces of fast and slow motions. Can we say anything about the dynamics of the full system for small positive ${\varepsilon}$ near this structure? There is a vast literature devoted to the study of these systems; see, for instance, some of the references in [@Gucken].
This set-up can be generalized to the case of manifolds in a coordinate-free manner [@Arn]. Consider a smooth bundle $M\to B$ whose leaves are copies of a smooth manifold $F$ and assume a vertical vector field $v$ is given on $M$. The latter means that the vector $v(x)$ is tangent to the leaf $F_b$ for any $x\in M$ and $b = p(x)\in B;$ in other words, every leaf $F_b$ is an invariant submanifold of this vector field. Let $v_{\varepsilon}$ be a smooth unfolding of $v = v_0$. Consider the set of zeroes of the vector field $v$: one fixes a leaf $F_b$, on which $v$ generates a vector field $v^b$, and considers its zeroes (equilibria of this vector field). If the linearization of $v^b$ (along the leaf) at a zero $x$, which is a linear operator $Dv^b_x: T_xF_b \to T_xF_b$ on the invariant linear subspace $V_x = T_xF_b$ of $T_xM$, has no zero eigenvalues, then the set of zeroes continues smoothly in $b$ for $b$ close to $b =
p(x)$. This is a consequence of the implicit function theorem. In this case one gets a local section $z: B \to M,$ $p\circ z(b) = b$, whose image is a smooth submanifold $Z$ of dimension $\mbox{dim\;}B.$ One can define a vector field on $Z$ in the following way. Let us represent the vector $v_{\varepsilon}(x)$ in the unique way as $v_{\varepsilon}(x)= v^1_{\varepsilon}(x) \oplus v^2_{\varepsilon}(x)$, a sum of two vectors of which $v^1_{\varepsilon}(x)$ belongs to $V_x$ and $v^2_{\varepsilon}(x)$ lies in $T_xZ$. Then the vector $v^2_{\varepsilon}(x)$ is of order ${\varepsilon}$ in norm, since $v_{\varepsilon}$ depends smoothly on ${\varepsilon}$ and $v^2_0(x)$ is the zero vector. Following Arnold [@Arn], the vector field on $Z$ given by $(d/d{\varepsilon})(v^2_{\varepsilon})$ at ${\varepsilon}= 0$ is called the slow vector field; in coordinate form it gives just what was written above.
It is worth mentioning that one can call the [*slow manifold*]{} the whole set in $M$ that is the zero set of all the vertical fields (for ${\varepsilon}=0$). Generically, this set is a smooth submanifold of $M$, but it can be tangent to the leaves $F_b$ at some of its points. In this case it is also sometimes possible to define a vector field on $Z$ that can be called a slow vector field, but this is a more complicated problem, intimately related to the degenerations of the projection $p$ at these points (the rank of $Dp$ at these points, etc.) [@Arn].
Hamiltonian slow-fast systems
=============================
Now we turn to Hamiltonian vector fields. It is well known that, in order to define a Hamiltonian vector field in an invariant way, the phase manifold $M$ has to be symplectic: a smooth nondegenerate closed 2-form $\Omega$ has to be given on $M$. A slow-fast Hamiltonian system with a Hamiltonian $H(x,y,u,v,{\varepsilon})$ in coordinates $(x,y,u,v)=(x_1,\ldots,x_n,y_1,\ldots,y_n,u_1,\ldots,u_m,v_1,\ldots,v_m)$ has the form $$\label{sfH}
\begin{array}{l}
\displaystyle{{\varepsilon}\dot{x}_i = \frac{\partial H}{\partial y_i},\;
{\varepsilon}\dot{y}_i = -\frac{\partial H}{\partial x_i},\;i= 1,\ldots, n,}\\\\
\displaystyle{\dot{u}_j = \frac{\partial H}{\partial v_j},\;
\dot{v}_j = -\frac{\partial H}{\partial u_j},\;j= 1,\ldots, m.}
\end{array}$$ It is worth mentioning that a symplectic structure for this system is given by the 2-form ${\varepsilon}dx\wedge dy + du\wedge dv$, which depends regularly on ${\varepsilon}$ and degenerates into a Poisson (co-symplectic) structure at ${\varepsilon}=0$. If instead we introduce the fast time $t/{\varepsilon}= \tau$, then for the transformed system one has the 2-form $dx\wedge dy + {\varepsilon}^{-1}du\wedge dv$, which depends on ${\varepsilon}$ in a singular way.
Let us note that the fast system here is evidently a Hamiltonian one, but the same is true for the slow system. Indeed, if $x=p(u,v),$ $y = q(u,v)$ represent solutions of the system $H_x =0,$ $H_y =0$ (the set of zeroes of the fast system), then the differential system $$\dot{u} = H_v,\quad\dot{v} = - H_u,\qquad \mbox{evaluated at }(p(u,v),q(u,v),u,v,0),$$ where [*after differentiation*]{} one plugs $x=p(u,v),$ $y = q(u,v)$ into the Hamiltonian, is Hamiltonian with the Hamilton function $h(u,v)= H(p(u,v),q(u,v),u,v,0)$; this is verified by direct differentiation.
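This chain-rule identity is easy to check symbolically; the sketch below uses a hypothetical Hamiltonian (chosen only so that $H_x=H_y=0$ is explicitly solvable) and verifies that $\partial h/\partial u$, $\partial h/\partial v$ coincide with $H_u$, $H_v$ on the zero set of the fast system:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# A hypothetical Hamiltonian, chosen so that H_x = H_y = 0 is solvable for (x, y).
H = sp.Rational(1, 2) * y**2 + sp.Rational(1, 2) * (x - u)**2 + u * v + sp.Rational(1, 3) * v**3

# Solve the fast equations H_x = 0, H_y = 0 for x = p(u,v), y = q(u,v).
sol = sp.solve([sp.diff(H, x), sp.diff(H, y)], [x, y], dict=True)[0]
p_uv, q_uv = sol[x], sol[y]

# Slow Hamiltonian h(u,v) = H(p(u,v), q(u,v), u, v).
h = H.subs({x: p_uv, y: q_uv})

# dh/du and dh/dv equal H_u and H_v evaluated on the slow manifold,
# because the extra chain-rule terms are multiplied by H_x = H_y = 0 there.
assert sp.simplify(sp.diff(h, u) - sp.diff(H, u).subs({x: p_uv, y: q_uv})) == 0
assert sp.simplify(sp.diff(h, v) - sp.diff(H, v).subs({x: p_uv, y: q_uv})) == 0
print('slow system is Hamiltonian with h(u,v) =', sp.simplify(h))
```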
From this set-up it is clear that the fast system is defined on a Poisson manifold with a regular Poisson structure (of the same rank at every point). Let us consider a smooth bundle $M\to B$ where $(M,\omega)$ is a $C^\infty$-smooth connected Poisson manifold of constant rank with co-symplectic structure $\omega$ [@Wein; @Vais]. We assume that the symplectic foliation of $M$ into symplectic leaves is just the foliation into the leaves of the bundle. Locally such a symplectic foliation exists due to the Weinstein splitting theorem [@Wein]. The dimension of these leaves, due to the regularity of the Poisson structure and the connectedness of $M$, is an integer $2n$, the same over all of $M$, and it is just the rank of the Poisson bracket.
Now assume in addition that $B$ is a smooth symplectic manifold with a symplectic 2-form $\lambda$ and consider the singular symplectic structure on $M$ generated by the 2-form $\Omega_{\varepsilon}= \omega + {\varepsilon}^{-1}p^*\lambda$, where $p^*\lambda$ is the pullback of $\lambda$ to $M$.
For any positive ${\varepsilon}$ the 2-form $\Omega_{\varepsilon}$ is nondegenerate, that is, $(M,\Omega_{\varepsilon})$ is a symplectic manifold.
The proof of this lemma will be given below.
Let $H$ be a smooth function on the total space $M$ of the bundle (the Hamiltonian) and let $X_{H}$ be its Hamiltonian vector field. As is known [@Wein; @Vais], any leaf of the foliation is a smooth invariant symplectic manifold of this vector field. This means that, given a smooth Hamiltonian on $M$, one gets a family of reduced Hamiltonian vector fields on the symplectic leaves, that is, a family of Hamiltonian systems depending on the parameter $b\in B$; the number of parameters is equal to $2m$, where $2(n+m)$ is the dimension of $M$.
The Hamiltonian vector fields on the symplectic leaves are usually called fast, or layer, systems. Thus the foliation of $M$ into symplectic leaves is defined only by the Poisson structure on $M$, but the families of Hamiltonian vector fields on the leaves are defined by the function $H$. Let us set ${\varepsilon}=0$ and fix some symplectic leaf $F_b\subset M$. Suppose the related Hamiltonian vector field $X_{H}|_{F_b}$ has a singular point $p$, i.e. $dH(\xi)|_p =0$ for all $\xi \in T_{p}F_b.$ If its linearization along the leaf $F_b$ at $p$ has no zero eigenvalues, then the singular point persists under variation of $b$: all symplectic leaves close to $F_b$ carry singular points of their Hamiltonian vector fields, and these singular points depend smoothly on the parameters. This gives, locally near $p$, a smooth submanifold of dimension $2m$ that intersects each leaf close to $F_b$ at a unique point. Thus we get in $M$ a smooth local submanifold $SM$. We call it the [*slow manifold*]{} of $X_H.$
More generally, we come to the following construction. Let $M$ be a smooth manifold of even dimension $2n+2m$ and let a Poisson 2-form $\omega$ of constant rank $2n$ be given on $M$. At every point $u\in M$ the tangent space $T_uM$ contains a $2n$-dimensional subspace $V_u\subset T_uM$ on which the form $\omega$ is nondegenerate. Denote by $N_u=(T_uM)^{\perp}$ the skew-orthogonal complement of $T_uM$ w.r.t. $\omega$. Then one has $\dim N_u=2m$ and $T_uM=V_u\oplus N_u$, and the maps $u\to V_u$ and $u\to N_u$ define smooth distributions $V$ and $N$ on $M$.
Due to the Darboux-Weinstein theorem [@Wein; @Vais] the distributions $V$ and $N$ are integrable. Their integral manifolds form smooth foliations $F^V$ and $F^N$ on $M$. Moreover, the restriction of $\omega$ to the leaves of $F^V$ is a symplectic form, while on the leaves of the foliation $F^N$ the form $\omega$ vanishes identically. Let us remark that the foliation of $M$ into the leaves of $V$ can be very complicated [@MR].
Suppose the following conditions hold:

- there exists a $2m$-dimensional manifold $B$ and a submersion $p: M\to B$ whose fibers coincide with the leaves of the foliation $F^V$;

- there is a symplectic 2-form $\lambda$ on $B$.
For any ${\varepsilon}\in\mathbb{R}$ small enough let us set $$\label{construction omega}
\Omega^{\varepsilon}=\varepsilon\omega+p^*\lambda.$$ This defines a 2-form $\Omega^{{\varepsilon}}$ on $M$.
\[omega nondegenerate\] For all ${\varepsilon}\ne 0$ the form $\Omega^{\varepsilon}$ is symplectic.
[[**Proof.**]{}]{} The 2-forms $\omega$ and $\lambda$ are closed. Hence one has $$d\Omega^{{\varepsilon}}=d({\varepsilon}\omega)+d(p^*\lambda)={\varepsilon}d\omega+p^* d\lambda=0.$$ Consider a Darboux-Weinstein chart $(U,\varphi)$ for $\omega$ on $M$. Then for any point $u\in U$ the determinant of the matrix $$A=(\omega(\partial_i^{\varphi}(u),\partial_j^{\varphi}(u))),\quad i,j=1,\dots,2n,$$ is equal to $1$, and for all $r,s=1,\dots,2m$ and $\alpha,\beta=1,\dots,2n+2m$ we get $$\omega(\partial_{2n+r}^{\varphi}(u),\partial_{\beta}^{\varphi}(u))=
\omega(\partial_{\alpha}^{\varphi}(u),\partial_{2n+s}^{\varphi}(u))=0.$$ On the other hand, the restriction $dp|_{N_u}:N_u\to T_aB$ is an isomorphism, where $a=p(u)$. Therefore the vectors $$X_{2n+1}=dp(\partial_{2n+1}^{\varphi}(u)),\dots,X_{2n+2m}=dp(\partial_{2n+2m}^{\varphi}(u))$$ form a basis of $T_aB$. Thus the matrix $$R=(\lambda(X_{2n+r},X_{2n+s})),\quad r,s=1,\dots,2m,$$ is non-degenerate.

The last point is to remark that w.r.t. the holonomic basis $\partial_{1}^{\varphi}(u),\dots,\partial_{2n+2m}^{\varphi}(u)$ of the space $T_uM$ the matrix of the form $\Omega^{\varepsilon}$ is $$C=
\begin{pmatrix}
\varepsilon A & 0\\
0 & R
\end{pmatrix},$$ thus $\det{C}=\varepsilon^{2n}\det{A}\det{R}=\varepsilon^{2n}\det{R}\ne 0$.$\blacksquare$
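For the simplest case $n=m=1$, with hypothetical Darboux-Weinstein coordinates $(x,y,u,v)$, $\omega=dx\wedge dy$ and $\lambda=du\wedge dv$, the block-determinant $\det{C}=\varepsilon^{2n}\det{R}$ can be checked directly:

```python
import sympy as sp

eps = sp.Symbol('epsilon')
A = sp.Matrix([[0, 1], [-1, 0]])   # matrix of omega = dx ^ dy in the basis (d/dx, d/dy)
R = sp.Matrix([[0, 1], [-1, 0]])   # matrix of lambda = du ^ dv in the basis (d/du, d/dv)
C = sp.diag(eps * A, R)            # block matrix of Omega^eps = eps*omega + p^*lambda
print(sp.factor(C.det()))          # eps**2, nonzero for eps != 0
```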
Symplectic submanifolds
=======================
Consider now a $2m$-dimensional smooth submanifold $S\subset M$.
\[omega on S\] If $S$ possesses the properties

- at each point $v\in S$ the submanifold $S\subset M$ intersects transversely the leaf of the foliation $F^V$ through $v$,

- $S$ is compact,

then there is $\varepsilon_0 > 0$ such that for all $\varepsilon\in (0,\varepsilon_0)$ the restriction of the form $\Omega^{\varepsilon}$ to $S$ is non-degenerate and hence is a symplectic form.
[[**Proof.**]{}]{} Consider first an arbitrary point $u\in M$ and its projection $a=p(u)$. Since $p:M\to B$ is a submersion, there are a neighborhood $U'\subset M$ of $u$, a neighborhood $W'\subset B$ of $a$, a $2n$-dimensional smooth manifold $Q$ and a diffeomorphism $\psi:W'\times Q\to U'$ such that $p\circ\psi(b,q)= b$ for any $b\in W'$ and $q\in Q$.

For the point $a$ there is a Darboux chart $(W_a,\theta)$ of $B$ w.r.t. the symplectic form $\lambda$. Without loss of generality one may assume that $W_a\subset W'$. Let us set $U_u=\psi(W_a\times Q)$ and consider $q\in Q$ such that $u=\psi(a,q)$. Since $B$ and $Q$ are manifolds, for the points $a$ and $q$ there are neighborhoods $W_a^0$ and $Q^0$ whose closures are compact and contained in $W_a$ and $Q$, respectively. Moreover, the set $U_u^0=\psi(W_a^0\times Q^0)$ is a neighborhood of the point $u$ and its closure is also compact and contained in $U_u$.
On $W_a$ the holonomic vector fields $Y_{r}=\partial_{r}^{\theta}$, $r=1,\dots,2m$, are given. At any point of $W_a$ the matrix $(\lambda(Y_{r},Y_{s}))$, $r,s=1,\dots,2m$, has the canonical form. Therefore one has $$\label{det of nu}
\det(\lambda(Y_{r},Y_{s}))\equiv 1.$$ By the first condition of the lemma, for any point $v\in S\cap U_u$ one has $T_vM=V_v\oplus T_vS$. Consequently, the restriction $dp|_{T_vS}:T_vS\to T_{p(v)}B$ is an isomorphism. Setting $Y_r^*(v)=(dp|_{T_vS})^{-1}(Y_r(p(v)))$ for all $v\in S\cap U_u$ and $r=1,\dots,2m$ we get smooth vector fields $Y_1^*,\dots,Y_{2m}^*$ on $S\cap U_u$. At any point $v\in S\cap U_u$ the vectors $Y_1^*(v),\dots,Y_{2m}^*(v)$ form a basis of the tangent space $T_vS$.
Due to (\[construction omega\]) we get $$\Omega^{\varepsilon}(Y_r^*,Y_s^*)=\varepsilon\omega(Y_r^*,Y_s^*)+\lambda(Y_r,Y_s)$$ for all $r,s=1,\dots,2m$. Thus for the matrix $D=(\Omega^{\varepsilon}(Y_r^*,Y_s^*))$ one has $$\label{det of omega on S}
\det{D}=f_{2m}\varepsilon^{2m}+\dots+f_1\varepsilon+f_0,$$ where for any $t=0,1,\dots,2m$ the coefficient $f_t$ is the sum of all determinants in which $t$ rows coincide with the related rows of the matrix $(\omega(Y_r^*,Y_s^*))$ and the other rows belong to the matrix $(\lambda(Y_r,Y_s))$. It follows that $f_t:S\cap U_u\to\mathbb{R}$ are smooth functions and, due to the identity (\[det of nu\]), one has $$\label{f_0}
f_0=\det(\lambda(Y_{r},Y_{s}))\equiv1.$$
Because the closure of the set $S\cap U_u^0$ is compact and contained in $S\cap U_u$, the functions $f_t$ are bounded on $S\cap U_u^0$. Hence there is $\varepsilon_u>0$ such that $$\label{f_k}
|f_{2m}\varepsilon^{2m}+\dots+f_1\varepsilon|<1$$ on $S\cap U_u^0$ for any $\varepsilon\in(0,\varepsilon_u)$.

The collection $\mathcal{U}=\{U_u^0\,|\,u\in S\}$ is a cover of the manifold $S$ by open sets. Since $S$ is compact, we conclude $S\subset U_{u_1}^0\cup\dots\cup U_{u_l}^0$ for some finite set of points $u_1,\dots,u_l\in S$. Let us set $\varepsilon_0=\min\{\varepsilon_{u_1},\dots,\varepsilon_{u_l}\}$. Then for any $\varepsilon\in(0,\varepsilon_0)$ the inequality (\[f\_k\]) is valid on every set $S\cap U_{u_1}^0,\dots,S\cap U_{u_l}^0$.

If now $v$ is any point of $S$, there exists an $i\in\{1,\dots,l\}$ such that $v\in S\cap U_{u_i}^0$. Then for any $\varepsilon\in(0,\varepsilon_0)$ it follows from (\[det of omega on S\]), (\[f\_0\]) and (\[f\_k\]) that $\det{D}\ne 0$. This implies that the 2-form $\Omega^{\varepsilon}$ is non-degenerate on the tangent space $T_vS$.$\blacksquare$
Slow manifold and nearby orbit behavior
=======================================
Henceforth we assume that a Poisson bundle $p: M \to B$ is given, where $M$ is a $C^\infty$-smooth Poisson manifold with 2-form $\omega$, $B$ is a smooth symplectic manifold with 2-form $\lambda$, and $p$ is a smooth bundle map whose fibers define the symplectic foliation of $M$ given by $\omega.$ Suppose a smooth function $H$ on $M$ is given. The set of all zeroes of all the vertical (fast) Hamiltonian vector fields generated by $H$ forms a subset of $M$ which is generically a smooth submanifold $SM$ of dimension $2m =
\mbox{dim\;}B$. We assume this is the case and call $SM$ the [*slow manifold*]{} of the vector field $X_H$. When restricted to $SM$, the related map $p_r: SM \to B$ may have points of two types in $SM$: regular and singular. A point $s\in SM$ is [*regular*]{} if ${\rm rank}\, Dp_r(s) = 2m = \mbox{dim\;}B$; this implies that $p_r$ is a diffeomorphism near $s$. In contrast, a point $s\in
SM$ is [*singular*]{} if the rank of $Dp_r$ at $s$ is less than $2m$.
Another characterization of regular and singular points appeals to the type of the related equilibrium of the fast Hamiltonian system on the symplectic leaf containing $s$. The point $s$ is regular if the related fast Hamiltonian vector field has a simple equilibrium at $s$, i.e. an equilibrium on the related leaf of the symplectic foliation whose linearization has no zero eigenvalues. In contrast, for a singular point $s\in SM$ the equilibrium of the fast Hamiltonian vector field on the symplectic leaf through $s$ is degenerate, that is, it has a zero eigenvalue. All this will be seen below in local coordinates, though it is possible to show it in a coordinate-free way.
Near regular points of $SM$
---------------------------
Near a regular point $s$, the manifold $SM$ can be represented, by the implicit function theorem, as the graph of a smooth section $z: U \to M$, $p(s)\in U\subset B,$ $p\circ z = id_U$. Due to Lemma \[omega on S\], such a compact piece of $SM$ is a symplectic submanifold w.r.t. the restriction of the 2-form $\Omega_{{\varepsilon}}={{\varepsilon}}^{-1}\Omega^{{\varepsilon}}$ to $SM$. Hence one can define on it a slow Hamiltonian vector field generated by the function $H$. In the same way one can treat the case when the function $H$ on $M$ itself depends smoothly on the parameter ${\varepsilon}$. Let $X_H$ be the Hamiltonian vector field on $M$ w.r.t. the 2-form $\Omega_{\varepsilon}$ generated by $H$. Denote by $H^S$ the restriction of $H$ to $SM$ and consider the Hamiltonian vector field on $SM$ with Hamiltonian $H^S$ w.r.t. the restriction of the 2-form $\Omega_{\varepsilon}$ to $SM$. This vector field is of order ${\varepsilon}$, hence ${\varepsilon}^{-1}$ times this vector field has a limit as ${\varepsilon}\to 0.$ This limit is what is called the [*slow Hamiltonian vector field*]{} on $SM$.
It is an interesting problem to understand the orbit behavior of the full system (for small ${\varepsilon}> 0$) within a small neighborhood of a compact piece of regular points of $SM$. This question is very hard in the general set-up. Nevertheless, there is a rather simple general case to examine if one assumes the hyperbolicity of this piece of $SM$. Suppose that on a piece of $SM$ the fast systems have equilibria at $SM\cap F_b$ (for ${\varepsilon}= 0$) whose eigenvalues have nonzero real parts (hyperbolic equilibria in the common terminology; see, for instance, [@SSTC; @Meiss]). Then this smooth submanifold of the vector field $X_H$ on $M$ is a normally hyperbolic invariant manifold and the results of [@Fenichel; @HPS] are applicable. They say that for ${\varepsilon}> 0$ small enough there is a true invariant manifold in an $O({\varepsilon})$-neighborhood of that piece of $SM$. For the full system this invariant manifold is hyperbolic and possesses its stable and unstable local smooth invariant manifolds. The restriction of $X_{H_{\varepsilon}}$ to this slow manifold can be any possible Hamiltonian system with $m$ degrees of freedom. If we add to this picture the structure of the fast systems along with their bifurcations w.r.t. the slow variables (the parameters of the fast systems), then one can say a lot about the full system for small positive ${\varepsilon}$. This behavior is the topic of averaging theory, the theory of adiabatic invariants, etc.; see, for instance, [@Verhulst; @Neishtadt].
A much more subtle problem is to understand the local dynamics of the full system near $SM$ when the fast dynamics possesses elliptic equilibria at the points of $SM$; this corresponds to fast systems with one degree of freedom. Nonetheless, one can present some details of this picture in the real analytic case (manifolds and Hamiltonian), where the results of [@GL1] can be applied. For this case $SM$ was called in [@GL1] an [*almost invariant elliptic*]{} slow manifold. At ${\varepsilon}= 0$, near a piece of the almost elliptic slow manifold one can introduce a coordinate frame in which this slow manifold corresponds to the zero section of the bundle $M\to B$. Then the main result of [@GL1] is applicable in the case when the fast system is two-dimensional, the slow system is of any finite dimension, and the Hamiltonian is analytic in a neighborhood of $SM$. The result asserts an exponentially accurate (with respect to the small parameter ${\varepsilon}$) reduction of the Hamiltonian to a form where the fast variables $(x,y)$ enter, locally near $SM$, only in the combination $I=(x^2+y^2)/2$. Thus, up to an exponentially small error, the system has an additional integral $I$. In particular, if the slow system is also two-dimensional, this gives, up to an exponentially small error, an integrable system in a small neighborhood of $SM.$ All this helps a lot when one is interested in the dynamics within this neighborhood; see, for instance, [@GL2]. The case when the fast system has more degrees of freedom and the related equilibria are multi-dimensional elliptic ones is harder, and no results are known to date. The case of fast equilibria with eigenvalues lying both on the imaginary axis and off it is even less explored.
Near a singular point of $SM$
-----------------------------
At a singular point $s\in SM$ the submanifold $SM$ is tangent to the related leaf of the symplectic foliation. More precisely, the rank of the restriction of $Dp$ to the tangent plane $T_sSM$ is less than the maximal value $2m$. This means ${\rm Ker}\, Dp|_{T_sSM} \ne \{0\}$. If we choose some Darboux-Weinstein coordinate chart $(x,y,u,v)$ near $s$, then the Poisson form in these coordinates is $\Omega_0 = dx\wedge dy$ and the symplectic leaves are given as $(u,v)=(u_0,v_0)$. Let $H(x,y,u,v)$ be a smooth Hamilton function written in these coordinates, with $dH \ne 0$ at the point $s$. The Hamiltonian vector field near $s$ is given as $$\dot x = H_y,\; \dot y = - H_x,\; \dot u =0, \;\dot v =0.$$ The condition that the point $s$ belongs to $SM$ (i.e. is a singular point of the fast vector field) means $H_y(s)=
H_x(s)=0$, and if this point is a singular point of the projection $p_r$, then $$\det\begin{pmatrix}\frac{\partial^2 H}{\partial (x,y)^2}\end{pmatrix}|_s=0,$$ otherwise the system $H_y = 0,\;H_x=0$ could be solved w.r.t. $x,y$ and the points of $SM$ would be regular. Thus the fast vector field at the point $s$ has a zero eigenvalue, and its eigenspace is invariant w.r.t. the linearization of the fast vector field at $s$. This is equivalent to the condition that $Dp$ restricted to $T_sSM$ has a nonzero kernel and hence its rank is less than $2m$.
The types of degenerations of mappings of one smooth manifold to another (in our case $p_r:SM \to B$) are studied by the singularity theory of smooth maps [@Whitney; @Arn_GZ]. When the dimensions of $B$ (and $M$) are large, these degenerations can be very complicated. Keeping this in mind, we consider below only the simplest case of one fast and one slow degree of freedom. In this case $SM$ and $B$ have dimension 2, and we deal with a mapping from one smooth two-dimensional manifold to another. For such maps the degenerations can generically be only of two types: folds and cusps [@Whitney]. We shall show that near a fold point the system can be reduced to a family of slowly varying Hamiltonian systems with Hamiltonians $H(x,y,{\varepsilon}t,c)$, with scalar $x,y$, a small positive parameter ${\varepsilon},$ and a small parameter $c$. For a fixed $c$ the slow manifold of this one-and-a-half degree of freedom system is a slow curve, and the fold point corresponds to a quadratic tangency of the slow curve with the related two-dimensional symplectic leaf. The fast (frozen) Hamiltonian system with one degree of freedom has an equilibrium, corresponding to the tangency point, which is generically a parabolic equilibrium with a double non-semisimple zero eigenvalue. The local orbit behavior of these fast systems does not change when $c$ varies near a fixed $c_0$.
For the case of a cusp we again get a slow-fast Hamiltonian system with a two-dimensional slow manifold, but at ${\varepsilon}=0$ the fast Hamiltonian system has on the related symplectic leaf an equilibrium of the type of a degenerate saddle or a degenerate elliptic point (both of codimension 2). In this case it is also possible to reduce the system to a family of slowly varying nonautonomous Hamiltonian systems, but the behavior of these systems near $s$ depends essentially on the parameter $c$.
Henceforth we consider only the case of Hamiltonian systems with one slow and one fast degree of freedom, that is, $M$ is a smooth 4-dimensional regular Poisson manifold of rank 2 with 2-form $\omega$, $B$ is a smooth symplectic 2-dimensional manifold with a symplectic 2-form $\lambda$, and $p:M \to B$ is a Poisson bundle whose fibers $F_b$ coincide with the symplectic foliation generated by $\omega$. The symplectic structure on $M$ is given by the 2-form $\Omega_{\varepsilon}= \omega + {\varepsilon}^{-1}p^*\lambda.$
Folds for the slow manifold projection
======================================
Let a smooth Hamilton function $H$ on $M$ be given (here and below $C^\infty$-smoothness is assumed). We suppose $H$ to be non-degenerate within the neighborhood of the point $s$ we work in: $dH(s) \ne 0$. Then the levels $H=c$ are smooth 3-dimensional disks within this neighborhood. Since the consideration is local, we can work in Darboux-Weinstein coordinates near the point $s$, hence we suppose the 2-form $\Omega_{\varepsilon}$ is written as $\Omega_{\varepsilon}= dx\wedge dy + {\varepsilon}^{-1}du\wedge dv$ with fast variables $x,y$ and slow variables $u,v$. The related Poisson manifold is endowed locally with the 2-form $\omega = dx\wedge
dy$, its symplectic leaves are given as $(u,v)=(u_0,v_0)\in U$, where $U\subset B$ is a disk with coordinates $(u,v).$ The restriction of $H$ to a symplectic leaf is the function $H(x,y,u_0,v_0)$, and the orbit foliation of this fast vector field on the leaf is in fact given by the level lines $H(x,y,u_0,v_0) = c.$ Without loss of generality we assume $s$ to be the origin of the coordinate frame, $s=(0,0,0,0).$ Henceforth we assume $s$ to be an equilibrium of the fast Hamiltonian system on the leaf $(0,0):$ $H_y(0,0,0,0)= H_x(0,0,0,0)=0.$
The equilibrium $s$ of the fast Hamiltonian vector field $$\dot x = \frac{\partial H}{\partial y},\;\dot y = -\frac{\partial H}{\partial x},\;u,v\,\mbox{are\; parameters},$$ varies with the parameters $(u_0,v_0)$, and its unfolding in the parameters gives a local piece of the slow manifold $SM$ near the point $s=(0,0,0,0)$ in the space $(x,y,u,v).$ Locally near $s$ the slow manifold is indeed a smooth 2-dimensional disk if the rank of the matrix $$\label{2X2}
\begin{pmatrix}H_{xy}&H_{yy}&H_{uy}&H_{vy}\\H_{xx}&H_{yx}&H_{ux}&H_{vx} \end{pmatrix}$$ at $s$ is 2. We suppose this is the case; it is a genericity condition on the function $H$. Then we may consider the restriction of the projection map $p$ to $SM$, which generates a mapping $p_r: SM \to B$ of two smooth 2-dimensional manifolds. If the inequality $\Delta = H_{xy}^2 - H_{yy}H_{xx} \ne 0$ holds at $s$ (i.e. $s$ is a regular point of $SM$ and simultaneously a simple equilibrium of the related fast vector field, that is, one without zero eigenvalues), then by the implicit function theorem the set of solutions of the system $H_y =0,\;H_x=0$ near $s$ is expressed as $x = f(u,v),$ $y=g(u,v)$; hence locally it is a section of the bundle $p: M\to B$ and $Dp_r$ does not degenerate on this set in some neighborhood of $s$: $p_r$ is a diffeomorphism. In other words, the points of $SM$ near $s$ are regular points of $p_r$ in the sense of singularity theory [@Whitney; @Arn_GZ].
Thus, a degeneration only happens if $\Delta(s) = 0$. This equality is equivalent to the condition that the fast Hamiltonian vector field on the related symplectic leaf $F_b,$ $b=p(s),$ has a degenerate equilibrium at the point $s$: its linearization at $s$ possesses a zero eigenvalue (necessarily a double one). Another characterization of such a point is that it is a [*singular*]{} point of the mapping $p_r$: the rank of this mapping at $s$ is less than 2. For the further goals one needs to distinguish the singular points of general type that are possible for smooth maps of one 2-dimensional manifold to another.
According to [@Whitney], a singular point $q$ of a $C^2$-smooth map $F: U\to V$ between two open domains in smooth 2-dimensional manifolds is [*good*]{} if the function $J=\det\,DF$ vanishes at $q$ but its differential $dJ$ does not vanish at this point. In a neighborhood of a good singular point $q$ there is a smooth curve of other singular points continuing $q$ [@Whitney]. Let $\varphi(\tau)$ be a smoothly parametrized curve of singular points through a good singular point $q$, with $\tau =0$ corresponding to $q$. Below we use the notation $A^\top$ for the transpose of a matrix $A$.
According to [@Whitney], a good singular point $q$ is called a [*fold*]{} point if $DF(\varphi'(0))\ne (0,0)^\top$, and it is called a [*cusp*]{} point if at $q$ one has $DF(\varphi'(0))= (0,0)^\top$ but $d^2(F\circ \varphi)/d\tau^2|_q \ne (0,0)^\top.$ It is worth remarking that if $q$ is a fold, then for any nearby point on the curve $\varphi(\tau)$ there is a unique (up to a constant factor) nonzero vector $\xi$ in the tangent space at the related point such that $DF(\xi)=0.$ The direction spanned by $\xi$ is transverse to the tangent direction of the singular curve at the fold point, since $DF(\varphi'(0))\ne (0,0)^\top.$ These transverse directions form a smooth transverse direction field on the curve if the curve is at least $C^3$-smooth.
Now let us return to the set of points of $SM$ near $s$ where $p_r$ degenerates. To be definite, we assume $H_{yy}H_{ux} - H_{xy}H_{uy} \ne 0$ at $s$; then $SM$ near $s$ is represented as a graph $y=f(x,v),$ $u=g(x,v)$, $f(0,0)=g(0,0)=0$ (we can always assume this is the case; otherwise one can achieve it by re-ordering the slow and/or fast variables). The only remaining case, when the rank of (\[2X2\]) equals 2 but the only nonzero minor is $H_{yu}H_{vx} - H_{xu}H_{yv}$ while all other five vanish, would indicate a too degenerate situation and we do not consider it below (recall that we have assumed the rank of the matrix (\[2X2\]) to be 2). Indeed, in that case the following lemma is valid.
\[matrix 2x4\] Suppose matrix $$A=
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}
\end{pmatrix}$$ possesses the properties:
- $$\det
\begin{pmatrix}
a_{13} & a_{14}\\
a_{23} & a_{24}
\end{pmatrix}\ne 0,$$
- all other minors of the second order for matrix $A$ vanish.
Then the following equalities hold $a_{11} = a_{12} = a_{21} = a_{22} = 0$.
[[**Proof.**]{}]{} Suppose the assertion of the Lemma is false. Then, up to re-enumeration of the rows and of the first two columns, one may assume $a_{11}\ne 0$. By assumption, all minors of the second order of the matrix $$\begin{pmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}
\end{pmatrix}$$ vanish. This means that its rows are linearly dependent. But the first row is a nonzero vector of $\mathbb{R}^3$. Hence there is $\kappa_1\in\mathbb{R}$ such that $(a_{21}, a_{22}, a_{23})=\kappa_1(a_{11}, a_{12}, a_{13})$.

Analogously, since all second order minors of the matrix $$\begin{pmatrix}
a_{11} & a_{12} & a_{14}\\
a_{21} & a_{22} & a_{24}
\end{pmatrix}$$ vanish and $a_{11}\ne 0$, there exists a number $\kappa_2\in \mathbb{R}$ which satisfies the equality $(a_{21}, a_{22}, a_{24})=\kappa_2(a_{11}, a_{12}, a_{14})$. But then one has $\kappa_1a_{11}=a_{21}=\kappa_2a_{11}$, whence $a_{11}(\kappa_1-\kappa_2)=0$ and $\kappa_1=\kappa_2$. Thus the rows of the matrix $A$ are linearly dependent, which contradicts the first property in the conditions of the Lemma.$\blacksquare$
For the case under consideration the coordinate representation of the mapping $p_r$ is $p_r: (x,v)\to
(u=g(x,v),v)$. The Jacobian of this mapping is the matrix $$P=Dp_r =\begin{pmatrix}g_x&g_v\\0&1\end{pmatrix},$$ whose rank at $(0,0)$ is 2 if $g_x(0,0)\ne 0$ (the point is regular), and is 1 if $g_x(0,0)= 0$ (the point is singular). The singular point $(0,0)$ is good if $g_x(0,0)=0$ and $g_{xx}(0,0)\ne 0$ or $g_{xv}(0,0)\ne 0,$ and it is a fold if, in addition, one has $P(\xi)\ne (0,0)^\top,$ where $\xi$ is the tangent vector to the singular curve through $(0,0)$. When $g_{xx}(0,0)\ne 0$, the singular curve has a representation $(l(v),v),$ $l(0)=0,$ so $\xi$ is $(l'(0),1)^\top.$ Since $l$ solves the equation $g_x(x,v)=0,$ one has $l'(0)=-g_{xv}(0,0)/g_{xx}(0,0)$ and $P(\xi)=(g_v(0,0),1)^\top\ne (0,0)^\top.$ Thus the singular point $(0,0)$ is indeed a fold if $g_x(0,0)=0$ but $g_{xx}(0,0)\ne 0$.
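These fold conditions are easy to verify in a concrete example; the sketch below uses the hypothetical Hamiltonian $H = v + ux + x^3/3 + y^2/2$ (chosen only for illustration) and checks that the origin is a singular point of $p_r$ of fold type:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# Hypothetical Hamiltonian with a fold of the slow-manifold projection at the origin.
H = v + u * x + x**3 / 3 + y**2 / 2

# Slow manifold SM: H_x = H_y = 0, solved as y = f(x,v), u = g(x,v).
f = sp.solve(sp.diff(H, y), y)[0]          # f(x,v) = 0
g = sp.solve(sp.diff(H, x), u)[0]          # g(x,v) = -x**2

Delta = sp.diff(H, x, y)**2 - sp.diff(H, y, 2) * sp.diff(H, x, 2)
print('Delta at s:', Delta.subs({x: 0, y: 0, u: 0, v: 0}))   # 0 -> singular point of p_r

gx, gxx = sp.diff(g, x), sp.diff(g, x, 2)
print('g_x(0,0) =', gx.subs({x: 0, v: 0}),
      ', g_xx(0,0) =', gxx.subs({x: 0, v: 0}))               # 0 and -2 -> fold point
```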
Now assume $g_x(0,0)=g_{xx}(0,0)=0$ but $g_{xv}(0,0)\ne 0.$ Then the singular curve has a representation $(x,r(x)),$ $r(0)=0,$ and the vector $\xi$ is $(1,r'(0))^\top$. Because $r(x)$ again solves the equation $g_x(x,v)=0,$ we have the equality $$r'(0)=-\frac{g_{xx}(0,0)}{g_{xv}(0,0)}=0.$$ It follows that $P(\xi)= (0,0)^\top.$ Therefore, if $g_{xx}(0,0)=0$ the singular point $(0,0)$ is not a fold. In this latter case, to verify that it is a cusp one needs to calculate $d^2(g(x,r(x)),r(x))/dx^2$ at the point $(0,0).$ The calculation gives $$(g_{xx}(0,0)+2g_{xv}(0,0)r'(0)+g_{vv}(r'(0))^2 + g_v(0,0)r^{\prime\prime}(0),r^{\prime\prime}(0)) =
(g_v(0,0)r^{\prime\prime}(0),r^{\prime\prime}(0)),$$ due to the equalities $g_{xx}(0,0)=0,$ $r'(0)=0.$ Thus, if $r^{\prime\prime}(0)\ne 0$, the second derivative is a nonzero vector and the point is a cusp. For the derivative $r^{\prime\prime}(0)$ we have $$r^{\prime\prime}(0)= -\frac{g_{xxx}(0,0)}{g_{xv}(0,0)}.$$ Thus, the conditions for a singular point to be a cusp are cast as two equalities and two inequalities $$\label{cusp}
g_x(0,0)= g_{xx}(0,0)=0,\;g_{xv}(0,0)\ne 0,\;g_{xxx}(0,0)\ne 0.$$
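Similarly, the cusp conditions (\[cusp\]) can be checked for a hypothetical slow-manifold graph $u=g(x,v)$; the choice $g=vx-x^3$ below is purely illustrative:

```python
import sympy as sp

x, v = sp.symbols('x v')

# Hypothetical g(x,v) describing the slow manifold as u = g(x,v); chosen to produce a cusp.
g = v * x - x**3

conds = {
    'g_x(0,0)':   sp.diff(g, x).subs({x: 0, v: 0}),
    'g_xx(0,0)':  sp.diff(g, x, 2).subs({x: 0, v: 0}),
    'g_xv(0,0)':  sp.diff(g, x, v).subs({x: 0, v: 0}),
    'g_xxx(0,0)': sp.diff(g, x, 3).subs({x: 0, v: 0}),
}
print(conds)   # {0, 0, 1, -6}: the two equalities and two inequalities hold
```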
The position of the singular curve w.r.t. the levels $H=c$ is important. In particular, we need to know when the intersection of this curve with the submanifold $H=H(s)$ is transverse and when it is not. In the transverse case, for all $c$ close enough to $H(s)$ the intersection of this curve with the level $H=c$ is also transverse and hence consists of one point. In the nontransverse case we need to know what happens for close levels. In a sense, the transversality condition can be considered as a genericity condition on the chosen function $H$.
The point $s$ is a fold point of the restricted map $p_r : SM \to B$ if, in addition to the equality $\Delta(s) = 0,$ i.e. $g_{x}(0,0)=0$, the condition of quadratic tangency $g_{xx}(0,0)\ne 0$ is satisfied. This is equivalent to the inequality $\Delta_{x}(s) \ne 0$ and allows one to express, from the equation $g_x =0$, the singular curve on $SM$ near $s$ in the form $x=l(v),$ $l(0)=0.$ To determine whether this curve intersects the level $H=H(s)$ transversely, let us calculate the derivative $$\frac{d}{dv}H(l(v),f(l(v),v),g(l(v),v),v)|_{v=0}= H_u(0,0,0,0)g_v(0,0)+H_v(0,0,0,0).$$ Taking into account that $f(x,v),g(x,v)$ are solutions of the system $H_y =
0,$ $H_x = 0$ near the point $s$, we can calculate $g_v(0,0)$; then the numerator of this derivative at the point $s$ does not vanish if $$\label{parab}
H_{xy}[H_uH_{yv}-H_vH_{yu}]-H_{yy}[H_uH_{xv}-H_vH_{xu}] \ne 0.$$ It is hard to calculate this quantity when $H$ is taken in a general form. Therefore we transform $H$ near $s$ to a more tractable form w.r.t. the variables $(x,y)$ with parameters $(u,v).$ In order not to care about smoothness, we assume henceforth that all functions are $C^\infty$. The following assertion is valid.
\[y\_square\] Suppose a smooth function $H(x,y,u,v)$ is given such that at the parameter values $(u,v)=(0,0)$ the Hamiltonian system $$\dot x = H_y,\;\dot y = - H_x$$ has a degenerate equilibrium $(x,y)=(0,0)$ whose linearization has a double zero non-semisimple eigenvalue (the related Jordan block is two-dimensional). Then there exists a $C^{\infty}$-smooth transformation $\Phi: (x,y)\to (X,Y)$, smoothly depending on the parameters $(u,v)$ and respecting the 2-form $dx\wedge dy$, such that the Hamiltonian $H\circ \Phi$ in the new variables $(X,Y)$ takes the form $$\label{fold}
H(X,Y,u,v)= h(X,u,v)+ H_{1}(X,Y,u,v)Y^2,$$ with $H_1(0,0,0,0)\ne 0$.
[[**Proof.**]{}]{} We act as follows. Consider first the Hessian $(a_{ij})$ of $H$ in the variables $(x,y)$ at the point $(x,y)=(0,0)$ on the leaf $(u,v)=(0,0).$ Its determinant vanishes, but not all its entries are zero and its rank is 1, since we supposed the zero eigenvalue to be non-semisimple. Then one has either $a_{11}\ne 0$ or $a_{22}\ne 0$, due to the symmetry of the Hessian and the assumption of non-semisimplicity of the zero eigenvalue. To be definite we assume $a_{22}\ne 0$; in our case above this follows from the assumption that the minor $H_{yy}H_{xu}-H_{xy}H_{yu}\ne 0.$ We first solve the equation $H_y =0$ in a neighborhood of the point $(0,0,0,0).$ In view of the implicit function theorem, since $a_{22}= H_{yy}(0,0,0,0)\ne 0$, the equation has a solution $y=f(x,u,v),$ $f(0,0,0)=0.$ After the shift transformation $x=X,$ $y= Y + f(X,u,v)$ we get the transformed Hamiltonian $\hat H(X,Y,u,v)$ of the form $$\begin{array}{l}
\hat H(X,Y,u,v)=h(X,u,v)+ Y \tilde{H}(X,Y,u,v),\;\tilde{H}(X,0,u,v)= H_y(X,f(X,u,v),u,v)\equiv 0,\\h(X,u,v)=H(X,f(X,u,v),u,v).
\end{array}$$ Thus the function $\tilde{H}$ can also be represented as $\tilde{H}= Y\bar{H},$ $\bar{H}(0,0,0,0)= H_{yy}(0,0,0,0)/2 \ne 0.$ It is worth noting that $(u,v)$ are considered as parameters, therefore the transformation $(x,y)\to (X,Y)$ respects the 2-form: $dx\wedge dy = dX\wedge dY$.$\blacksquare$
We now restore the notation $(x,y)$ and assume $H$ to be in the form (\[fold\]). Then the condition that $(0,0,0,0)$ is an equilibrium of the fast system is $y=0,$ $h_x(0,0,0)=0,$ and the condition that it is degenerate and non-semisimple is $h_{xx}(0,0,0)=0,$ since $H_1(0,0,0,0)\ne
0$. In this case the requirement that the slow manifold be a smooth solution of the equation $h_x(x,u,v)=0$ near $(0,0,0,0)$ leads to the inequalities $h_{xu}(0,0,0)\ne 0$ or $h_{xv}(0,0,0)\ne 0$. The latter case can always be converted into the first one by renaming the slow variables, and we assume this has been done. Then the slow manifold has a representation $y=0,$ $u=g(x,v),$ $g(0,0)=0,$ $g_x(0,0)=0.$ Finally, the point $(0,0,0,0)$ on the slow manifold is a fold if $g_{xx}(0,0)\ne 0$, which is equivalent to the inequality $h_{xxx}(0,0,0)\ne 0.$
Now we expand $h(x,u,v)$ in $x$ up to third order terms: $$H(x,y,u,v)=h(x,u,v)+ H_1(x,y,u,v)y^2 = h_0(u,v)+a(u,v)x+b(u,v)x^2 + c(u,v)x^3 + O(x^4) + H_1y^2.$$ Here one has $a(0,0)=0$, $a_u(0,0)\ne 0,$ $b(0,0)=0,$ $c(0,0)\ne 0.$ We have some freedom to change the parameters $(u,v).$ Using the inequality $a_u(0,0)\ne
0,$ we introduce the new parameter $u_1 = a(u,v).$ To get $v_1$ we apply a symplectic transformation w.r.t. the 2-form $du\wedge dv$. To that end we express $u=\hat{a}(u_1,v)= R_{v}$, where for $|u_1|, |v|$ small enough $$R(u_1,v)=\int\limits_0^v \hat{a}(u_1,z)dz,\;\frac{\partial^2 R}{\partial u_1\partial v} = \frac{\partial \hat a}{\partial u_1}\ne 0.$$ Then one has $v_1 = R_{u_1}$ and $du\wedge dv = du_1\wedge dv_1.$ After this transformation, which does not touch the variables $x,y$, we come to the following form of $H$: $$\label{param}
H(x,y,u_1,v_1)= h_0(u_1,v_1)+ u_1x+\hat{b}(u_1,v_1)x^2 + \hat{c}(u_1,v_1)x^3 + O(x^4) + \hat{H}_1 y^2.$$ In this form we can check the transversality of the singular curve on $SM$ and the submanifold $H=h_0(0,0)=c_0$ at the point $(0,0,0,0)$. Since we have $H^0_{xy}=0$, $H^0_{yy}\ne 0$ (the superscript $0$ means evaluation at the point $(0,0,0,0)$), the inequality (\[parab\]) is cast (we restore the notation $u,v$ again) as $$H^0_u H^0_{xv} - H^0_v H^0_{xu}\ne 0,$$ which, expressed via $h_0$ and $a = u$, reads $$-\frac{\partial h_0}{\partial v}(0,0)\ne 0.$$ Thus we come to the conclusion:
[*if the function $H$ is generic, then the singular point $s$ on $SM$ being a fold of the mapping $p_r$ is equivalent to the condition that this point is a parabolic equilibrium on the related symplectic leaf and the unfolding of $H$ in the parameters $(u,v)$ is generic.*]{}
At the next step we want to reduce the dimension of the system near the fold point $s\in SM$ and obtain a smooth family of nonautonomous Hamiltonian systems with one degree of freedom. This will allow us to describe the principal part of the system near the singularity using a rescaling of the system near $s$. We shall work in coordinates where $H$ takes the form (\[fold\]).
In the Darboux-Weinstein coordinates $(x,y,u,v)$ the slow-fast Hamiltonian system with the Hamiltonian $H$ of the form (\[fold\]) is written near $s$ as follows: $$\label{sfloc}
\dot x = H_y,\;\dot y = - H_x,\;\dot u = {\varepsilon}H_v,\;\dot v = - {\varepsilon}H_u.$$ Without loss of generality, one can assume $H(s)=0$. Since $H_v(s)\ne 0$, near $s$ the levels $H=c$ for $c$ close to zero are given as graphs of a function $v=S(x,y,u,c),$ $S(0,0,0,0)=0$, $S_c = 1/H_v \ne 0$. These graphs intersect the singular curve near $s$ transversely, thus the intersection occurs at one point on the related graph. The intersection of $SM$ with a level $H=c$ is a smooth curve (the slow curve in this level) with a unique tangency point with the leaves $(u,v)=(u_0,v_0)$, at the related point of the singular curve.
Let us perform the isoenergetic reduction of the system (\[sfloc\]) on the level $H=c$; then $S$ becomes the new (nonautonomous) Hamiltonian and $u$ the new “time”. After the reduction the system transforms into $$\label{sfnon}
{\varepsilon}\frac{dx}{du} = H_y/H_v = - S_y,\;{\varepsilon}\frac{dy}{du} = - H_x/H_v = S_x.$$ If we introduce the fast time $\tau = u/{\varepsilon},$ we come to a nonautonomous Hamiltonian system which depends on the slowly varying time ${\varepsilon}\tau$.
The slow curve for this system is given by the equations $S_y =0,$ $S_x =0$, which lead to the same system $H_y =0,$ $H_x =0$ in which one needs to plug $S$ into $H$ instead of $v$. Thus we obtain the intersection of $SM$ with the level $H=c$. For the case of $H$ we consider, this curve has a representation $y=0,$ $u-u_c = a(c)(x-x_c)^2 +
o((x-x_c)^2),$ $a(0)\ne 0,$ with $(x_c,0,u_c)$ being the coordinates of the trace of the singular curve on the level $H=c$ for a fixed $c$ close to $c=0$. This is extracted from the system $u=g(x,v)$, $v=S(x,0,u,c)$: $u$ is expressed via $x$ by the implicit function theorem, since $1-g_vS_u = (H_u g_v + H_v)/H_v \ne 0$ at the point $(x,y,u,c)=$ $(0,0,0,0)$ (and at any point close to it on the singular curve).
The fast, or frozen, system for the reduced system is obtained by setting $u=u_0$ and varying $u_0$ near $u_0 = u_c.$ On the leaf $u_0=u_c$ we get a one-degree-of-freedom Hamiltonian system with an equilibrium at $(x_c,0)$ having a double zero non-semisimple eigenvalue. Using the form (\[fold\]) of the Hamiltonian we come to the one-degree-of-freedom system $$\label{nonaut_fold}
\frac{dx}{d\tau}=\frac{2yH_1 + y^2H_{1y}}{h_{0v}+b_v x^2 + c_v x^3 +y^2H_{1v}+ O(x^4)},\;
\frac{dy}{d\tau}= -\frac{{\varepsilon}\tau + 2b x + 3c x^2 +y^2H_{1x}+O(x^3)}{h_{0v}+b_v x^2 + c_v x^3 +y^2H_{1v}+ O(x^4)}.$$
We can also find the foliation of $SM$ near the point $s$ into level lines of the Hamiltonian $H$ restricted to this manifold. It is nothing else than the local phase portrait of the slow system near the singular curve. It is given by the levels of the function $\hat h = H(x,0, g(x,v),v)=$ $h_0(g(x,v),v)+ g(x,v)x + b(g(x,v),v)x^2 +$ $c(g(x,v),v)x^3$. The manifold $SM$ contains the line of folds (= the singular curve). This line is projected into $B$ near $s$ as a smooth curve (it can be called a [*discriminant curve*]{}, by analogy with the theory of implicit differential equations, see [@DopGl]) in such a way that the image of $SM$ lies on one side of this curve. The foliation curves are then projected as curves having cusps at the discriminant curve. This picture is the same as in a non-Hamiltonian slow-fast system with one fast and two slow variables (see [@Arn]).
Rescaling near a fold
---------------------
Here we want to find the principal part of the system (\[nonaut\_fold\]) near a fold point. We use a blow-up method as in [@DR; @Kr_Sz]. It is not surprising that we meet the Painlevé-I equation here, see [@Haberman]. We consider our derivation as more direct and systematic.
We start with the observation that the one-degree-of-freedom nonautonomous Hamiltonian $S(x,y,u,c)$ can be written in a form analogous to (\[param\]) if we expand it near the point $(x_c,0,u_c)$, the trace of the singular curve for $H$: $$S(x,y,u,c)= s_0(u,c)+\alpha(u,c)(x-x_c)+\beta(u,c)(x-x_c)^2+\gamma(u,c)(x-x_c)^3 + O((x-x_c)^4)+y^2S_1(x,y,u,c),$$ where $\alpha(u_c,c)=\beta(u_c,c)=0,$ $\alpha_u(u_c,c)\ne 0,$ $\gamma(u_c,c)\ne
0,$ $S_1(x_c,0,u_c,c)\ne 0.$ Denote $x-x_c = \xi.$ This gives us the following form of the reduced system: $$\begin{array}{l}
\displaystyle{{\varepsilon}\frac{d\xi}{du}= - \frac{\partial S}{\partial y}= -2yS_1 - y^2\frac{\partial S_1}{\partial y}},\\
\\\displaystyle{{\varepsilon}\frac{dy}{du}= \frac{\partial S}{\partial \xi}= \alpha(u,c)+ 2\beta(u,c)\xi + 3\gamma(u,c)\xi^2 + O(\xi^3)+
y^2\frac{\partial S_1}{\partial \xi}}.
\end{array}$$ Now we use the variable $\tau = (u-u_c)/{\varepsilon}$ and add two more equations, $u' = du/d\tau = {\varepsilon}$ and ${\varepsilon}' = 0$, to the system. Then the suspended autonomous system has an equilibrium at the point $(\xi,y,u,{\varepsilon})=(0,0,u_c,0)$. The matrix of the linearization of the system at this equilibrium is nothing else than a 4-dimensional Jordan block. To study the solutions of this system near the equilibrium we, following the idea of [@DR; @Kr_Sz] (see also a close situation in [@chiba]), blow up a neighborhood of this point by means of the coordinate change $$\label{blow}
\xi=r^2X,\;y=r^3Y,\;u-u_c=r^4Z,\;{\varepsilon}=r^5,\; r\ge 0.$$ Since $\dot {\varepsilon}= 0$ we consider $r = {\varepsilon}^{1/5}$ as a small parameter. The system in these variables takes the form $$\dot X = -r[2YS_1(x_c,0,u_c,c)+\cdots],\;\dot Y = r[\alpha_u(u_c,c)Z +3\gamma(u_c,c)X^2 + \cdots],\;\dot Z = r.$$ After re-scaling the time, $r\tau = s$, denoting $' = d/ds$ and setting $r = 0$, we get $$X' = -2Y,\;Y' = \alpha_c Z +3\gamma_c X^2,\;Z' = 1,$$ where $\alpha_c = \alpha_u(u_c,c),$ $\gamma_c = \gamma(u_c,c).$ The obtained system is equivalent to the well known Painlevé-I equation [@Pain; @Haberman; @Novok] $$\frac{d^2 X}{dZ^2} + 2\alpha_c Z + 6\gamma_c X^2
=0.$$ By scaling the variables this equation can be transformed to the standard form. It is known [@Haberman] that this equation appears at the passage through a parabolic equilibrium of the fast system. Here we arrive at it directly, using the blow-up procedure.
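For orientation, the limiting equation can be integrated numerically as a first-order system; the values $\alpha_c=\gamma_c=1$ below are arbitrary and purely illustrative, and the integration interval is kept short because solutions of Painlevé-I develop poles:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha_c, gamma_c = 1.0, 1.0   # arbitrary illustrative values

def painleve1(Z, w):
    # w = (X, dX/dZ); the limiting equation X'' + 2*alpha_c*Z + 6*gamma_c*X**2 = 0.
    X, Xp = w
    return [Xp, -2.0 * alpha_c * Z - 6.0 * gamma_c * X**2]

sol = solve_ivp(painleve1, (0.0, 1.2), [0.0, 0.0], rtol=1e-9, atol=1e-12)
print(sol.y[0, -1])   # value of X at Z = 1.2
```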
In fact, this is not the whole story if one wants to study the behavior near the fold point in detail: one needs to derive six more systems. Indeed, the blow-up procedure means that we blow up the singular point of the suspended 4-dimensional vector field (with $\dot {\varepsilon}=0$ added) into the 3-dimensional sphere $S^3$, and a neighborhood of the singular point into a neighborhood of this sphere, with $r\ge 0$ being a coordinate in the direction transverse to the sphere. In order to get a sphere one needs to require $X^2+Y^2+Z^2+E^2 =1$. But it is not convenient to work in coordinates on the sphere, therefore it is better to work in charts. Setting $E=1$ gives the chart corresponding to the system derived above; here we consider only the chart with $E=1$, and not $E=-1$, because we assume ${\varepsilon}>0$. Six other charts are obtained if we set $X=\pm 1,$ $Y=\pm
1$ and $Z=\pm 1$; then six other systems are derived. In the principal approximation one needs to take the limit $r\to 0$ in these systems. The recalculation from one system to another is given by the standard blow-up formulae. Then one can understand the whole picture of the passage of orbits through a neighborhood of the disruption point. This will be done elsewhere.
As is known, for 2-dim slow-fast (dissipative) system such the passage is described by the Riccati equation [@MR; @Kr_Sz]. Here we get the Painlevé-I equation.
Cusp for the slow manifold projection
=====================================
In the case when $s$ is a cusp, the related singular curve on $SM$ is tangent to the symplectic leaf through $s$ (see above the equality $P(\xi)=0$). Due to the last inequality in (\[cusp\]), this tangency takes place only at the point $s$; other points of the singular curve near $s$ are folds. Below, without loss of generality, we assume $s$ to be the origin $(0,0,0,0).$
The singular curve is also tangent at $s$ to the level $H=H(s)$. Indeed, in the case of a cusp the singular curve on $SM$ has a representation $x=r(v),$ $r(0)=r'(0)=0$ near $s$ (see above). Then the tangency follows from the equalities $H_x(0,0,0,0) = H_y(0,0,0,0) = 0,$ $g_x(0,0)=0,$ $r'(0)= 0$: $$\frac{d}{dx}H(x,f(x,r(x)),g(x,r(x)),r(x))|_{x=0}=0.$$
Consider for ${\varepsilon}=0$ the leaf $F_b,$ $b=p(s),$ of the symplectic foliation and the Hamiltonian system with Hamiltonian $H$ restricted to this leaf (the fast Hamiltonian system). This one degree of freedom system has an equilibrium at $s$. This equilibrium is degenerate: it has a double zero eigenvalue, as for a fold, but it is more degenerate than a parabolic one. To carry out further calculations we again use Lemma \[y\_square\] and the normal form method in some neighborhood of this equilibrium. We want to show that the equilibrium is of codimension 2: it has the same linear part as a parabolic point but obeys two additional equalities. The partial normal form for such an equilibrium depending on the parameters $(u,v)$ looks as follows $$\label{cusp_ham}
H(x,y,u,v)= h_0(u,v)+ a_1(u,v)x + \frac{a_2(u,v)}{2}x^2 + \frac{a_3(u,v)}{3}x^3 + \frac{a_4(u,v)}{4}x^4 + O(x^5)+ y^2H_1(x,y,u,v)$$ with $dh_0 (0,0)\ne 0, $ $H_1(0,0,0,0)\ne 0$, $a_4(0,0)\ne 0,$ $a_1(0,0)=0$. The first condition for codimension 2 here is the pair of equalities $a_2(0,0)=a_3(0,0)=0.$ These two conditions follow from the assumption that $s$ is a cusp for the map $p: M\to B$: then $g_x(0,0)=0$ implies $a_2(0,0)=0$ and $g_{xx}(0,0)=0$ implies $a_3(0,0)=0.$ The inequality $g_{xxx}(0,0)\ne 0$ means here that $a_4(0,0)\ne 0$. It turns out that the sign of $a_4(0,0)$ is essential: opposite signs lead to different structures of the nearby fast systems on the neighboring leaves (see Figs. 1a-1b).
![image](FIG1A.jpg){width="60.00000%"}
![image](FIG1B.jpg){width="60.00000%"}
In order for the unfolding in $(u,v)$ to be generic, the condition ${\rm det} (D(a_1,a_2 )/D(u,v))\ne 0$ at $(u,v)=(0,0)$ has to be met. This just corresponds to the rank of the $(3\times 4)$-matrix above being 3; then the curve of singular points here is expressed as $(x,y(x),u(x),v(x))$ with $y'(0)=u'(0)=$ $v'(0)=0.$ The projection of the singular curve to $B$ (the $(u,v)$-coordinates) is a cusp-shaped curve which, up to higher order terms, is given parametrically in $x$ by $$a_1(u,v)+a_2(u,v)x + a_3(u,v)x^2 + a_4(u,v)x^3 +O(x^4)=0,\; a_2(u,v) + 2a_3(u,v)x + 3a_4(u,v)x^2 +O(x^3)=0.$$
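For illustration (a sketch to leading order, using $a_2(0,0)=a_3(0,0)=0$ and neglecting the $a_3$- and higher-order terms): the second equation gives $x^2\approx -a_2/(3a_4)$, and substituting into the first one obtains $$a_1 + x\bigl(a_2 + a_4x^2\bigr)\approx a_1 + \tfrac{2}{3}a_2x = 0,$$ so that $$a_1^2\approx \tfrac{4}{9}a_2^2x^2 = -\frac{4a_2^3}{27a_4},\qquad\text{i.e.}\qquad 27a_4a_1^2 + 4a_2^3\approx 0,$$ the usual semicubical cusp in the $(a_1,a_2)$-plane.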
To ease the further calculations we again take the functions $a_1,a_2$ as new parameters instead of $u,v$, using the fact that ${\rm det} (D(a_1,a_2)/D(u,v))\ne 0$. Keeping in mind that $(u,v)$ are symplectic coordinates w.r.t. the 2-form $du\wedge
dv$, we make the change via a symplectic transformation. To this end, we first assume the determinant to be positive; otherwise we relabel $(a_1,a_2)\to (-a_1,a_2).$ One of the partial derivatives of $a_1$ in $u$, $v$ at $(0,0)$ does not vanish, so one can take $a_1$ as a new slow variable $u_1 = a_1(u,v).$ Adding to it a suitable $v_1$ in order to get a symplectic transformation $(u,v)\to (u_1,v_1)$, we come to the same form of $H$ w.r.t. the new variables $(u_1,v_1)$ and new coefficients $a_1(u_1,v_1)=u_1, a_2(u_1,v_1)$, which we again denote by $(u,v)$ and $u,a_2(u,v)$. Then we get $\partial a_2/\partial v \ne 0$, since ${\rm det} D(a_1,a_2)/D(u,v)\ne 0$.
At the next step we want to use the isoenergetic reduction and again get a family of nonautonomous Hamiltonian systems in one degree of freedom depending on a parameter $c$ in a neighborhood of the point $s=(0,0,0,0)$. To this end, we have to know which of the partial derivatives of $h_0$ is nonzero. Recall that the singular curve on $SM$ is tangent at the point $s$ to $H=H(s)$. We assume without loss of generality that $H(s)=0$. For the Hamiltonian (\[cusp\_ham\]) the derivative of $H$ along the tangent vector to the singular curve at $s$ is equal to $H^0_u H^0_{xv}-H^0_v H^0_{xu}
=0$, but $H^0_{xv}= 0$ and $H^0_{xu} = 1$ (since $a_1(u,v)=u$), hence one has $H^0_v =0.$ Due to the assumption $dH \ne 0$ at $s$ we come to the inequality $H^0_{u} \ne 0.$ This implies that the equation $H=c$ near $s$ can be solved as $u=S(x,y,v,c),$ $S(0,0,0,0)=0.$ The derivative $S^0_c$ does not vanish, since at $s$ it is equal to $(\partial h_0/\partial u)^{-1} \ne 0.$ The reduced nonautonomous system then takes the form $$\label{nonaut_cusp}
\begin{array}{l}
\displaystyle{\frac{dx}{d\tau}=-\frac{2yH_1 + y^2H_{1y}}{h_{0u}+ x + a_{2u}x^2/2 + a_{3u} x^3/3 + a_{4u} x^4/4 + O(x^5)+ y^2H_{1u}}},\\
\displaystyle{\frac{dy}{d\tau}= \frac{u + a_2 x + a_3 x^2 + a_4x^3 + y^2H_{1x}+O(x^4)}{h_{0u}+ x + a_{2u}x^2/2 + a_{3u} x^3/3 +
a_{4u} x^4/4 + O(x^5)+ y^2H_{1u}}},\\
\displaystyle{\frac{dv}{d\tau}={\varepsilon},\;\frac{d{\varepsilon}}{d\tau}}=0.
\end{array}$$ Here we added two more equations, $dv/d\tau ={\varepsilon}$ and $d{\varepsilon}/d\tau =0$, and again get an equilibrium at $(0,0,0,0).$ To study the system near the equilibrium we perform the blow-up transformation $$x\to rX,\;y\to r^2Y,\;v\to r^2Z,\;u\to r^3C,\;{\varepsilon}\to r^3E.$$ We present here only the system in the chart $E=1$; then $r={\varepsilon}^{1/3}.$ After writing the system in the new coordinates, rescaling time $r\tau =s$, setting $r=0$ and denoting the constants $\rho = 2H_1(0,0,0,0)(\partial h_0(0,0)/\partial u)^{-1},$ $\sigma = (\partial h_0(0,0)/\partial u)^{-1},$ $\beta = (\partial a_{2u}(0,0)/\partial v)(\partial h_0(0,0)/\partial u)^{-1}$, $\alpha=a_4(0,0),$ $A=C(\partial h_0(0,0)/\partial u)^{-1}$, we come to the system $$\dot X = - \sigma Y,\;\dot Y = A + \beta ZX + \alpha X^3,\;\dot Z = 1,$$ which is just the Painlevé-II equation. Introducing the new time $\xi = \sigma s$, one can reduce it to the standard form.
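To see the Painlevé-II structure explicitly (a sketch based on the limiting system exactly as written, assuming $\sigma,\alpha,\beta\ne 0$; the rescaling below may be complex depending on signs): since $\dot Z=1$, derivatives with respect to $s$ and $Z$ coincide, and $$\frac{d^2X}{dZ^2} = -\sigma\dot Y = -\sigma\bigl(A + \beta ZX + \alpha X^3\bigr).$$ A linear rescaling $X=\lambda W$, $Z=\mu\zeta$ with $\sigma\beta\mu^3=-1$ and $\sigma\alpha\lambda^2\mu^2=-2$ then brings this to the standard Painlevé-II form $$\frac{d^2W}{d\zeta^2} = 2W^3 + \zeta W + a,\qquad a = -\sigma A\,\mu^2/\lambda.$$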
It represents the behavior of solutions inside a 4-dimensional disk. The equations in the other charts should be written out and understood in order to capture the whole picture.
Thus, we have proved a theorem that gives a connection between the orbit behavior of a slow-fast Hamiltonian system near its disruption point and solutions of the related Painlevé equations. We formulate these results as follows.
Let a smooth slow-fast Hamiltonian vector field with a Hamiltonian $H$ be given on a smooth Poisson bundle $p: M\to B$ with a Poisson 2-form $\omega$, where $B$ is a smooth symplectic manifold with a symplectic 2-form $\nu$. We endow $M$ with the singular symplectic structure $\omega
+{\varepsilon}^{-1}p_*\nu$. Suppose the set of zeroes $SM$ of the fast vector fields generated by $H$ on the symplectic leaves forms a smooth submanifold of $M$. Points of $SM$ at which $SM$ is tangent to the symplectic leaves, other than critical points of $H$, are called disruption points. If $M$ is four-dimensional and $B$ is two-dimensional, then generically a disruption point $s$ can be only of two types w.r.t. the map $p|_{SM}$: a fold or a cusp. In the principal approximation, after the isoenergetic reduction and a suitable blow-up, the slow-fast system near $s$ can then be reduced to either the Painlevé-I equation, if $s$ is a fold, or the Painlevé-II equation, if $s$ is a cusp.
Acknowledgement
===============
L.M.L. acknowledges partial support from the Russian Foundation for Basic Research (grant 14-01-00344a) and the Russian Science Foundation (project 14-41-00044). E.I.Y. is thankful to the Russian Ministry of Science and Education for support (project 1.1410.2014/K, target part).
[99]{}
V.S.Afraimovich, V.I.Arnold, Yu. S. Ilyashenko, L.P.Shilnikov, Dynamical Systems. V. Bifurcation Theory and Catastrophe Theory, Encyclopaedia of Mathematical Sciences, Vol.5, (V.I.Arnold, ed.), Springer-Verlag, Berlin and New York, 1994.
V.I. Arnold, S.M. Gusein-Zade, and A.N. Varchenko. Singularities of Differentiable Maps, Volume I, volume 17. Birkhäuser, 1985.
V.I. Arnold. Geometrical methods in the theory of ordinary differential equations, Grundlehren der mathematischen Wissenschaften, v.250, Springer-Verlag, New York, 1983, x + 334 pp.
A.Baider, J.A.Sanders, Unique Normal Forms: the Nilpotent Hamiltonian Case, J. Diff. Equat., v.92 (1991), 282-304.
H. Chiba, Periodic orbits and chaos in fast–slow systems with Bogdanov–Takens type fold points, J. Diff. Equat., v.250 (2011), 112-160.
M.Desroches, J.Guckenheimer, B.Krauskopf, C.Kuehn, H.Osinga, M.Wechselberger, Mixed-Mode Oscillations with Multiple Time Scales, SIAM Review, Vol. 54 (2012), No. 2, pp. 211–288.
F. Dumortier, R. Roussarie, Canard cycles and center manifolds, Memoirs of the AMS, [**557**]{} (1996).
N. Fenichel, Asymptotic stability with rate conditions, *Indiana Univ. Math. J.*, 1974, vol.23, no.12, pp.1109-1137.
N. Fenichel, Geometric singular perturbation theory for ordinary differential equations, J. Diff. Equat., v.31 (1979), 53-98.
V. Gelfreich, L. Lerman, Almost invariant elliptic manifold in a singularly perturbed Hamiltonian system, Nonlinearity, v. 15 (2002), 447-457.
V. Gelfreich, L. Lerman, Long periodic orbits and invariant tori in a singularly perturbed Hamiltonian system, Physica D 176 (2003), 125–146.
R.Haberman, Slowly Varying Jump and Transition Phenomena Associated with Algebraic Bifurcation Problems, SIAM J. Appl. Math., v.37 (1979), No.1, 69-106.
D.C. Diminnie, R.Haberman, Slow passage through a saddle-center bifurcation, J. Nonlinear Science 10 (2), 197-221.
M. Hirsch, C. Pugh, M. Shub, Invariant manifolds, Lect. Notes in Math., v. , Springer-Verlag,
M. Krupa, P. Szmolyan, Geometric analysis of the singularly perturbed planar fold, in [*Multiple-Time-Scale Dynamical Systems*]{}, C.K.T.R. Jones et al (eds.), Springer Science + Business Media, New York, 2001.
J.E.Marsden, T.S.Ratiu. Introduction to Mechanics and Symmetry, Second edition, Texts in Appl. Math., v.17, Springer-Verlag, 1999.
J.D. Meiss. Differential Dynamical Systems, volume 14 of Mathematical modeling and computation. SIAM, Philadelphia, 2007.
A.I. Neishtadt, Averaging, passage through resonance, and capture into resonance in two-frequency systems, Russ. Math. Surv., v.69 (2014), No.5, 771-843.
A.R. Its, V.Yu. Novokshenov. The Isomonodromic Deformation Method in the Theory of Painlevé equations. Lecture Notes in Mathematics, vol. 1191, 313 p. Springer-Verlag, 1986.
P. Painlevé, Mémoire sur les équations différentielles dont l’intégrale générale est uniforme, Bull. Soc. Math. 28 (1900) 201–261; P. Painlevé, Sur les équations différentielles du second ordre et d’ordre supérieur dont l’intégrale générale est uniforme, Acta Math. 21 (1902) 1–85.
J.A. Sanders, F. Verhulst, J. Murdock, Averaging Methods in Nonlinear Dynamical Systems (Second edition), Springer-Verlag, Appl. Math. Sci., v.59, 2007.
L.P. Shilnikov, A.L. Shilnikov, D.V. Turaev, L.O. Chua. Methods of Qualitative Theory in Nonlinear Dynamics: In two vols., World Sci. Ser. Nonlinear Sci. Ser. A Monogr. Treatises, vols. 4, 5, River Edge, NJ: World Sci. Publ., 1998, 2001.
I. Vaisman, Lectures on the Geometry of Poisson Manifolds, Birkhauser, Basel-Boston-Berlin, 1994.
A. Weinstein, The local structure of Poisson manifolds, J. Diff. Geom., v.18 (1983), 523-557.
H. Whitney, On Singularities of Mappings of Euclidean Spaces. I. Mappings of the Plane into the Plane, Ann. Math., v.62 (1955), No.3, 374-410.
---
abstract: 'Relying on work of Kashiwara-Schapira and Schmid-Vilonen, we describe the behaviour of characteristic cycles with respect to the operation of geometric induction, the geometric counterpart of taking parabolic or cohomological induction in representation theory. By doing this, we are able to describe the characteristic cycle associated to an induced representation in terms of the characteristic cycle of the representation being induced. As a consequence, we prove that the cohomology packets defined by Adams and Johnson in [@Adams-Johnson] are micro-packets, that is to say that the cohomological constructions of [@Adams-Johnson] are particular cases of the sheaf-theoretic ones in [@ABV]. It is important to mention that the equality between the packets defined in [@Adams-Johnson] and the ones in [@ABV] is known to experts, but to my knowledge no proof of it can be found in the literature.'
author:
- Nicolás Arancibia Robert
bibliography:
- 'reference.bib'
title: 'Characteristic cycles, micro local packets and packets with cohomology'
---
Introduction
============
Let $G$ be a connected reductive algebraic group defined over a number field $F$. In [@Arthur84] and [@Arthur89], Arthur gives a conjectural description of the discrete spectrum of $G$ by introducing at each place $v$ of $F$ a set of parameters $\Psi_v(G)$, that should parameterize all the unitary representations of $G(F_v)$ that are of interest for global applications. More precisely, Arthur conjectured that attached to every parameter $\psi_v\in \Psi_v(G)$ we should have a finite set $\Pi_{\psi_v}(G(F_v))$, called an $A$-packet, of irreducible representations of $G(F_v)$, uniquely characterized by the following properties:
- $\Pi_{\psi_v}(G(F_v))$ consists of unitary representations.
- The parameter $\psi_v$ corresponds to a unique $L$-parameter $\varphi_{\psi_v}$ and $\Pi_{\psi_v}(G(F_v))$ contains the $L$-packet associated to $\varphi_{\psi_v}$.
- $\Pi_{\psi_v}(G(F_v))$ is the support of a stable virtual character distribution on $G(F_v)$.
- $\Pi_{\psi_v}(G(F_v))$ verifies the ordinary and twisted spectral transfer identities predicted by the theory of endoscopy.
Furthermore, any representation occurring in the discrete spectrum of square integrable automorphic representations of $G$ should be a restricted product over all places of representations in the corresponding $A$-packets. In the case when $G$ is a real reductive algebraic group, Adams, Barbasch and Vogan proposed in [@ABV] a candidate for an $A$-packet, proving in the process all of the predicted properties with the exception of the twisted endoscopic identity and unitarity. The packets in [@ABV], which we call micro-packets or ABV-packets, are defined by means of sophisticated geometrical methods. As explained in the introduction of [@ABV], the inspiration behind their construction comes from the combination of ideas of Langlands and Shelstad (concerning dual groups and endoscopy) with those of Kazhdan and Lusztig (concerning the fine structure of irreducible representations), to describe the representations of $G(\mathbb{R})$ in terms of an appropriate geometry on an $L$-group. The geometric methods are remarkable, but they have the constraint of being extremely difficult to calculate in practice. Apart from some exceptions, such as ABV-packets attached to tempered Arthur parameters (see Section 7.1 below) or to principal unipotent Arthur parameters (see Chapter 7 [@ABV] and Section 7.2 below), we cannot identify the members of an ABV-packet in any known classification (in the Langlands classification for example). The difficulty comes from the central role played by characteristic cycles in their construction. These cycles are geometric invariants that can be understood as a way to measure how far a constructible sheaf is from a local system. In the present article, relying on work of Kashiwara-Schapira and Schmid-Vilonen, we describe the behaviour of characteristic cycles with respect to the operation of geometric induction, the geometric counterpart of taking parabolic or cohomological induction in representation theory. By doing this, we are able to describe the characteristic cycle associated to an induced representation in terms of the characteristic cycle of the representation being induced (see Proposition \[prop:ccLG\] below). Before continuing with a more detailed description of the behaviour of characteristic cycles under induction, let us mention some consequences of it.
As a first application we have the proof that the cohomology packets defined by Adams and Johnson in [@Adams-Johnson] are micro-packets. In more detail, Adams and Johnson proposed in [@Adams-Johnson] a candidate for an $A$-packet by attaching to any member in a particular family of Arthur parameters (see points (AJ1), (AJ2) and (AJ3) in Section 7.3) a packet consisting of representations cohomologically induced from unitary characters. Now, from the behaviour of characteristic cycles under induction, the description of the ABV-packets corresponding to any Arthur parameter in the family studied in [@Adams-Johnson] reduces to the description of ABV-packets corresponding to essentially unipotent Arthur parameters (see Section 7.2), and from this reduction we prove in Theorem \[theo:ABV-AJ\] that the cohomological constructions of [@Adams-Johnson] are particular cases of the ones in [@ABV]. It is important to point out that the equality between Adams-Johnson and ABV-packets is known to experts, but to my knowledge no proof of it can be found in the literature. Let us also say that from this equality and the proof in [@AMR] that for classical groups the packets defined in [@Adams-Johnson] are $A$-packets ([@Arthur]), we conclude that in the framework of [@Adams-Johnson] and for classical groups, the three constructions of $A$-packets coincide.
As a second application, we mention that an important step in the proof that for classical groups the $A$-packets introduced in [@Arthur] are ABV-packets (work in progress with Paul Mezo) is the description of the ABV-packets for the general linear group. The understanding of the behaviour of characteristic cycles under induction will prove to be important in showing that for the general linear group ABV-packets are Langlands packets, that is, they consist of a single representation.
Let us give now a quick overview on how geometric induction affects characteristic cycles. We begin by introducing the geometric induction functor. Suppose $G$ is a connected reductive complex algebraic group defined over $\mathbb{R}$ with Lie algebra $\mathfrak{g}$. Denote by $K$ the complexification in $G$ of some maximal compact subgroup of $G(\mathbb{R})$. Write $X_G$ for the flag variety of $G$, and suppose $Q$ is a parabolic subgroup of $G$ with Levi decomposition $Q=LN$. Consider the fibration $X_G\rightarrow G/Q$. Its fiber over $Q$ can be identified with the flag variety $X_L$ of $L$. We denote the inclusion of that fiber in $X_G$ by $$\begin{aligned}
\label{eq:mapvarieties}
\iota:X_L&\longrightarrow X_G.\end{aligned}$$ Let $D_{c}^{b}(X_G,K)$ be the $K$-equivariant bounded derived category of sheaves of complex vector spaces on $X_G$ having cohomology sheaves constructible with respect to an algebraic stratification of $X_G$. Living inside this category we have the subcategory $\mathcal{P}(X_G,K)$ of $K$-equivariant perverse sheaves on $X_G$. Set $\mathcal{D}_{X_G}$ to be the sheaf of algebraic differential operators on $X_G$. The Riemann-Hilbert correspondence (see Theorem 7.2.1[@Hotta] and Theorem 7.2.5 [@Hotta]) defines an equivalence of categories between $\mathcal{P}(X_G,K)$ and the category $\mathcal{D}(X_G,K)$ of $K$-equivariant $\mathcal{D}_{X_{G}}$-modules on $X_G$. Now, write $\mathcal{M}(\mathfrak{g},K)$ for the category of $(\mathfrak{g}, K)$-modules of $G$, and $\mathcal{M}(\mathfrak{g},K, I_{X_G})$ for the subcategory of $(\mathfrak{g}, K)$-modules of $G$ annihilated by the kernel $I_{X_G}$ of the operator representation (see equations (\[eq:operatorrepresentation\]) and (\[eq:keroperatorrepresentation\]) below). The categories $\mathcal{M}(\mathfrak{g},K, I_{X_G})$ and $\mathcal{D}(X_G,K)$ are identified through the Beilinson-Bernstein correspondence ([@BB]), and composing this functor with the Riemann-Hilbert correspondence we obtain the equivalence of categories: $$\begin{aligned}
\label{eq:rhbb}
\Phi_{X_G}:\mathcal{M}(\mathfrak{g},K,I_{X_G}) \xrightarrow{\sim} \mathcal{P}(X_G,K)\end{aligned}$$ and consequently a bijection between the corresponding Grothendieck groups $K\mathcal{M}(\mathfrak{g},K,I_{X_G})$ and $K\mathcal{P}(X_G,K)$. Similarly, denoting by $\mathfrak{l}$ the Lie algebra of $L$ and setting $K_L=K\cap L$, we define, as for $X_G$, $\mathcal{P}(X_L,K_L)$, $\mathcal{M}(\mathfrak{l},K_L,I_{X_L})$ and $\Phi_{X_L}:\mathcal{M}(\mathfrak{l},K_L,I_{X_L}) \xrightarrow{\sim} \mathcal{P}(X_L,K_L)$. Now, let $$\begin{aligned}
\mathscr{R}_{(\mathfrak{l},K_{L})}^{(\mathfrak{g},K)}:\mathcal{M}(\mathfrak{l},K_L)\rightarrow \mathcal{M}(\mathfrak{g},K) \end{aligned}$$ be the cohomological induction functor of Vogan-Zuckerman. The induction functor in representation theory has a geometric analogue $$I_{L}^{G}:D_{c}^{b}(X_L,K_L)\rightarrow D_{c}^{b}(X_G,K)$$ defined through the Bernstein induction functor (see [@ViMir]). The Bernstein induction functor $$\Gamma_{L}^{G}:D_{c}^{b}(X_G,K_L)\rightarrow D_{c}^{b}(X_G,K)$$ is the right adjoint of the forgetful functor $\text{Forget}_{K}^{K_L}:D_{c}^{b}(X_G,K)\rightarrow D_{c}^{b}(X_G,K_L)$. The geometric induction functor is defined then as the composition: $$I_{L}^{G}=\Gamma_{L}^{G}\circ R\iota_{\ast}.$$ It satisfies the identity $$I_{L}^{G}\circ \Phi_{X_L}=\Phi_{X_G}\circ \mathscr{R}_{(\mathfrak{l},K_{L})}^{(\mathfrak{g},K)},$$ which induces the following commutative diagram between the corresponding Grothendieck groups $$\begin{aligned}
\label{eq:diagramintroduction}
\xymatrix{
K\mathcal{M}(\mathfrak{g},K,I_{X_{G}}) \ar[r]^{\Phi_{X_G}} & K\mathcal{P}(X_G,K)\\
K\mathcal{M}(\mathfrak{l},K_{L},I_{X_{L}}) \ar[u]^{\mathscr{R}_{(\mathfrak{l},K_{L})}^{(\mathfrak{g},K)}}\ar[r]^{\Phi_{X_L}} & K\mathcal{P}(X_L,K_{L})\ar[u]^{I_{L}^{G}}. }\end{aligned}$$
Now that the geometric induction functor has been introduced, we turn to the description of how geometric induction affects characteristic cycles. The characteristic cycle of a perverse sheaf can be constructed through Morse theory, or via the Riemann-Hilbert correspondence as the characteristic cycle of the associated $\mathcal{D}$-module. In any case, the characteristic cycle can be seen as a map $CC:K\mathcal{P}(X_G,K)\rightarrow \mathscr{L}(X_G,K)$ from $K\mathcal{P}(X_G,K)$ to the set of formal sums $$\mathscr{L}(X_G,K)=\left\{\sum_{{K-\mathrm{orbits }~S~\mathrm{in}~X_G}}m_S[\overline{T^{\ast}_{S}X_G}]:m_S\in\mathbb{Z}_{\geq 0}\right\}.$$
Schmid and Vilonen, combining the work of Kashiwara-Schapira on proper direct images and their own work on direct images under open embeddings, describe in [@SV] the effect on characteristic cycles of taking direct images by an arbitrary algebraic morphism. This, applied to the Bernstein induction functor, leads to a map $\left(I_{L}^{G}\right)_{\ast}:\mathscr{L}(X_L,K_L)\rightarrow\mathscr{L}(X_G,K)$ that extends (\[eq:diagramintroduction\]) into the commutative diagram: $$\begin{aligned}
\xymatrix{
K\mathcal{M}(\mathfrak{g},K,I_{X_{G}}) \ar[r]^{\Phi_{X_G}} & K\mathcal{P}(X_{G},K)\ar[r]^{CC} & \mathscr{L}(X_G,K)\\
K\mathcal{M}(\mathfrak{l},K_{L},I_{X_{L}}) \ar[u]^{\mathscr{R}_{(\mathfrak{l},K_{L})}^{(\mathfrak{g},K)}}\ar[r]^{\Phi_{X_L}} & K\mathcal{P}(X_{L},K_{L})\ar[u]^{I_{L}^{G}}\ar[r]^{CC} & \mathscr{L}(X_{L},K_L).\ar[u]^{\left(I_{L}^{G}\right)_{\ast}}
}\end{aligned}$$ Consequently, for every $K_L$-equivariant perverse sheaf $\mathcal{F}_L$ on $X_{L}$ we have $$CC(I_{L}^{G}\mathcal{F}_L)=\left(I_{L}^{G}\right)_{\ast}CC(\mathcal{F}_L).$$ The description of the image of $\left(I_{L}^{G}\right)_{\ast}$ is given in Proposition \[prop:ccLG\] through a formula for $CC(I_{L}^{G}\mathcal{F}_L)$ in terms of the characteristic cycle of $\mathcal{F}_L$. More explicitly, writing $$CC(\mathcal{F}_{L})=\sum_{K_{L}-\mathrm{orbits }~S_L~\mathrm{in}~X_{L}} m_{S_L}[\overline{T_{S_L}^{\ast}X_{L}}]$$ for the characteristic cycle of $\mathcal{F}_{L}$, we prove that the image of $CC(\mathcal{F}_L)$ under $\left(I_{L}^{G}\right)_{\ast}$ is equal to $$\begin{aligned}
\label{eq:dernierintro}
CC(I_{L}^{G}\mathcal{F}_L)&=\left(I_{L}^{G}\right)_{\ast}CC(\mathcal{F}_L)\\
&=\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} m_{S_{L}}[\overline{T_{K\cdot\iota(S_L)}^{\ast}X_G}]+
\sum_{K-\mathrm{orbits }~S~\mathrm{in}~\partial(K\cdot \iota(X_{L}))} m_{S}[\overline{T_{S}^{\ast}X_G}].\end{aligned}$$ From this equality we are able to deduce that the cohomology packets introduced by Adams and Johnson in [@Adams-Johnson] are examples of micro-packets. The comparison between both types of packets is done in Section 7, where we also describe the family of Arthur parameters considered in the work of Adams and Johnson and give a description of the Adams-Johnson packets.
We continue by outlining the contents of the paper. In Section 2 we describe the context in which we will do representation theory. We recall the notion of real form and introduce the concepts of extended group and representation of a strong real form. This is just a short review of Chapter 2 [@ABV].
Section 3 is devoted to the introduction of the geometric objects required in the definition of micro-packets, namely: the category of perverse sheaves, the category of $\mathcal{D}$-modules, and the notion of characteristic variety and characteristic cycle. We follow mainly Chapter 2 [@Hotta] and Chapters 7 and 19 [@ABV].
In Section 4 we introduce the geometric induction functor as a geometric counterpart of the induction functor in representation theory. We review the work of Kashiwara-Schapira [@Kashiwara-Schapira] on proper direct images and the work of Schmid-Vilonen [@SV] on open direct images. We end the section by explaining how characteristic cycles behave under geometric induction.
In Section 5 we recall the extended Langlands correspondence and in Section 6 we express the Langlands classification in a more geometric manner. More precisely, following Chapter 6 of [@ABV] we introduce a topological space whose set of ${}^{\vee}{G}$-orbits, where ${}^{\vee}{G}$ denotes the dual group of $G$, is in bijection with the set of ${}^{\vee}{G}$-conjugacy classes of Langlands parameters and express Langlands classification in this new setting. The interest of this new space resides in its richer geometry when compared with the geometry of the usual space of Langlands parameters. We start Section 7 by introducing micro-packets. Then we describe the micro-packets corresponding to tempered parameters (Section 7.1) and to essentially unipotent Arthur parameters (Section 7.2). We end the section by studying the case of cohomologically induced packets or Adams-Johnson packets. Finally, relying on the work done in Section 4 we prove that Adams-Johnson packets are micro-packets (Section 7.3).
It is in Sections 7.2 and 7.3, and in the study of the geometric induction functor in Section 4, that most of the original work of the present article is done.\
\
To end this introduction, let us mention that in [@Trappa] (see Proposition 2.4 [@Trappa] and Corollary 2.5 [@Trappa]) the authors obtain a result similar to that of Proposition \[prop:ccLG\] (see Equation (\[eq:dernierintro\]) above) by using the definition of characteristic cycles in terms of normal slices (see Chapter II.6.A [@GM]).\
**Acknowledgements:** The author wishes to thank Paul Mezo for useful discussions and enlightening remarks during the preparation of this document.
Structure theory: real forms and extended groups
================================================
In this section we describe the context in which we will do representation theory. We begin with a short review of some basic facts about real forms of reductive groups, and then recall the notions of extended group, of strong real form, and of representation of a strong real form. Following the philosophy of [@ABV], in this article we are not going to fix a real form and study the corresponding set of representations; instead, we fix an inner class of real forms and consider at the same time the set of representations of each real form in the inner class. Extended groups were introduced in [@ABV] as a way to study and describe, in an organized and uniform manner, the representation theory corresponding to an inner class of real forms. We follow Chapter 2 [@ABV].\
Let ${G}$ be a connected reductive complex algebraic group with Lie algebra $\mathfrak{g}$. A **real form** of ${G}$ is an antiholomorphic involutive automorphism $$\sigma:G\rightarrow G$$ with group of real points given by $$G(\mathbb{R},\sigma):=G^{\sigma}=\{g\in G:\sigma(g)=g\}.$$ The group $G(\mathbb{R},\sigma)$ is a real Lie group with Lie algebra $$\mathfrak{g}(\mathbb{R},\sigma):=\mathfrak{g}^{d\sigma}.$$ Among all the real forms of $G$, of particular interest is the **compact real form** $\sigma_{c}$. It is characterized up to conjugation by $G$ by the requirement that $G(\mathbb{R},\sigma_{c})$ is compact. Moreover, Cartan showed that $\sigma_{c}$ may be chosen to commute with $\sigma$. The commutativity implies that the composition $$\begin{aligned}
\label{eq:Cinvolution}
\theta_{\sigma}=\sigma\circ\sigma_{c}=\sigma_{c}\circ\sigma\end{aligned}$$ defines an algebraic involution of $G$ of order two, called the Cartan involution; it is determined by $\sigma$ up to conjugation by $G(\mathbb{R},\sigma)$. The group of fixed points $$\begin{aligned}
\label{eq:complexification}
K_{\sigma}=G^{\theta_{\sigma}}\end{aligned}$$ is a (possibly disconnected) complex reductive algebraic subgroup of $G$. The corresponding group of real points $$K_{\sigma}(\mathbb{R})=G(\mathbb{R},\sigma)\cap K_{\sigma}=G(\mathbb{R},\sigma)\cap G(\mathbb{R},\sigma_{c})=
K_{\sigma}\cap G(\mathbb{R},\sigma_{c})$$ is a maximal compact subgroup of $G(\mathbb{R},\sigma)$ and a maximal compact subgroup of $K_{\sigma}$ (see (5d)-(5g) [@AVParameters]).\
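As an illustration (a standard example, included here only to fix ideas): let $G=GL(n,\mathbb{C})$ with the real form $\sigma(g)=\overline{g}$, so that $G(\mathbb{R},\sigma)=GL(n,\mathbb{R})$. One may take $\sigma_{c}(g)={}^{t}\overline{g}^{-1}$, with $G(\mathbb{R},\sigma_{c})=U(n)$; the two involutions commute and $$\theta_{\sigma}(g)=\sigma\circ\sigma_{c}(g)={}^{t}g^{-1},\qquad K_{\sigma}=G^{\theta_{\sigma}}=O(n,\mathbb{C}),\qquad K_{\sigma}(\mathbb{R})=O(n),$$ and $O(n)$ is indeed a maximal compact subgroup both of $GL(n,\mathbb{R})$ and of $O(n,\mathbb{C})$.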
Let us introduce now the notion of an extended group. We start by recalling the notion of inner real forms.
Two real forms $\sigma$ and $\sigma'$ are said to be **inner to each other** if there is an element $g\in G$ such that $$\sigma'=\text{Ad}(g)\circ \sigma.$$
\[deftn:extendedgroup\] An **extended group containing** $G$ is a real Lie group $G^{\Gamma}$ subject to the following conditions.
1. $G^{\Gamma}$ contains $G$ as a subgroup of index two. That is, there is a short exact sequence $$\begin{aligned}
1\rightarrow {G}\rightarrow {}G^{\Gamma}\rightarrow \Gamma\rightarrow 1,\end{aligned}$$ where $\Gamma=\mathrm{Gal}(\mathbb{C}/\mathbb{R})$.
2. Every element of $G^{\Gamma}-G$ acts on $G$ as an antiholomorphic automorphism.
A **strong real form** of $G^{\Gamma}$ is an element $\delta\in G^{\Gamma}-G$ such that $\delta^{2}\in Z(G)$ has finite order. To each strong real form $\delta$ we associate a real form $\sigma_{\delta}$ of $G$ defined by conjugation by $\delta$: $$\sigma_{\delta}(g):=\delta g\delta^{-1}.$$ The group of real points of $\delta$ is defined to be the group of real points of $\sigma_{\delta}$: $$G(\mathbb{R},\delta):=G(\mathbb{R},\sigma_{\delta})=\{g\in G:\sigma_{\delta}(g)=\delta g\delta^{-1}=g \}.$$ Two strong real forms of $G^{\Gamma}$ are called equivalent if they are conjugate by $G$.
Notice that what we call here an extended group is called a weak extended group in [@ABV]. The next result gives a classification of extended groups.
\[prop:classificationextended\] Suppose $G$ is a connected reductive complex algebraic group, and write $\Psi_0(G)$ for the based root datum of $G$.
i. Fix a weak extended group $G^{\Gamma}$ for $G$. Let $\sigma_Z$ be the antiholomorphic involution of $Z(G)$ defined by the conjugation action of any element of $G^{\Gamma}-G$. We can attach to $G^{\Gamma}$ two invariants. The first of these is an involutive automorphism $$a\in \mathrm{Aut}(\Psi_0(G))$$ of the based root datum of $G$. The second is a class $$\overline{z}\in Z(G)^{\sigma_Z}/(1+\sigma_Z)Z(G),$$ where $$(1+\sigma_Z)Z(G)=\{z\sigma_Z (z):z\in Z(G)\}.$$
ii. Suppose $G^{\Gamma}$ and $(G^{\Gamma})'$ are weak extended groups for G with the same invariants $(a,\overline{z})$. Then the identity map on $G$ extends to an isomorphism from $G^{\Gamma}$ to $(G^{\Gamma})'$.
iii. Suppose $a\in \mathrm{Aut}(\Psi_0(G))$ is an involutive automorphism and let $\sigma$ be any real form in the inner class corresponding to $a$ by Proposition 2.12 [@ABV]. Write $\sigma_Z$ for the antiholomorphic involution of $Z(G)$ defined by the action of $\sigma$; and suppose $\overline{z}\in Z(G)^{\sigma_Z}/(1+\sigma_Z)Z(G).$ Then there is a weak extended group $G^{\Gamma}$ with invariants $(a,\overline{z})$.
The next result states the relation between extended groups and inner classes of real forms.
Suppose $G^{\Gamma}$ is a weak extended group. Then the set of real forms of $G$ associated to strong real forms of $G^{\Gamma}$ constitutes exactly one inner class of real forms.
We end this section with the definition of a representation of a strong real form.
\[deftn:representationstrongrealform\] A **representation of a strong real form** of $G^{\Gamma}$ is a pair $(\pi,\delta)$ subject to
1. $\delta$ is a strong real form of $G^{\Gamma}$.
2. $\pi$ is an admissible representation of $G(\mathbb{R},\delta)$.
Two such representations $(\pi,\delta)$ and $(\pi',\delta')$ are said to be equivalent if there is an element $g\in G$ such that $g\delta g^{-1}=\delta'$ and $\pi\circ Ad(g^{-1})$ is (infinitesimally) equivalent to $\pi'$. Finally, define $$\Pi(G/\mathbb{R})$$ to be the set of infinitesimal equivalence classes of irreducible representations of strong real forms of $G^{\Gamma}$.
$\mathcal{D}$-modules, perverse sheaves and characteristic cycles
===============================================================
In this section we introduce the geometric objects required for the definition of the micro-packets in Section 7. We begin with the definition of the categories that are going to be involved in their construction, then recall the concepts of characteristic variety and characteristic cycle, and describe some of their properties. We follow mainly Chapter 2 [@Hotta] and Chapter 19 [@ABV].\
Suppose $X$ is a smooth complex algebraic variety on which an algebraic group $H$ acts with finitely many orbits. Define (see Appendix B [@ViMir] and Definition 7.7 [@ABV]):\
$\bullet$ $D_{c}^{b}(X)$ to be the bounded derived category of sheaves of complex vector spaces on $X$ having cohomology sheaves constructible with respect to an algebraic stratification on $X$.\
$\bullet$ $D_{c}^{b}(X,H)$ to be the subcategory of $D_{c}^{b}(X)$ consisting of $H$-equivariant sheaves of complex vector spaces on $X$ having cohomology sheaves constructible with respect to the algebraic stratification defined by the $H$-orbits on $X$.\
\
Living inside this last category we have the category of $H$-equivariant perverse sheaves on $X$. We write\
$\bullet$ $\mathcal{P}(X,H)$ to be the category of $H$-equivariant perverse sheaves on $X$.\
\
Next, set $\mathcal{D}_{X}$ to be the sheaf of algebraic differential operators on $X$ and define:\
$\bullet$ $\mathcal{D}(X,H)$ to be the category of $H$-equivariant coherent sheaves of $\mathcal{D}_{X}$-modules on $X$.\
$\bullet$ ${D}^{b}(\mathcal{D}_X,H)$ to be the $H$-equivariant bounded derived category of sheaves of $\mathcal{D}_{X}$-modules on $X$ having coherent cohomology sheaves.\
\
The categories $\mathcal{P}(X,H)$ and $\mathcal{D}(X,H)$ are abelian, and every object has finite length. To each of them there corresponds a Grothendieck group; we denote them respectively by $$\label{eq:ggroups}
K\mathcal{P}(X,H)\quad \text{and}\quad K\mathcal{D}(X,H).$$ The four previous categories are related through the Riemann-Hilbert correspondence.
\[theo:rhcorrespondence\] The de Rham functor induces an equivalence of categories $$\begin{aligned}
DR:{D}^{b}(\mathcal{D}_X,H)\rightarrow D_{c}^{b}(X,H)\end{aligned}$$ such that if we restrict $DR$ to the full subcategory $\mathcal{D}(X,H)$ of ${D}^{b}(\mathcal{D}_X,H)$ we obtain an equivalence of categories $$\begin{aligned}
DR:\mathcal{D}(X,H)\rightarrow\mathcal{P}(X,H).\end{aligned}$$ This induces an isomorphism of Grothendieck groups $$\begin{aligned}
DR:K\mathcal{D}(X,H)\rightarrow K\mathcal{P}(X,H)\end{aligned}$$
We use the previous isomorphism to identify the Grothendieck groups of (\[eq:ggroups\]) writing simply $K(X,H)$ instead of $K\mathcal{P}(X,H)$ and $K\mathcal{D}(X,H).$\
In this paper we are principally interested in the case of $X$ being a flag variety. More precisely, let $G$ be a connected reductive complex algebraic group with Lie algebra $\mathfrak{g}$. Fix a real form $\sigma$ of $G$ and, as in (\[eq:complexification\]), write $K_{\sigma}$ for the group of fixed points of the corresponding Cartan involution. The flag variety $X_G$ of $G$ is defined as the set of all Borel subgroups of $G$ (or equivalently as the set of all Borel subalgebras of $\mathfrak{g}$). The group $G$ acts on $X_G$ by conjugation; this action is transitive, and if we restrict it to $K_\sigma$ then the number of $K_{\sigma}$-orbits is finite. Moreover, for any fixed Borel subgroup $B\in X_{G}$ the normalizer of $B$ in $G$ is $B$ itself; thus we obtain a bijection $$X_G\cong G/B.$$ The flag variety then has a natural structure of an algebraic variety.\
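For instance (a standard example, included here only for illustration): for $G=SL(2,\mathbb{C})$ every Borel subgroup is the stabilizer of a line in $\mathbb{C}^{2}$, so $$X_G\cong \mathbb{P}^{1}(\mathbb{C}).$$ If $\sigma$ is the split real form with $G(\mathbb{R},\sigma)=SL(2,\mathbb{R})$, then $K_{\sigma}\cong SO(2,\mathbb{C})\cong\mathbb{C}^{\times}$ is (conjugate to) a maximal torus, and it acts on $\mathbb{P}^{1}$ with three orbits: two fixed points and their open complement.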
We turn now to a short discussion on characteristic varieties and characteristic cycles. These two objects are going to play a central role in the definition of micro-packets. We begin with the definition of the characteristic variety of a $\mathcal{D}_{X}$-module.
Let $M$ be a coherent $\mathcal{D}_{X}$-module on $X$. Choose a good filtration $F$ on $M$ (we invite the reader to see definition (2.1.2) of [@Hotta] for the definition of a good filtration and Theorem (2.1.3) of the same book for the proof that every coherent $\mathcal{D}_{X}$-module admits one). Write $\mathrm{gr}^{F} M$ for the corresponding graded module. Let $\pi:T^{\ast}X\rightarrow X$ be the cotangent bundle of $X$. Since we have $\mathrm{gr}^{F} \mathcal{D}_{X}\cong \pi_{\ast}\mathcal{O}_{T^{\ast}X}$, the graded module $\mathrm{gr}^{F} M$ is a coherent module over $\pi_{\ast}\mathcal{O}_{T^{\ast}X}$ (Proposition 2.2.1 [@Hotta]), and we can define the coherent $\mathcal{O}_{T^{\ast}X}$-module $$\widetilde{\mathrm{gr}^{F} M}:=\mathcal{O}_{T^{\ast}X}\otimes_{\pi^{-1}\pi_{\ast}\mathcal{O}_{T^{\ast}X}}\pi^{-1}(\mathrm{gr}^{F}M),$$ where $\pi^{-1}$ is the inverse image functor of $\pi$. The support of $\widetilde{\mathrm{gr}^{F} M}$ is independent of the choice of a good filtration (see for example Theorem (2.2.1) of [@Hotta]). It is called the **characteristic variety** of $M$ and is denoted by $$\mathrm{Ch}(M):=\mathrm{supp}({\widetilde{\mathrm{gr}^{F} M}}).$$ Since $\widetilde{\mathrm{gr}^{F} M}$ is a graded module over the graded ring $\mathcal{O}_{T^{\ast}X}$, the characteristic variety $\mathrm{Ch}(M)$ is a closed conic algebraic subset of $T^{\ast}X$.
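Two elementary examples may help fix ideas (standard illustrations, not taken from [@Hotta]): take $X=\mathbb{C}$ with coordinate $x$, and write $\xi$ for the symbol of $\partial_{x}$, so that $T^{\ast}X$ has coordinates $(x,\xi)$. For $M=\mathcal{D}_{X}/\mathcal{D}_{X}\partial_{x}\cong\mathcal{O}_{X}$ the order filtration gives $\mathrm{gr}^{F}M\cong\mathbb{C}[x,\xi]/(\xi)$, hence $$\mathrm{Ch}(M)=\{\xi=0\}=T^{\ast}_{X}X$$ is the zero section; for $M=\mathcal{D}_{X}/\mathcal{D}_{X}x$ one gets $\mathrm{gr}^{F}M\cong\mathbb{C}[x,\xi]/(x)$ and $$\mathrm{Ch}(M)=\{x=0\}=T^{\ast}_{\{0\}}X,$$ the conormal space to the origin. In both cases $\mathrm{Ch}(M)$ is a conic Lagrangian subvariety of $T^{\ast}X$.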
To define the characteristic cycle of a $\mathcal{D}_{X}$-module, we need to introduce first the more general notion of an associated cycle.
Let $X$ be any variety, and let $X_{1},\cdots, X_{n}$ be the irreducible components of $X$. The geometric multiplicity of $X_{i}$ on $X$ is defined to be the length of the local ring $\mathcal{O}_{X_{i},X}$: $$m_{X_{i}}(X)=l_{\mathcal{O}_{X_{i},X}}(\mathcal{O}_{X_{i},X})$$ (Since the local rings $\mathcal{O}_{X_{i},X}$ are all zero-dimensional, their length is well defined). We define the **associated cycle** $Cyc(X)$ of $X$ to be the formal sum $$Cyc(X)=\sum_{i=1}^{n}m_{X_{i}}(X)[X_{i}].$$
\[deftn:defPCycle\] Let $M$ be a coherent $\mathcal{D}_{X}$-module. Denote by $I(\mathrm{Ch}(M))$ the set of the irreducible components of $\mathrm{Ch}(M)$. We define the **characteristic cycle** of $M$ by the formal sum $$CC(M)=Cyc(\mathrm{Ch}(M))=\sum_{C\in I(\mathrm{Ch}(M))}m_{C}(\mathrm{Ch}(M))[C].$$ For $d\in \mathbb{N}$ we denote its degree $d$ part by $$CC_{d}(M)=\sum_{\substack{C\in I(\mathrm{Ch}(M))\\
\dim C=d}}m_{C}(\mathrm{Ch}(M))[C].$$
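Continuing with the toy example above (a standard illustration; here the multiplicity is computed from $\mathrm{gr}^{F}M$, i.e. from the non-reduced structure defined by the characteristic ideal): for $X=\mathbb{C}$ and $M=\mathcal{D}_{X}/\mathcal{D}_{X}\partial_{x}^{2}$ the order filtration gives $\mathrm{gr}^{F}M\cong\mathbb{C}[x,\xi]/(\xi^{2})$. Thus $\mathrm{Ch}(M)$ is again the zero section $\{\xi=0\}$, but the local ring at its generic point now has length $2$, and $$CC(M)=2\,[T^{\ast}_{X}X].$$ The multiplicities therefore record information that the characteristic variety alone forgets.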
Let $M$ be a coherent $\mathcal{D}_{X}$-module. Following the notation of [@ABV] (Definition 1.30 [@ABV] and Proposition 19.12 [@ABV]) for each irreducible subvariety $C$ of $T^{\ast}X$ we denote $$\begin{aligned}
\label{eq:microlocalmultiplicity}
\chi_{C}^{mic}(M):=\left\{
\begin{array}{cl}
m_{C}(\mathrm{Ch}(M))& \text{if }C\in I(\mathrm{Ch}(M)),\\
0 &\text{otherwise}.
\end{array}
\right.\end{aligned}$$ and call $\chi_{C}^{mic}(M)$ the microlocal multiplicity along $C$.
\[prop:additive\] Let $$0\rightarrow M\rightarrow N\rightarrow L\rightarrow 0$$ be an exact sequence of coherent $\mathcal{D}_{X}$-modules. Then for any irreducible subvariety $C$ of $T^{\ast}X$ such that $C\in I(\mathrm{Ch}(N))$ we have $$\chi_{C}^{mic}(N)=\chi_{C}^{mic}(M)+\chi_{C}^{mic}(L).$$ In particular, for $d=\mathrm{dim}(\mathrm{Ch}(N))$ we have $$CC_{d}(N)=CC_{d}(M)+CC_{d}(L).$$
From the previous result, for each irreducible subvariety $C$ of $T^{\ast}X$ the microlocal multiplicity along $C$ defines an additive function with respect to short exact sequences: $$\chi_{C}^{mic}:\mathcal{D}(X,H)\rightarrow \mathbb{Z}.$$ Therefore, the microlocal multiplicities define $\mathbb{Z}$-linear functionals: $$\chi_{C}^{mic}:K(X,H)\rightarrow \mathbb{Z}.$$
A coherent $\mathcal{D}_{X}$-module $M$ is called a holonomic $\mathcal{D}_{X}$-module (or a holonomic system) if it satisfies $\mathrm{dim}(\mathrm{Ch}(M))=\mathrm{dim}(X).$ From Theorem (11.6.1) [@Hotta] every $H$-equivariant coherent $\mathcal{D}_{X}$-module is holonomic. Furthermore, for any $H$-equivariant coherent $\mathcal{D}_{X}$-module $M$, the stratification of $X$ defined by the $H$-orbits induces a stratification of $\mathrm{Ch}(M)$ by the closures of the conormal bundles to the $H$-orbits, and a more explicit description of the characteristic cycles is therefore possible. More precisely, denote at each point $x\in X$ the differential of the $H$-action by $$\begin{aligned}
\label{eq:actionmap}
\mathcal{A}_{x}:\mathfrak{h}\rightarrow T_{x}X.\end{aligned}$$ Regarding $\mathfrak{h}\times X$ as a trivial bundle over $X$, we get a bundle map $$\begin{aligned}
\label{eq:actionmap2}
\mathcal{A}:\mathfrak{h}\times X\rightarrow TX.\end{aligned}$$ We define the conormal bundle to the $H$-action as the annihilator of the image of $\mathcal{A}$: $$T_{H}^{\ast}X=\{(\lambda,x):\lambda\in T_{x}^{\ast}X,\lambda(\mathcal{A}_{x}(\mathfrak{h}))=0\}.$$ If $x$ belongs to the $H$-orbit $S$, then $$\mathcal{A}_{x}(\mathfrak{h})=T_{x}S$$ and the fiber of $T_{H}^{\ast}X$ at $x$ is the conormal bundle to $S$ at $x$: $$T_{H,x}^{\ast}X=T_{S,x}^{\ast}X.$$ Therefore $$T_{H}^{\ast}X=\bigcup_{H-\text{orbits }S~\mathrm{in}~X}T_{S}^{\ast}X.$$ We have the following result.
\[theo:defcc\] Let $\mathcal{M}$ be a $H$-equivariant coherent $\mathcal{D}_{X}$-module. We have:
i. The characteristic variety of $\mathcal{M}$ is contained in the conormal bundle to the $H$-action $T_{H}^{\ast}X$.
ii. The irreducible components of $\mathrm{Ch}(\mathcal{M})$ are closures of conormal bundles of $H$-orbits on $X$. Consequently $$CC(\mathcal{M})=\sum_{{H-\mathrm{orbits }~S~\mathrm{in}~X}}\chi_{S}^{mic}(\mathcal{M})[\overline{T^{\ast}_{S}X}],$$ where $\chi_{S}^{mic}(\mathcal{M})$ denotes the microlocal multiplicity along $\overline{T^{\ast}_{S}X}$ (i.e. $\chi_{S}^{mic}(\mathcal{M})=\chi_{\overline{T^{\ast}_{S}X}}^{mic}(\mathcal{M})$).
iii. The support of $\mathcal{M}$ is given by $$\mathrm{Supp}(\mathcal{M})=\bigcup_{\chi_{S}^{mic}(\mathcal{M})\neq 0} \overline{S}.$$
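For instance (continuing the $SL(2,\mathbb{C})$ illustration above, with $H\cong\mathbb{C}^{\times}$ a maximal torus acting on $X=\mathbb{P}^{1}$, whose orbits are two fixed points, say $0$ and $\infty$ in a suitable coordinate, and the open orbit $O$): the conormal bundle to the open orbit has closure equal to the zero section, so the characteristic cycle of any $H$-equivariant coherent $\mathcal{D}_{X}$-module $\mathcal{M}$ has the shape $$CC(\mathcal{M})=m_{O}\,[T^{\ast}_{\mathbb{P}^{1}}\mathbb{P}^{1}]+m_{0}\,[T^{\ast}_{\{0\}}\mathbb{P}^{1}]+m_{\infty}\,[T^{\ast}_{\{\infty\}}\mathbb{P}^{1}]$$ for some non-negative integers $m_{O},m_{0},m_{\infty}$.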
\[deftn:ccycleperverse\] Let $P$ be an $H$-equivariant perverse sheaf on $X$ and write $M$ for the corresponding $\mathcal{D}_{X}$-module under the Riemann-Hilbert correspondence. We define the characteristic cycle of $P$ as $$CC(P):=CC(M).$$
More generally, let $\mathcal{F}\in D_{c}^{b}(X,H)$ and write $\mathcal{M}$ for the corresponding element of ${D}^{b}(\mathcal{D}_X,H)$ under the Riemann-Hilbert correspondence. We define the characteristic variety of $\mathcal{F}$ as $$\mathrm{Ch}(\mathcal{F}):=\bigcup_i \mathrm{Ch}(H^{i}\mathcal{M}),$$ and the characteristic cycle of $\mathcal{F}$ to be the formal sum $$CC(\mathcal{F}):=\sum_{C\in I(\mathrm{Ch}(\mathcal{F}))}m_{C}[C],$$ where $I(\mathrm{Ch}(\mathcal{F}))$ denotes the set of irreducible components of $\mathrm{Ch}(\mathcal{F})$ and, for each $C\in I(\mathrm{Ch}(\mathcal{F}))$, $m_{C}$ is an integer defined as in Equation (2.5) [@SV].
Now, if we let $\mathscr{L}(X)$ denote the set of formal linear combinations with $\mathbb{Z}$-coefficients of irreducible **analytic Lagrangian conic subvarieties** in the cotangent bundle $T^{\ast}X$ of the variety $X$, then $CC(\mathcal{F})\in \mathscr{L}(X)$. In particular, if we define $\mathscr{L}(X,H)$ to be the set of formal sums $$\mathscr{L}(X,H)=\left\{\sum_{{H-\mathrm{orbits }~S~\mathrm{in}~X}}m_S[\overline{T^{\ast}_{S}X}]:m_S\in\mathbb{Z}\right\},$$ then taking characteristic cycles defines a map $$CC:D_{c}^{b}(X,H)\rightarrow \mathscr{L}(X,H)$$ and from the remark following Proposition (\[prop:additive\]) we obtain a $\mathbb{Z}$-linear map $$CC:K(X,H)\rightarrow \mathscr{L}(X,H).$$
Finally, for each formal sum $C=\sum_{i}m_{C_{i}}[C_{i}]\in \mathscr{L}(X)$ we define the support $|C|$ of $C$ as $$|C|=\overline{\bigcup_{m_{C_{i}}\neq 0}C_{i}}.$$ Notice that, by definition, for every $\mathcal{F}\in \mathcal{P}(X,H)$, $|CC(\mathcal{F})|=\mathrm{Ch}(\mathcal{F})$.
Geometric induction
===================
In this section we recall Bernstein’s induction functor and use it to define a geometric analogue of the induction functor in representation theory. Once the geometric induction functor has been introduced, we explain how it affects characteristic cycles. By doing this, we are able to describe the characteristic cycle associated to an induced representation (through the Beilinson-Bernstein correspondence) in terms of the characteristic cycle of the representation being induced. As we will see in Section 7.3, this makes it possible, in some particular cases, to reduce the computation of the micro-packets associated to a group to the computation of the micro-packets associated to a Levi subgroup.\
Suppose $G$ is a connected reductive complex algebraic group with Lie algebra $\mathfrak{g}$, and let $\sigma$ be a real form of $G$. As in Equation (\[eq:complexification\]), denote by $K$ the group of fixed points of the Cartan involution associated to $\sigma$. Define $$\begin{aligned}
\mathcal{M}(\mathfrak{g},K) \text{ to be the category of }(\mathfrak{g}, K)\text{-modules of }G. \end{aligned}$$ We have an equivalence of categories between $\mathcal{M}(\mathfrak{g},K)$ and the category of (infinitesimal equivalence classes of) admissible representations of $G(\mathbb{R},\sigma)$. Following this equivalence, in this article we shall blur the distinction between these two categories by referring to their objects indiscriminately as representations of $G(\mathbb{R},\sigma)$.
We begin by recalling how the Beilinson-Bernstein correspondence relates the categories introduced in the previous section with the category of $(\mathfrak{g}, K)$-modules of $G$. We follow Chapter 8 [@ABV]. Let $X_{G}$ be the flag variety of $G$, and as in the previous section write $\mathcal{D}_{X_{G}}$ for the sheaf of algebraic differential operators on $X_{G}$. We define $$D_{X_{G}}=\Gamma \mathcal{D}_{X_{G}}$$ to be the algebra of global sections of $\mathcal{D}_{X_{G}}$. We recall that every element of $\mathfrak{g}$ defines a global vector field on $X_{G}$ and that this identification extends to an algebra homomorphism $$\begin{aligned}
\label{eq:operatorrepresentation}
\psi_{X_{G}}:U(\mathfrak{g})\longrightarrow D_{X_{G}}\end{aligned}$$ called the **operator representation** of $U(\mathfrak{g})$. The kernel of $\psi_{X_{G}}$ is a two-sided ideal denoted by $$\begin{aligned}
\label{eq:keroperatorrepresentation}
I_{X_{G}}=\text{ker}\psi_{X_{G}}.\end{aligned}$$ Now, if $\mathcal{M}$ is any sheaf of $\mathcal{D}_{X_{G}}$-modules, then the vector space $M=\Gamma \mathcal{M}$ obtained by taking global sections is in a natural way a $D_{X_{G}}$-module and therefore, via $\psi_{X_{G}}$, a module for $U(\mathfrak{g})/I_{X_{G}}$. The functor sending the $\mathcal{D}_{X_{G}}$-module $\mathcal{M}$ to the $U(\mathfrak{g})/I_{X_{G}}$-module $M$ is called the **global sections functor**. In the other direction, if $M$ is any module for $U(\mathfrak{g})/I_{X_{G}}$ then we may form the tensor product $$\mathcal{M}=\mathcal{D}_{X_{G}}\otimes_{\psi_{X_{G}}(U(\mathfrak{g})/I_{X_{G}})} M.$$ This is a sheaf of $\mathcal{D}_{X_{G}}$-modules on $X_{G}$. The functor sending $M$ to $\mathcal{M}$ is called **localization**.
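To make the operator representation concrete, consider the standard example $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{C})$, $X_{G}=\mathbb{P}^{1}$ (a sketch, using one common sign convention; it is not needed in the sequel). In the affine coordinate $z$ one may take $$\psi_{X_{G}}(e)=\partial_{z},\qquad \psi_{X_{G}}(f)=-z^{2}\partial_{z},\qquad \psi_{X_{G}}(h)=-2z\partial_{z},$$ and a direct computation shows that the Casimir element $\Omega=\tfrac{1}{2}h^{2}+ef+fe$ satisfies $\psi_{X_{G}}(\Omega)=0$; in this case $I_{X_{G}}$ is the two-sided ideal of $U(\mathfrak{g})$ generated by $\Omega$.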
\[theo:bbcorespondance\] We have:
i. The operator representation $\psi_{X_{G}}:U(\mathfrak{g})\longrightarrow D_{X_{G}}$ is surjective.
ii. The global sections and localization functors provide an equivalence of categories between quasicoherent sheaves of $\mathcal{D}_{X_{G}}$-modules on $X_G$ and modules for $U(\mathfrak{g})/I_{X_{G}}$.
iii. Let $$\begin{aligned}
\mathcal{M}(\mathfrak{g},K,I_{X_{G}})\text{ be the category of }(\mathfrak{g}, K)\text{-modules of }G\text{ annihilated by }I_{X_G}. \end{aligned}$$ Then the global sections functor and localization functor provide an equivalence of categories between: $$\mathcal{D}(X_G,K)\quad\text{ and }\quad \mathcal{M}(\mathfrak{g},K,I_{X_{G}}).$$
Suppose $Q$ is a parabolic subgroup of $G$ with Levi decomposition $Q=LN$, and such that $L$ is stable under $\sigma$. Consider the fibration $X_G\rightarrow G/Q$. Its fiber over $Q$ can be identified with the flag variety $X_L$ of $L$. We denote the inclusion of that fiber in $X_G$ by $$\begin{aligned}
\label{eq:mapvarieties}
\iota:X_L&\longrightarrow X_G.\end{aligned}$$ Finally, denote $K_{L}=K\cap L$ and let $\mathfrak{l}$ be the Lie algebra of $L$. We define $$\begin{aligned}
\label{eq:cohomologicalinduction}
\mathscr{R}_{(\mathfrak{l},K_{L})}^{(\mathfrak{g},K)}(\cdot):\mathcal{M}(\mathfrak{l},K_L)\rightarrow \mathcal{M}(\mathfrak{g},K) \end{aligned}$$ to be the cohomological induction functor (see (5.3a)-(5.3b) [@Knapp-Vogan] and (11.54a)-(11.54b) [@Knapp-Vogan]). Since we are identifying representations with the underlying $(\mathfrak{g},K)$-module, following Proposition 11.57 [@Knapp-Vogan], we are not going to make any distinction between parabolic and cohomological induction, and use the functor in (\[eq:cohomologicalinduction\]) to express both types of induction.
We now begin with the description of the geometric induction functor. The objective is to define a functor $$I_{L}^{G}:D_{c}^{b}(X_L,K_{L})\rightarrow D_{c}^{b}(X_G,K),$$ that makes the following diagram commutative $$\begin{aligned}
\label{eq:cdiagramme1}
\xymatrix{
K\mathcal{M}(\mathfrak{g},K,I_{X_{G}}) \ar[r] & K(X_G,K)\\
K\mathcal{M}(\mathfrak{l},K_{L},I_{X_{L}}) \ar[u]^{\mathscr{R}_{(\mathfrak{l},K_{L})}^{(\mathfrak{g},K)}}\ar[r] & K(X_L,K_{L})\ar[u]^{I_{L}^{G}}, }.\end{aligned}$$ Here the horizontal arrows are given by Theorem \[theo:bbcorespondance\]. The construction of $I_{L}^{G}$ is based on Bernstein’s geometric functor.
\[deftn:geominduction0\] Suppose $Y$ is a smooth complex algebraic variety on which the algebraic group $G$ acts with finitely many orbits. For any subgroup $H$ of $G$ we define the Bernstein induction functor $$\Gamma_{H}^{G}:D_{c}^{b}(Y,H)\rightarrow D_{c}^{b}(Y,G)$$ as the right adjoint of the forgetful functor from $D_{c}^{b}(Y,G)$ to $D_{c}^{b}(Y,H)$ (see [@Bi] for the proof of its existence). More precisely, consider the diagram $$\begin{aligned}
\label{eq:diagraminduction1}
\xymatrix{
G\times Y \ar[r]^\mu \ar[d]_p & G\times_{H} Y \ar[d]^a \\
Y & Y
}\end{aligned}$$ given by $$\xymatrix{
(g,y) \ar[r] \ar[d] & \overline{(g,y)} \ar[d] \\
y & g\cdot y
}$$ where $G\times_{H} Y$ is the quotient of $G\times Y$ by the $H$-action $h\cdot(g,y)=(gh^{-1},hy)$. From Theorem (A.2) (iii) [@ViMir], for $\mathcal{F}\in D_{c}^{b}(Y,H)$ there is a unique $\widetilde{\mathcal{F}}\in D_{c}^{b}(G\times_{H}Y,G)$ such that $p^{\ast}\mathcal{F}=\mu^{\ast}\widetilde{\mathcal{F}}$. Bernstein’s induction functor is defined as $$\begin{aligned}
\Gamma_{H}^{G}\mathcal{F}=Ra_{\ast}\widetilde{\mathcal{F}},\quad \mathcal{F}\in D_{c}^{b}(Y,H),\end{aligned}$$ where $R{a_{\ast}}:D_{c}^{b}(G\times_{H}Y,G)\rightarrow D_{c}^{b}(Y,G)$ is the right derived functor of the direct image functor defined by $a$. Equivalently, one can also define $\Gamma_{H}^{G}:D_{c}^{b}(Y,H)\rightarrow D_{c}^{b}(Y,G)$ via the diagram $$\begin{aligned}
\label{eq:diagraminduction2}
\xymatrix{
G\times Y \ar[r]^\nu \ar[d]_b & G/H\times Y \ar[d]^p \\
Y & Y
}\end{aligned}$$ given by $$\xymatrix{
(g,y) \ar[r] \ar[d] & {(gH,y)} \ar[d] \\
g^{-1}\cdot y & y
}$$ Then for $\mathcal{F}\in D_{c}^{b}(Y,H)$ we have $$\begin{aligned}
\Gamma_{H}^{G}\mathcal{F}=Rp_{\ast}{\mathcal{F}'},\end{aligned}$$ where $\mathcal{F}'$ is the unique element in $D_{c}^{b}(G/H\times Y,G)$ such that $b^{\ast}\mathcal{F}=\nu^{\ast}{\mathcal{F}'}$ (Theorem (A.2) (iii) [@ViMir]) and $R{p_{\ast}}:D_{c}^{b}(G/{H}\times Y,G)\rightarrow D_{c}^{b}(Y,G)$ is the right derived functor of the direct image functor defined by $p$.
The next lemma relates the characteristic variety of $\mathcal{F}$ with the characteristic variety of $\Gamma_{H}^{G}(\mathcal{F})$.
\[lem:contention\] In the setting of Definition \[deftn:geominduction0\], let $\mathcal{F}\in D_{c}^{b}(Y,H)$. Then $$\begin{aligned}
\label{eq:contention}
G\cdot \mathrm{Ch}(\mathcal{F}) \subset \mathrm{Ch}(\Gamma_{H}^{G}\mathcal{F})
\subset \overline{G\cdot \mathrm{Ch}(\mathcal{F})} \end{aligned}$$
The right inclusion in (\[eq:contention\]) is Lemma (1.2) [@ViMir]. For the left inclusion we consider the definition of $\Gamma_{H}^{G}$ via Diagram (\[eq:diagraminduction2\]). Let $\mathcal{F}'\in D_{c}^{b}(G/H\times Y,G)$ be such that $b^{\ast}\mathcal{F}=\nu^{\ast}{\mathcal{F}'}$. We use the description of Ch$(Rp_{\ast}\mathcal{F}')$ given in Proposition B2 [@ViMir]. Let $\overline{G/H}$ be a smooth compactification of $G/H$. Then $p:G/H\times Y\rightarrow Y$ factors as $p:G/H \times Y\xrightarrow{i} \overline{G/H}\times Y\xrightarrow{q} Y$, and as explained in the proof of Lemma B2 [@ViMir] we have $$\mathrm{Ch}(Rp_{\ast}\mathcal{F}')=pr(\mathrm{Ch}(R{i}_{\ast}\mathcal{F}')),$$ where $pr$ denote the projection $pr:T^{\ast}(G/H\times Y)\rightarrow T^{\ast}Y$. Since $i:G/H \times Y\rightarrow \overline{G/H}\times Y$ is an open embedding $$\mathrm{Ch}(\mathcal{F}')\subset \mathrm{Ch}(R{i}_{\ast}\mathcal{F}').$$ Therefore $$pr( \mathrm{Ch}(\mathcal{F}'))\subset \mathrm{Ch}(Rp_{\ast}\mathcal{F}')=\mathrm{Ch}(\Gamma_{H}^{G}\mathcal{F}).$$ Finally, by Proposition B1 [@ViMir], $pr( \mathrm{Ch}(\mathcal{F}'))=pr( \mathrm{Ch}(b^{\ast}\mathcal{F}))$ and from the proof of Lemma (1.2) [@ViMir] we obtain $pr( \mathrm{Ch}(b^{\ast}\mathcal{F}))=
G\cdot \mathrm{Ch}(\mathcal{F})$. Equation (\[eq:contention\]) follows.
\[deftn:geominduction\] Let $$\iota:X_L\longrightarrow X_G$$ be the inclusion defined in Equation (\[eq:mapvarieties\]) and write $R{\iota_{\ast}}:D_{c}^{b}(X_L,K_L)\rightarrow D_{c}^{b}(X_G,K_L)$ for the right derived functor of the direct image functor defined by $\iota$. The geometric induction functor $I_{L}^{G}:D_{c}^{b}(X_L,K_L)\rightarrow D_{c}^{b}(X_G,K)$ is defined as $$\begin{aligned}
\label{eq:induction}
I_{L}^{G}(\mathcal{F})=\Gamma_{K_L}^{K}(R\iota_{\ast}\mathcal{F})=Ra_{\ast}(\widetilde{R\iota_{\ast}\mathcal{F}}),\quad \mathcal{F}\in D_{c}^{b}(X_L,K_L),\end{aligned}$$ where $\widetilde{R\iota_{\ast}\mathcal{F}}$ is the unique element in $D_{c}^{b}(K\times_{K_L}X_G,K)$ satisfying $p^{\ast}(R{\iota}_{\ast}\mathcal{F})=\mu^{\ast}(\widetilde{R\iota_{\ast}\mathcal{F}})$. Equivalently, the geometric induction functor can also be defined as $$\begin{aligned}
I_{L}^{G}(\mathcal{F})=\Gamma_{K_L}^{K}(R\iota_{\ast}\mathcal{F})=Rp_{\ast}(({R\iota_{\ast}\mathcal{F}})'),\quad \mathcal{F}\in D_{c}^{b}(X_L,K_L),\end{aligned}$$ where $({R\iota_{\ast}\mathcal{F}})'$ is the unique element in $D_{c}^{b}(K/{K_L}\times X_G,K)$ satisfying $b^{\ast}(R{\iota}_{\ast}\mathcal{F})=\nu^{\ast}(({R\iota_{\ast}\mathcal{F}})')$.
The objective of this section is to explain how $I_{L}^{G}$ affects characteristic cycles. We do this by giving a formula for $CC(I_{L}^{G}(\mathcal{F}))$ in terms of the characteristic cycle of $\mathcal{F}$. From Definition \[deftn:geominduction\] it is clear that in order to compute $CC(I_{L}^{G}(\mathcal{F}))$ it will be necessary first to describe the behaviour of characteristic cycles under taking the direct images $R\iota_{\ast}\text{ and }Ra_{\ast},$ and second to reduce the characterization of $CC(\widetilde{R\iota_{\ast}\mathcal{F}})$ to that of $CC({\mathcal{F}})$. The second step will be done in Proposition \[prop:reduction2\] and Corollary \[cor:reductionstep2\] below. Another option to compute $CC(I_{L}^{G}(\mathcal{F}))$ is to make use of $Rp_{\ast}$ instead of $Ra_{\ast}$, and to reduce the characterization of $CC(({R\iota_{\ast}\mathcal{F}})')$ to that of $CC({\mathcal{F}})$, but since Proposition 7.14 [@ABV] gives us a description of $CC(\widetilde{R\iota_{\ast}\mathcal{F}})$, in this article we work principally with the definition of $I_{L}^{G}$ via Diagram (\[eq:diagraminduction1\]). To deal with the direct images $R\iota_{\ast}$ and $Ra_{\ast}$, we describe the pushforward of cycles $$\iota_{\ast}:\mathscr{L}(X_{L},K_L)\rightarrow \mathscr{L}(X_{G},K_L)\quad\text{ and }\quad a_{\ast}:\mathscr{L}(K\times_{K_L}X_{G},K)\rightarrow \mathscr{L}(X_{G},K)$$ that make the diagrams $$\begin{aligned}
\label{eq:diagramcycle1}
\xymatrix{
{D}_{c}^{b}(X_{G},K_L)\ar[r]^{CC} & \mathscr{L}(X_{G},K_L)\\
{D}_{c}^{b}(X_{L},K_L)\ar[u]^{R\iota_{\ast}}\ar[r]^{CC} &
\mathscr{L}(X_{L},K_L)\ar[u]^{\iota_{\ast}}
}
~\quad\text{ and }
\xymatrix{
{D}_{c}^{b}(X_{G},K)\ar[r]^{CC} & \mathscr{L}(X_{G},K)\\
{D}_{c}^{b}(K\times_{K_L}X_{G},K)\ar[u]^{Ra_{\ast}}\ar[r]^{CC} & \mathscr{L}(K\times_{K_L}X_{G},K)\ar[u]^{a_{\ast}}
}\end{aligned}$$ commutative. This will be done in a more general context than that of the maps $\iota:X_L\rightarrow X_G$ and $a:K\times_{K_L}X_{G}\rightarrow X_{G}$. Consider a morphism $F:X\rightarrow Y$ between two smooth algebraic varieties. We work initially in the derived categories and restrict our attention to the equivariant subcategories when working with the equivariant maps $a$ and $\iota$. The definition of $F_{\ast}:\mathscr{L}(X)\rightarrow \mathscr{L}(Y)$ and the proof of the commutativity of the diagram $$\begin{aligned}
\xymatrix{
D_{c}^{b}(Y)\ar[r]^{CC} & \mathscr{L}(Y)\\
D_{c}^{b}(X)\ar[u]^{RF_{\ast}}\ar[r]^{CC} & \mathscr{L}(X)\ar[u]^{F_{\ast}}
}\end{aligned}$$ is due principally to the work of Kashiwara-Schapira [@Kashiwara-Schapira] and Schmid-Vilonen [@SV]. We give a short review of their work. We begin by noticing that Schmid-Vilonen work in the derived category of sheaves having cohomology sheaves constructible with respect to a ***semi-algebraic*** stratification. We then consider the derived categories introduced in Section 3 as subcategories of this larger category, and restrict their results to our framework when working with an algebraic map $F:X\rightarrow Y$. By abuse of notation we write, as in Definition \[deftn:ccycleperverse\], $\mathscr{L}(X)$, respectively $\mathscr{L}(Y)$, for the set of ***semi-algebraic Lagrangian cycles*** in $T^{\ast}X$, respectively $T^{\ast}Y$. The definition of $F_{\ast}$ for an arbitrary algebraic map reduces to the case of proper maps and open embeddings. We begin by describing $F_{\ast}$ in the case that $F$ is a proper map. Consider the diagram $$\begin{aligned}
\label{eq:dF}
T^{\ast}X\xleftarrow{dF} X\times_{Y}T^{\ast} Y\xrightarrow{\tau} T^{\ast}Y, \end{aligned}$$ where $\tau:X\times_{Y}T^{\ast} Y\rightarrow T^{\ast}Y$ is the projection on the second coordinate and for all $(x,(F(x),\lambda))\in X\times_Y T^{\ast}Y$ we have $dF(x,(F(x),\lambda))=(x,\lambda\circ dF_{x})$. The assumption of $F$ being proper implies that $\tau$ is proper. Hence we can as in Section 1.4 [@fulton] define a pushforward of cycles $\tau_{\ast}:\mathscr{L}(X\times_{Y}T^{\ast}Y)\rightarrow\mathscr{L}(T^{\ast}Y)$. Moreover, intersection theory (see Section 6.1, 6.2 and 8.1 [@fulton]) allows us to construct a pullback of cycles $dF^{\ast}:\mathscr{L}(T^{\ast}X)\rightarrow\mathscr{L}(X\times_{Y}T^{\ast}Y)$. The map $F_{\ast}:\mathscr{L}(X)\rightarrow \mathscr{L}(Y)$ is then defined as the composition of these two functions (see Equation 2.16 [@SV]) $$\begin{aligned}
\label{eq:Gysin}
F_{\ast}:=\tau_{\ast}\circ dF^{\ast}.\end{aligned}$$ The following result due to Kashiwara-Schapira relates $F_{\ast}$ to the right derived functor $RF_{\ast}:D_{c}^{b}(X)\rightarrow D_{c}^{b}(Y)$ and proves the commutativity of (\[eq:diagramcycle1\]) in the case of proper maps.
\[prop:KS\] Let $F:X\rightarrow Y$ be a proper map. Then for all $\mathcal{F}\in D_{c}^{b}(X)$ $$CC(RF_{\ast}\mathcal{F})=F_{\ast}CC(\mathcal{F}).$$
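As a sanity check on this statement, consider its simplest instance (a standard special case, recorded here only as an illustration and not as a result taken from [@Kashiwara-Schapira]): take $Y$ to be a point, so that $F$ is proper exactly when $X$ is compact. Then $T^{\ast}Y$ is a single point, every Lagrangian cycle in $T^{\ast}Y$ is an integer multiple of $[T^{\ast}Y]$, and since $RF_{\ast}\mathcal{F}=R\Gamma(X,\mathcal{F})$ is a complex of vector spaces, its characteristic cycle is $$CC(R\Gamma(X,\mathcal{F}))=\chi(X,\mathcal{F})\,[T^{\ast}Y],\qquad \chi(X,\mathcal{F})=\sum_{i}(-1)^{i}\dim H^{i}(X,\mathcal{F}).$$ The statement above thus asserts that the pushforward of cycles $F_{\ast}CC(\mathcal{F})$ computes the global Euler characteristic of $\mathcal{F}$.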
As explained at the end of Section 3 [@SV], we can give a more explicit description of $F_{\ast}CC(\mathcal{F})$ by choosing a transverse family of cycles with limit equal to $CC(\mathcal{F})$. More precisely, suppose $C\in \mathscr{L}(X)$ is transverse to the map $dF: X\times_{Y} T^{\ast}Y \rightarrow T^{\ast}X$. Then the geometric inverse image $dF^{-1}(C)$ of $C$ is well-defined as a cycle in $X\times_{Y} T^{\ast}Y$ and we have $$dF^{\ast}(C)=dF^{-1}(C).$$ Consequently, $\tau_{\ast}dF^{-1}(C)$ is a well-defined cycle in $\mathscr{L}(Y)$. Now, by Lemma 3.26 [@SV] we can choose for every cycle $C_{0}\in \mathscr{L}(X)$ a family $\{C_{s}\}_{s\in (0,b)}\subset \mathscr{L}(X)$ such that the map $dF: X\times_{Y} T^{\ast}Y \rightarrow T^{\ast}X$ is transverse to $\mathrm{supp}(C_{s})\subset T^{\ast}X$ for every $s\in (0,b)$, and $$C_0=\lim_{s\rightarrow 0}C_{s}.$$ For more details about the construction of this family of cycles, see Equations (3.10) and (3.11) [@SV]. Equations (3.12-3.16) [@SV] provide the notion of limit of a family of cycles. Schmid and Vilonen prove:
\[prop:familylimit\] Suppose $F:X\rightarrow Y$ is proper. Let $C_{0}\in \mathscr{L}(X)$. Choose a family $\{C_{s}\}_{s\in (0,b)}\subset \mathscr{L}(X)$ with limit $C_{0}$ and such that $dF: X\times_{Y} T^{\ast}Y \rightarrow T^{\ast}X$ is transverse to the support $|C_{s}|\subset T^{\ast}X$ for every $s\in (0,b)$. Then $$\begin{aligned}
\label{eq:familylimit}
F_{\ast}(C_{0})=\lim_{s\rightarrow 0}\tau_{\ast}dF^{-1}(C_{s}).\end{aligned}$$
Having described $F_{\ast}$ when $F$ is proper, we now explain how to define $F_{\ast}:\mathscr{L}(X)\rightarrow \mathscr{L}(Y)$ in the case when $F:X\rightarrow Y$ is an open embedding. We follow Chapter 4 [@SV]. We start by choosing a real valued, semialgebraic $C^{1}$-function $f:Y\rightarrow \mathbb{R}$ such that:
1. the boundary $\partial X$ is the zero set of $f$,
2. $f$ is positive on $X$.
For more details on the existence of this map, see Equation (4.1) [@SV] and Proposition I.4.5 [@Shiota].
Suppose $C\in \mathscr{L}(X)$, and for each $s>0$ define $C + sd\log f$ as the cycle of $X$ equal to the image of $C$ under the automorphism of $T^{\ast}X$ defined by $$\begin{aligned}
\label{eq:open1}
(x,\xi)\mapsto \left(x,\xi+s\frac{df_{x}}{f(x)}\right).\end{aligned}$$ Theorem 4.2 [@SV] relates the limit of the family of cycles $\{C+s d\log f\}_{s>0}$ (see page 468 [@SV] for the proof of why this family defines a family of cycles) to the direct image $RF_{\ast}: D_c^{b}(X)\rightarrow D_c^{b}(Y)$. We state it here as:
\[prop:openembeding\] Suppose $F:X\rightarrow Y$ is an open embedding. Then for every $\mathcal{F}\in D_{c}^{b}(X)$ we have $$CC(RF_{\ast}\mathcal{F})=\lim_{s\rightarrow 0} CC(\mathcal{F})+s d\log f.$$
We notice that, while the family of cycles $\{CC(\mathcal{F})+s d\log f\}_{s>0}$ does not necessarily live in the set of characteristic cycles for the derived category of sheaves whose cohomology is constructible with respect to an algebraic stratification, the limit does.
Following Proposition \[prop:openembeding\] we define for each $\mathcal{F}\in D_{c}^{b}(X)$ $$\begin{aligned}
\label{eq:openpush}
F_{\ast}CC(\mathcal{F}):=\lim_{s\rightarrow 0} CC(\mathcal{F})+s d\log f.\end{aligned}$$
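To make this construction concrete, here is a minimal illustration (our own example, introduced only for orientation; it is not carried out in [@SV] in this form). Take $Y=\mathbb{C}$, $X=\mathbb{C}^{\times}$ and $f(x)=|x|^{2}$, which is semialgebraic, $C^{1}$, positive on $X$ and has zero set exactly $\partial X=\{0\}$. Writing $x=a+ib$, $$d\log f=\frac{df_{x}}{f(x)}=\frac{2a\,da+2b\,db}{a^{2}+b^{2}},$$ so the automorphism (\[eq:open1\]) translates every covector over $x$ by a shift that blows up as $x$ approaches the boundary point $0$; the limit as $s\rightarrow 0$ of the translated cycles is, by definition (\[eq:openpush\]), the pushforward $F_{\ast}CC(\mathcal{F})$.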
Finally, to treat the case of an arbitrary algebraic map $F:X\rightarrow Y$ we follow Chapter 6 [@SV]. We embed $X$ as an open subset of a compact algebraic manifold $\overline{X}$, and we factor $F$ into a product of three mappings: the closed embedding $$\begin{aligned}
i:X &\rightarrow X\times Y\\
x&\mapsto(x,F(x))\end{aligned}$$ which is a simple case of a proper direct image, the open inclusion $$j : X\times Y \rightarrow \overline{X} \times Y ,$$ and the projection $$\overline{p} : \overline{X}\times Y \rightarrow Y$$ which is also a proper map. Then we can factor the derived functor $RF_{\ast}:D_{c}^{b}(X)\rightarrow D_{c}^{b}(Y)$ into the product $$RF_{\ast}=R\overline{p}_{\ast}\circ Rj_{\ast}\circ Ri_{\ast}.$$ From Theorem \[prop:KS\] and Theorem \[prop:openembeding\] for each $\mathcal{F}\in D_{c}^{b}(X)$ we have $$\begin{aligned}
\label{eq:generalpushforward}
CC(RF_{\ast}\mathcal{F})&=CC(R\overline{p}_{\ast}\circ Rj_{\ast}\circ Ri_{\ast}(\mathcal{F}))\\
&=\overline{p}_{\ast}CC(Rj_{\ast}\circ Ri_{\ast}(\mathcal{F}))\nonumber\\
&=(\overline{p}_{\ast}\circ j_{\ast})CC(Ri_{\ast}\mathcal{F})\nonumber\\
&=(\overline{p}_{\ast}\circ j_{\ast}\circ i_{\ast})CC(\mathcal{F}).\nonumber\end{aligned}$$ Consequently, we define $$\begin{aligned}
\label{eq:svfunctor}
F_{\ast}&:\mathscr{L}(X)\rightarrow \mathscr{L}(Y)\\
F_{\ast}&:=\overline{p}_{\ast}\circ j_{\ast}\circ i_{\ast}.\nonumber\end{aligned}$$ Let us return to our map $a:K\times_{K_L}X_G\rightarrow X_G$ of Definition \[deftn:geominduction0\]. Suppose $\mathcal{F}\in D_{c}^{b}(X_{L},K_L)$ and consider the sheaf $I_{L}^{G}\mathcal{F}=Ra_{\ast}(\widetilde{R\iota_{\ast}\mathcal{F}})$ of Definition \[deftn:geominduction\]. Our objective is to give a more explicit description of $CC(I_{L}^{G}\mathcal{F})$ by reducing its computation to the one of the cycle $CC(\mathcal{F})$. From (\[eq:generalpushforward\]) we can write $$\begin{aligned}
\label{eq:reduction}
CC(I_{L}^{G}\mathcal{F})=CC(Ra_{\ast}(\widetilde{R\iota_{\ast}\mathcal{F}}))=a_{\ast}CC
(\widetilde{R\iota_{\ast}\mathcal{F}}).\end{aligned}$$ Thus, to be able to compute $CC(I_{L}^{G}\mathcal{F})$ the first step is to relate the characteristic cycle of $\widetilde{R\iota_{\ast}\mathcal{F}}$ to the characteristic cycle of $\mathcal{F}$. This is done in the two following results. The first is a reformulation of Proposition 7.14 [@ABV], Proposition 20.2 [@ABV] and Lemma 1.4 [@ViMir].
\[prop:reduction2\] Suppose $X$ is a smooth complex algebraic variety on which an algebraic group $H$ acts with finitely many orbits. Suppose $G$ is an algebraic group containing $H$. Consider the bundle $$\begin{aligned}
Y=G\times_{H}X,\end{aligned}$$ on which the group G acts by $$g\cdot (g',x)=(gg',x).$$ Then:
1. The inclusion $$\begin{aligned}
i:X&\rightarrow Y\\
x& \mapsto \text{ equivalence class of }(e,x)\end{aligned}$$ induces a bijection from $H$-orbits on $X$ to $G$-orbits on $Y$. Furthermore, this bijection preserves the closure relations.
2. There are natural equivalences of categories, $$\begin{aligned}
D_{c}^{b}(X,H)\cong D_{c}^{b}(G\times_{H}X,G),\quad
\mathcal{P}(X,H)\cong \mathcal{P}(G\times_{H}X,G),\quad
\mathcal{D}(X,H)\cong \mathcal{D}(G\times_{H}X,G).\end{aligned}$$
3. Write $j:\mathfrak{h}\times X\rightarrow \mathfrak{g}\times X$ for the inclusion, and consider the bundle map $$j\times \mathcal{A}:\mathfrak{h}\times X\rightarrow \mathfrak{g}\times TX,$$ with $\mathcal{A}:\mathfrak{h}\times X\rightarrow TX$ defined as in (\[eq:actionmap2\]). Write $Q$ for the quotient bundle: the fiber at $x$ is $$Q_x=(\mathfrak{g}\times T_x X)/\{(X,\mathcal{A}_{x}(X)):X\in\mathfrak{h}\},$$ with $\mathcal{A}_{x}:\mathfrak{h}\rightarrow T_{x}X$ defined as in (\[eq:actionmap\]). Then the tangent bundle of $Y$ is naturally isomorphic to the bundle on $Y$ induced by $Q$: $$TY\cong G\times_{H}Q.$$
4. The action mapping $\mathcal{A}_{y}:\mathfrak{g}\rightarrow T_{y}Y$ (Equation (\[eq:actionmap\])) may be computed as follows. Fix a representative $(g,x)$ for the point $y$ of $G\times_{H}X$, and an element $Z\in\mathfrak{g}$. Then $$\mathcal{A}_{y}(Z)=\mathrm{~class~of~}(g,(\mathrm{Ad}(g^{-1})Z,(x,0))).$$ Here $(x,0)$ is the zero element of $T_x X$, so the term paired with $g$ on the right side represents a class in $Q_x$.
5. The conormal bundle to the $G$-action on the induced bundle $G\times_{H}X$ is naturally induced by the conormal bundle to the $H$-action on $X$: $$\begin{aligned}
T_{G}^{\ast}(G\times_{H}X)\cong G\times_{H}T_{H}^{\ast}X.\end{aligned}$$
6. Suppose $\mathcal{F}$ and $\widetilde{\mathcal{F}}$ correspond through any of the equivalences of categories of (2), above. Then $$CC(\widetilde{\mathcal{F}})=G\times_{H} CC({\mathcal{F}}).$$ In particular, the microlocal multiplicities (see Equation (\[eq:microlocalmultiplicity\]) and Theorem \[theo:defcc\]$(ii)$) are given by $$\chi_{G\times_{H}S}^{mic}(\widetilde{\mathcal{F}})=\chi_{S}^{mic}(\mathcal{F})$$ and $$CC(\widetilde{\mathcal{F}})=\sum_{H-\mathrm{orbits }~S~\mathrm{in}~X}\chi_{S}^{mic}(\mathcal{F})[\overline{T_{G\times_{H}S}^{\ast}(G\times_{H}X)}].$$
\[cor:reductionstep2\] Suppose $\mathcal{F}\in D_{c}^{b}(X_{L},K_{L})$. Then $$\begin{aligned}
\label{eq:reductionstep2}
CC(\widetilde{R\iota_{\ast}\mathcal{F}})=\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_{L}} \chi_{S_L}^{mic}(\mathcal{F})[\overline{T_{K\times_{K_L} S_L}^{\ast}(K\times_{K_L}X_{G}})].\end{aligned}$$ where $\widetilde{R\iota_{\ast}\mathcal{F}}$ is the unique element in $D_{c}^{b}(K\times_{K_L}X_G,K)$ satisfying $p^{\ast}(R{\iota}_{\ast}\mathcal{F})=\mu^{\ast}(\widetilde{R\iota_{\ast}\mathcal{F}})$.
Before giving the proof of the Corollary, notice that from the definition of $\widetilde{R\iota_{\ast}\mathcal{F}}$ and the proof of Proposition \[prop:reduction2\](2) (see pages 93-94 [@ABV]), $R\iota_{\ast}\mathcal{F}$ and $\widetilde{R\iota_{\ast}\mathcal{F}}$ correspond through the equivalence of categories of Proposition \[prop:reduction2\](2).
Suppose $\mathcal{F}\in D_{c}^{b}(X_{L},K_{L})$ and write $$CC(\mathcal{F})=\sum_{K_{L}-\mathrm{orbits }~S_L~\mathrm{in}~X_{L}} \chi_{S_L}^{mic}(\mathcal{F})[\overline{T_{S_L}^{\ast}X_{L}}]$$ for the corresponding characteristic cycle. From Proposition \[prop:reduction2\](6), the characteristic cycle of $\widetilde{R\iota_{\ast}\mathcal{F}}$ may be identified as a cycle in $T^{\ast}_{K}(K\times_{K_L}X_G)$ as $$\begin{aligned}
CC(\widetilde{R\iota_{\ast}\mathcal{F}})=K\times_{K_L} CC(R\iota_{\ast}\mathcal{F}).\end{aligned}$$ From Proposition 6.21 [@ABV] the inclusion $\iota:X_L \longrightarrow X_G$ is a closed immersion and in consequence proper. Consider the diagram $$\begin{aligned}
T^{\ast}X_{L}\xleftarrow{d\iota} X_{L}\times_{X_G}T^{\ast}X_G\xrightarrow{\tau} T^{\ast}X_{G}. \end{aligned}$$ By Proposition \[prop:KS\] we have $$\begin{aligned}
CC(R\iota_{\ast}\mathcal{F})&=\iota_{\ast}CC(\mathcal{F})\\
&=\tau_{\ast}\circ d\iota^{\ast}CC(\mathcal{F}),\end{aligned}$$ where $\tau_{\ast}$ and $d\iota^{\ast}$ are defined as in (\[eq:Gysin\]). Writing $CC(\mathcal{F})$ as the limit of a family of cycles transverse to $d\iota$, by Proposition \[prop:familylimit\] we conclude that $\iota_{\ast}CC(\mathcal{F})$ is the inverse image of $CC(\mathcal{F})$ under $$d\iota:T^{\ast}{X_G}|_{\iota(X_{L})}\rightarrow T^{\ast}X_{L}.$$ Since the inverse image of the conormal bundle of each $K_L$-orbit $S_{L}$ in $X_{L}$ is given by $$d\iota^{-1}(T_{S_{L}}^{\ast}X_{L})=T_{\iota(S_{L})}^{\ast}X_G$$ we obtain $$\begin{aligned}
CC(R\iota_{\ast}\mathcal{F})&=\iota_{\ast}CC(\mathcal{F})\\
&=\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{\iota(S_{L})}^{\ast}X_G}].\end{aligned}$$ Consequently $$\begin{aligned}
CC(\widetilde{R\iota_{\ast}\mathcal{F}})&=K\times_{K_L} \sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{\iota(S_{L})}^{\ast}X_G}]\\
&=\sum_{K_L-\mathrm{orbits }~S~\mathrm{in}~X_{G}} \chi_{S}^{mic}({R\iota_{\ast}\mathcal{F}})[\overline{T_{K\times_{K_L} S}^{\ast}(K\times_{K_L}X_G})]\\
&=\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\times_{K_L} \iota(S_{L})}^{\ast}(K\times_{K_L}X_G})].\end{aligned}$$
The remaining step in the characterization of $CC(I_{L}^{G}\mathcal{F})$ is to describe the effect of taking $a_{\ast}$ on $CC(\widetilde{R\iota_{\ast}\mathcal{F}}).$ This is done in the proof of the following proposition.
\[prop:ccLG\] Suppose $\mathcal{F}\in D_{c}^{b}(X_{L},K_{L})$. Then $$CC\left(I_{L}^{G}\mathcal{F}\right)=\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\cdot\iota(S_L)}^{\ast}X_G}]+
\sum_{K-orbits~S\text{~in~}\partial(K\cdot \iota(X_{L}))} \chi_{S}^{mic}(I_{L}^{G}\mathcal{F})[\overline{T_{S}^{\ast}X_G}].$$
Suppose $\mathcal{F}\in D_{c}^{b}(X_{L},K_{L})$ and write $$CC(\mathcal{F})=\sum_{K_{L}-\mathrm{orbits }~S_L~\mathrm{in}~X_{L}} \chi_{S_L}^{mic}(\mathcal{F})[\overline{T_{S_L}^{\ast}X_{L}}]$$ for the corresponding characteristic cycle. From Equation (\[eq:reduction\]) and Corollary \[cor:reductionstep2\]
we can write $$\begin{aligned}
CC\left(I_{L}^{G}\mathcal{F}\right)&=CC(Ra_{\ast}(\widetilde{R\iota_{\ast}\mathcal{F}}))\\
&=a_{\ast}(CC(\widetilde{R\iota_{\ast}\mathcal{F}}))\\
&=a_{\ast}\left( \sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\times_{K_L} \iota(S_{L})}^{\ast}(K\times_{K_L}X_G})]\right)\end{aligned}$$ We recall how $a_{\ast}$ is defined. As explained after Proposition \[prop:openembeding\], we embed ${K\times_{K_L} X_G}$ as an open subset of a compact algebraic manifold $\overline{K\times_{K_L} X_G}$ and factorize $a$ into a product of three maps: the closed embedding $$\begin{aligned}
i : K\times_{K_L} X_G &\rightarrow (K\times_{K_L} X_G)\times X_G,\\
(k,x)&\mapsto ((k,x),a(k,x))=((k,x),kx)\end{aligned}$$ the open inclusion $$j : (K\times_{K_L} X_G)\times X_G \rightarrow \overline{K\times_{K_L} X_G}\times X_G,$$ and the projection $$\overline{p} : \overline{K\times_{K_L} X_G}\times X_G\rightarrow X_G.$$ For later use we also denote the restriction of $\overline{p}$ to ${K\times_{K_L} X_G}\times X_G$ as $$p : {K\times_{K_L} X_G}\times X_G\rightarrow X_G.$$ The map $a_{\ast}$ is defined by $$a_{\ast}:=\overline{p}_{\ast}\circ j_{\ast}\circ i_{\ast}$$ with $\overline{p}_{\ast}$ and $i_{\ast}$ defined by (\[eq:Gysin\]) and $j_{\ast}$ as in (\[eq:openpush\]). We have
$$\begin{aligned}
CC\left(I_{L}^{G}\mathcal{F}\right)&=\overline{p}_{\ast}(j_{\ast}(i_{\ast}(CC(\widetilde{R\iota_{\ast}\mathcal{F}}))))\\
&=\overline{p}_{\ast}\left(j_{\ast}\left(i_{\ast}\left( \sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\times_{K_L} \iota(S_{L})}^{\ast}(K\times_{K_L}X_G})]\right)\right)\right).\end{aligned}$$
We start by describing $i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})$. Notice that the same argument used to compute $\iota_{\ast}CC(\mathcal{F})$ in Corollary \[cor:reductionstep2\] allows us to conclude that $i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})$ is the inverse image of $CC(\widetilde{R\iota_{\ast}\mathcal{F}})$ under $$di:T^{\ast}((K\times_{K_L}X_G)\times X_G)|_{i(K\times_{K_L}X_G)}\rightarrow T^{\ast} (K\times_{K_L}X_G).$$ Since the map $a$ occurs in the definition of $i$, to compute the inverse image of $di$ we need first to compute the inverse image under $$da: (K\times_{K_L}X_G)\times_{X_G}T^{\ast}X_G\rightarrow T^{\ast}(K\times_{K_L}X_G)$$ of the conormal bundle of each $K$-orbit in $K\times_{K_L}X_G$. To do that we begin by noticing that, since by Proposition \[prop:reduction2\]$(3)$ for each $(k,x)\in K\times_{K_L}X_G$ we have $$T_{(k,x)}(K\times_{K_L}X_G)\cong (\mathfrak{k}\times T_{x}X_G)/\{(X,\mathcal{A}_{x}(X)):X\in \mathfrak{k}\cap \mathfrak{l}\},$$ with $\mathcal{A}_{x}:\mathfrak{k}\rightarrow T_{x}X_G$ defined as in (\[eq:actionmap\]), the map $da_{(k,x)}:T_{(k,x)}(K\times_{K_L}X_G)\rightarrow T_{kx}X_G$ can be represented as $$da_{(k,x)}=\text{Ad}(k)\circ \mathcal{A}_{x}+\text{Ad}(k).$$ Then from Proposition \[prop:reduction2\]$(4)$, it is an easy exercise to verify that for each $K$-orbit $K\times_{K_L} \iota(S_{L})$ in $K\times_{K_L}X_G$, we have $$d{a}^{-1}({T_{(k,x),K\times_{K_L} \iota(S_{L})}^{\ast}(K\times_{K_L}X_G)})={T_{kx,K\cdot\iota(S_L)}^{\ast}X_G}.$$ Now, for each $((k,x),x')\in (K\times_{K_L} X_G)\times X_G$ we have $$T^{\ast}_{((k,x),x')}((K\times_{K_L}X_G)\times X_G)\cong
T^{\ast}_{(k,x)}(K\times_{K_L}X_G)\times T^{\ast}_{x'}X_G,$$ and the image of each element $(\lambda,\lambda')\in T^{\ast}_{(k,x)}(K\times_{K_L}X_G)\times T^{\ast}_{kx}X_G$ under the map $di$ is $$(\lambda,\lambda')\mapsto \lambda+\lambda'\circ da_{(k,x)}.$$ Consequently, each element of $T^{\ast}_{(k,x)}(K\times_{K_L}X_G)\times T^{\ast}_{kx}X_G$ in the preimage under $di$ of the annihilator of $T_{(k,x)} K\times_{K_L} \iota(S_{L})$ in $T^{\ast}_{(k,x)} (K\times_{K_L}X_G)$ must be a linear combination of elements of the form
- $(\lambda,0)\in T^{\ast}_{(k,x)}(K\times_{K_L}X_G)\times T^{\ast}_{kx}X_G$, $\lambda|_{T_{(k,x)} K\times_{K_L} \iota(S_{L})}=0,$
- $(0,\lambda')\in T^{\ast}_{(k,x)}(K\times_{K_L}X_G)\times T^{\ast}_{kx}X_G$, $\lambda'|_{T_{kx} K\cdot\iota(S_{L})}=0,$
- $(\lambda,\lambda')\in T^{\ast}_{(k,x)}(K\times_{K_L}X_G)\times T^{\ast}_{kx}X_G$, $(\lambda+\lambda'\circ da_{(k,x)})|_{T_{(k,x)} K\times_{K_L} \iota(S_{L})}=0,$
and one may verify that this space is ${T_{((k,x),kx),i(K\times_{K_L}\iota(S_{L}))}^{\ast}((K\times_{K_L}X_G)\times X_G)}$. Therefore, for the conormal bundle of each $K$-orbit $K\times_{K_L} \iota(S_{L})$ in $K\times_{K_L}X_G$ we obtain $$\begin{aligned}
\label{eq:before}
di^{-1}({T_{K\times_{K_L} \iota(S_{L})}^{\ast}(K\times_{K_L}X_G)})
={T_{i(K\times_{K_L}\iota(S_{L}))}^{\ast}((K\times_{K_L}X_G)\times X_G)}\end{aligned}$$ and so $$\begin{aligned}
i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})
&=i_{\ast}\left( \sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\times_{K_L} \iota(S_{L})}^{\ast}(K\times_{K_L}X_G})]\right)\\
&=\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{i(K\times_{K_L}\iota(S_{L}))}^{\ast}((K\times_{K_L}X_G)\times X_G)}].\end{aligned}$$ Next we compute $j_{\ast}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}}))$. In order to do this we fix as in the paragraph previous to (\[eq:open1\]) a function $$f:\overline{K\times_{K_L}X_G}\rightarrow \mathbb{R},$$ which takes strictly positive values on $K\times_{K_L}X_G$ and vanishes on the boundary $\partial({K\times_{K_L}X_G})$. Define the family of cycles $\{i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+sd\log f\}_{s>0}$ as in (\[eq:open1\]). From Proposition \[prop:openembeding\] we have $$\begin{aligned}
\label{eq:intermediate}
j_{\ast}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}}))&=\lim_{s\rightarrow 0}
i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+sd\log f\\
&=\lim_{s\rightarrow 0} \sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{i(K\times_{K_L}\iota(S_{L}))}^{\ast}((K\times_{K_L}X_G)\times X_G)}]+sd\log f.\nonumber\end{aligned}$$ It only remains to compute the image under $\overline{p}_{\ast}$ of $j_{\ast}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}}))$. Consider the diagrams $$\begin{aligned}
T^{\ast}(\overline{K\times_{K_L}X_G}\times X_G)\xleftarrow{d\overline{p}}(\overline{K\times_{K_L}X_G}\times X_G)\times_{X_G} T^{\ast}X_G\xrightarrow{\bar{\tau}}T^{\ast}X_G\\
T^{\ast}({K\times_{K_L}X_G}\times X_G)\xleftarrow{d{p}}({K\times_{K_L}X_G}\times X_G)\times_{X_G} T^{\ast}X_G\xrightarrow{{\tau}}T^{\ast}X_G\end{aligned}$$ Then by (\[eq:familylimit\]) and (\[eq:intermediate\]) $$\begin{aligned}
\overline{p}_{\ast}(j_{\ast}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})))=\overline{\tau}_{\ast} d\overline{p}^{\ast}\left(\lim_{s\rightarrow 0}
i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+sd\log f\right).\end{aligned}$$ By Lemma 6.4 [@SV], the function $f$ can be chosen in such a way that for every sufficiently small $s>0$, $T^{\ast}_{K}(K\times_{K_L}X_G\times X_G)+sd\log f$ is transverse to $(\overline{K\times_{K_L}X_G}\times X_G)\times_{X_G} T^{\ast}X_G$. The transversality condition implies that the geometric inverse image $d\overline{p}^{-1}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+sd\log f)$ is well-defined as a cycle and $$d\overline{p}^{\ast}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+sd\log f)=
d\overline{p}^{-1}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+sd\log f).$$ Moreover, by Proposition \[prop:familylimit\] $$\begin{aligned}
\label{eq:dercor}
\overline{\tau}_{\ast} d\overline{p}^{\ast}\left(\lim_{s\rightarrow 0}
i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+sd\log f\right)
=\lim_{s\rightarrow 0}\overline{\tau}_{\ast}d\overline{p}^{-1}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+sd\log f).\end{aligned}$$ Next, from Equation (\[eq:open1\]) for each $s>0$, $i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+s d\log f$ is a cycle of $K\times_{K_L}X_G\times X_G$. Consequently, the family of cycles $\{i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+s d\log f\}_{s>0}$ lies entirely in $T^{\ast}(K\times_{K_L}X_G\times X_G)$; this permits us to use the map $\tau_{\ast}$ and $dp$ instead of $\overline{\tau}_{\ast}$ and $d\overline{p}$ on the right hand side of (\[eq:dercor\]) and write $$\begin{aligned}
\label{eq:eqcorfinal}
\overline{p}_{\ast}(j_{\ast}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})))&=
\lim_{s\rightarrow 0}\tau_{\ast}d{p}^{-1}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+s d\log f)\\
&=\lim_{s\rightarrow 0}\tau_{\ast}d{p}^{-1}\left(\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{i(K\times_{K_L}\iota(S_{L}))}^{\ast}((K\times_{K_L}X_G)\times X_G)}]+s d\log f\right).\nonumber\end{aligned}$$ To compute the right hand side of (\[eq:eqcorfinal\]) we begin by noticing that $$(K\times_{K_L}X_G)\times X_G\times_{X_G} T^{\ast}X_G=(K\times_{K_L}X_G)\times T^{\ast}X_G=T^{\ast}_{K\times_{K_L}X_G}(K\times_{K_L}X_G)\times T^{\ast}X_G.$$ Hence $dp$ can be written as $$dp: T^{\ast}_{K\times_{K_L}X_G}(K\times_{K_L}X_G)\times T^{\ast}X_G \rightarrow T^{\ast}(K\times_{K_L}X_G\times X_G),$$ with the image of each point $(0,\lambda)\in T^{\ast}_{(k,x),K\times_{K_L}X_G}(K\times_{K_L}X_G)\times T_{x'}^{\ast}X_G$ given by $$\begin{aligned}
(0,\lambda) \mapsto \lambda\circ dp_{((k,x),x')}=(0,\lambda) \end{aligned}$$ Therefore $dp$ defines an embedding, and we conclude that $dp^{-1}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+
s d\log f)$ is simply the intersection between $T_{K\times_{K_L}X_G}^{\ast}(K\times_{K_L}X_G)\times T^{\ast}X_G$ and $i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+s d\log f$. Now, by Lemma \[lem:contention\] the space $K\cdot \mathrm{Ch}(R{\iota}_{\ast}\mathcal{F})$ is contained in the characteristic variety of $I_{L}^{G}(\mathcal{F})$. Moreover, for each $s>0$, $dp^{-1}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+
s d\log f)$ defines a non-zero cycle. Hence from the description of the elements in $T_{((k,x),kx),i(K\times_{K_L}\iota(S_{L}))}^{\ast}((K\times_{K_L}X_G)\times X_G)$ given before Equation (\[eq:before\]), we obtain that for each point $(k,x)\in K\times_{K_L}\iota(S_{L})$ there exists $\lambda(f)_{kx}\in T_{kx}^{\ast}X_G$ such that $$\frac{df_{(k,x)}}{f(k,x)}=(\lambda(f)_{kx}\circ da_{(k,x)}).$$ Each point at the intersection of $T_{(k,x,kx),K\times_{K_L}X_G}(K\times_{K_L}X_G)\times T^{\ast}X_G$ and $i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+s d\log f$ is then of the form $$(-s\lambda(f)_{kx}\circ da_{(k,x)},\lambda+s\lambda(f)_{kx})+\left(s\frac{df_{(k,x)}}{f(k,x)},0\right)=(0,\lambda+s\lambda(f)_{kx}),\text{ where }\lambda\in T_{kx,K\cdot\iota(S_L)}^{\ast}X_G.$$ Consequently $$\tau_{\ast}dp^{-1}\left({T_{((k,x),kx),i(K\times_{K_L}\iota(S_{L}))}^{\ast}(K\times_{K_L} X_G)\times X_G}+s \frac{df_{(k,x)}}{f(k,x)}\right)=
{T_{kx,K\cdot\iota(S_L)}^{\ast}X_G}+s\lambda(f)_{kx}.$$ Thus, if for each cycle $C\in \mathscr{L}(X_G,K)$ with $|C|\subset \overline{K\cdot \iota(X_L)}$ (See Definition \[deftn:ccycleperverse\]), and for each $s>0$ we define $C + s\lambda(f)$ as the cycle of $X_G$ equal to the image of $C$ under the automorphism $$\begin{aligned}
(x,\xi)\mapsto (x,\xi+s\lambda (f)_{x}),\quad x\in \overline{K\cdot \iota(X_L)},~\xi \in T_{x}^{\ast}X_G,\end{aligned}$$ we obtain $$\begin{aligned}
\tau_{\ast}d{p}^{-1}(i_{\ast}CC(\widetilde{R\iota_{\ast}\mathcal{F}})+s d\log f)
&=\tau_{\ast}dp^{-1}\left(\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{i(K\times_{K_L}\iota(S_{L}))}^{\ast}((K\times_{K_L}X_G)\times X_G)}]+s d\log f\right)\\
&=\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\cdot\iota(S_L)}^{\ast}X_G}]+s\lambda(f).\end{aligned}$$ Therefore $$\begin{aligned}
CC\left(I_{L}^{G}\mathcal{F}\right)&=a_{\ast}(CC(\widetilde{R\iota_{\ast}\mathcal{F}}))\\
&=\overline{p}_{\ast}(j_{\ast}(i_{\ast}(CC(\widetilde{R\iota_{\ast}\mathcal{F}}))))\\
&=\lim_{s\rightarrow 0}\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\cdot\iota(S_L)}^{\ast}X_G}]+s\lambda(f).\end{aligned}$$ Next, since for each $x\in a(K\times_{K_L}\iota(X_{L}))=K\cdot \iota(X_{L}) $ and every element $\lambda\in T_{x,K\cdot \iota(S_L)}^{\ast}X_G$, we have $\lim_{s\rightarrow 0} \lambda+ s\lambda(f)|_{x}=\lambda,$ the restriction of $CC(I_{L}^{G}\mathcal{F})$ to $a(K\times_{K_L}\iota(X_{L}))$ must coincide with the cycle $$\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\cdot\iota(S_L)}^{\ast}X_G}].$$ Consequently, if we write $$CC_{\partial} =CC\left(I_{L}^{G}\mathcal{F}\right)-\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\cdot\iota(S_L)}^{\ast}X_G}],$$ then the support $|CC_{\partial}|$ of the cycle $CC_{\partial}$ will be contained in the inverse image of the boundary $\partial(K\cdot \iota(X_{L}))$ in $T^{\ast}X_G$. By a similar argument to the one of (4.6c)[@SV], we can moreover conclude that $|CC_{\partial}|$ is the union of the conormal bundles of a family of $K$-orbits in the boundary of $K\cdot \iota(X_{L})$. This leads us to the desired equality $$CC\left(I_{L}^{G}\mathcal{F}\right)=\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\cdot\iota(S_L)}^{\ast}X_G}]+
\sum_{K-orbits~S\text{~in~}\partial(K\cdot \iota(X_{L}))} \chi_{S}^{mic}(I_{L}^{G}\mathcal{F})[\overline{T_{S}^{\ast}X_G}].$$
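We record an immediate consequence of Proposition \[prop:ccLG\] (a remark, not an additional result): if $K\cdot \iota(X_{L})$ happens to be closed in $X_G$, then $\partial(K\cdot \iota(X_{L}))=\emptyset$, the second sum is empty and $$CC\left(I_{L}^{G}\mathcal{F}\right)=\sum_{K_L-\mathrm{orbits }~S_L~\mathrm{in}~X_L} \chi_{S_{L}}^{mic}(\mathcal{F})[\overline{T_{K\cdot\iota(S_L)}^{\ast}X_G}].$$ In general, the first sum is determined by $CC(\mathcal{F})$ alone, whereas the multiplicities $\chi_{S}^{mic}(I_{L}^{G}\mathcal{F})$ along boundary orbits are not computed by the proposition.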
To end this section, notice that from (\[eq:diagramcycle1\]) and Proposition \[prop:reduction2\](6), we can, by defining $$\begin{aligned}
\left(I_{L}^{G}\right)_{\ast}:\mathscr{L}(X_{L},K_L)&\rightarrow \mathscr{L}(X_{G},K)\\
\sum_{K_{L}-\text{orbits }S\text{ in }X_L}m_S[\overline{T_{S}^{\ast}X_L}]& \mapsto a_{\ast}\left(
\sum_{K_{L}-\text{orbits }S\text{ in }X_L}m_S[\overline{T_{K\times_{K_L}\iota(S)}^{\ast}(K\times_{K_L} X_G)}]\right)\nonumber\end{aligned}$$ extend (\[eq:cdiagramme1\]) to obtain the commutative diagram $$\begin{aligned}
\xymatrix{
K\mathcal{M}(\mathfrak{g},K,I_{X_{G}}) \ar[r] & K(X_{G},K)\ar[r]^{CC} & \mathscr{L}(X_G,K)\\
K\mathcal{M}(\mathfrak{l},K_{L},I_{X_{L}}) \ar[u]^{\mathscr{R}_{(\mathfrak{l},K_{L})}^{(\mathfrak{g},K)}}\ar[r] & K(X_{L},K_{L})\ar[u]^{I_{L}^{G}}\ar[r]^{CC} & \mathscr{L}(X_{L},K_L).\ar[u]^{\left(I_{L}^{G}\right)_{\ast}}
}\end{aligned}$$
The Langlands Correspondence
============================
In this section we give a quick review of the Langlands classification as explained in [@ABV]. We follow Chapters 4, 5 and 10 [@ABV]. The Local Langlands Correspondence gives a classification of representations of strong real forms of $G$ in terms of a set of parameters of an $L$-group of $G$. In [@ABV], the authors generalize the Langlands Correspondence to include in the classification representations of a special type of covering group of $G$. In this more general setting the role of the $L$-groups in the description of the representations is played by the more general notion of $E$-group. $E$-groups were first introduced by Adams and Vogan in [@AV]. $E$-groups will also be needed in Section 7.3 to describe the Adams-Johnson packets.
The section is divided as follows. We begin by introducing the notions of $L$-group and $E$-group. Next, we recall the definition of an $L$-parameter. The section ends with the formulation of the Local Langlands Correspondence.\
Let $\Psi_{0}(G)=\left(X^{\ast},\Delta,X_{\ast},\Delta^{\vee}\right)$ be the based root datum of $G$ and write ${}^{\vee}\Psi_{0}(G)=\left(X_{\ast},\Delta^{\vee},X^{\ast},\Delta\right)$ for the dual based root datum to $\Psi_{0}(G)$. Then $$\begin{aligned}
\label{eq:dualautomorphism1}
\text{Aut}(\Psi_{0}({G}))\cong\text{Aut}\left({}^{\vee}\Psi_{0}({G})\right).\end{aligned}$$
Suppose $G$ is a complex connected reductive algebraic group. A **dual group** for $G$ is a complex connected reductive algebraic group ${}^{\vee}{G}$ whose based root datum is dual to the based root datum of $G$, i.e. $$\begin{aligned}
\label{eq:dualautomorphism2}
\Psi_{0}\left({}^{\vee}{G}\right)\cong{}^{\vee}\Psi_{0}({G}).\end{aligned}$$ A **weak $E$-group** for $G$ is an algebraic group ${}^{\vee}G^{\Gamma}$ containing the dual group ${}^{\vee}{G}$ for $G$ as a subgroup of index two. That is, there is a short exact sequence $$\begin{aligned}
1\rightarrow {}^{\vee}{G}\rightarrow {}^{\vee}G^{\Gamma}\rightarrow \Gamma\rightarrow 1,\end{aligned}$$ where $\Gamma=\mathrm{Gal}(\mathbb{C}/\mathbb{R})$.
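For orientation we recall two standard examples (well-known facts, not part of the formal development here): if $G=GL(n,\mathbb{C})$ then ${}^{\vee}{G}\cong GL(n,\mathbb{C})$, while if $G=SL(2,\mathbb{C})$ then ${}^{\vee}{G}\cong PGL(2,\mathbb{C})$, since passing to the dual based root datum exchanges $X^{\ast}$ with $X_{\ast}$ and $\Delta$ with $\Delta^{\vee}$. Moreover, the direct product ${}^{\vee}G^{\Gamma}={}^{\vee}{G}\times \Gamma$ is always a weak $E$-group for $G$: it contains ${}^{\vee}{G}$ as a subgroup of index two and the required short exact sequence is the evident one.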
As explained in Chapter 4 of [@ABV], we can give a simple classification of weak $E$-groups ${}^{\vee}G^{\Gamma}$ for $G$. The classification is similar to that given in Proposition \[prop:classificationextended\] for extended groups. In order to give this description in more details we begin by recalling (see for example Proposition 2.11 of [@ABV]) that there is a natural short exact sequence $$\begin{aligned}
\label{eq:sequenceexactmorphism}
1\rightarrow \text{Int}\left({}^{\vee}G\right)\rightarrow \text{Aut}\left({}^{\vee}G\right)\xrightarrow{\Psi_0} \text{Aut}\left(\Psi_{0}\left({}^{\vee}{G}\right)\right)\rightarrow 1,\end{aligned}$$ and that this sequence splits (not canonically), as follows; Choose a Borel subgroup ${}^{\vee}B$ of ${}^{\vee}G$, a maximal torus ${}^{\vee}T\subset {}^{\vee}B$, and a set of basis vectors $\{X_{\alpha}\}$ for the simple root spaces of ${}^{\vee}T$ in the Lie algebra ${}^{\vee}\mathfrak{b}$ of ${}^{\vee}B$; and define $\text{Aut}\left({}^{\vee}G,^{\vee}B,^{\vee}T,\{X_{\alpha}\}\right)$ to be the set of algebraic automorphisms of ${}^{\vee}G$ preserving ${}^{\vee}B$, ${}^{\vee}T$, and $\{X_{\alpha}\}$ as sets. Then the restriction of $\Psi_0$ to $\text{Aut}\left({}^{\vee}G,^{\vee}B,^{\vee}T,\{X_{\alpha}\}\right)$ is an isomorphism. An automorphism belonging to one of the sets $\text{Aut}(^{\vee}G,^{\vee}B,^{\vee}T,\{X_{\alpha}\})$ is called ***distinguished***.
Now, let ${}^{\vee}\delta$ be any element in ${}^{\vee}G^{\Gamma}-{}^{\vee}G$ and write $\sigma_{{}^{\vee}\delta}$ for the automorphism of ${}^{\vee}G$ defined by the conjugation action of ${}^{\vee}\delta$ in ${}^{\vee}G$. From (\[eq:sequenceexactmorphism\]), we see that $\sigma_{{}^{\vee}\delta}$ induces an involutive automorphism of the based root datum $$a=\Psi_0(\sigma_{{}^{\vee}\delta})\in \text{Aut}\left(\Psi_0({}^{\vee}G)\right)$$ which is independent of the choice of ${}^{\vee}\delta$. Suppose moreover that $\sigma_{{}^{\vee}\delta}$ is a distinguished automorphism of ${}^{\vee}G$. Set $\theta_Z$ to be the restriction of $\sigma_{{}^{\vee}\delta}$ to $Z({}^{\vee}G)$, which is independent of the choice of ${}^{\vee}\delta$, and consider the class $$\overline{z}\in Z\left({}^{\vee}G\right)^{\theta_Z}/(1+\theta_Z)Z({}^{\vee}G)$$ of the element $z={}^{\vee}\delta^{2}\in Z({}^{\vee}G)^{\theta_Z}$, where $$(1+\theta_Z)Z({}^{\vee}G)=\{z\theta_Z (z):z\in Z({}^{\vee}G)\}.$$ We notice that $\overline{z}$ is independent of the choice of the distinguished automorphism $\sigma_{{}^{\vee}\delta}$: indeed, from (\[eq:sequenceexactmorphism\]), any other element ${}^{\vee}\delta'$ defining a distinguished automorphism of ${}^{\vee}G$ will be of the form ${}^{\vee}\delta'=z_1g{}^{\vee}\delta g^{-1}$ for some $z_1\in Z({}^{\vee}G)$ and $g\in {}^{\vee}G$. Consequently $({}^{\vee}\delta')^{2}=z_1\theta_Z(z_1){}^{\vee}\delta^{2}$, and thus ${}^{\vee}\delta^{2}$ and $({}^{\vee}\delta')^{2}$ belong to the same class. Therefore, to any weak $E$-group ${}^{\vee}G^{\Gamma}$ for $G$ we can attach two invariants: $${}^{\vee}G^{\Gamma}\longrightarrow (a,\overline{z}).$$ Furthermore, from point $b)$ of Proposition 4.4 [@ABV], any two weak $E$-groups with the same couple of invariants are isomorphic. Conversely, suppose $a\in \text{Aut}\left(\Psi_{0}({}^{\vee}G)\right)$ is an involutive automorphism. Write $\theta_Z$ for the involutive automorphism of $Z({}^{\vee}G)$ defined by the action of any automorphism of ${}^{\vee}G$ corresponding to $a$ under $\Psi_{0}$, and suppose $$\overline{z}\in Z({}^{\vee}G)^{\theta_Z}/(1+\theta_Z)Z({}^{\vee}G).$$ Then from point $c)$ of Proposition 4.4 of [@ABV], there is a weak $E$-group with invariants $(a,\overline{z})$.
\[deftn:egroup\] Suppose $G$ is a complex connected reductive algebraic group. An **$E$-group** for $G$ is a pair $\left({}^{\vee}G^{\Gamma},\mathcal{S}\right)$, subject to the following conditions.
1. ${}^{\vee}G^{\Gamma}$ is a weak $E$-group for $G$.
2. $\mathcal{S}$ is a conjugacy class of pairs $({}^{\vee}\delta,{}^{d}B)$ with ${}^{\vee}\delta$ an element of finite order in ${}^{\vee}G^{\Gamma}-{}^{\vee}{G}$ and ${}^{d}B$ a Borel subgroup of ${}^{\vee}G$.
3. Suppose $\left({}^{\vee}\delta,{}^{d}B\right)\in \mathcal{S}$. Then conjugation by ${}^{\vee}{\delta}$ is a distinguished involutive automorphism ${}^{\vee}\sigma$ of ${}^{\vee}G$ preserving ${}^{d}B$.
The invariants of the $E$-group are the automorphism $a$ attached to ${}^{\vee}G^{\Gamma}$ as before and the element $$z={}^{\vee}\delta^{2}\in Z({}^{\vee}G)^{\theta_Z}$$ with $\left({}^{\vee}\delta,{}^{d}B\right)$ any pair in $\mathcal{S}$.
An **$L$-group** for $G$ is an $E$-group whose second invariant is equal to $1$, that is to say $z={}^{\vee}\delta^{2}=1$.
Just as for weak $E$-groups, $E$-groups can be completely classified by the couple of invariants $(a,z)$ described above. From point $a)$ of Proposition 4.7 of [@ABV], two $E$-groups with the same couple of parameters are isomorphic. Furthermore, if we fix a weak $E$-group for $G$ with invariants $(a,\overline{z})$ and if $\left({}^{\vee}G^{\Gamma},\mathcal{S}\right)$ is an $E$-group, then from point $c)$ of the same proposition, its second invariant is a representative for the class of $\overline{z}$. Conversely, if $z\in Z({}^{\vee}G)^{\theta_Z}$ is an element of finite order representing the class of $\overline{z}$, then there is an $E$-group structure on ${}^{\vee}G^{\Gamma}$ with second invariant $z$.\
Suppose now that $G^{\Gamma}$ is an extended group for $G$. As mentioned earlier, to each extended group corresponds an inner class of real forms. Furthermore, from Proposition 2.12 [@ABV], inner classes of real forms are in one-to-one correspondence with involutive automorphisms of $\Psi_{0}({G})$, and from (\[eq:dualautomorphism1\]) and (\[eq:dualautomorphism2\]) with involutive automorphisms of $\Psi_{0}({}^{\vee}{G})$. Let $a$ be the involutive automorphism of $\Psi_{0}({}^{\vee}{G})$ corresponding to the inner class of real forms defined by $G^{\Gamma}$. Then a (weak) $E$-group ${}^{\vee}G^{\Gamma}$ with first invariant $a$ will be called ***a (weak)** $E$**-group for** $G^{\Gamma}$* or ***a (weak)** $E$**-group for** $G$ **and the specified inner class of real forms***.\
As explained at the beginning of this section, to describe the Langlands classification in the more general setting of [@ABV], we need to introduce a family of covering groups of $G$. We do this in the next definition.
\[deftn:canonicalcover\] Suppose $G^{\Gamma}$ is an extended group for $G$. A connected finite covering group $$1\rightarrow F\rightarrow \widetilde{G}\rightarrow G\rightarrow 1$$ is said to be **distinguished** if the following two conditions are satisfied.
1. For every $x\in G^{\Gamma}-G$, the conjugation action $\sigma_x$ of $x$ lifts to an automorphism of $\widetilde{G}$.
2. The restriction of $\sigma_x$ to $Z(\widetilde{G})$, which is independent of the choice of $x$, sends every element of $F$ to its inverse.
We define the **canonical covering** $G^{can}$ of $G$ as the projective limit of all the distinguished coverings of $G$ and we write $$1\rightarrow \pi_1(G)^{can}\rightarrow {G}^{can}\rightarrow G\rightarrow 1.$$ Now, if $\delta$ is a strong real form of $G^{\Gamma}$, let ${G}(\mathbb{R},\delta)^{can}$ be the preimage of ${G}(\mathbb{R},\delta)$ in ${G}^{can}$. Then there is a short exact sequence $$1\rightarrow \pi_1(G)^{can}\rightarrow {G}(\mathbb{R},\delta)^{can}\rightarrow
{G}(\mathbb{R},\delta)\rightarrow 1.$$ A **canonical projective representation of a strong real form** of $G^{\Gamma}$ is a pair $(\pi,\delta)$ subject to
1. $\delta$ is a strong real form of $G^{\Gamma}$.
2. $\pi$ is an admissible representation of $G^{can}(\mathbb{R},\delta)$.
Equivalence of such pairs is defined as in Definition \[deftn:representationstrongrealform\].
Let $a$ be the involutive automorphism of $\Psi_{0}({}^{\vee}{G})$ corresponding to the inner class of real form attached to $G^{\Gamma}$ and write $\theta_Z$ for the involutive automorphism of $Z({}^{\vee}G)$ defined by the action of any automorphism of ${}^{\vee}G$ corresponding to $a$ under $\Psi_{0}$. From Lemma 10.2$(d)$ [@ABV] we have an isomorphism $$\begin{aligned}
\text{Elements of finite order in }Z({}^{\vee}G)^{\theta_Z}&\rightarrow \mathrm{Hom}_{\mathrm{cont}}(\pi_{1}(G)^{can},\mathbb{C}^{\times})\\
z&\mapsto \chi_z.\end{aligned}$$ Suppose $z\in Z({}^{\vee}G)^{\theta_Z}$. We say that $(\pi,\delta)$ is of **type z** if the restriction of $\pi$ to $\pi_{1}(G)^{can}$ is a multiple of $\chi_z$. Finally define $$\Pi^{z}(G/\mathbb{R})$$ to be the set of infinitesimal equivalence classes of irreducible canonical representations of type $z$.
We turn now to the definition of the Langlands parameters. Fix an extended group $G^{\Gamma}$ for $G$, and an $E$-group $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ for the corresponding inner class of real forms. We begin by recalling the definition of the Weil group of $\mathbb{R}$.
The Weil group of $\mathbb{R}$ is a non-split extension of the Galois group of $\mathbb{C}/\mathbb{R}$ by the group of non-zero complex numbers $\mathbb{C}^{\times}$: $$1\rightarrow \mathbb{C}^{\times}\rightarrow W_{\mathbb{R}}\rightarrow \mathrm{Gal}(\mathbb{C}/\mathbb{R})\rightarrow 1.$$ By identifying $\mathbb{C}^{\times}$ with its image in $W_{\mathbb{R}}$ and $\mathrm{Gal}(\mathbb{C}/\mathbb{R})$ with $\{\pm 1\}$, we see that $W_{\mathbb{R}}$ is generated by $\mathbb{C}^{\times}$ and a distinguished element $j$, subject to the relations $$j^{2}=-1\in\mathbb{C}^{\times},\qquad jzj^{-1}=\bar{z},~z\in\mathbb{C}^{\times}.$$
(see [@Langlands]) A **Langlands parameter** $\varphi$ for the weak $E$-group ${}^{\vee}{G}^{\Gamma}$ is a continuous group homomorphism, $$\varphi:W_{\mathbb{R}}\longrightarrow {}^{\vee}{G}^{\Gamma},$$ such that:
- the diagram $$\begin{aligned}
\xymatrix{ W_{\mathbb{R}} \ar[rr]^{\varphi} \ar[rd] && {}^{\vee}{G}^{\Gamma}\ar[ld] \\ & \Gamma }\end{aligned}$$ is commutative.
- for every $w\in W_{\mathbb{R}}$ the projection of $\varphi(w)$ in ${}^{\vee}{{G}}$ is semisimple.
The set of Langlands parameters will be denoted $P\left({}^{\vee}{G}^{\Gamma}\right)$. We make ${}^{\vee}{G}$ act on $P\left({}^{\vee}{G}^{\Gamma}\right)$ by conjugation and denote the set of conjugacy classes of Langlands parameters by $\Phi\left({}^{\vee}{G}^{\Gamma}\right)$.
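As a basic illustration (the classical $GL(1)$ example, recalled here only for orientation, with the direct product taken as weak $E$-group): let $G=GL(1,\mathbb{C})=\mathbb{C}^{\times}$, so that ${}^{\vee}{G}\cong\mathbb{C}^{\times}$, and take ${}^{\vee}{G}^{\Gamma}={}^{\vee}{G}\times\Gamma$. Since the composition with the projection to $\Gamma$ is prescribed, a Langlands parameter amounts to a continuous homomorphism $W_{\mathbb{R}}\rightarrow\mathbb{C}^{\times}$ (semisimplicity is automatic in a torus), and since ${}^{\vee}{G}$ is abelian the conjugation action is trivial, so $$\Phi\left({}^{\vee}{G}^{\Gamma}\right)=\mathrm{Hom}_{\mathrm{cont}}\left(W_{\mathbb{R}},\mathbb{C}^{\times}\right)\cong \mathrm{Hom}_{\mathrm{cont}}\left(\mathbb{R}^{\times},\mathbb{C}^{\times}\right),$$ the last identification coming from the fact that every continuous character of $W_{\mathbb{R}}$ factors through $W_{\mathbb{R}}^{ab}\cong\mathbb{R}^{\times}$. One thus recovers the classical parameterization of the characters of $GL(1,\mathbb{R})=\mathbb{R}^{\times}$.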
(Definition 5.3 [@ABV]) Suppose $\left({}^{\vee}G^{\Gamma},\mathcal{S}\right)$ is an $L$-group. Then we write $$\Phi\left(G/\mathbb{R}\right)=\Phi\left({}^{\vee}G^{\Gamma}\right).$$ More generally, if $\left({}^{\vee}G^{\Gamma},\mathcal{S}\right)$ is an $E$-group for $G$ with second invariant $z$, then we write $$\Phi^{z}\left(G/\mathbb{R}\right)=\Phi\left({}^{\vee}G^{\Gamma}\right).$$
(Definition 5.11 [@ABV])\[deftn:completeLanglandsparameter\] Write ${}^{\vee}G^{alg}$ for the algebraic universal covering of ${}^{\vee}G$, i.e. the projective limit of all the finite covers of ${}^{\vee}G$. Suppose $\varphi\in P\left({}^{\vee}{G}^{\Gamma}\right)$. Set $$\begin{aligned}
{}^{\vee}{G}_{\varphi}=\text{centralizer~in~} {}^{\vee}{G}\text{~of~} \varphi(W_{\mathbb{R}})\end{aligned}$$ and write $$\begin{aligned}
{}^{\vee}{G}_{\varphi}^{alg}=\text{preimage~of~}{}^{\vee}{G}_{\varphi} \text{~in~the~ algebraic~universal~cover~of~}{}^{\vee}{G}.\end{aligned}$$ Then we define the Langlands component group for $\varphi$ to be the quotient $$A_{\varphi}={}^{\vee}{{G}}_{\varphi}/({}^{\vee}{{G}}_{\varphi})_{0},$$ where $({}^{\vee}{G}_{\varphi})_0$ denotes the identity component of ${}^{\vee}{G}_{\varphi}$. Similarly, the universal component group for $\varphi$ is defined as $$A_{\varphi}^{alg}={}^{\vee}{{G}}_{\varphi}^{alg}/({}^{\vee}{{G}}_{\varphi}^{alg})_{0}.$$ A **complete Langlands parameter** for ${}^{\vee}{G}^{\Gamma}$ is a pair $(\varphi,\tau)$ with $\varphi\in P\left({}^{\vee}{G}^{\Gamma}\right)$ and $\tau$ an irreducible representation of $A_{\varphi}^{alg}$. We make ${}^{\vee}{G}$ act on the set of complete Langlands parameter by conjugation and write $$\Xi\left({}^{\vee}{G}^{\Gamma}\right):=\text{Set~of~conjugacy~classes~of~complete~Langlands~parameters~for~} {}^{\vee}{G}^{\Gamma}.$$ Finally, let $\left({}^{\vee}G^{\Gamma},\mathcal{S}\right)$ be an $E$-group for $G$ with second invariant $z$, then we write $$\Xi^{z}({G}/\mathbb{R})=\Xi\left({}^{\vee}{G}^{\Gamma}\right)$$ or simply $\Xi({G}/\mathbb{R})$ if ${}^{\vee}{G}^{\Gamma}$ is an $L$-group.
We can now state the Local Langlands Correspondence (Theorem 10.4 [@ABV]).
\[theo:4.1\] Suppose $G^{\Gamma}$ is an extended group for $G$, and $({}^{\vee}{G}^{\Gamma},\mathcal{S})$ is an $E$-group for the corresponding inner class of real forms. Write $z$ for the second invariant of the $E$-group. Then there is a natural bijection between the set $\Pi^{z}({G}/\mathbb{R})$ of equivalence classes of canonical irreducible projective representations of strong real forms of ${G}$ of type $z$ and the set $\Xi^{z}({G}/\mathbb{R})$ of complete Langlands parameters for ${}^{\vee}{G}^{\Gamma}$.
We notice that, when $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ is an $L$-group for $G$, the projective representations in the Correspondence are actual representations of strong real forms of ${G}$. Moreover, in this parameterization, the set of representations of a fixed real form ${G}(\mathbb{R},\delta)$ corresponding to complete Langlands parameters supported on a single orbit is precisely the $L$-packet for ${G}(\mathbb{R},\delta)$ attached to that orbit.
Geometric parameters
====================
Following Chapter 6 [@ABV], in this section we introduce a new set of parameters that is going to replace the set of Langlands parameters in the Local Langlands Correspondence.
The section is divided as follows. We begin with a short motivation for the definition by Adams, Barbasch and Vogan of this new set of parameters. Next, we describe some of its properties, to end with the reformulation in this new setting of the Local Langlands Correspondence. The Langlands classification stated in the previous section can be expressed in a more geometric manner. We can think of the space $P\left({}^{\vee}{G}^{\Gamma}\right)$ as a variety with an action of ${}^{\vee}{G}$ by conjugation. The ${}^{\vee}G$-conjugacy class of any $L$-parameter $\varphi\in P\left({}^{\vee}{G}^{\Gamma}\right)$ is thus a ${}^{\vee}{G}$-orbit $S_{\varphi}$ on $P\left({}^{\vee}{G}^{\Gamma}\right)$ and $$S_{\varphi}\cong {}^{\vee}{G}/{}^{\vee}{G}_{\varphi}.$$ Furthermore, a complete Langlands parameter $(\varphi,\tau)$ for ${}^{\vee}{G}^{\Gamma}$ corresponds to a ${}^{\vee}G$-equivariant local system $\mathcal{V}_{\varphi}$ on $S_{\varphi}$ (i.e. a ${}^{\vee}G$-equivariant vector bundle with a flat connection): $$\begin{aligned}
\label{eq:localsystem}
\mathcal{V}_{\varphi}\cong {}^{\vee}{{G}}\times_{{}^{\vee}{{G}}_{\varphi}}V_{\tau}\rightarrow S_{\varphi}.\end{aligned}$$ As explained in the introduction of [@ABV], with this more geometric viewpoint one might hope that, by analogy with the theory created by Kazhdan-Lusztig and Beilinson-Bernstein in [@KL] and [@BB], information about irreducible characters should be encoded by perverse sheaves on the closures of ${}^{\vee}{{G}}$-orbits on $P\left({}^{\vee}{G}^{\Gamma}\right)$. Unfortunately, the orbits on $P\left({}^{\vee}{G}^{\Gamma}\right)$ are already closed, hence these perverse sheaves are nothing but the local systems of (\[eq:localsystem\]). To remedy this situation, Adams, Barbasch and Vogan introduce a new space with a ${}^{\vee}{{G}}$-action having the same set of orbits as $P\left({}^{\vee}{G}^{\Gamma}\right)$, but with a more interesting geometry in which orbits are not necessarily closed. This new set will be called the set of geometric parameters.
Let $\lambda\in {}^{\vee}{\mathfrak{g}}$ be a semisimple element. Set $$\begin{aligned}
\label{eqt:nilpotentpart}
{}^{\vee}{\mathfrak{g}}(\lambda)_{n}&=\{\mu\in {}^{\vee}{\mathfrak{g}}:[\lambda,\mu]=n\mu\},\quad n\in\mathbb{Z}\nonumber\\
\mathfrak{n}(\lambda)&=\sum_{n=1}^{\infty}{}^{\vee}{\mathfrak{g}}(\lambda)_{n}.\end{aligned}$$ The **canonical flat through** $\lambda$ is the affine subspace $$\begin{aligned}
\Lambda=\lambda+\mathfrak{n}(\lambda).\end{aligned}$$
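For a small concrete example (our own illustration, assuming ${}^{\vee}{\mathfrak{g}}=\mathfrak{gl}(2,\mathbb{C})$): take $\lambda=\mathrm{diag}(1/2,-1/2)$. Then $\mathrm{ad}(\lambda)$ acts on the matrix unit $E_{12}$ with eigenvalue $1$ and on $E_{21}$ with eigenvalue $-1$, so that $${}^{\vee}{\mathfrak{g}}(\lambda)_{1}=\mathbb{C}E_{12},\qquad {}^{\vee}{\mathfrak{g}}(\lambda)_{n}=0\ (n\geq 2),\qquad \mathfrak{n}(\lambda)=\mathbb{C}E_{12},$$ and the canonical flat through $\lambda$ is the affine line $\Lambda=\{\mathrm{diag}(1/2,-1/2)+tE_{12}:t\in\mathbb{C}\}$. At the other extreme, if $\mathrm{ad}(\lambda)$ has no positive integer eigenvalue (for instance if ${}^{\vee}{\mathfrak{g}}$ is abelian), then $\mathfrak{n}(\lambda)=0$ and $\Lambda=\{\lambda\}$.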
\[deft:geomP\] Suppose ${}^{\vee}{G}^{\Gamma}$ is a weak $E$-group for $G$. A **geometric parameter** for ${}^{\vee}{G}^{\Gamma}$ is a pair $(y,\Lambda)$ satisfying
i. $y\in{}^{\vee}{G}^{\Gamma}-{}^{\vee}{{G}}$.
ii. $\Lambda=\lambda+\mathfrak{n}(\lambda)\subset {}^{\vee}{\mathfrak{g}}$ is a canonical flat.
iii. $y^{2}=\exp(2\pi i\lambda)$.
The set of geometric parameters of ${}^{\vee}{G}^{\Gamma}$ is denoted by $X\left({}^{\vee}{G}^{\Gamma}\right)$. As for Langlands parameters, we make ${}^{\vee}{{G}}$ act on $X\left({}^{\vee}{G}^{\Gamma}\right)$ by conjugation. Two geometric parameters are called equivalent if they are conjugate.
Let us give a more explicit description of $X\left({}^{\vee}{G}^{\Gamma}\right)$. For every semisimple $\lambda\in{}^{\vee}\mathfrak{g}$, let $\mathfrak{n}(\lambda)$ be as in (\[eqt:nilpotentpart\]) and define $$\begin{aligned}
\label{eq:groupslambda}
{}^{\vee}{G}(\lambda)&=\text{centralizer in }{}^{\vee}{G}\text{ of }\exp(2\pi i\lambda)\\
L(\lambda)&=\text{centralizer in }{}^{\vee}{G}\text{ of }\lambda\nonumber\\
N(\lambda)&=\text{connected unipotent subgroup with Lie algebra }\mathfrak{n}(\lambda)\nonumber\\
P(\lambda)&=L(\lambda)N(\lambda).\nonumber\end{aligned}$$ From Proposition 6.5 [@ABV] for every $\lambda'\in\Lambda=\lambda + \mathfrak{n}(\lambda)$ we have $${}^{\vee}{G}(\lambda')={}^{\vee}{G}(\lambda)\quad {L}(\lambda')={L}(\lambda)\quad
{N}(\lambda')={N}(\lambda).$$ Therefore, we can respectively define ${}^{\vee}{G}(\Lambda)={}^{\vee}{G}(\lambda),~{L}(\Lambda)={L}(\lambda),~N(\Lambda)={N}(\lambda)$ and $P(\Lambda)=P(\lambda)$. Now, for every ${}^{\vee}{G}$-orbit $\mathcal{O}$ of semi-simple elements in $\mathfrak{g}$ write $$X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)=\left\{(y,\Lambda)\in X\left({}^{\vee}{G}^{\Gamma}\right):\Lambda\subset \mathcal{O}\right\}$$ and set $$\begin{aligned}
\mathcal{F}(\mathcal{O})&=\text{set of canonical flats in }\mathcal{O}.\end{aligned}$$ Fix $\Lambda\in\mathcal{F}(\mathcal{O})$ and consider the sets $$\begin{aligned}
\mathcal{C}(\mathcal{O})=\{ge(\Lambda)g^{-1}:g\in {}^{\vee}{G}\}\quad\text{ and }\quad
\mathcal{I}(\mathcal{O})=\{y\in {}^{\vee}{G}^{\Gamma}-{}^{\vee}{G}:y^{2}\in\mathcal{C}(\mathcal{O})\}.\end{aligned}$$ From Proposition 6.13 [@ABV] the set $\mathcal{I}(\mathcal{O})$ decomposes into a finite number of ${}^{\vee}{G}$-orbits. List the orbits as $\mathcal{I}_{1}(\mathcal{O}),\cdots,
\mathcal{I}_{r}(\mathcal{O})$ and for each $1\leq i\leq r$ choose a point $$y_{i}\in \mathcal{I}_{i}(\mathcal{O})\quad\text{with}\quad y_{i}^{2}=e(\Lambda).$$ Then conjugation by $y_{i}$ defines an involutive automorphism of ${}^{\vee}{G}(\Lambda)$ with fixed point set denoted by $K_{y_i}$. It is immediate from the definition of $X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$ that $$\begin{aligned}
X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)=\mathcal{F}(\mathcal{O})\times_{\mathcal{C}(\mathcal{O})}\mathcal{I}(\mathcal{O}).\end{aligned}$$ Therefore, $X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$ is the disjoint union of $r$-closed subvarieties $$\begin{aligned}
\label{eq:varieties}
X_{y_i}\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)&=\left\{(y,\mu)\in X\left({}^{\vee}{G}^{\Gamma}\right):y\in {}^{\vee}{G}\cdot y_{i},~\mu\in\mathcal{O}\right\}\\
&=\mathcal{F}(\mathcal{O})\times_{\mathcal{C}(\mathcal{O})}\mathcal{I}_{i}(\mathcal{O})\nonumber\end{aligned}$$ and from Proposition 6.16 [@ABV] for each one of this varieties we have $$\begin{aligned}
\label{eq:varietiesisomor}
X_{y_i}\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)\cong {}^{\vee}{G}\times_{K_{y_i}}{}^{\vee}{G}(\Lambda)/P(\Lambda).\end{aligned}$$ In particular, the orbits of ${}^{\vee}{G}$ on $X_{y_i}\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$ are in one-to-one correspondence with the orbits of $K_{y_i}$ on the partial flag variety ${}^{\vee}{G}(\Lambda)/P(\Lambda)$ and from proposition 7.14 [@ABV] this correspondence preserves their closure relations and the nature of the singularities of closures. Consequently $X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$ has in a natural way the structure of a smooth complex algebraic variety, on which ${}^{\vee}{G}$ acts with a finite number of orbits.\
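Continuing the small $\mathfrak{gl}(2,\mathbb{C})$ illustration used above for canonical flats (again our own example, meant only to make the objects in (\[eq:groupslambda\]) and (\[eq:varietiesisomor\]) concrete): for ${}^{\vee}{G}=GL(2,\mathbb{C})$ and $\lambda=\mathrm{diag}(1/2,-1/2)$ we have $\exp(2\pi i\lambda)=-I$, which is central, so $${}^{\vee}{G}(\lambda)=GL(2,\mathbb{C}),\qquad L(\lambda)=\text{the diagonal torus},\qquad N(\lambda)=\left\{\begin{pmatrix}1&t\\0&1\end{pmatrix}:t\in\mathbb{C}\right\},$$ $P(\lambda)$ is the Borel subgroup of upper triangular matrices, and ${}^{\vee}{G}(\Lambda)/P(\Lambda)\cong \mathbb{P}^{1}$. By (\[eq:varietiesisomor\]), the ${}^{\vee}{G}$-orbits on each piece $X_{y_i}\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$ are then in bijection with the orbits of the fixed-point subgroup $K_{y_i}$ on $\mathbb{P}^{1}$, of which there are finitely many.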
Now that the geometric parameters have been introduced, the first question that needs to be answered is whether the Langlands Correspondence (Theorem \[theo:4.1\]) still holds. That is, whether the ${}^{\vee}{G}$-orbits on $X\left({}^{\vee}{G}^{\Gamma}\right)$ are in bijection with the ones on $P\left({}^{\vee}{G}^{\Gamma}\right)$. To express this bijection, a more explicit description of the Langlands parameters of ${}^{\vee}{G}^{\Gamma}$ is needed.
Suppose $\varphi\in P\left({}^{\vee}{G}^{\Gamma}\right)$. We begin by noticing that the restriction of $\varphi$ to $\mathbb{C}^{\times}$ takes the following form: there exist semisimple elements $\lambda$, $\lambda'\in {}^{\vee}{\mathfrak{g}}$ with $[\lambda,\lambda']=0$ and $\exp(2\pi i(\lambda-\lambda'))=1$ such that $$\begin{aligned}
\label{eq:lparameters}
\varphi(z)=z^{\lambda}\bar{z}^{\lambda'}.\end{aligned}$$ Now, write $$y=\text{exp}(\pi i\lambda)\varphi(j).$$ Since $j$ acts on $\mathbb{C}^{\times}$ by complex conjugation, we have $$\text{Ad}(\varphi(j))(\lambda)=\lambda'$$ and because $\text{Ad}(\text{exp}(\pi i\lambda))$ fixes $\lambda$, we can write $$\text{Ad}(y)(\lambda)=\lambda'.$$ Moreover, it is not difficult to check (a short verification is given after the list below) that $$y^{2}=\exp(2\pi i\lambda).$$ Thus, to each Langlands parameter $\varphi\in P\left({}^{\vee}{G}^{\Gamma}\right)$ we can attach a couple $$\begin{aligned}
\label{eq:descriptionLanglands}
\varphi\mapsto (y,\lambda),\end{aligned}$$ satisfying
- $y\in{}^{\vee}{G}^{\Gamma}-{}^{\vee}{{G}}$ and $\lambda\in {}^{\vee}{\mathfrak{g}}$ is a semisimple element,
- $y^{2}=\exp(2\pi i\lambda)$,
- $[\lambda,\text{Ad}(y)\lambda]=0$,
and from Proposition 5.6 [@ABV], the map in (\[eq:descriptionLanglands\]) defines a bijection. With this description of Langlands parameters in hand, we can consider their relation to geometric parameters and reformulate Langlands classification in a more geometrical setting.\
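For completeness, here is the short computation behind the equality $y^{2}=\exp(2\pi i\lambda)$ used above (a routine verification, using only $[\lambda,\lambda']=0$, $\text{Ad}(\varphi(j))(\lambda)=\lambda'$ and $j^{2}=-1$): $$\begin{aligned} y^{2}&=\text{exp}(\pi i\lambda)\varphi(j)\text{exp}(\pi i\lambda)\varphi(j)=\text{exp}(\pi i\lambda)\text{exp}(\pi i\,\text{Ad}(\varphi(j))\lambda)\varphi(j)^{2}=\text{exp}(\pi i\lambda)\text{exp}(\pi i\lambda')\varphi(-1)\\ &=\text{exp}(\pi i\lambda)\text{exp}(\pi i\lambda')\text{exp}(\pi i\lambda)\text{exp}(-\pi i\lambda')=\exp(2\pi i\lambda),\end{aligned}$$ where $\varphi(-1)=\text{exp}(\pi i\lambda)\text{exp}(-\pi i\lambda')$ is computed from (\[eq:lparameters\]) with the principal branch of the logarithm (the value is branch-independent because $\exp(2\pi i(\lambda-\lambda'))=1$), and the last equality uses that all the factors commute since $[\lambda,\lambda']=0$.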
For each $\varphi\in P\left({}^{\vee}{G}^{\Gamma}\right)$, let $(y(\varphi),\lambda(\varphi))$ be as in (\[eq:descriptionLanglands\]) and define $\Lambda(\varphi)$ as in $ii.$ of (\[deft:geomP\]). Then from Proposition 6.17 [@ABV] we have that the map $$\begin{aligned}
\label{eq:langlandsgeometric}
p:P\left({}^{\vee}{G}^{\Gamma}\right)&\rightarrow X\left({}^{\vee}{G}^{\Gamma}\right)\\
\varphi &\mapsto (y(\varphi),\Lambda(\varphi))\nonumber\end{aligned}$$ induces a bijection of ${}^{\vee}{G}$-orbits on $P\left({}^{\vee}{G}^{\Gamma}\right)$ onto ${}^{\vee}{G}$-orbits on $X\left({}^{\vee}{G}^{\Gamma}\right)$. Theorem \[theo:4.1\] gives us a surjection $$\begin{aligned}
\label{eq:reciprocity}
\Pi(G/\mathbb{R})\rightarrow \left\{\text{Set~of~conjugacy~classes~of~geometric~parameters~of~} {}^{\vee}{G}^{\Gamma}\right\},\end{aligned}$$ whose fibers are the $L$-packets. To refine (\[eq:reciprocity\]) into a bijection, we need, as for Langlands parameters, to introduce the notion of complete geometric parameters.
\[deftn:geometricparameter\] Suppose ${}^{\vee}{G}^{\Gamma}$ is a weak $E$-group for $G$ and let $x\in X\left({}^{\vee}{G}^{\Gamma}\right)$. Set $${}^{\vee}{G}_{x}=\text{centralizer~in~} {}^{\vee}{G}\text{~of~} x$$ and write $${}^{\vee}{G}_{x}^{alg}=\text{preimage~of~}{}^{\vee}{G}_{x} \text{~in~the~ algebraic~universal~cover~of~}{}^{\vee}{G}.$$ We define the equivariant fundamental group at $x$ to be the quotient $$A_{x}={}^{\vee}{{G}}_{x}/({}^{\vee}{{G}}_{x})_{0},$$ where $({}^{\vee}{G}_{x})_0$ denotes the identity component of ${}^{\vee}{G}_{x}$. Similarly, the universal fundamental group at $x$ is defined as $$A_{x}^{alg}={}^{\vee}{{G}}_{x}^{alg}/({}^{\vee}{{G}}_{x}^{alg})_{0}.$$ A **complete geometric parameter** for ${}^{\vee}{G}^{\Gamma}$ is then a pair $(x,\tau)$ with $x\in X\left({}^{\vee}{G}^{\Gamma}\right)$ and $\tau$ an irreducible representation of $A_{x}^{alg}$.
Let $\varphi\in P\left({}^{\vee}{G}^{\Gamma}\right)$ and set $x=p(\varphi)$ for the geometric parameter attached to $\varphi$ by (\[eq:langlandsgeometric\]). From Lemma 7.5 [@ABV] we have $$A_{x}=A_{\varphi}\quad\text{ and }\quad A_{x}^{alg}=A_{\varphi}^{alg}.$$ Therefore, (\[eq:langlandsgeometric\]) provides a bijection of $\Xi\left({}^{\vee}{G}^{\Gamma}\right)$ with the set of conjugacy classes of complete geometric parameters and from Theorem \[theo:4.1\], Local Langlands correspondence can be rephrased as a bijection $$\Pi^{z}(G/\mathbb{R})\longleftrightarrow\{\text{Set~of~conjugacy~classes~of~complete~geometric~parameters~of~} {}^{\vee}{G}^{\Gamma}\}.$$ From now on, and to simplify notation, this last set will also be denoted $\Xi\left({}^{\vee}{G}^{\Gamma}\right)$ and for each ${}^{\vee}{G}$-orbit $\mathcal{O}$ of semi-simple elements in $\mathfrak{g}$, we define $\Xi\left(\mathcal{O},{}^{\vee}G^{\Gamma}\right)$ as the subset of $\Xi\left({}^{\vee}{G}^{\Gamma}\right)$ consisting of ${}^{\vee}G$-orbits ${}^{\vee}G\cdot (x,\tau)$ with $x\in X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$. In the case of an $E$-group $\left({}^{\vee}G^{\Gamma},\mathcal{S}\right)$ for $G$ with second invariant $z$, we write $$\Xi^{z}\left(\mathcal{O},{G}/\mathbb{R}\right)=\Xi\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right).$$ To end this section suppose ${}^{\vee}{H}^{\Gamma}$ and ${}^{\vee}{G}^{\Gamma}$ are weak $E$-groups, and let $$\iota_{H,G}:{}^{\vee}{H}^{\Gamma}\rightarrow {}^{\vee}{G}^{\Gamma}$$ be an $L$-homomorphism (i.e. a morphism of algebraic groups, with the property that the diagram $$\begin{aligned}
\xymatrix{ {}^{\vee}{H}^{\Gamma} \ar[rr]^{\varphi} \ar[rd] && {}^{\vee}{G}^{\Gamma}\ar[ld] \\ & \Gamma }\end{aligned}$$ commutes). Then we have the two following results (see Proposition 5.4 and Corollary 6.21 [@ABV] for a proof) that relate $L$-parameters and geometric parameters of ${}^{\vee}{H}^{\Gamma}$ with $L$-parameters and geometric parameters of ${}^{\vee}{G}^{\Gamma}$.
Suppose ${}^{\vee}{H}^{\Gamma}$ and ${}^{\vee}{G}^{\Gamma}$ are weak $E$-groups, and $$\iota_{H,G}:{}^{\vee}{H}^{\Gamma}\rightarrow {}^{\vee}{G}^{\Gamma}$$ is an $L$-homomorphism. Then composition with $\iota_{H,G}$ defines a map $$P(\iota_{H,G}):P\left({}^{\vee}{H}^{\Gamma}\right)\rightarrow P\left({}^{\vee}{G}^{\Gamma}\right)$$ on Langlands parameters, which descends to a map $$\Phi(\iota_{H,G}):\Phi\left({}^{\vee}{H}^{\Gamma}\right)\rightarrow \Phi\left({}^{\vee}{G}^{\Gamma}\right)$$ on equivalence classes.
\[prop:mapvarieties\] Suppose ${}^{\vee}{H}^{\Gamma}$ and ${}^{\vee}{G}^{\Gamma}$ are weak $E$-groups, and $\iota_{H,G}:{}^{\vee}{H}^{\Gamma}\rightarrow {}^{\vee}{G}^{\Gamma}$ is an $L$-homomorphism. Then there is a natural map $$X(\iota_{H,G}):X\left({}^{\vee}{H}^{\Gamma}\right)\rightarrow X\left({}^{\vee}{G}^{\Gamma}\right)$$ on geometric parameters, compatible with the maps $P(\iota_{H,G})$ and $\Phi(\iota_{H,G})$ of the previous result. Fix an orbit $\mathcal{O}\subset {}^{\vee}{\mathfrak{h}}$ of semisimple elements, and define $\iota_{H,G}(\mathcal{O})$ to be the unique orbit containing $d\iota_{H,G}(\mathcal{O})$. Then $X(\iota_{H,G})$ restricts to a morphism of algebraic varieties $$X(\mathcal{O},\iota_{H,G}):X\left(\mathcal{O},{}^{\vee}{H}^{\Gamma}\right)\rightarrow X\left(\iota_{H,G}(\mathcal{O}),{}^{\vee}{G}^{\Gamma}\right).$$ If $\iota_{H,G}$ is injective, then $X(\mathcal{O},\iota_{H,G})$ is a closed immersion.
Micro-packets
=============
In this section we give a quick review of micro-packets. For a more complete exposition of the subject, see Chapters 7, 19 and 22 of [@ABV].\
For all of this section, let $G^{\Gamma}$ be an extended group for $G$ (Definition \[deftn:extendedgroup\]), and let $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ be an $E$-group (Definition \[deftn:egroup\]) for the corresponding inner class of real forms. Suppose $(x,\tau)$ is a complete geometric parameter for ${}^{\vee}{G}^{\Gamma}$ (Definition \[deftn:geometricparameter\]). Write $V_{\tau}$ for the space of $\tau$ and $S_{x}={}^{\vee}{{G}}\cdot x$ for the corresponding ${}^{\vee}{G}$-orbit on $X\left({}^{\vee}{G}^{\Gamma}\right)$. From Lemma 7.3 [@ABV], by regarding $\tau$ as a representation of ${}^{\vee}{{G}}_{x}^{alg}$ trivial on $({}^{\vee}{{G}}_{x}^{alg})_{0}$, the induced bundle $$\mathcal{V}_{x,\tau}:={}^{\vee}{{G}}^{alg}\times_{{}^{\vee}{{G}}_{x}^{alg}}V_{\tau}\rightarrow S_{x},$$ carries a ${}^{\vee}{G}^{alg}$-invariant flat connection. Therefore, $\mathcal{V}_{x,\tau}$ defines an irreducible ${}^{\vee}{G}^{alg}$-equivariant local system on $S_x$. Moreover, by Lemma 7.3(e) [@ABV], the map $$\begin{aligned}
\label{eq:bijectionLangGeo}
(x,\tau)\mapsto \xi_{x,\tau}:=(S_{x},\mathcal{V}_{x,\tau})\end{aligned}$$ induces a bijection between $\Xi\left({}^{\vee}G^{\Gamma}\right)$, the set of equivalence classes of complete geometric parameters on $X\left({}^{\vee}G^{\Gamma}\right)$, and the set of couples $(S,\mathcal{V})$ where $S$ is an orbit of ${}^{\vee}{G}$ on $X\left({}^{\vee}G^{\Gamma}\right)$ and $\mathcal{V}$ an irreducible ${}^{\vee}{G}^{alg}$-equivariant local system on $S$. Following this bijection, the set of couples $(S,\mathcal{V})$ will also be denoted $\Xi\left({}^{\vee}G^{\Gamma}\right)$ and for each ${}^{\vee}{G}$-orbit $\mathcal{O}$ of semi-simple elements in ${}^{\vee}\mathfrak{g}$, we identify $\Xi\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$ with the set of couples $(S,\mathcal{V})$ with $S\subset X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$.\
Let $\mathcal{O}$ be a ${}^{\vee}{G}$-orbit of semisimple elements in ${}^\vee\mathfrak{g}$. As in Section 2, write $$\mathcal{P}\left(X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{{G}}^{alg}\right)\quad\text{and}\quad \mathcal{D}\left(X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{{G}}^{alg}\right)$$ for the categories of ${}^{\vee}{{G}}^{alg}$-equivariant perverse sheaves and ${}^{\vee}{{G}}^{alg}$-equivariant coherent $\mathcal{D}$-modules on $X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$. We define $\mathcal{P}\left(X\left({}^{\vee}{G}^{\Gamma}\right), {}^{\vee}{{G}}^{alg}\right)$ and $\mathcal{D}\left(X\left({}^{\vee}{G}^{\Gamma}\right),
{}^{\vee}{{G}}^{alg}\right)$ to be the direct sum over semisimple orbits $\mathcal{O}\subset {}^{\vee}{\mathfrak{g}}$ of the categories $\mathcal{P}\left(X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{{G}}^{alg}\right)$ and $\mathcal{D}\left(X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{{G}}^{alg}\right)$ respectively. The last necessary step before the introduction of micro-packets, is to explain how the irreducible objects in $\mathcal{P}\left(X\left({}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{{G}}^{alg}\right)$ and $\mathcal{D}\left(X\left({}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{{G}}^{alg}\right)$ are parameterized by the set $\Xi\left({}^{\vee}{G}^{\Gamma}\right)$ of equivalence classes of complete geometric parameters. Fix $\xi=(S,\mathcal{V})\in \Xi\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right)$. Write $$j:S\rightarrow \overline{S},$$ for the inclusion of $S$ in its closure and $$i:\overline{S}\rightarrow X\left(\mathcal{O},{}^{\vee}{G}^{\Gamma}\right),$$ for the inclusion of the closure of $S$ in $X(\mathcal{O},{}^{\vee}{G}^{\Gamma})$. Let $d=d(S)$ be the dimension of $S$. If we regard the local system $\mathcal{V}$ as a constructible sheaf on $S$, the complex $\mathcal{V}[-d]$, consisting of the single sheaf $\mathcal{V}$ in degree $-d$ defines an ${}^{\vee}{{G}}^{alg}$-equivariant perverse sheaf on $S$. Applying to it the intermediate extension functor $j_{!\ast}$, followed by the direct image $i_{\ast}$, we get an irreducible perverse sheaf on $X(\mathcal{O},{}^{\vee}{G}^{\Gamma})$ $$\begin{aligned}
\label{eq:irredPerv}
P(\xi)=i_{\ast}j_{!\ast}\mathcal{V}[-d],\end{aligned}$$ the perverse extension of $\xi$, and by applying the inverse of the de Rham functor (see Theorem \[theo:rhcorrespondence\]) to it $$\begin{aligned}
\label{eq:irredDM}
D(\xi)=DR^{-1}(P(\xi)),\end{aligned}$$ we obtain an irreducible $\mathcal{D}$-module. That every irreducible ${}^{\vee}{G}^{alg}$-equivariant perverse sheaf on $X\left({}^{\vee}{G}^{\Gamma}\right)$ is of this form follows from Theorem 4.3.1 [@BBD], and it is a consequence of the Riemann-Hilbert correspondence (see Theorem \[theo:rhcorrespondence\]) that the same can be said in the case of $\mathcal{D}$-modules. The sets $$\begin{aligned}
\left\{P(\xi):\xi\in \Xi\left({}^{\vee}{G}^{\Gamma}\right)\right\},\quad \left\{D(\xi):\xi\in \Xi\left({}^{\vee}{G}^{\Gamma}\right)\right\}\end{aligned}$$ are therefore bases of the Grothendieck group $KX\left({}^{\vee}{G}^{\Gamma}\right)$. We can finally give the definition of a micro-packet.
Let $S$ be a ${}^{\vee}{G}$-orbit in $X\left({}^{\vee}{G}^{\Gamma}\right)$. To every complete geometric parameter $\xi\in\Xi\left({}^{\vee}{G}^{\Gamma}\right)$ we have attached in (\[eq:irredPerv\]) a perverse sheaf $P(\xi)$ (and in (\[eq:irredDM\]) a $\mathcal{D}$-module $D(\xi)$). From Theorem \[theo:defcc\], the conormal bundle $T^{\ast}_{S}\left(X({}^{\vee}{G}^{\Gamma})\right)$ has a non-negative integral multiplicity $\chi_{S}^{mic}(\xi)$ in the characteristic cycle $CC(P(\xi))$ of $P(\xi)$ (or equivalently in the characteristic cycle $CC(D(\xi))$ of $D(\xi)$). We define the micro-packet of geometric parameters attached to $S$, to be the set of complete geometric parameters for which this multiplicity is non-zero $$\begin{aligned}
\Xi\left(X\left({}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{G}\right)^{mic}_{S}=\left\{\xi\in\Xi\left({}^{\vee}{G}^{\Gamma}\right):\chi_{S}^{mic}(\xi)\neq 0\right\}.\end{aligned}$$
Suppose $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ is an $E$-group for $G$ with second invariant $z$. Let $\varphi\in\Phi^{z}({G}/\mathbb{R})$ be an equivalence class of Langlands parameters for ${}^{\vee}{G}^{\Gamma}$ and write $S_{\varphi}$ for the corresponding orbit of ${}^{\vee}{{G}}$ in $X\left({}^{\vee}{G}^{\Gamma}\right)$. Then we define the micro-packet of geometric parameters attached to $\varphi$ as $$\begin{aligned}
\Xi^{z}({G}/\mathbb{R})^{mic}_{\varphi}=\Xi\left(X\left({}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{G}\right)^{mic}_{S_{\varphi}}.\end{aligned}$$ For any complete geometric parameter $\xi\in \Xi^{z}\left({G}/\mathbb{R}\right)$, let $\pi(\xi)$ be the representation in $\Pi^{z}({G}/\mathbb{R})$ associated to $\xi$ by Theorem \[theo:4.1\]. Then the micro-packet of $\varphi$ is defined as $$\begin{aligned}
\Pi^{z}({G}/\mathbb{R})^{mic}_{\varphi}=\{\pi(\xi'):\xi'\in \Xi^{z}({G}/\mathbb{R})^{mic}_{\varphi}\}.\end{aligned}$$ Finally, let $\delta$ be a strong real form of $G^{\Gamma}$; then we define the restriction of $\Pi^{z}({G}/\mathbb{R})^{mic}_{\varphi}$ to $\delta$ as $$\begin{aligned}
\Pi^{z}({G}(\mathbb{R},\delta))^{mic}_{\varphi}=\{\pi\in \Pi^{z}({G}/\mathbb{R})^{mic}_{\varphi}: \pi \text{ is a representation of }G(\mathbb{R},\delta) \}.\end{aligned}$$
We notice that the Langlands packet attached to a Langlands parameter $\varphi$ is always contained in the corresponding micro-packet $$\begin{aligned}
\label{eq:contentionoflpackets}
\Pi^{z}({G}/\mathbb{R})_{\varphi}\subset\Pi^{z}({G}/\mathbb{R})^{mic}_{\varphi}.\end{aligned}$$ This is a consequence of $(ii)$ of the following lemma. Point $(i)$ of the lemma will prove quite useful later in this section. For a proof, see Lemma 19.14 [@ABV].
\[lem:lemorbit\] Let $\eta=(S,\mathcal{V})$ and $\eta'=\left(S',\mathcal{V}'\right)$ be two geometric parameters for ${}^{\vee}{G}^{\Gamma}$.
i. If $\eta'\in \Xi\left(X\left({}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{G}\right)_{S}^{mic}$, then $S\subset \overline{S}'$.
ii. If $S'=S$, then $\eta'\in \Xi\left(X\left({}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{G}\right)_{S}^{mic}$.
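For the reader's convenience, here is the short argument for (\[eq:contentionoflpackets\]) using $(ii)$ (we only spell out what is implicit above): a representation $\pi(\xi)$ belongs to the $L$-packet $\Pi^{z}({G}/\mathbb{R})_{\varphi}$ exactly when its complete geometric parameter is of the form $\xi=(S_{\varphi},\mathcal{V})$, i.e. is supported on the orbit $S_{\varphi}$; applying $(ii)$ with $S'=S=S_{\varphi}$ gives $\xi\in\Xi\left(X\left({}^{\vee}{G}^{\Gamma}\right),{}^{\vee}{G}\right)^{mic}_{S_{\varphi}}$, and hence $\pi(\xi)\in\Pi^{z}({G}/\mathbb{R})^{mic}_{\varphi}$.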
We are going to be mostly interested in the case of micro-packets attached to Langlands parameters coming from Arthur parameters. These types of micro-packets are going to be called Adams-Barbasch-Vogan packets, or simply ABV-packets. We continue by recalling the definition of an Arthur parameter.\
An Arthur parameter is a homomorphism $$\begin{aligned}
\psi:W_{\mathbb{R}}\times\textbf{SL}(2,\mathbb{C})\longrightarrow {}^{\vee}{G}^{\Gamma},\end{aligned}$$ satisfying
- The restriction of $\psi$ to $W_{\mathbb{R}}$ is a tempered Langlands parameter (i.e. the closure of $\psi(W_{\mathbb{R}})$ in the analytic topology is compact).
- The restriction of $\psi$ to $\mathbf{SL}(2,\mathbb{C})$ is holomorphic.
Two such parameters are called equivalent if they are conjugate by the action of ${}^{\vee}{{G}}$. The set of equivalence classes is written $\Psi\left({}^{\vee}{G}^{\Gamma}\right)$, or $\Psi^{z}({G}/\mathbb{R})$ when we want to specify that ${}^{\vee}{G}^{\Gamma}$ is an $E$-group for $G$ with second invariant $z$.
To every Arthur parameter $\psi$, we can associate a Langlands parameter $\varphi_{\psi}$, by the following formula (see Section 4 [@Arthur89]) $$\begin{aligned}
\varphi_{\psi}&:W_{\mathbb{R}}\longrightarrow {}^{\vee}G^{\Gamma},\\
\varphi_{\psi}(w)&=\psi\left(w,\left(\begin{array}{cc}
|w|^{1/2}& 0\\
0& |w|^{-1/2}
\end{array}\right)\right).\end{aligned}$$ Now, to $\varphi_{\psi}$ corresponds an orbit $S_{\varphi_{\psi}}$ of ${}^{\vee}{G}$ on $X\left({}^{\vee}{G}^{\Gamma}\right)$. We define $$\begin{aligned}
S_{\psi}=S_{\varphi_{\psi}}.\end{aligned}$$
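As an elementary consistency check (this computation is not taken from [@Arthur89]; we include it only for completeness), $\varphi_{\psi}$ is indeed a group homomorphism because the absolute value on $W_{\mathbb{R}}$ is multiplicative: for $w_{1},w_{2}\in W_{\mathbb{R}}$, $$\begin{aligned}
\varphi_{\psi}(w_{1}w_{2})&=\psi\left(w_{1}w_{2},\left(\begin{array}{cc}
|w_{1}|^{1/2}|w_{2}|^{1/2}& 0\\
0& |w_{1}|^{-1/2}|w_{2}|^{-1/2}
\end{array}\right)\right)\\
&=\psi\left(w_{1},\left(\begin{array}{cc}
|w_{1}|^{1/2}& 0\\
0& |w_{1}|^{-1/2}
\end{array}\right)\right)\psi\left(w_{2},\left(\begin{array}{cc}
|w_{2}|^{1/2}& 0\\
0& |w_{2}|^{-1/2}
\end{array}\right)\right)=\varphi_{\psi}(w_{1})\varphi_{\psi}(w_{2}).\end{aligned}$$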
\[deftn:abvpackets\] Suppose $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ is an $E$-group for $G$ with second invariant $z$. Let $\psi\in\Psi^{z}({G}/\mathbb{R})$ be an Arthur parameter. We define the Adams-Barbasch-Vogan packet $\Pi^{z}({G}/\mathbb{R})_{\psi}^{\mathrm{ABV}}$ of $\psi$, as the micro-packet of the Langlands parameter $\varphi_{\psi}$ attached to $\psi$: $$\begin{aligned}
\Pi^{z}({G}/\mathbb{R})_{\psi}^{\mathrm{ABV}}=\Pi^{z}({G}/\mathbb{R})_{\varphi_{\psi}}^{mic}.\end{aligned}$$
In other words, $\Pi^{z}({G}/\mathbb{R})_{\psi}^{\mathrm{ABV}}$ is the set of all irreducible representations with the property that the corresponding irreducible perverse sheaf contains the conormal bundle $T_{S_{\psi}}^{\ast}\left(X\left({}^{\vee}{G}^{\Gamma}\right)\right)$ in its characteristic cycle.
Micro-packets attached to Arthur parameters satisfy the following important properties.
\[theo:abvproperties\] Let $\psi$ be an Arthur parameter for ${}^{\vee}{G}^{\Gamma}$.
i. $\Pi^{z}(G/\mathbb{R})_{\psi}^{mic}$ contains the $L$-packet $\Pi^{z}(G/\mathbb{R})_{\varphi_{\psi}}$.
ii. $\Pi^{z}(G/\mathbb{R})_{\psi}^{mic}$ is the support of a stable formal virtual character: $$\eta_{\psi}^{mic}=\sum_{\xi\in\Xi^{z}({G}/\mathbb{R})^{mic}_{\varphi_{\psi}}}e(\xi)(-1)^{d(S_{\xi})-d(S_{\psi})}\chi_{S_{\psi}}^{mic}(P(\xi))\pi(\xi),$$ where for each orbit $S$, $d(S)$ is the dimension of the orbit and $e(\xi)$ is the Kottwitz sign attached to the real form of which $\pi(\xi)$ is a representation.
iii. $\Pi^{z}(G/\mathbb{R})_{\psi}^{mic}$ satisfies the ordinary endoscopic identities predicted by the theory of endoscopy.
As explained after (\[eq:contentionoflpackets\]), point $(i)$ is a consequence of Lemma \[lem:lemorbit\]$(ii)$. For a proof of the second statement, see Corollary 19.16 [@ABV] and Theorem 22.7 [@ABV]. Finally, for a proof and a more precise statement of the last point, see Theorem 26.25 [@ABV].
Tempered representations
------------------------
In this section we study ABV-packets attached to tempered Langlands parameters. Let us recall that $\varphi\in P({}^{\vee}{G}^{\Gamma})$ is said to be **tempered** if the closure of $\varphi(W_{\mathbb{R}})$ in the analytic topology is compact.
Let $\varphi$ be a tempered Langlands parameter for ${}^{\vee}{G}^{\Gamma}$. Then the corresponding orbit $S_{\varphi}$ of ${}^{\vee}{G}$ in $X\left({}^{\vee}{G}^{\Gamma}\right)$ is open and dense.
The assertion follows from Proposition 22.9$(b)$ [@ABV], applied to an Arthur parameter with trivial $\textbf{SL}(2,\mathbb{C})$ part. Indeed, suppose $\psi$ is an Arthur parameter with restriction to $W_{\mathbb{R}}$ equal to $\varphi$ and trivial $\textbf{SL}(2,\mathbb{C})$ part. Let $(y,\lambda)$ be the couple corresponding to $\varphi$ under Equation (\[eq:descriptionLanglands\]). Define $P(\lambda)$ as in (\[eq:groupslambda\]), and write $K(y)$ for the centralizer of $y$ in ${}^{\vee}G^{\Gamma}$. Finally, set $$E_{\psi}=d\psi|_{\textbf{SL}(2,\mathbb{C})}
\left(\begin{array}{cc}
0& 1\\
0& 0
\end{array}\right)=0.$$ Then by Proposition 22.9$(b)$ [@ABV], for each $x\in S_{\varphi}$ the orbit $(P(\lambda)\cap K(y))\cdot E_{\psi}$ is dense in $T_{S_{\varphi},x}^{\ast}\left(X\left({}^{\vee}G^{\Gamma}\right)\right)$. But $E_{\psi}=0$, hence $T_{S_{\varphi},x}^{\ast}\left(X\left({}^{\vee}G^{\Gamma}\right)\right)=0$, that is, the annihilator of $T_x S_{\varphi}$ in $T_x^{\ast}(X({}^{\vee}G^{\Gamma}))$ is equal to zero. Therefore, $T_{x}\left(X\left({}^{\vee}G^{\Gamma}\right)\right)=T_x S_{\varphi}$ and the result follows.
Suppose $({}^{\vee}{G}^{\Gamma},\mathcal{S})$ is an $E$-group for $G$ with second invariant $z$. Let $\varphi$ be a tempered Langlands parameter for ${}^{\vee}G^{\Gamma}$. Then $$\Pi^{z}({G}/\mathbb{R})_{\varphi}^{\mathrm{ABV}}=\Pi^{z}({G}/\mathbb{R})_{\varphi},$$ where, on the right-hand side, $\Pi^{z}({G}/\mathbb{R})_{\varphi}$ denotes the Langlands packet of $\varphi$.
By Theorem \[theo:abvproperties\]($i$), the $L$-packet $\Pi^{z}({G}/\mathbb{R})_{\varphi}$ is contained in $\Pi^{z}({G}/\mathbb{R})_{\varphi}^{\text{mic}}$. We only need to show the opposite inclusion. Let $\pi\in\Pi^{z}({G}/\mathbb{R})_{\varphi}^{\text{ABV}}$ and write $S$ for the orbit of ${}^{\vee}{G}$ in $X({}^{\vee}{G}^{\Gamma})$ corresponding to the Langlands parameter of $\pi$ under the map defined in (\[eq:langlandsgeometric\]). From Lemma \[lem:lemorbit\] and the definition of $\Pi^{z}({G}/\mathbb{R})_{\varphi}^{\text{ABV}}$, the orbit $S$ contains $S_{\varphi}$ in its closure. As $S_{\varphi}$ is open, $S_{\varphi}\cap S\neq \emptyset$ and we have $S_{\varphi}=S$. By Lemma \[lem:lemorbit\]($ii$), $\pi\in\Pi^{z}(G/\mathbb{R})_{\varphi}$ and we can conclude the desired inclusion.
Essentially unipotent Arthur parameters
---------------------------------------
In this section we give a full description of the ABV-packets attached to essentially unipotent Arthur parameters such that the image of $\mathbf{SL}(2,\mathbb{C})$ contains a principal unipotent element. Our goal is to give a slight generalization of Theorem 27.18 [@ABV] (see Theorem \[theo:unipotentparameter\] below), which only treats the unipotent case, and to prove that the ABV-packets corresponding to essentially principal unipotent Arthur parameters consist of characters, one for each real form of $G$ in our inner class. This result will prove fundamental in the proof, given in the next section, that the packets defined in [@Adams-Johnson] are ABV-packets.\
Let $\psi$ be an Arthur parameter of ${}^{\vee}G^{\Gamma}$. We say that:\
$\bullet$ $\psi$ is **unipotent**, if its restriction to the identity component $\mathbb{C}^{\times}$ of $W_{\mathbb{R}}$ is trivial.\
More generally, we say that:\
$\bullet$ $\psi$ is **essentially unipotent**, if the image of its restriction to the identity component $\mathbb{C}^{\times}$ of $W_{\mathbb{R}}$ is contained in the center $Z({}^{\vee}{G})$ of ${}^{\vee}{G}$.\
Next, fix a morphism $$\begin{aligned}
\psi_{1}:\mathbf{SL}(2,\mathbb{C})\longrightarrow {}^{\vee}{G}.\end{aligned}$$ Then we say that $\psi_1$ is **principal** if $\psi_1(\mathbf{SL}(2,\mathbb{C}))$ contains a principal unipotent element. Finally, the (essentially) unipotent Arthur parameter $\psi$ is called an **(essentially) principal unipotent Arthur parameter** if $\psi|_{\mathbf{SL}(2,\mathbb{C})}$ is principal.\
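For orientation, here is a standard example (it plays no role in the sequel): take ${}^{\vee}{G}=\mathbf{GL}(n,\mathbb{C})$ and let $\psi_{1}:\mathbf{SL}(2,\mathbb{C})\rightarrow \mathbf{GL}(n,\mathbb{C})$ be the irreducible $n$-dimensional representation of $\mathbf{SL}(2,\mathbb{C})$. Then $$\psi_{1}\left(\begin{array}{cc}
1& 1\\
0& 1
\end{array}\right)$$ is a unipotent matrix with a single Jordan block of size $n$, hence a principal unipotent element of $\mathbf{GL}(n,\mathbb{C})$, and $\psi_{1}$ is principal.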
The next result due to Adams, Barbasch and Vogan (see Theorem 27.18 [@ABV] and the remark at the end of page 310 of [@ABV]) gives a description of the set $\Pi(G/\mathbb{R})_{\psi}^{\text{ABV}}$ for $\psi$ a principal unipotent Arthur parameter.
\[theo:unipotentparameter\] Suppose $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ is an $E$-group for $G$ with second invariant $z$. Fix a principal morphism $$\begin{aligned}
\psi_{1}:\mathbf{SL}(2,\mathbb{C})\longrightarrow {}^{\vee}{G}.\end{aligned}$$
a) The centralizer $S_{0}$ of $\psi_{1}$ in ${}^{\vee}{G}$ is $Z({}^{\vee}{G})$.
b) Suppose $z=1$ (i.e. $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ is an $L$-group). Then the set of equivalence classes of unipotent Arthur parameters attached to $\psi_{1}$ may be identified with $$H^{1}(\Gamma,Z({}^{\vee}{G}))=\{z\in Z({}^{\vee}{G}):z\theta_{Z}(z)=1\}/\{w\theta_{Z}^{-1}(w):w\in Z({}^{\vee}{G})\}.$$ More generally, if $z\in (1+\theta_{Z})Z({}^{\vee}G)$ then the set of equivalence classes of unipotent Arthur parameters attached to $\psi_{1}$ is a principal homogeneous space for $H^{1}(\Gamma,Z({}^{\vee}{G})).$
c) The unipotent representations of type $z$ (of some real form $G(\mathbb{R},\delta)$) attached to $\psi_{1}$ are precisely the projective representations of type $z$ trivial on the identity component $G(\mathbb{R},\delta)_0$.
d) Suppose $z\in (1+\theta_{Z})Z({}^{\vee}G)$. Let $\delta$ be any strong real form of $G^{\Gamma}$ and write $$\mathrm{Hom}^{z}_{\mathrm{cont}}({G}^{can}(\mathbb{R},\delta)/{G}^{can}(\mathbb{R},\delta)_{0},\mathbb{C}^{\times}),$$ for the set of characters of type $z$ of $G^{can}(\mathbb{R},\delta)$ trivial on $G^{can}(\mathbb{R},\delta)_0$. Then there is a natural surjection $$\begin{aligned}
\label{eq:mapunipotent}
H^{1}(\Gamma,Z({}^{\vee}{G}))\rightarrow \mathrm{Hom}^{z}_{\mathrm{cont}}({G}^{can}(\mathbb{R},\delta)/{G}^{can}(\mathbb{R},\delta)_{0},\mathbb{C}^{\times}).\end{aligned}$$
e) Suppose $\psi$ is a unipotent Arthur parameter attached to $\psi_1$ and $\delta$ is a strong real form of $G^{\Gamma}$. Write $\pi(\psi,\delta)$ for the character of $G^{can}(\mathbb{R},\delta)$ trivial on $G^{can}(\mathbb{R},\delta)_0$ attached to $\psi$ by composing the bijection of $(b)$ with the surjection of $(d)$. Let $P(\psi,\delta)$ be the perverse sheaf on $X\left({}^{\vee}G^{\Gamma}\right)$ corresponding to $\pi(\psi,\delta)$ under the Local Langlands Correspondence (see Theorem \[theo:4.1\]). Then for any perverse sheaf $P$ on $X\left({}^{\vee}G^{\Gamma}\right)$, we have $$\chi_{S_{\psi}}^{mic}(P)=\left\{\begin{array}{cl}
1& \text{if }P\cong P(\psi,\delta),\quad\text{ for some strong real form }\delta\text{ of }G\\
0& \text{otherwise}.
\end{array}\right.$$ Consequently, $$\begin{aligned}
\label{eq:repunipotenttordu}
\Pi^{z}(G/\mathbb{R})_{\psi}^{\mathrm{ABV}}=\{\pi(\psi,\delta)\}_{\delta \text{ strong real form of } G^{\Gamma}}.\end{aligned}$$
We turn now to the study of ABV-packets attached to essentially principal unipotent parameters. We begin by describing the behaviour of micro-packets under twisting. We record this in the next result. From this we will be able to give a first generalization of Theorem \[theo:unipotentparameter\]. In order to state the result, we first recall that for each strong real form $\delta$ of $G^{\Gamma}$ there is a natural morphism $$\begin{aligned}
\label{eq:cocylemap}
H^{1}(W_{\mathbb{R}},Z({}^{\vee}{G})) &\rightarrow \mathrm{Hom}_{\mathrm{cont}}({G}(\mathbb{R},\delta),\mathbb{C}^{\times})\\
\mathbf{a}&\mapsto \chi(\mathbf{a},\delta),\nonumber\end{aligned}$$ which is surjective and maps cocycles with compact image to unitary characters of $G(\mathbb{R},\delta)$ (see for example Section 2 of [@Langlands]).
\[prop:torsion\] Suppose $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ is an $E$-group for $G$ with second invariant $z$. Suppose $\mathbf{a}\in H^{1}(W_{\mathbb{R}},Z({}^{\vee}{G}))$ and let the cocycle $\mathrm{a}$ be a representative of $\mathbf{a}$. For each strong real form $\delta$ of $G^{\Gamma}$ let $\chi(\mathbf{a},\delta)$ be the character of $G(\mathbb{R},\delta)$ attached to $\mathbf{a}$ as in (\[eq:cocylemap\]).
i. If $\varphi$ is a Langlands parameter for ${}^{\vee}G^{\Gamma}$, then the morphism $$\varphi_{\mathbf{a}}:W_{\mathbb{R}}\rightarrow{}^{\vee}G^{\Gamma},\quad w\mapsto (\mathrm{a}(w),1)\varphi(w)$$ defines a Langlands parameter of ${}^{\vee}G^{\Gamma}$ whose equivalence class depends exclusively on $\mathbf{a}$ and the equivalence class of $\varphi$. Furthermore $$\begin{aligned}
\Pi^{z}(G/\mathbb{R})^{mic}_{\varphi_{\mathbf{a}}}=\bigsqcup_{\delta \text{ strong real form of } G^{\Gamma}}\Pi^{z}(G(\mathbb{R},\delta))^{mic}_{\varphi_{\mathbf{a}}}\end{aligned}$$ where $$\begin{aligned}
\Pi^{z}(G(\mathbb{R},\delta))^{mic}_{\varphi_{\mathbf{a}}}=\{\pi\otimes\chi(\mathbf{a},\delta):\pi\in \Pi^{z}(G(\mathbb{R},\delta))^{mic}_{\varphi}\}.\end{aligned}$$
ii. If $\psi$ is an Arthur parameter for ${}^{\vee}G^{\Gamma}$, then the morphism $$\begin{aligned}
\label{eq:arthurcocycle}
\psi_{\mathbf{a}}:W_{\mathbb{R}}\times\mathbf{SL}(2,\mathbb{C})\rightarrow{}^{\vee}G^{\Gamma},\quad (w, g)\mapsto (a(w), 1)\psi(w,g)\end{aligned}$$ defines an Arthur parameter of ${}^{\vee}G^{\Gamma}$ whose equivalence class depends exclusively on $\mathbf{a}$ and the equivalence class of $\psi$. Furthermore $$\Pi^{z}(G/\mathbb{R})_{\psi_{\mathbf{a}}}^{\mathrm{ABV}}=\bigsqcup_{\delta \text{ strong real form of } G^{\Gamma}}\Pi^{z}(G(\mathbb{R},\delta))^{mic}_{\psi_{\mathbf{a}}}$$ where $$\begin{aligned}
\Pi^{z}(G(\mathbb{R},\delta))^{mic}_{\psi_{\mathbf{a}}}=\{\pi\otimes\chi(\mathbf{a},\delta):\pi\in \Pi^{z}(G(\mathbb{R},\delta))^{mic}_{\psi}\}.\end{aligned}$$
It is straightforward to check that $\varphi_{\mathbf{a}}$ defines a Langlands parameter. Let $\lambda_{\varphi}$ be defined as in (\[eq:lparameters\]). By the definition of $\varphi_{\mathbf{a}}$ there exists $\lambda_{\mathbf{a}}\in Z({}^{\vee}\mathfrak{g})$ such that $\lambda_{\varphi_{\mathbf{a}}}=
\lambda_{\varphi}+\lambda_{\mathbf{a}}$. Write $$\mathcal{O}_{\varphi}={}^{\vee}G\cdot \lambda_{\varphi}\quad
\text{ and }\quad \mathcal{O}_{\varphi_{\mathbf{a}}}={}^{\vee}G\cdot \lambda_{\varphi_{\mathbf{a}}}.$$ Then $$\begin{aligned}
X\left(\mathcal{O}_{\varphi},{}^{\vee}{G}^{\Gamma}\right)&\rightarrow X\left(\mathcal{O}_{\varphi_{\mathbf{a}}},{}^{\vee}{G}^{\Gamma}\right)\\
(y,\Lambda)&\mapsto (\exp(\pi i \lambda_{\mathbf{a}})y,\Lambda+\lambda_{\mathbf{a}}),\end{aligned}$$ defines an isomorphism of varieties that induces a bijection of orbits $$\begin{aligned}
{}^{\vee}G\text{-orbits on }X(\mathcal{O}_{\varphi},{}^{\vee}{G}^{\Gamma}) &\rightarrow
{}^{\vee}G\text{-orbits on }X(\mathcal{O}_{\varphi_{\mathbf{a}}},{}^{\vee}{G}^{\Gamma})\\
S&\mapsto S',\end{aligned}$$ and of geometric parameters $$\begin{aligned}
\Xi^{z}(\mathcal{O}_{\varphi},G/\mathbb{R})&\rightarrow \Xi^{z}(\mathcal{O}_{\varphi_{\mathbf{a}}},G/\mathbb{R})\\
\xi&\mapsto\xi'.\end{aligned}$$ Furthermore, from the description of irreducible perverse sheaves given in Equation (\[eq:irredPerv\]), we have an isomorphism $KX\left(\mathcal{O}_{\varphi},{}^{\vee}{G}^{\Gamma}\right)\cong KX\left(\mathcal{O}_{\varphi_{\mathbf{a}}},{}^{\vee}{G}^{\Gamma}\right)$ that restricts, for each $\xi\in \Xi^{z}(\mathcal{O}_{\varphi},G/\mathbb{R})$, to an isomorphism $P(\xi)\cong P(\xi')$. Therefore, for each ${}^{\vee}G$-orbit $S$ on $X\left(\mathcal{O}_{\varphi},{}^{\vee}G^{\Gamma}\right)$ we obtain $$\begin{aligned}
\label{eq:tensorcycles}
\chi_{S}^{mic}(P(\xi))=\chi_{S'}^{mic}(P(\xi')).\end{aligned}$$ Now, from the properties of $L$-packets described in Section 3 [@Langlands], we deduce that for each strong real form $\delta\in G^{\Gamma}$, the set of irreducible representations of $G(\mathbb{R},\delta)$ corresponding to complete geometric parameters for $X\left(\mathcal{O}_{\varphi_{\mathbf{a}}},{}^{\vee}G^{\Gamma}\right)$, under the Local Langlands Correspondence (see Theorem \[theo:4.1\]), is equal to the set of irreducible representations of $G(\mathbb{R},\delta)$ attached to complete geometric parameters for $X\left(\mathcal{O}_{\varphi},{}^{\vee}G^{\Gamma}\right)$ tensored by $\chi({\mathbf{a}},\delta)$. Point $(i)$ follows then from the definition of micro-packets and equality (\[eq:tensorcycles\]). Point $(ii)$ is a direct consequence of point $(i)$ applied to $\varphi_{\psi_{\mathbf{a}}}$ and the definition of ABV-packets.
The following corollary generalizes Theorem \[theo:unipotentparameter\] to the case of essentially principal unipotent Arthur parameters for $E$-groups with second invariant $z$, satisfying $z\in (1+\theta_{Z})Z({}^{\vee}G)$ (i.e. $E$-groups admitting principal unipotent Arthur parameters).
\[cor:essentialunipotent\] Suppose $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ is an $E$-group for $G$. Write $z$ for the second invariant of the $E$-group, and suppose $z\in (1+\theta_{Z})Z({}^{\vee}G)$. Let $\psi$ be an essentially unipotent Arthur parameter for ${}^{\vee}{G}^{\Gamma}$ such that $\psi|_{\mathbf{SL}(2,\mathbb{C})}$ is principal. Then there exists a class $\mathbf{a}\in H^{1}(W_{\mathbb{R}},Z({}^{\vee}{G}))$ and a principal unipotent Arthur parameter $\psi_{u}$ for ${}^{\vee}G^{\Gamma}$, such that for all $(w,g)\in W_{\mathbb{R}}\times \mathbf{SL}(2,\mathbb{C})$ we have $$\psi(w,g)=(\mathrm{a}(w),g)\psi_u(w,g),$$ where $\mathrm{a}$ is a representative of $\mathbf{a}$. Finally, for each strong real form $\delta$ of $G^{\Gamma}$, let $\chi(\mathbf{a},\delta)$ be the character of $G(\mathbb{R},\delta)$ corresponding to $\mathbf{a}$ under the map in (\[eq:cocylemap\]), and let $\pi(\psi_u,\delta)$ be the character of $G(\mathbb{R},\delta)$ attached to $\psi_u$ by Theorem \[theo:unipotentparameter\]$(e)$. Define $$\pi(\psi,\delta)=\chi(\mathbf{a},\delta)\pi(\psi_u,\delta).$$ Then $$\begin{aligned}
\Pi^{z}(G/\mathbb{R})_{\psi}^{\mathrm{ABV}}=\{\pi(\psi,\delta)\}_{\delta \text{ strong real form of } G^{\Gamma}}.\end{aligned}$$
Let $\psi$ be an essentially unipotent Arthur parameter of ${}^{\vee}G^{\Gamma}$ such that $\psi|_{\mathbf{SL}(2,\mathbb{C})}=\psi_1$ is a principal morphism. Let $\psi_{u}$ be any unipotent Arthur parameter extending $\psi_1$. It is straightforward to check that $$(\psi|_{W_{\mathbb{R}}})(\psi_{u}|_{W_{\mathbb{R}}})^{-1}$$ corresponds to a cocycle $\mathrm{a}\in Z^{1}(W_{\mathbb{R}},Z({}^{\vee}{G}))$. The first part of the corollary follows. For each strong real form $\delta$ of $G^{\Gamma}$ let $\chi(\mathbf{a},\delta)$ and $\pi(\psi_u,\delta)$ be as in the statement of the corollary. From Theorem \[theo:unipotentparameter\]$(e)$ we have $$\Pi^{z}(G/\mathbb{R})_{\psi_u}^{\mathrm{ABV}}=\{\pi(\psi_u,\delta)\}_{\delta \text{ strong real form of } G^{\Gamma}}.$$ Hence from Proposition \[prop:torsion\]$(ii)$ we conclude $$\begin{aligned}
\Pi^{z}(G/\mathbb{R})_{\psi}^{\mathrm{ABV}}
&=\{\pi(\psi_u,\delta)\chi(\mathbf{a},\delta)\}_{\delta \text{ strong real form of } G^{\Gamma}}\\
&=\{\pi(\psi,\delta)\}_{\delta \text{ strong real form of } G^{\Gamma}}.\end{aligned}$$
The next result fully generalizes Theorem \[theo:unipotentparameter\] to $E$-groups admitting essentially principal unipotent Arthur parameters. The techniques employed in the proof are the same as the ones used to prove Theorem \[theo:unipotentparameter\] in [@ABV] (see pages 306-310 [@ABV]). They are based on results coming from Chapters 20-21 [@ABV], which remain valid in our framework, and from Chapter 27, which are proved in [@ABV] only in the case of representations with infinitesimal character $\mathcal{O}$ arising from homomorphisms $\mathbf{SL}(2,\mathbb{C})\rightarrow {}^{\vee}G$. Since the infinitesimal character of the representations that we consider here differs from $\mathcal{O}$ by a central element, the results of Chapter 27 easily generalize to our setting.
\[theo:essentiallyunipotentparameter\] Suppose $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ is an $E$-group for $G$. Write $z$ for the second invariant of the $E$-group, and suppose there is $\lambda$, a central element in ${}^{\vee}{\mathfrak{g}}$ with $[\lambda,\theta_{Z}(\lambda)]=0$, such that $$z\exp(\pi i(\lambda-\theta_{Z}(\lambda)))\in (1+\theta_Z)Z({}^{\vee}G).$$ Fix a principal morphism $$\begin{aligned}
\psi_{1}:\mathbf{SL}(2,\mathbb{C})\longrightarrow {}^{\vee}{G},\end{aligned}$$ and write $\mathcal{O}={}^{\vee}G\cdot (\lambda+\lambda_1)$, where $\lambda_1=d\psi_{1}
\left(\begin{array}{cc}1/2& 0\\
0& -1/2
\end{array}\right)$.
a) The set of equivalence classes of essentially unipotent Arthur parameters attached to $\psi_{1}$, with corresponding orbit contained in $X\left(\mathcal{O},{}^{\vee}G^{\Gamma}\right)$, may be identified with $$H^{1}(\Gamma,Z({}^{\vee}{G}))=\{z\in Z({}^{\vee}{G}):z\theta_{Z}(z)=1\}/\{w\theta_{Z}^{-1}(w):w\in Z({}^{\vee}{G})\}.$$
b) The projective representations (of some real form $G^{can}(\mathbb{R},\delta)$) having infinitesimal character $\mathcal{O}$ attached to $\psi_{1}$ are precisely the projective characters of type $z$ with infinitesimal character $\mathcal{O}$.
c) Let $\delta$ be any strong real form of $G^{\Gamma}$ and write $\mathrm{Hom}^{z}_{\mathrm{cont}}({G}^{can}(\mathbb{R},\delta),\mathbb{C}^{\times})$, for the set of characters of type $z$ of $G^{can}(\mathbb{R},\delta)$. Then there is a natural map $$\begin{aligned}
\label{eq:mapessentially}
H^{1}(\Gamma,Z({}^{\vee}{G}))\rightarrow \mathrm{Hom}^{z}_{\mathrm{cont}}({G}^{can}(\mathbb{R},\delta),\mathbb{C}^{\times}),\end{aligned}$$ whose image is the set of projective characters of $G^{can}(\mathbb{R},\delta)$ of type $z$, having infinitesimal character $\mathcal{O}$.
d) Suppose $\psi$ is an essentially unipotent Arthur parameter attached to $\psi_1$ with corresponding orbit contained in $X\left(\mathcal{O},{}^{\vee}G^{\Gamma}\right)$. Let $\delta$ be a strong real form of $G^{\Gamma}$. Write $\pi(\psi,\delta)$ for the projective character of type $z$ of $G^{can}(\mathbb{R},\delta)$ attached to $\psi$ by composing the bijection of $a)$ with the map of $c)$. Let $P(\psi,\delta)$ be the perverse sheaf on $X\left({}^{\vee}G^{\Gamma}\right)$ corresponding to $\pi(\psi,\delta)$ under the Local Langlands Correspondence (see Theorem \[theo:4.1\]). Then for every perverse sheaf $P$ on $X\left({}^{\vee}G^{\Gamma}\right)$, we have $$\chi_{S_{\psi}}^{mic}(P)=\left\{\begin{array}{cl}
1& \text{if }P\cong P(\psi,\delta),\quad\text{ for some strong real form }\delta\text{ of }G\\
0& \text{otherwise}.
\end{array}\right.$$ Consequently, $$\begin{aligned}
\label{eq:repunipotenttordu}
\Pi^{z}(G/\mathbb{R})_{\psi}^{\mathrm{ABV}}=\{\pi(\psi,\delta)\}_{\delta \text{ strong real form of } G^{\Gamma}}.\end{aligned}$$
For $(a)$ we start by fixing an essentially unipotent Arthur parameter $\psi_0$ attached to $\psi_1$. Notice that for each essentially unipotent Arthur parameter $\psi$ attached to $\psi_1$, the equivalence class of the product $$(\psi|_{W_{\mathbb{R}}})(\psi_{0}|_{W_{\mathbb{R}}})^{-1}$$ defines a class $\mathbf{a}_{\psi}\in H^{1}(\Gamma,Z({}^{\vee}{G}))$. Therefore, each essentially unipotent Arthur parameter $\psi$ attached to $\psi_1$ can be written as $$\psi(w,g)=(\mathrm{a}_{\psi}(w),g)\psi_0(w,g),$$ where $\mathrm{a}_{\psi}$ is a representative of $\mathbf{a}_{\psi}$. Point $(a)$ follows. For $(b)$ we notice that the infinitesimal character corresponding to $\mathcal{O}$ (see Lemma 15.4 [@ABV]) is the infinitesimal character of a one-dimensional representation, so the corresponding maximal ideal in $U(\mathfrak{g})$ (see Theorem 21.8 [@ABV]) is the annihilator of a one-dimensional representation. Point $(b)$, like Theorem \[theo:unipotentparameter\](c), follows from Corollary 27.13 [@ABV]. We notice that Corollary 27.13 [@ABV] has been proved in [@ABV] only in the case where the infinitesimal character $\mathcal{O}$ arises from a homomorphism of $\mathbf{SL}(2,\mathbb{C})$ into ${}^{\vee}G$, but is easily generalized to our setting. Indeed, Corollary 27.13 [@ABV] is a consequence of Theorem 27.10 and Theorem 27.12 [@ABV], whose proofs are still valid in the case of our infinitesimal character $\mathcal{O}$, and of Theorem 21.6 and Theorem 21.8 [@ABV], whose proofs are valid for any infinitesimal character.
For (c), we start as in (a) by fixing an essentially unipotent Arthur parameter $\psi_0$ attached to $\psi_1$. Write $\varphi_{\psi_0}$ for the corresponding Langlands parameter. Let $\delta_0$ be a strong real form of $G^{\Gamma}$ whose associated real form is quasi-split. Then the Langlands packet $\Pi^{z}(G(\mathbb{R},\delta_0))_{\varphi_{\psi_0}}$ contains exactly one representation, a canonical projective character of type $z$ that we denote $\chi(\psi_0,\delta_0)$. Next, for each strong real form $\delta$ of $G^{\Gamma}$ we define a canonical projective character $\chi(\psi_0,\delta)$ of type $z$ as follows. Suppose $T$ is a Cartan subgroup of $G$ with $\text{Ad}(\delta_0)T=T$, and such that $T(\mathbb{R},\delta_0)$ is a maximally split Cartan subgroup of $G(\mathbb{R},\delta_0)$. Let $g\in G$ be such that $\text{Ad}(\delta)\left(gTg^{-1}\right)=gTg^{-1}$; then $gTg^{-1}(\mathbb{R},\delta)$ defines a Cartan subgroup of $G(\mathbb{R},\delta)$. For any $t\in gTg^{-1}(\mathbb{R},\delta)$ set $$\chi(\psi_0,\delta)(t)=\chi(\psi_0,\delta_0)(\text{Ad}(g^{-1})t).$$ Then the conditions of Lemma 2.5.2 [@Adams-Johnson] hold, and $\chi(\psi_0,\delta)$ extends uniquely to a one-dimensional representation, also denoted $\chi(\psi_0,\delta)$, of $G(\mathbb{R},\delta)$. Now, for each essentially unipotent Arthur parameter $\psi$ attached to $\psi_1$, let $\mathbf{a}_{\psi}\in H^{1}(\Gamma,Z({}^{\vee}{G}))$ be defined as in $(a)$. From Theorem \[theo:unipotentparameter\](d), we know how to attach to each couple $(\mathbf{a}_{\psi},\delta)$ a character $\chi(\mathbf{a}_{\psi},\delta)\in \mathrm{Hom}_{\mathrm{cont}}({G}^{can}(\mathbb{R},\delta)/{G}^{can}(\mathbb{R},\delta)_{0},\mathbb{C}^{\times})$. We can now define the natural map of (c): for each strong real form $\delta$ of $G^{\Gamma}$ we define (\[eq:mapessentially\]) by $$\mathbf{a}_{\psi}\mapsto \chi(\mathbf{a}_{\psi},\delta)\chi({\psi_0},\delta).$$ The surjectivity of (\[eq:mapessentially\]) onto the set of projective characters of $G^{can}(\mathbb{R},\delta)$ of type $z$ having infinitesimal character $\mathcal{O}$ is a consequence of the surjectivity of (\[eq:mapunipotent\]).
Finally, the proof of $(d)$ can be done along the same lines as that of Theorem \[theo:unipotentparameter\](e) (see page 309 [@ABV]). For each strong real form of $G$ we just need to replace the characters appearing in Theorem \[theo:unipotentparameter\](d) with the ones in the image of the map in (c), and use point (b) instead of Theorem \[theo:unipotentparameter\](c).
The Adams-Johnson construction
------------------------------
In this section we study ABV-packets attached to a particular family of Arthur parameters, namely those related to representations with cohomology. In [@Adams-Johnson], Adams and Johnson attached to any Arthur parameter in this family a packet consisting of representations cohomologically induced from unitary characters. Moreover, they proved that each packet defined in this way is the support of a stable distribution (see Theorem (2.13) [@Adams-Johnson]), and that these distributions satisfy the ordinary endoscopic identities predicted by the theory of endoscopy (see Theorem (2.21) [@Adams-Johnson]). The objective of this section is to show that the packets defined by Adams-Johnson are ABV-packets, that is, for the family of Arthur parameters studied in [@Adams-Johnson], the packet associated in (\[deftn:abvpackets\]) to any parameter in this family, coincides with the packet defined by Adams and Johnson.
We begin the section by describing the family of Arthur parameters studied in [@Adams-Johnson] and the construction of Adams-Johnson packets. The description of the parameters and of the Adams-Johnson packets that we give here, is inspired by Section 3.4.2.2 [@Otaibi] and by Section 5 [@Arthur89] (see also Section 5 [@AMR] version 1). The main difference with these two references is that in this article, we use the Galois form of the $L$-group instead of the Weil form. By doing this we are able to describe the Adams-Johnson construction in the language of extended groups of [@ABV]. The only complication by using the Galois form is that to define the packets of Adams-Johnson it will be necessary to work with an $E$-group of some Levi subgroup of $G$, and consequently to use the canonical cover (see Definition \[deftn:canonicalcover\]) of this Levi subgroup, and to work with some projective characters of strong real forms of this cover.\
Suppose $\left({}^{\vee}{G}^{\Gamma},\mathcal{S}\right)$ is an $L$-group for $G$ (see Definition \[deftn:egroup\]). We recall that $\mathcal{S}$ is a ${}^{\vee}{G}$-conjugacy class of pairs $\left({}^{\vee}\delta,{}^{d}{B}\right)$, with ${}^{d}{B}$ a Borel subgroup of ${}^{\vee}G$ and ${}^{\vee}\delta$ an element of order two in ${}^{\vee}G^{\Gamma}-{}^{\vee}{G}$ such that conjugation by ${}^{\vee}\delta$ is a distinguished involutive automorphism $\sigma_{{}^{\vee}\delta}$ of ${}^{\vee}G$ preserving ${}^{d}B$. Since $\sigma_{{}^{\vee}\delta}$ is distinguished, it preserves a splitting $\left({}^{\vee}G,{}^{d}{B},{}^{d}{T},\{X_{\alpha}\}\right)$ of $^{\vee}G$ that we fix from now on. We notice that the $L$-group ${}^{\vee}G^{\Gamma}$ can be more explicitly described as the disjoint union of ${}^{\vee}G$ and the coset ${}^{\vee}G{}^{\vee}\delta$, with multiplication on ${}^{\vee}G^{\Gamma}$ defined by the rules: $$(g_1{}^{\vee}\delta)(g_2{}^{\vee}\delta)=g_1\sigma_{{}^{\vee}\delta}(g_2),\qquad
(g_1{}^{\vee}\delta)(g_2)=g_1\sigma_{{}^{\vee}\delta}(g_2){}^{\vee}\delta$$ and the obvious rules for the other two kinds of products. This explicit description of ${}^{\vee}G^{\Gamma}$ amounts to the usual description of an $L$-group as the semi-direct product of ${}^{\vee}G$ with the Galois group.
Now, let $\psi$ be an Arthur parameter for ${}^{\vee}G^{\Gamma}$. As explained in the case of Langlands parameters, the restriction of $\psi$ to $\mathbb{C}^{\times}$ takes the following form: there exist $\lambda$, $\lambda'\in X_{\ast}({}^{d}{T})\otimes\mathbb{C}$ with $\lambda-\lambda'\in X_{\ast}({}^{d}{T})$ such that $$\begin{aligned}
\label{eq:AJparameterequation0}
\psi(z)=z^{\lambda}\bar{z}^{\lambda'}.\end{aligned}$$ We may suppose $\lambda$ is dominant for ${}^{d}{B}$. Let ${}^{d}{L}$ be the centralizer of $\psi(\mathbb{C}^{\times})$ in ${}^{\vee}{{G}}$ and write ${}^{d}\mathfrak{l}$ for its lie algebra. We have $$Z\left({}^{d}{L}\right)_{0}\subset {}^{d}{T}\subset{}^{d}{L}$$ and $$\lambda,\lambda'\in X_{\ast}\left(Z\left({}^{d}{L}\right)_{0}\right)\otimes\mathbb{C}\quad \text{with}\quad
\lambda-\lambda'\in X_{\ast}\left(Z\left({}^{d}{L}\right)_{0}\right).$$ Set $\psi(j)=n{}^{\vee}\delta$ where $n\in{}^{\vee}G$. Then $$\text{Ad}(n{}^{\vee}\delta)(\lambda)=\lambda'\quad \text{and} \quad (n{}^{\vee}\delta)^{2}=\psi(-1)=(-1)^{\lambda+\lambda'}.$$ Set ${}^{d}{B}_{{}^{d}{L}}={}^{d}{L}\cap {}^{d}{B}$ and write ${}^{\vee}{\rho}_{{}^{d}{L}}$ for the half-sum of the positive coroots defined by the system of positive roots $R\left({}^{d}{B}_{{}^{d}{L}},{}^{d}{T}\right)$. The family of Arthur parameters studied by Adams and Johnson consists of those satisfying the following properties:
(AJ1) The identity component of $Z\left({}^{d}{L}\right)^{\psi(j)}$ is contained in $Z\left({}^{\vee}{G}\right)$.

(AJ2) $\psi(\mathbf{SL}(2,\mathbb{C}))$ contains a principal unipotent element of ${}^{d}{L}$.

(AJ3) $\left<\lambda+{}^{\vee}{\rho}_{{}^{d}{L}},\alpha\right>\neq 0$ for every root $\alpha\in R\left({}^{\vee}{G},{}^{d}{T}\right)$.
Let $S=\psi(\mathbf{SL}(2,\mathbb{C}))$. It is immediate from (AJ2) that $S$ is $\mathbf{SL}_2$-principal in ${}^{d}{L}$. Write $\mathfrak{s}$ for the Lie algebra of $S$, and let $(h,e,f)$ be a $\mathfrak{sl}_{2}$-triplet generating $\mathfrak{s}$. Up to conjugation we can, and will, suppose that $$h={}^{\vee}{\rho}_{{}^{d}{L}}\in {}^{d}{T}.$$ Since $\psi(j)=n{}^{\vee}\delta$ commutes with $S$, conjugation by $n{}^{\vee}\delta$ fixes $h$, and because $h$ is regular in ${}^{d}{L}$, $n{}^{\vee}\delta$ normalizes ${}^{d}{T}$. It is also obvious that $n{}^{\vee}\delta$ normalizes ${}^{d}{L}$ and $Z\left({}^{d}{L}\right)$. Now, ${}^{\vee}\delta$ normalizes ${}^{d}{T}$, so $n$ normalizes ${}^{d}{T}$. From now on we assume that ${}^{\vee}{G}$ is semi-simple. We make this assumption just to simplify the exposition that follows; the conclusions in (\[eq:AJLparameter\]) and (\[eq:AJLparameter2\]) remain true in the reductive case. Point (AJ1) is therefore equivalent to\
$\mathrm{AJ1}^{\prime}$. $Z({}^{d}{L})_{0}^{\psi(j)}$ is trivial.\
\
As $\mathbb{R}^{\times}$ is the center of $W_{\mathbb{R}}$, the group $\psi(\mathbb{R}^{\times})$ commutes with $\psi(W_{\mathbb{R}}\times \mathbf{SL}(2,\mathbb{C}))$. Hence $\psi(\mathbb{R}^{\times})\subset Z\left({}^{d}{L}\right)^{\psi(j)}$ and it follows that $\psi(\mathbb{R}_{+}^{\times})\subset Z\left({}^{d}{L}\right)_{0}^{\psi(j)}$, which is trivial. Consequently $$\lambda+\lambda'=0$$ and we can write $$\psi(z)=z^{\lambda}\bar{z}^{-\lambda}=\left(\frac{z}{|z|}\right)^{2\lambda}.$$ Hence $\psi(j)^{2}=(-1)^{2\lambda}$. In particular Ad$(\psi(j))$ is a linear automorphism of order two of ${}^{\vee}{\mathfrak{g}}$. It is semi-simple with eigenvalues equal to $\pm 1$. Condition (AJ1$^{\prime}$) is then equivalent to $$\text{Ad}(\psi(j))|_{\mathfrak{z}({}^{d}\mathfrak{l})}=-\text{Id}|_{\mathfrak{z}({}^{d}\mathfrak{l})}.$$ The following arguments are taken from Section 3.4.2.2 [@Otaibi].
Let $\mathbf{n}:W\left({}^{\vee}{G},{}^{d}{T}\right)\rtimes \Gamma\rightarrow N\left({}^{\vee}{G},{}^{d}{T}\right)\rtimes \Gamma$ be the section defined in Section 2.1 [@Langlands-Shelstad]. Let $w_0$ be the longest element in $W({}^{\vee}{G},{}^{d}{T})$ and let $w_{{}^{d}{L}}$ be the longest element in $W\left({}^{d}{L},{}^{d}{T}\right)$. Write $$\begin{aligned}
\label{eq:n1}
n_{1}{}^{\vee}\delta=\mathbf{n}(w_{0}w_{{}^{d}{L}}{}^{\vee}\delta). \end{aligned}$$ Let $\Delta\left({}^{d}{B}_{{}^{d}{L}},{}^{d}{T}\right)$, be the set of simple roots of the positive root system $R\left({}^{d}{B}_{{}^{d}{L}},{}^{d}{T}\right)$. Since $w_0$ (resp. $w_{{}^{d}{L}}$) sends positive roots in $R\left({}^{d}{B},{}^{d}{T}\right)$ (resp. $R\left({}^{d}{B}_{{}^{d}{L}},{}^{d}{T}\right)$) to negative roots, and because $\sigma_{{}^{\vee}\delta}$ preserves the splitting $\left({}^{\vee}G,{}^{d}{B},{}^{d}{T},\{{X}_{\alpha}\}\right)$, we can conclude that $n_{1}{}^{\vee}\delta$ preserves $\Sigma\left({}^{d}{B}_{{}^{d}{L}},{}^{d}{T}\right)$. Moreover, $n_{1}{}^{\vee}\delta$ acts as -Id on $\mathfrak{z}({}^{d}\mathfrak{l})$ and by $t\mapsto t^{-1}$ in $Z\left({}^{d}{L}\right)$. In particular, Ad$(n_{1}{}^{\vee}\delta)\lambda=-\lambda=\lambda'$. Thus $$(n_{1}{}^{\vee}\delta)\psi(z)(n_{1}{}^{\vee}\delta)^{-1}=\psi(\overline{z}),$$ the element $nn_{1}^{-1}$ commutes with $\psi(\mathbb{C}^{\times})$, and we have $nn_{1}^{-1}\in {}^{d}{L}$. Furthermore, from Proposition 9.3.5 [@Springer], $w_0w_{{}^{d}{L}}{}^{\vee}\delta$ preserves the splitting $\left({}^{d}{L},{}^{d}{B}_{{}^{d}{L}},{}^{d}{T},\{{X}_{\alpha}\}_{\alpha\in \Delta\left({}^{d}{B}_{{}^{d}{L}},{}^{d}{T}\right)}\right)$. Hence $n_{1}{}^{\vee}\delta$ commutes with $S$, and we can say the same for $nn_{1}^{-1}$. Now, $S$ is principal in ${}^{\vee}{G}$ so $nn_{1}^{-1}\in Z\left({}^{d}{L}\right)$ and there exists $t\in Z\left({}^{d}{L}\right)$ such that $$\begin{aligned}
\label{eq:AJparameterequation01}
tn_1=n.\end{aligned}$$ We compute $$\begin{aligned}
\label{eq:ajparameteridentity1}
(-1)^{2\lambda}=\psi(j)^{2}=(n{}^{\vee}\delta)^{2}=&(t n_{1}{}^{\vee}\delta)^{2}=t(n_1{}^{\vee}\delta)t(n_1{}^{\vee}\delta)^{-1}(n_1{}^{\vee}\delta )^{2}\\
=&tt^{-1}(n_1{}^{\vee}\delta)^{2}=(n_1{}^{\vee}\delta)^{2}.\nonumber\end{aligned}$$ Let ${}^{d}{Q}={}^{d}{L}{}^{d}{U}$ be the parabolic subgroup of ${}^{\vee}{G}$ containing ${}^{d}{T}$ and such that the roots of ${}^{d}{T}$ in ${}^{d}{Q}$ are the $\alpha\in R\left({}^{\vee}{G},{}^{d}T\right)$ satisfying $\left<\lambda,\alpha\right>\geq 0$. In particular ${}^{d}{L}$ is a Levi factor of ${}^{d}{Q}$. The section $\mathbf{n}:W\left({}^{\vee}{G},{}^{d}{T}\right)\rtimes \Gamma\rightarrow N\left({}^{\vee}{G},{}^{d}{T}\right)\rtimes \Gamma$ defined in [@Langlands-Shelstad] has the property that $$\begin{aligned}
\label{eq:ajparameteridentity}
(n_{1}{}^{\vee}\delta)^{2}=\mathbf{n}(w_0w_{{}^{d}{L}})=\prod_{\alpha\in R\left({}^{d}{T},{}^{d}{U}\right)}\alpha^{\vee}(-1).\end{aligned}$$ Therefore, $\prod_{\alpha\in R\left({}^{d}{U},{}^{d}T\right)}\alpha^{\vee}(-1)=(-1)^{2\lambda}$ and from Proposition 1.3.5 [@Shelstad81] we can conclude $$\lambda\in X_{\ast}\left(Z\left({}^{d}{L}\right)\right)+\frac{1}{2}\sum_{\alpha\in R\left({}^{d}{U},{}^{d}T\right)}\alpha^{\vee}.$$ Let $$\begin{aligned}
\label{eq:AJLparameter}
\mu=\lambda+{}^{\vee}{\rho}_{{}^{d}{L}},\quad \mu'=-\lambda+{}^{\vee}{\rho}_{{}^{d}{L}}.\end{aligned}$$ Then $$\begin{aligned}
\label{eq:higuest}
\mu,\mu'\in X_{\ast}({}^{d}{T})+\frac{1}{2}\sum_{\alpha\in R\left({}^{d}{B},{}^{d}T\right)}\alpha^{\vee}\end{aligned}$$ and $$\text{Ad}(\psi(j))\cdot\mu=\mu'.$$ The Langlands parameter corresponding to $\psi$ verifies for every $z\in\mathbb{C}^{\times}$ $$\begin{aligned}
\label{eq:AJLparameter2}
\varphi_{\psi}(z)=z^{\mu}\bar{z}^{\mu'}.\end{aligned}$$ We point out that from point (AJ3) above, $\mu$ is regular and from (\[eq:higuest\]), $\mu$ is the infinitesimal character of a finite-dimensional representation of some real form of $G$ with highest weight (relative to the root system $R\left({}^{d}{B},{}^{d}T\right)$) equal to $$\mu-\frac{1}{2}\sum_{\alpha\in R\left({}^{d}{T},{}^{d}{B}\right)}\alpha^{\vee}=\lambda-\frac{1}{2}\sum_{\alpha\in R\left({}^{d}{T},{}^{d}{U}\right)}\alpha^{\vee}.$$
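Two extreme cases may help to situate this family; both are already covered by the preceding sections, and this remark is included only for orientation. If ${}^{d}{L}={}^{\vee}{G}$, then $\psi(\mathbb{C}^{\times})\subset Z({}^{\vee}{G})$ and $\psi$ is an essentially principal unipotent Arthur parameter, so its ABV-packet is described by Theorem \[theo:essentiallyunipotentparameter\] and consists of one-dimensional representations, one for each strong real form. If ${}^{d}{L}={}^{d}{T}$, then $\psi|_{\mathbf{SL}(2,\mathbb{C})}$ is trivial, $\psi=\varphi_{\psi}$ is a tempered Langlands parameter with regular infinitesimal character, and by the tempered case treated above its ABV-packet coincides with the $L$-packet $\Pi^{z}({G}/\mathbb{R})_{\varphi_{\psi}}$.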
This concludes the description of the Arthur parameters studied in [@Adams-Johnson]. To define the cohomological packets we first need to connect the Levi subgroup ${}^{d}{L}$ of ${}^{\vee}G$ to an extended group for some Levi subgroup $L$ of $G$, and to factor $\psi$ through an Arthur parameter $\psi_L$ of some $E$-group for $L$. The second invariant of this $E$-group will be equal to $(n_1{}^{\vee}\delta)^{2}$, and since this element is not necessarily one, it will not always be possible to factor $\psi$ through an Arthur parameter of an $L$-group for $L$. Because of the Local Langlands Correspondence, this will make it necessary to use the canonical cover of $L$. The final step in the construction of the packets is to use Theorem \[theo:essentiallyunipotentparameter\] to associate to $\psi_L$ a family of canonical projective characters of strong real forms of $L$ of type $(n_1{}^{\vee}\delta)^{2}$, and to apply cohomological induction to them.\
Let $\delta_{qs}$ be a strong real form of $G^{\Gamma}$ whose associated real form $\sigma_{\delta_{qs}}$ is quasi-split. Write $\theta_{\delta_{qs}}$ for the corresponding Cartan involution. Suppose $T$ is a Cartan subgroup of $G$ stable under $\sigma_{\delta_{qs}}$ and $\theta_{\delta_{qs}}$. We have a canonical isomorphism between the based root data $$(X_{\ast}({T}),\Delta^{\vee},X^{\ast}({T}),\Delta)\quad\text{and}
\quad \left(X^{\ast}({}^{d}{T}),{}^{d}\Delta,X_{\ast}({}^{d}{T}),{}^{d}\Delta^{\vee}\right).$$ Using this isomorphism, to the couple $\left({}^{d}{Q},{}^{d}{L}\right)$ we can associate a parabolic group $Q=LU$ of $G$ containing $T$ such that ${}^{\vee}L\cong {}^{d}{L}$. Now, since conjugation by the element $n_1{}^{\vee}\delta$ defined in (\[eq:n1\]) preserves the splitting $\left({}^{d}{L},{}^{d}{B}_{{}^{d}{L}},{}^{d}{T},\{{X}_{\alpha}\}_{\alpha\in \Delta\left({}^{d}{B}_{{}^{d}{L}},{}^{d}{T}\right)}\right)$, through the isomorphism ${}^{\vee}L\cong {}^{d}{L}$ it translates into a distinguished involutive automorphism of ${}^{\vee}L$. Write $a_L$ for the automorphism of the based root datum of ${}^{\vee}L$ (or equivalently of the based root datum of $L$) induced in this way by $n_1{}^{\vee}\delta$. We define (see Proposition \[prop:classificationextended\]($iii$)) $$\begin{aligned}
\label{eq:extendedgroupL}
L^{\Gamma} \text{ to be the extended group of $L$ with invariants }(a_L,\overline{1}). \end{aligned}$$ Let $a_G$ be the automorphism of the based root datum of $G$ induced by ${}^{\vee}\delta$ through duality. Since $n_{1}{}^{\vee}\delta$ induces the same automorphism, we have $a_G|_{\Psi_{0}(L)}=a_L$. Consequently, each real form in the inner class corresponding to $a_L$, is the restriction to $L$ of a real form in the inner class corresponding to $a_G$, that is $L^{\Gamma}\subset G^{\Gamma}$. Next, write $$\begin{aligned}
\label{eq:sinvariant7}
z=(n_1{}^{\vee}\delta)^{2}\end{aligned}$$ and define (see Proposition 4.7(c) [@ABV]) $$\begin{aligned}
\label{eq:e-groupL}
\left({}^{\vee}L^{\Gamma},\mathcal{S}_L\right) \text{ to be the unique } E\text{-group with invariants }(a_L,z).\end{aligned}$$ More precisely, let $\sigma_{L}$ be any distinguished automorphism of ${}^{\vee}L$ corresponding to $a_L$. Then the group ${}^{\vee}L^{\Gamma}$ is defined as the union of ${}^{\vee}{L}$ and the set ${}^{\vee}{L}^{\vee}\delta_{L}$ of formal symbols $l{}^{\vee}\delta_{L}$, with multiplication defined according to the rules: $$(l_1{}^{\vee}\delta_L)(l_2{}^{\vee}\delta_L)=l_1\sigma_{L}(l_2) z,\qquad (l_1{}^{\vee}\delta_L)(l_2)=l_1\sigma_{L}(l_2){}^{\vee}\delta_L$$ and the obvious rules for the other two kinds of product.
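As an immediate consistency check with (\[eq:sinvariant7\]), note that the first rule applied with $l_{1}=l_{2}=1$ gives $$({}^{\vee}\delta_{L})^{2}=\sigma_{L}(1)\,z=z=(n_{1}{}^{\vee}\delta)^{2},$$ which is the identity used below to see that the embedding (\[eq:embedding1\]) is well defined.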
We explain now how to extend the isomorphism ${}^{\vee}{L}\cong{}^{d}{{L}}$ to an embedding $$\begin{aligned}
\label{eq:embedding1}
\iota_{{L},{G}}:{}^{\vee}L^{\Gamma}\longrightarrow {}^{\vee}{G}^{\Gamma}.\end{aligned}$$ Since ${}^{\vee}L^{\Gamma}$ is the disjoint union of ${}^{\vee}L$ and the coset ${}^{\vee}L {}^{\vee}\delta_{L}$ we just need to define $\iota_{{L},{G}}$ on ${}^{\vee}L{}^{\vee}\delta_{L}$. We do this by sending each element $l{}^{\vee}\delta_L\in {}^{\vee}L {}^{\vee}\delta_{L}$ to: $$\begin{aligned}
\label{eq:embedding2}
\iota_{{L},{G}}(l{}^{\vee}\delta_L)=ln_1{}^{\vee}\delta.\end{aligned}$$ And because ${}^{\vee}\delta_L^{2}=z=(n_1{}^{\vee}\delta)^{2}$, it is straightforward to verify that the embedding is well-defined.
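For completeness, here is the elementary computation behind this verification; we use that, under the identification ${}^{\vee}L\cong{}^{d}{L}$, the automorphism $\sigma_{L}$ may be taken to be conjugation by $n_{1}{}^{\vee}\delta$ (both are distinguished automorphisms inducing $a_{L}$): $$\begin{aligned}
\iota_{{L},{G}}(l_{1}{}^{\vee}\delta_{L})\,\iota_{{L},{G}}(l_{2}{}^{\vee}\delta_{L})&=(l_{1}n_{1}{}^{\vee}\delta)(l_{2}n_{1}{}^{\vee}\delta)
=l_{1}\left((n_{1}{}^{\vee}\delta)l_{2}(n_{1}{}^{\vee}\delta)^{-1}\right)(n_{1}{}^{\vee}\delta)^{2}\\
&=l_{1}\sigma_{L}(l_{2})\,z=\iota_{{L},{G}}\left((l_{1}{}^{\vee}\delta_{L})(l_{2}{}^{\vee}\delta_{L})\right),\end{aligned}$$ and the products of the other types are handled in the same way.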
Let us go back now to our Arthur parameter $\psi$ satisfying points (AJ1), (AJ2) and (AJ3), above. From the properties satisfied by $\psi$ and the definition of ${}^{\vee}{L}^{\Gamma}$ there exists (up to conjugation) a unique essentially unipotent Arthur parameter $$\begin{aligned}
\psi_{L}:W_{\mathbb{R}}\times SL(2,\mathbb{C})\longrightarrow{}^{\vee}{L}^{\Gamma}, \end{aligned}$$ with restriction to $\mathbf{SL}(2,\mathbb{C})$ equal to the principal morphism, such that up to conjugation by ${}^{\vee}{{G}}$ we have $$\begin{aligned}
\psi:W_{\mathbb{R}}\times SL(2,\mathbb{C})\xrightarrow{\psi_{{L}}}{}^{\vee}{L}^{\Gamma} \xrightarrow{\iota_{L,G}}{}^{\vee}{G}^{\Gamma}. \end{aligned}$$ Indeed, with notation as in (\[eq:AJparameterequation0\]) and (\[eq:AJparameterequation01\]), we just need to define $\psi_L$ by $$\begin{aligned}
\label{eq:arthurparameterL}
\psi_{L}(z)=\psi(z)=z^{\lambda}\bar{z}^{\lambda'}\quad\text{ and }\quad \psi_L(j)=t{}^{\vee}\delta_L. \end{aligned}$$ Next, since $\psi_{{L}}$ is an essentially unipotent Arthur parameter, by Theorem \[theo:essentiallyunipotentparameter\](d) there exists for each strong real form $\delta$ of $L^{\Gamma}$ a projective character $\chi(\psi_L,\delta)$ of type $z$ of $L^{can}(\mathbb{R},\delta)$. We notice that since $\psi_{L}|_{\mathbb{C}^{\times}}$ is bounded, the character $\chi(\psi_L,\delta)$ is unitary. Suppose now that $\delta\in L^{\Gamma}$ is a strong real form of $L^{\Gamma}$ so that $\delta^{2}\in Z(G)$. Then $\delta$ can be seen as a strong real form of $G^{\Gamma}$. Write $$O_{\delta}=\{g\delta g^{-1}:g\in N_{G}(L)\}$$ and make $L$ act on $O_{\delta}$ by conjugation. Denote by $\mathcal{O}_{\delta}$ the set of $L$-conjugacy classes of $O_{\delta}$. Each $L$-orbit in $O_{\delta}$ defines an $L$-conjugacy class of strong real forms in $L^{\Gamma}$, belonging to the same $G$-conjugacy class of strong real forms in $G^{\Gamma}$. For each $s\in \mathcal{O}_\delta$, let $\delta_s$ be a representative of the corresponding $L$-conjugacy class, and as above write $\chi(\psi_L,\delta_s)$ for the unitary character of type $z$ of ${L}^{can}(\mathbb{R},\delta_s)$ corresponding to $\delta_s$ and $\psi_L$. Now, let
- $K_{\delta_s}^{can}$ be the preimage in $G^{can}$ of $K_{\delta_s}$, the set of fixed points of the Cartan involution corresponding to $\sigma_{\delta_s}$.
- $\mathfrak{l}=\text{Lie}({L})$.
- $i=(1/2)\dim(\mathfrak{k}_{\delta_s}/\mathfrak{l}\cap\mathfrak{k}_{\delta_s})$.
and consider the cohomologically induced representation $$\begin{aligned}
\left(\mathscr{R}_{\mathfrak{l},K_{\delta_s}^{can}\cap L^{can}}^{\mathfrak{g},K_{\delta_s}^{can}}\right)^{i}(\chi(\psi_L,\delta_s)). \end{aligned}$$ We notice that the $L$-parameter associated to $\left(\mathscr{R}_{\mathfrak{l},K_{\delta_s}^{can}\cap L^{can}}^{\mathfrak{g},K_{\delta_s}^{can}}\right)^{i}(\chi(\psi_L,\delta_s))$ is the $L$-parameter of an $L$-group obtained by composing the $L$-parameter corresponding to $\chi(\psi_L,\delta_s)$ with the embedding $\iota_{{L},{G}}:{}^{\vee}L^{\Gamma}\rightarrow {}^{\vee}{G}^{\Gamma}$. Therefore, using the Local Langlands Correspondence (see Theorem \[theo:4.1\]) we conclude that $\left(\mathscr{R}_{\mathfrak{l},K_{\delta_s}^{can}\cap L^{can}}^{\mathfrak{g},K_{\delta_s}^{can}}\right)^{i}(\chi(\psi_L,\delta_s))$ is of type 1, that is to say trivial on $\pi_1(G)^{can}$, and consequently can be seen as a representation of $G(\delta_{s},\mathbb{R})$.
We can now give the definition of the packets defined by Adams and Johnson in [@Adams-Johnson].
\[deftn:ajpackets\] Let $\psi$ be an Arthur parameter for ${}^{\vee}G^{\Gamma}$ satisfying points $\mathrm{(AJ1)}$, $\mathrm{(AJ2)}$ and $\mathrm{(AJ3)}$, above. The Adams-Johnson packet corresponding to $\psi$ is defined as the set $$\begin{aligned}
\Pi(G/\mathbb{R})_{\psi}^{\mathrm{AJ}}:=\left\{\left(\mathscr{R}_{\mathfrak{l},K_{\delta_s}^{can}\cap L^{can}}^{\mathfrak{g},K_{\delta_s}^{can}}\right)^{i}(\chi(\psi_L,\delta_s)) :\delta\in L^{\Gamma}-L
\mathrm{~with~} \delta^{2}\in Z(G)\mathrm{~and~}s\in \mathcal{O}_{\delta}\right\}. \end{aligned}$$
The next result enumerates some important properties satisfied by Adams-Johnson packets.
\[ajpproperties\] Let $\psi$ be an Arthur parameter for ${}^{\vee}G^{\Gamma}$ satisfying points $\mathrm{(AJ1)}$, $\mathrm{(AJ2)}$ and $\mathrm{(AJ3)}$, above.
i. $\Pi(G/\mathbb{R})_{\psi}^{\mathrm{AJ}}$ contains the $L$-packet $\Pi(G/\mathbb{R})_{\varphi_{\psi}}$.
ii. $\Pi(G/\mathbb{R})_{\psi}^{\mathrm{AJ}}$ is the support of a stable formal virtual character (For a precise description of the stable character see Theorem 2.13 [@Adams-Johnson]).
iii. $\Pi(G/\mathbb{R})_{\psi}^{\mathrm{AJ}}$ satisfies the ordinary endoscopic identities predicted by the theory of endoscopy (For a more precise statement of this point see Theorem 2.21 [@Adams-Johnson]).
We turn now to the proof that Adams-Johnson packets are ABV-packets. We begin by noticing that from Theorem \[theo:essentiallyunipotentparameter\] we have $$\Pi(L/\mathbb{R})_{\psi_L}^{\text{ABV}}:=\left\{\chi(\psi_L,\delta):\delta \text{ a strong real form of }L^{\Gamma} \right\}.$$ Thus, we can give a reformulation of Definition \[deftn:ajpackets\] in terms of the ABV-packet attached to $\psi_{L}$ as follows $$\begin{aligned}
\label{eq:reformulationAJpackets}
\Pi(G/\mathbb{R})_{\psi}^{\mathrm{AJ}}:=\left\{\left(\mathscr{R}_{\mathfrak{l},K_{\delta_s}^{can}\cap L^{can}}^{\mathfrak{g},K_{\delta_s}^{can}}\right)^{i}(\pi) :\pi\in\Pi^{z}(L(\mathbb{R},\delta_s))_{\psi_L}^{\mathrm{ABV}},~ \delta\in L^{\Gamma}-L
\mathrm{~with~} \delta^{2}\in Z(G)\mathrm{~and~}s\in \mathcal{O}_{\delta}\right\}.\end{aligned}$$ The next two results will be needed in what follows. The first is Proposition 1.11 [@ABV].
\[prop:quasisplitrepresentation\] Suppose $\sigma$ is a quasisplit real form of $G$. Let $\varphi,~\varphi'\in \Phi({}^{\vee}{G}^{\Gamma})$ be two Langlands parameters, with $S$ and $S'$ as the corresponding ${}^{\vee}{{G}}$-orbits. Then the following conditions are equivalent:
i. $S$ is contained in the closure of $S'$.
ii. There are irreducible representations $\pi\in \Pi(G(\sigma,\mathbb{R}))_{\varphi}$ and $\pi'\in\Pi(G(\sigma,\mathbb{R}))_{\varphi'}$ with the property that $\pi'$ is a composition factor of the standard module of which $\pi$ is the unique Langlands quotient.
We remark that $(ii)$ implies $(i)$ even for $G(\delta,\mathbb{R})$ not quasi-split.
\[lem:orbit\] Let $\psi$ be an Arthur parameter for ${}^{\vee}{G}^{\Gamma}$ satisfying points $\mathrm{(AJ1)}$, $\mathrm{(AJ2)}$ and $\mathrm{(AJ3)}$, above. Let $L^{\Gamma}$ be the extended group defined in (\[eq:extendedgroupL\]), and let $\left({}^{\vee}{L}^{\Gamma},\mathcal{S}_L\right)$ be the $E$-group defined in (\[eq:e-groupL\]). Write $\psi_{{L}}$ for the unique (up to conjugation) Arthur parameter of ${}^{\vee}{L}^{\Gamma}$ satisfying $\psi=\iota_{L,G}\circ\psi_{L}$. Let $\iota:X\left({}^{\vee}{L}^{\Gamma}\right)\rightarrow X\left({}^{\vee}{G}^{\Gamma}\right)$ be the closed immersion induced from the inclusion ${}^{\vee}{L}^{\Gamma}\hookrightarrow {}^{\vee}{G}^{\Gamma}$ (see Proposition \[prop:mapvarieties\]). Write $S_{\psi}$ for the ${}^{\vee}{G}$-orbit in $X\left({}^{\vee}{G}^{\Gamma}\right)$ corresponding to $\psi$ and $S_{\psi_{L}}$ for the ${}^{\vee}{L}$-orbit in $X\left({}^{\vee}{L}^{\Gamma}\right)$ corresponding to $\psi_{L}$. If $S$ is a ${}^{\vee}{G}$-orbit in $X\left({}^{\vee}{G}^{\Gamma}\right)$ containing $S_{\psi}$ in its closure, then there exists an orbit $S_{L}$ of ${}^{\vee}{L}$ in $X\left({}^{\vee}{L}^{\Gamma}\right)$ with $S_{\psi_{L}}\subset \overline{S}_{{L}}$ and such that $$S={}^{\vee}{G}\cdot \iota(S_{L}).$$
Let $\delta\in G^{\Gamma}$ be a strong real form of $G^{\Gamma}$, whose associated real form is quasi-split. Set $\varphi_{\psi}$ to be the Langlands parameter attached to $\psi$. Suppose $S$ is a ${}^{\vee}{G}$-orbit in $X\left({}^{\vee}G^{\Gamma}\right)$ containing $S_{\psi}$ in its closure, and write $\varphi$ for the Langlands parameter corresponding to $S$ under the map defined in (\[eq:langlandsgeometric\]). From Proposition \[prop:quasisplitrepresentation\] there are irreducible representations $\pi\in \Pi_{\varphi_{\psi}}({G}(\mathbb{R},\delta))$ and $\pi'\in \Pi_{\varphi}({G}(\mathbb{R},\delta))$, with the property that $\pi'$ is a composition factor of the standard module $M(\pi)$ of which $\pi$ is the unique quotient.
Let $\Pi({G}(\mathbb{R},\delta))_{\psi}^{\text{AJ}}$ be the restriction to $\delta$ of the Adams-Johnson packet attached to $\psi$. By Theorem \[ajpproperties\]$(i)$, the $L$-packet $\Pi_{\varphi_{\psi}}({G}(\mathbb{R},\delta))$ is contained in $\Pi_{\psi}^{\text{AJ}}({G}(\mathbb{R},\delta))$. By Definition \[deftn:ajpackets\] there is an element $s\in\mathcal{O}_{\delta}$, to which we can attach a strong real form $\delta_s$ of $L^{\Gamma}$, such that $\pi$ is cohomologically induced from the unitary character $\chi(\psi_L,\delta_s)$ of ${L}^{can}(\mathbb{R},\delta_s)$ corresponding to $\delta_s$ and $\psi_L$ as above, i.e. $$\pi=\left(\mathscr{R}_{\mathfrak{l},K_{\delta_s}\cap L}^{\mathfrak{g},K_{\delta_s}}\right)^{i}(\chi(\psi_L,\delta_s)).$$ Now, by dualizing the resolution of Theorem 1 of Section 6 [@Johnson], we find from the exactness of $\left(\mathscr{R}_{\mathfrak{l},K_{\delta_s}^{can}\cap L^{can}}^{\mathfrak{g},K_{\delta_s}^{can}}\right)^{i}$ that $\pi$ possesses a resolution by direct sums of standard modules $$\begin{aligned}
\label{eq:resolution}
0\leftarrow \pi\leftarrow M(\pi)\leftarrow \cdots \leftarrow M_{0}\leftarrow 0,\end{aligned}$$ cohomologically induced from a resolution by direct sums of standard modules of $\chi(\psi_L,\delta_s)$ $$\begin{aligned}
\label{eq:resolutionL}
0\leftarrow \chi(\psi_L,\delta_s)\leftarrow M(\chi(\psi_L,\delta_s))\leftarrow \cdots \leftarrow M_{0}^{L}\leftarrow 0.\end{aligned}$$ Consequently, every composition factor of $M(\pi)$ is obtained after applying cohomological induction to a composition factor of $M(\chi(\psi_L,\delta_s))$. Therefore, there exists $\pi_{L}'$, the unique quotient of a standard module appearing in (\[eq:resolutionL\]), such that $$\begin{aligned}
\label{eq:resolutionL2}
\pi'=\left(\mathscr{R}_{\mathfrak{l},K_{\delta_s}^{can}\cap L^{can}}^{\mathfrak{g},K_{\delta_s}^{can}}\right)^{i}(\pi_{L}').\end{aligned}$$ Let $\varphi_{L}$ be the Langlands parameter corresponding to $\pi_{L}'$ and write $S_{{L}}$ for the orbit of ${}^{\vee}{L}$ in $X\left({}^{\vee}{L}^{\Gamma}\right)$ associated to $\varphi_L$ under (\[eq:langlandsgeometric\]). Since $\pi_L'$ is a composition factor of $X\left(\chi(\psi_L,\delta_s)\right)$, the remark after Proposition \[prop:quasisplitrepresentation\]$(ii)$ implies that the orbit $S_{\psi_{L}}$ is contained in the closure of ${S_{{L}}}$. Furthermore, from Equation (\[eq:resolutionL2\]) and the relation between the data parameterizing $\pi_L'$ and $\pi$ in [@Johnson] (see page 392 of [@Johnson]), we may verify that the Langlands parameter $\varphi$ factors through $\varphi_{L}$. Thus by Proposition \[prop:mapvarieties\] the orbits $S_L$ and $S$ correspond under the map $\iota:X\left({}^{\vee}{L}^{\Gamma}\right)\rightarrow X\left({}^{\vee}{G}^{\Gamma}\right)$, that is to say $$S={}^{\vee}{G}\cdot \iota(S_{L}).$$
\[theo:ABV-AJ\] Let $\psi$ be an Arthur parameter for ${}^{\vee}G^{\Gamma}$ satisfying points $\mathrm{(AJ1)}$, $\mathrm{(AJ2)}$ and $\mathrm{(AJ3)}$, above. Then $$\Pi(G/\mathbb{R})_{\psi}^{\mathrm{ABV}}=\Pi(G/\mathbb{R})_{\psi}^{\mathrm{AJ}}.$$
Let $L^{\Gamma}$ be the extended group defined in (\[eq:extendedgroupL\]), and let $\left({}^{\vee}{L}^{\Gamma},\mathcal{S}_L\right)$ be the $E$-group defined in (\[eq:e-groupL\]). Write $\psi_{{L}}$ for the unique (up to conjugation) Arthur parameter of ${}^{\vee}{L}^{\Gamma}$ satisfying $\psi=\iota_{L,G}\circ\psi_{L}$. Let $\lambda_\psi$ and $\lambda_\psi'$ be defined as in (\[eq:AJLparameter\]). Then by (\[eq:AJLparameter2\]), (\[eq:arthurparameterL\]) and the definition of the embedding ${}^{\vee}{L}^{\Gamma}\hookrightarrow {}^{\vee}{G}^{\Gamma}$, we have $$\varphi_{\psi_L}(z)=\varphi_{\psi}(z)=z^{\lambda_{\psi}}z^{\lambda'_{\psi}},\quad z\in\mathbb{C}^{\times}.$$ Following the characterization of Arthur parameters related to Adams-Johnson packets, $\lambda_{\psi}$ is an integral and regular semisimple element of ${}^{\vee}{\mathfrak{g}}$. Write $$\mathcal{O}_{\psi}={}^{\vee}{G}\cdot\lambda_{\psi}\quad\text{ and }\quad \mathcal{O}_{\psi_L}={}^{\vee}{L}\cdot\lambda_{\psi}.$$ Let $\xi=\left(\left(y_{\xi},\Lambda_{\xi}\right),\tau_{\xi}\right)$ be a complete geometric parameter for ${}^{\vee}G^{\Gamma}$ such that its corresponding orbit ${S_{\xi}}$ of ${}^{\vee}{G}$ in $X\left({}^{\vee}G^{\Gamma}\right)$, contains $S_{\psi}$ in its closure. Let $\pi(\xi)$ be the irreducible representation corresponding to $\xi$ under the Local Langlands Correspondence (see Theorem \[theo:4.1\]) and write $\delta_{\xi}$ for the strong real form of $G^{\Gamma}$ of which $\pi(\xi)$ is a representation (i.e. $\pi(\xi)\in \Pi(G(\mathbb{R},\delta_{\xi}))$). Let $K_\xi$ be the set of fixed points of the Cartan involution corresponding to ${\delta_{\xi}}$. By Lemma \[lem:orbit\] there exists an orbit $S_{L}$ of ${}^{\vee}{L}$ in $X\left({}^{\vee}L^{\Gamma}\right)$ satisfying $$\begin{aligned}
\label{eq:orbitsrelation}
S_{\psi_{L}}\subset \overline{S}_{{L}}\quad \text{and}\quad S_{\xi}={}^{\vee}{G}\cdot \iota (S_{L}).\end{aligned}$$ Therefore, the Langlands parameter associated to $\xi$ factors through ${}^{\vee}L^{\Gamma}$ and $\pi(\xi)$ is cohomologically induced from an irreducible representation of some real form of $L$. More precisely, there exists a complete geometric parameter $\xi_{L}=((y_{\xi_L},\Lambda_{\xi_L}),\tau_{\xi_L})$ for ${}^{\vee}L^{\Gamma}$ with $S_{\xi_L}=S_{L}$ and such that if we write $\pi_{L}(\xi_{L})$ for the irreducible representation corresponding to $\xi_{L}$ under the Langlands correspondence, then $$\begin{aligned}
\label{eq:LtoG1}
\pi(\xi)=\left(\mathscr{R}_{\mathfrak{l},K_{\xi}^{can}\cap L^{can}}^{\mathfrak{g},K_{{\xi}}^{can}}\right)^{i}(\pi(\xi_{L})).\end{aligned}$$ Now, since $\lambda_{\xi} \in \mathcal{O}_{\psi}$, point (AJ3) above and equation (\[eq:higuest\]) imply that $\lambda_{\xi}$ is a regular and integral element of ${}^{\vee}\mathfrak{g}$. Similarly, since $\lambda_{\xi_L} \in \mathcal{O}_{\psi_L}$, we deduce that $\lambda_{\xi_L}$ is a regular and integral element of ${}^{\vee}\mathfrak{l}$. Consequently, with notation as in (\[eq:groupslambda\]), we obtain that ${}^{\vee}{G}(\Lambda_{\xi})={}^{\vee}{G}$, ${}^{\vee}{L}(\Lambda_{\xi_L})={}^{\vee}{L}$ and that $P(\Lambda_{\xi})$, respectively $P(\Lambda_{\xi_L})$, is a Borel subgroup of ${}^{\vee}{G}$, respectively of ${}^{\vee}{L}$. Let $X_{y_{\xi}}\left(\mathcal{O}_{\psi},{}^{\vee}G^{\Gamma}\right)$ and $X_{y_{\xi_L}}\left(\mathcal{O}_{\psi_L},{}^{\vee}L^{\Gamma}\right)$ be the smooth subvarieties of $X\left({}^{\vee}G^{\Gamma}\right)$ and $X\left({}^{\vee}L^{\Gamma}\right)$ defined in (\[eq:varieties\]). It is clear from the definition of these varieties that $$S_{\xi}\subset X_{y_{\psi}}\left(\mathcal{O}_{\psi},{}^{\vee}G^{\Gamma}\right)\quad\text{and}\quad S_{\xi_L}\subset X_{y_{\psi_L}}\left(\mathcal{O}_{\psi_L},{}^{\vee}L^{\Gamma}\right).$$ Let ${}^{\vee}K_\xi$ be the set of fixed points of the involutive automorphism defined by $y_\xi$. Define ${}^{\vee}K_{\xi_L}$ similarly. We notice that, since $\varphi$ factors through $\varphi_{L}$, we have $y_{\xi}=\iota_{L,G}(y_{\xi_L})$ (see Equations \[eq:embedding1\] and \[eq:embedding2\]) and thus ${}^{\vee}K_{\xi_L}={}^{\vee}K_{\xi}\cap {}^{\vee}L$. As explained in (\[eq:varietiesisomor\]), the varieties $X_{y_{\xi}}\left(\mathcal{O}_{\psi},{}^{\vee}G^{\Gamma}\right)$ and $X_{y_{\xi_L}}\left(\mathcal{O}_{\psi_L},{}^{\vee}L^{\Gamma}\right)$ satisfy $$X_{y_{\xi}}\left(\mathcal{O}_{\psi},{}^{\vee}G^{\Gamma}\right)\cong {}^{\vee}{G}\times_{{}^{\vee}K_{\xi}}{}^{\vee}{G}(\Lambda_{\xi})/P(\Lambda_{\xi})\quad\text{and}\quad
X_{y_{\xi_L}}(\mathcal{O}_{\psi_L},{}^{\vee}L^{\Gamma})\cong{}^{\vee}{L}\times_{{}^{\vee}K_{\xi_L}}{}^{\vee}{L}(\Lambda_{\xi_L})/P(\Lambda_{\xi_L}).$$ Thus, denoting the flag varieties of ${}^{\vee}{G}$ and ${}^{\vee}{L}$ by $X_{{}^{\vee}{G}}$ and $X_{{}^{\vee}{L}}$, we can write $$X_{y_{\xi}}\left(\mathcal{O}_{\psi},{}^{\vee}G^{\Gamma}\right)\cong{}^{\vee}{G}\times_{{}^{\vee}K_\xi}X_{{}^{\vee}{G}} \quad\text{and}\quad
X_{y_{\xi_L}}\left(\mathcal{O}_{\psi_L},{}^{\vee}L^{\Gamma}\right)\cong{}^{\vee}{L}\times_{{}^{\vee}K_{\xi_L}}X_{{}^{\vee}{L}}.$$ Define $\Xi\left(X_{{}^{\vee}{G}}\right)$ to be the set of couples $(S,\mathcal{V})$ with $S$ an orbit of ${}^{\vee}K_\xi$ on $ X_{{}^{\vee}{G}}$ and $\mathcal{V}$ an irreducible ${}^{\vee}K_\xi$-equivariant local system on $S$. Define $\Xi\left(X_{{}^{\vee}{L}}\right)$ similarly. From Proposition 7.14 [@ABV] the map in Proposition \[prop:reduction2\](2) is compatible with the parameterization of irreducibles by $\Xi_{y_{\xi_L}}\left(\mathcal{O}_{\psi_L},{}^{\vee}L^{\Gamma}\right)$ and $\Xi\left(X_{{}^{\vee}L}\right)$, respectively by $\Xi_{y_\xi}\left(\mathcal{O}_{\psi},{}^{\vee}G^{\Gamma}\right)$ and $\Xi\left(X_{{}^{\vee}G}\right)$. In other words we have bijections $$\begin{aligned}
\label{eq:bijectioncomparision}
\Xi_{y_{\xi_{L}}}\left(\mathcal{O}_{\psi_L},{}^{\vee}L^{\Gamma}\right)\longrightarrow \Xi(X_{{}^{\vee}L}) \quad\text{ and }\quad
\Xi_{y_\xi}\left(\mathcal{O}_{\psi},{}^{\vee}G^{\Gamma}\right)\longrightarrow \Xi(X_{{}^{\vee}G}).\end{aligned}$$ Let $\xi'$ be the complete geometric parameter of $X_{{}^{\vee}{G}}$ corresponding to $\xi$ under (\[eq:bijectioncomparision\]) and write $P(\xi')$ for the irreducible perverse sheaf on $X_{{}^{\vee}{G}}$ defined from $\xi$ as in (\[eq:irredPerv\]). Let ${}^{\vee}\pi(\xi')$ be the $({}^{\vee}{\mathfrak{g}},{}^{\vee} K_\xi)$-module corresponding to $P(\xi')$ under Beilinson-Bernstein localization (see Theorem \[theo:bbcorespondance\]). Define $\xi_L'$, $P(\xi_{L}')$ and ${}^{\vee}\pi(\xi_{L}')$ similarly. Then ${}^{\vee}\pi(\xi')$ and ${}^{\vee}\pi(\xi'_L)$ are Vogan duals (see Theorem 11.9 [@ICIV] and Definition 2-28 [@Adams]) of $\pi(\xi)$ and $\pi(\xi_L)$ respectively. Now, Vogan duality commutes with cohomological induction. Indeed, this is clear from the data parameterizing the representations in Definition 2-9 [@Adams], and it is straightforward to translate the data in [@Adams] into complete Langlands parameters (Definition \[deftn:completeLanglandsparameter\]). Equation (\[eq:LtoG1\]) then implies that ${}^{\vee}\pi(\xi')$ and ${}^{\vee}\pi(\xi'_L)$ are related through the equation $$\begin{aligned}
{}^{\vee}\pi(\xi')=\left(\mathscr{R}_{{}^{\vee}{\mathfrak{l}},{}^{\vee}K_{\xi_L}}^{{}^{\vee}{\mathfrak{g}},{}^{\vee}K_{\xi}}\right)^{i}\left({}^{\vee}\pi(\xi'_{L})\right).\end{aligned}$$ Therefore, from the commutativity of (\[eq:cdiagramme1\]) we deduce the equality $$P(\xi')=I_{{}^{\vee}{L}}^{{}^{\vee}{G}}P(\xi'_{L}),$$ where $I_{{}^{\vee}{L}}^{{}^{\vee}{G}}$ denotes the geometric induction functor of Definition \[deftn:geominduction\]. On the level of characteristic cycles we have from Proposition \[prop:ccLG\] that $$\begin{aligned}
\label{eq:cycleequation}
CC(P(\xi'))=\sum_{{}^{\vee}K_{\xi_L}-\mathrm{orbits }~S_{{}^{\vee}{L}}~\mathrm{in}~X_{{}^{\vee}{L}}} \chi_{S_{{}^{\vee}{L}}}^{mic}(P_{\xi'_{{L}}})&[\overline{T_{{}^{\vee}K_{\xi}\cdot\iota(S_{{}^{\vee}{L}})}^{\ast}X_{{}^{\vee}{G}}}]\\
&+
\sum_{{}^{\vee}K_{\xi}-\text{orbits}~S'\text{~in~}\partial({}^{\vee}K_{\xi}\cdot \iota(X_{{}^{\vee}{L}}))} \chi_{S'}^{mic}(P(\xi'))[\overline{T_{S'}^{\ast}X_{{}^{\vee}{G}}}].\nonumber\end{aligned}$$ Now, for each ${}^{\vee}{G}$-orbit $S$ in $X_{y_{\xi}}\left(\mathcal{O}_{\psi},{}^{\vee}G^{\Gamma}\right)$ let $S'$ be the ${}^{\vee}K_{\xi}$-orbit in $X_{{}^{\vee}{G}}$ corresponding to $S$ under (\[eq:bijectioncomparision\]). Then from Proposition \[prop:reduction2\]$(1)$ and \[prop:reduction2\](6) we have $$S_{\psi}\subset \overline{S}\quad \text{iff}\quad S_{\psi}'\subset \overline{S'}\quad \text{ and that} \quad
\chi_{S}^{\mathrm{mic}}(P(\xi))=\chi_{S'}^{\mathrm{mic}}(P(\xi')).$$ We have similar relations between ${}^{\vee}L$-orbits in $X\left(\mathcal{O}_{\psi_{L}},{}^{\vee}L^{\Gamma}\right)$ and ${}^{\vee}K_{\xi_{L}}$-orbits in $X_{{}^{\vee}{L}}$. For each orbit $S'$ in the boundary $\partial({}^{\vee}K_{\xi}\cdot \iota(X_{{}^{\vee}{L}}))$ we have $d(S')<d(S_{\psi}')$, and thus from (\[eq:cycleequation\]) we can conclude $$\begin{aligned}
\label{eq:cycleLcycleG}
\chi_{S_{\psi}}^{\mathrm{mic}}(P(\xi))=\chi_{S_{\psi}'}^{\mathrm{mic}}(P(\xi'))\neq 0\quad \text{if and only if}\quad
\chi_{S_{\psi_{L}}}^{\mathrm{mic}}(P(\xi_{L}))=\chi_{S_{\psi_{L}}'}^{\mathrm{mic}}(P(\xi_{L}'))\neq 0.\end{aligned}$$ Hence, from \[lem:lemorbit\]($i$) and the definition of micro-packets we obtain that equation (\[eq:cycleLcycleG\]) translates as: $$\begin{aligned}
\label{eq:last}
\text{Let }\xi\in \Xi\left({}^{\vee}G^{\Gamma}\right)\text{, then }\pi(\xi)\in\Pi(G/\mathbb{R})_{\psi}^{\text{ABV}}
\text{ if and only if there exists }\xi_L \in \Xi\left({}^{\vee}L^{\Gamma}\right)\\
\text{ such that } \pi(\xi_{L})\in\Pi^{z}(L/\mathbb{R})_{\psi_{L}}^{\text{ABV}}
\text{ and }\pi(\xi)=\left(\mathscr{R}_{\mathfrak{l},K_{\xi}^{can}\cap L^{can}}^{\mathfrak{g},K_{\xi}^{can}}\right)&^{i}(\pi(\xi_{L})).\nonumber \end{aligned}$$ Finally, the reformulation of the definition of Adams-Johnson packets expressed in (\[eq:reformulationAJpackets\]) implies that Equation (\[eq:last\]) is equivalent to $$\Pi(G/\mathbb{R})_{\psi}^{\mathrm{ABV}}=\Pi(G/\mathbb{R})_{\psi}^{\mathrm{AJ}}.$$
------------------------------------------------------------------------
Nicolás Arancibia Robert, Carleton University - École internationale des sciences du traitement de l’information (EISTI) $\bullet$ *E-mail:* **nicolas.robert@carleton.ca**
---
abstract: 'Let $G$ be a group hyperbolic relative to a finite collection of subgroups $\mc P$. Let $\mathcal F$ be the family of subgroups consisting of all the conjugates of subgroups in $\mc P$, all their subgroups, and all finite subgroups. Then there is a cocompact model for $E_{\mc F} G$. This result was known in the torsion-free case. In the presence of torsion, a new approach was necessary. Our method is to exploit the notion of dismantlability. A number of sample applications are discussed.'
address:
- 'Memorial University, St. John’s, Newfoundland, Canada A1C 5S7 '
- 'McGill University, Montreal, Quebec, Canada H3A 0B9'
author:
- 'Eduardo Martinez-Pedroza'
- Piotr Przytycki
title: Dismantlable classifying space for the family of parabolic subgroups of a relatively hyperbolic group
---
Introduction
============
Let $G$ be a finitely generated group hyperbolic relative to a finite collection $\mc P=\{P_\lambda\}_{\lambda\in \Lambda}$ of its subgroups (for a definition see Section \[sec:Rips\]). Let $\mathcal F$ be the collection of all the conjugates of $P_\lambda$ for $\lambda\in\Lambda$, all their subgroups, and all finite subgroups of $G$. *A model for $E_{\mc F}G$* is a $G$-complex $X$ such that all point stabilisers belong to $\mc F$, and for every $H\in \mc F$ the fixed-point set $X^H$ is a (nonempty) contractible subcomplex of $X$. A model for $E_{\mc F}G$ is also called the *classifying space for the family $\mc F$*. In this article we describe a particular classifying space for the family $\mc F$. It admits the following simple description.
Let $S$ be a finite generating set of $G$. Let $V=G$ and let $W$ denote the set of cosets $gP_\lambda$ for $g\in G$ and $\lambda\in \Lambda$. We consider the elements of $W$ as subsets of the vertex set of the Cayley graph of $G$ with respect to $S$. Then $|\cdot , \cdot|_S$, which denotes the distance in the Cayley graph, is defined on $V\cup W$. The *$n$-Rips graph* $\Gamma_n$ is the graph with vertex set $V\cup W$ and edges between $u,u'\in V\cup W$ whenever $|u,u'|_S\leq n$. The *$n$-Rips complex* $\Gamma_n^\blacktriangle$ is obtained from $\Gamma_n$ by spanning simplices on all cliques. It is easy to prove that $\Gamma_n$ is a fine $\delta$-hyperbolic connected graph (see Section \[sec:Rips\]). Our main result is the following.
\[thm:MMain\] For $n$ sufficiently large, the $n$-Rips complex $\Gamma_n^\blacktriangle$ is a cocompact model for $E_{\mc F}G$.
Theorem \[thm:MMain\] was known to hold if
- $G$ is a torsion-free hyperbolic group and $\mc
P=\emptyset$, since in that case the $n$-Rips complex $\Gamma_n^\blacktriangle$ is contractible for $n$ sufficiently large [@ABC91 Theorem 4.11].
- $G$ is a hyperbolic group and $\mc P=\emptyset$, hence $\mathcal F$ is the family of all finite subgroups, since in that case $\Gamma_n^\blacktriangle$ is $\underline{E}(G)$ [@MS02 Theorem 1], see also [@HOP14 Theorem 1.5] and [@La13 Theorem 1.4].
- $G$ is a torsion-free relatively hyperbolic group, but with different definitions of the $n$-Rips complex, the result follows from the work of Dahmani [@Da03-2 Theorem 6.2], or Mineyev and Yaman [@MiYa07 Theorem 19].
In the presence of torsion, a new approach was necessary. Our method is to exploit the notion of *dismantlability*. Dismantlability, a property of a graph guaranteeing strong fixed-point properties (see [@Po93]), was brought to geometric group theory by Chepoi and Osajda [@ChOs15]. Dismantlability was observed for hyperbolic groups in [@HOP14], following the usual proof of the contractibility of the Rips complex [@BrHa99 Prop III.$\Gamma$ 3.23].
While we discuss the $n$-Rips complex only for finitely generated relatively hyperbolic groups, Theorem \[thm:MMain\] has the following extension.
\[cor:infinite\] If $G$ is an infinitely generated group hyperbolic relative to a finite collection $\mc P$, then there is a cocompact model for $E_{\mc F}G$.
By [@Os06 Theorem 2.44], there is a finitely generated subgroup $G'\leq G$ such that $G$ is isomorphic to $G'$ amalgamated with all $P_\lambda$ along $P'_\lambda=P_\lambda
\cap G'$. Moreover, $G'$ is hyperbolic relative to $\{P_\lambda'\}_{\lambda\in \Lambda}$. Let $S$ be a finite generating set of $G'$. While $S$ does not generate $G$, we can still use it in the construction of $X=\Gamma^\blacktriangle_n$. More explicitly, if $X'$ is the $n$-Rips complex for $S$ and $G'$, then $X$ is a tree of copies of $X'$ amalgamated along vertices in $W$. Let $\mathcal{F}'$ be the collection of all the conjugates of $P'_\lambda$, all their subgroups, and all finite subgroups of $G'$. By Theorem \[thm:MMain\], we have that $X'$ is a cocompact model for $E_{\mc F'}G'$, and it is easy to deduce that $X$ is a cocompact model for $E_{\mc F}G$.
Applications {#applications .unnumbered}
------------
On our way towards Theorem \[thm:MMain\] we will establish the following; for the proof see Section \[sec:Rips\]. We learned from François Dahmani that this corollary can also be obtained from one of Bowditch’s approaches to relative hyperbolicity.
\[sec:subgroups\] There is a finite collection of finite subgroups $\{F_1,\ldots, F_k\}$ such that any finite subgroup of $G$ is conjugate to a subgroup of some $P_\lambda$ or some $F_i$.
Note that by [@Os06 Theorem 2.44], Corollary \[sec:subgroups\] holds also if $G$ is infinitely generated, which we allow in the remaining part of the introduction. Our next application regards the cohomological dimension of relatively hyperbolic groups in the framework of Bredon modules. Given a group $G$ and a nonempty family $\mc F$ of subgroups closed under conjugation and taking subgroups, the theory of (right) modules over the orbit category $\mc{O_F}(G)$ was established by Bredon [@Br67], tom Dieck [@tD87] and L[ü]{}ck [@Lu89]. In the case where $\mc F$ is the trivial family, the $\Or$-modules are $\ZG$-modules. The notions of cohomological dimension $\cdF(G)$ and finiteness properties $FP_{n, \mc F}$ for the pair $(G, \mc
F)$ are defined analogously to their counterparts $\cd(G)$ and $FP_n$. The geometric dimension $\gdF(G)$ is defined as the smallest dimension of a model for $E_{\mc F}G$. A theorem of Lück and Meintrup [@LuMe00 Theorem 0.1] shows that $$\cdF(G) \leq \gdF(G) \leq \max\{3, \cdF(G)\}.$$ Together with Theorem \[thm:MMain\], this yields the following. Here as before $\mathcal F$ is the collection of all the conjugates of $\{P_\lambda\}$, all their subgroups, and all finite subgroups of $G$.
Let $G$ be relatively hyperbolic. Then $\cdF(G)$ is finite.
The *homological Dehn function* $\operatorname{\mathsf{FV}}_X(k)$ of a simply-connected cell complex $X$ measures the difficulty of filling cellular $1$-cycles with $2$-chains. For a finitely presented group $G$ and $X$ a model for $EG$ with $G$-cocompact $2$-skeleton, the growth rate of $\operatorname{\mathsf{FV}}_{G}(k):=\operatorname{\mathsf{FV}}_X(k)$ is a group invariant [@Fl98 Theorem 2.1]. The function $\operatorname{\mathsf{FV}}_G(k)$ can also be defined from algebraic considerations under the weaker assumption that $G$ is $FP_2$, see [@HaMa15 Section 3]. Analogously, for a group $G$ and a family of subgroups $\mc F$ with a cocompact model for $E_{\mc F}G$, there is a relative homological Dehn function $\operatorname{\mathsf{FV}}_{G,\mc F}(k)$ whose growth rate is an invariant of the pair $(G, \mc F)$, see [@MP15-2 Theorem 4.5].
Gersten proved that a group $G$ is hyperbolic if and only if it is $FP_2$ and the growth rate of $\operatorname{\mathsf{FV}}_G(k)$ is linear [@Ge96 Theorem 5.2]. An analogous characterisation for relatively hyperbolic groups is proved in [@MP15-2 Theorem 1.11] relying on the following corollary. We remark that a converse of Corollary \[thm:char\] requires an additional condition that $\{P_\lambda\}$ is an almost malnormal collection, see [@MP15-2 Theorem 1.11(1) and Remark 1.13].
\[thm:char\] Let $G$ be relatively hyperbolic. Then $G$ is $FP_{2, \mc F}$ and $\operatorname{\mathsf{FV}}_{G, \mc F}(k)$ has linear growth.
The existence of a cocompact model $X=\Gamma^\blacktriangle_n$ for $E_{\mc F}(G)$ implies that $G$ is $FP_{2,\mc F}$. Since $X$ has fine and hyperbolic $1$-skeleton and has finite edge $G$-stabilisers, it follows that $\operatorname{\mathsf{FV}}_{G,\mc F}(k):=\operatorname{\mathsf{FV}}_X(k)$ has linear growth by [@MP15 Theorem 1.7].
**Organisation**. In Section \[sec:Rips\] we discuss the basic properties of the $n$-Rips complex $\Gamma_n^\blacktriangle$, and state our main results on the fixed-point sets, Theorems \[thm:fixpoint\] and \[thm:contractible\]. We prove them in Section \[sec:proofs\] using a graph-theoretic notion called dismantlability. We also rely on a thin triangle Theorem \[thm:stronglythin\] for relatively hyperbolic groups, which we prove in Section \[sec:triangle\].
**Acknowledgements.** We thank Damian Osajda for discussions, and the referees for comments. Both authors acknowledge funding by NSERC. The second author was also partially supported by National Science Centre DEC-2012/06/A/ST1/00259 and UMO-2015/18/M/ST1/00050.
Rips complex {#sec:Rips}
============
Rips graph
----------
We introduce relatively hyperbolic groups following Bowditch’s approach [@Bo12]. A *circuit* in a graph is an embedded closed edge-path. A graph is *fine* if for every edge $e$ and every integer $n$, there are finitely many circuits of length at most $n$ containing $e$.
Let $G$ be a group, and let $\mc
P=\{P_\lambda\}_{\lambda\in \Lambda}$ be a finite collection of subgroups of $G$. A *$(G, \mc P)$-graph* is a fine $\delta$-hyperbolic connected graph with a $G$-action with finite edge stabilisers, finitely many orbits of edges, and such that $\mc P$ is a set of representatives of distinct conjugacy classes of vertex stabilisers such that each infinite stabiliser is represented.
Suppose $G$ is finitely generated, and let $S$ be a finite generating set. Let $\Gamma$ denote the Cayley graph of $G$ with respect to $S$. Let $V$ denote the set of vertices of $\Gamma$, which is in correspondence with $G$. A *peripheral left coset* is a subset of $G$ of the form $gP_\lambda$. Let $W$ denote the set of peripheral left cosets, also called *cone vertices*. The *coned-off Cayley graph* $\hat \Gamma$ is the connected graph obtained from $\Gamma$ by enlarging the vertex set by $W$ and the edge set by the pairs $(v,w)\in V\times W$, where the group element $v$ lies in the peripheral left coset $w$.
We say that $G$ is *hyperbolic relative to $\mc P$* if $\hat \Gamma$ is fine and $\delta$-hyperbolic, which means that it is a *$(G, \mc P)$-graph*. This is equivalent to the existence of a $(G, \mc P)$-graph. Indeed, if there is a $(G, \mc P)$-graph, a construction of Dahmani [@Da03 Page 82, proof of Lemma 4] (relying on an argument of Bowditch [@Bo12 Lemma 4.5]) shows that $\hat \Gamma$ is a $G$-equivariant subgraph of a $(G, \mc P)$-graph $\Delta$, and therefore $\hat \Gamma$ is fine and quasi-isometric to $\Delta$. In particular the definition of relative hyperbolicity is independent of $S$. From here on, we assume that $G$ is hyperbolic relative to $\mc P$.
Extend the word metric (which we also call *$S$-distance*) $|\cdot , \cdot|_S$ from $V$ to $V\cup W$ as follows: the distance between cone vertices is the distance in $\Gamma$ between the corresponding peripheral left cosets, and the distance between a cone vertex and an element of $G$ is the distance between the corresponding peripheral left coset and the element. Note that $|\cdot,\cdot|_S$ is not a metric on $V\cup W$. It is only for $v\in V$ that we have the triangle inequality $|a,b|_S\leq|a,v|_S+|v,b|_S$ for any $a,b\in V\cup
W$.
Let $n\geq 1$. The *$n$-Rips graph* $\Gamma_n$ is the graph with vertex set $V\cup W$ and edges between $u,u'\in
V\cup W$ whenever $|u,u'|_S\leq n$.
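For readers who wish to experiment, the construction above can be prototyped directly. The following sketch is ours and not part of the paper: it builds $\Gamma_n$ from a finite portion of a Cayley graph, with the peripheral left cosets given as vertex sets; the function names and the use of the `networkx` library are our own choices, and any actual computation has to restrict to a finite ball of $\Gamma$.

```python
# Illustrative sketch only: Gamma_n from a finite piece of a Cayley graph.
# "cayley" is a networkx.Graph on (finitely many) group elements,
# "cosets" is a list of peripheral left cosets, each given as a set of elements.
import networkx as nx
from itertools import combinations

def s_distance(cayley, a, b):
    # |a,b|_S on V cup W: group elements are singletons, cone vertices are subsets.
    A = a if isinstance(a, frozenset) else {a}
    B = b if isinstance(b, frozenset) else {b}
    return min(nx.shortest_path_length(cayley, x, y) for x in A for y in B)

def rips_graph(cayley, cosets, n):
    cone_vertices = [frozenset(c) for c in cosets]      # the set W
    vertices = list(cayley.nodes) + cone_vertices       # V cup W
    gamma_n = nx.Graph()
    gamma_n.add_nodes_from(vertices)
    for u, v in combinations(vertices, 2):
        if s_distance(cayley, u, v) <= n:                # edge whenever |u,u'|_S <= n
            gamma_n.add_edge(u, v)
    return gamma_n
```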
\[lem:GPgraph\] The $n$-Rips graph $\Gamma_n$ is a $(G, \mc P)$-graph.
Note that the graphs $\hat \Gamma$ and $\Gamma_n$ have the same vertex set. In particular since $\hat \Gamma$ is connected and contained in $\Gamma_n$, it follows that $\Gamma_n$ is connected.
Since $\Gamma$ is locally finite and there are finitely many $G$-orbits of edges in $\Gamma$, it follows that there are finitely many $G$-orbits of edge-paths of length $n$ in $\Gamma$. Since $\mc P$ is finite, there are finitely many $G$-orbits of edges in $\Gamma_n$.
Since $G$ acts on $\hat \Gamma$ with finite edge stabilisers and $\hat \Gamma$ is fine, it follows that for distinct vertices in $V\cup W$, the intersection of their $G$-stabilisers is finite [@MW11 Lemma 2.2]. Thus the pointwise $G$-stabilisers of edges in $\Gamma_n$ are finite, and hence the same holds for the setwise $G$-stabilisers of edges.
It remains to show that $\Gamma_n$ is fine and $\delta$-hyperbolic. Since there are finitely many $G$-orbits of edges in $\Gamma_n$, the graph $\Gamma_n$ is obtained from $\hat \Gamma$ by attaching a finite number of $G$-orbits of edges. This process preserves fineness by a result of Bowditch [@Bo12 Lemma 2.3], see also [@MW11 Lemma 2.9]. This process also preserves the quasi-isometry type [@MW11 Lemma 2.7], thus $\Gamma_n$ is $\delta$-hyperbolic.
For a graph $\Sigma$, let $\Sigma^\blacktriangle$ be the simplicial complex obtained from $\Sigma$ by spanning simplices on all cliques. We call $\Gamma_n^\blacktriangle$ the *$n$-Rips complex*.
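In the same illustrative spirit, spanning simplices on all cliques of a finite graph can be listed as follows (again a sketch of ours, relying on `networkx`).

```python
import networkx as nx

def clique_complex(graph):
    # One simplex for every nonempty clique of the finite graph Sigma.
    return [frozenset(clique) for clique in nx.enumerate_all_cliques(graph)]
```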
\[cor:barycenters\] The $G$-stabiliser of a barycentre of a simplex $\Delta$ in $\Gamma_n^\blacktriangle$ that is not a vertex is finite.
Let $F$ be the stabiliser of the barycentre of $\Delta$. Then $F$ contains the pointwise stabiliser of $\Delta$ as a finite index subgroup. The latter is contained in the pointwise stabiliser of an edge of $\Delta$, which is finite by Lemma \[lem:GPgraph\]. Therefore $F$ is finite.
\[cor:cocompact\] The $G$-action on $\Gamma_n^\blacktriangle$ is cocompact.
By Lemma \[lem:GPgraph\], the $n$-Rips graph $\Gamma_n$ is fine. Hence every edge $e$ in $\Gamma_n$ is contained in finitely many circuits of length $3$. Thus $e$ is contained in finitely many simplices of $\Gamma^\blacktriangle_n$. By Lemma \[lem:GPgraph\], there are finitely many $G$-orbits of edges in $\Gamma_n$. It follows that there are finitely many $G$-orbits of simplices in $\Gamma^\blacktriangle_n$.
Fixed-point sets
----------------
The first step of the proof of Theorem \[thm:MMain\] is the following fixed-point theorem.
\[thm:fixpoint\] For sufficiently large $n$, each finite subgroup $F\leq G$ fixes a clique of $\Gamma_n$.
The proof will be given in Section \[sub:quasi\]. As a consequence we obtain the following.
By Corollary \[cor:cocompact\], there are finitely many $G$-orbits of simplices in $\Gamma_n^\blacktriangle$. From each orbit of simplices that are not vertices pick a simplex $\Delta_i$, and let $F_i$ be the stabiliser of its barycentre. By Corollary \[cor:barycenters\], the group $F_i$ is finite.
Choose $n$ satisfying Theorem \[thm:fixpoint\]. Then any finite subgroup $F$ of $G$ fixes the barycentre of a simplex $\Delta$ in $\Gamma^\blacktriangle_n$. If $\Delta$ is a vertex, then $F$ is contained in a conjugate of some $P_\lambda$. Otherwise, $F$ is contained in a conjugate of some $F_i$.
It was observed by the referee that if one proved in advance Corollary \[sec:subgroups\], one could deduce from it Theorem \[thm:fixpoint\] (without control on $n$).
The second step of the proof of Theorem \[thm:MMain\] is the following, whose proof we also postpone, to Section \[sub:contr\].
\[thm:contractible\] For sufficiently large $n$, for any subgroup $H \leq G$, its fixed-point set in $\Gamma_n^\blacktriangle$ is either empty or contractible.
We conclude with the following.
The point stabilisers of $\Gamma_n^\blacktriangle$ belong to $\mc
F$ by Corollary \[cor:barycenters\]. For every $H\in \mc F$ its fixed-point set $(\Gamma_n^\blacktriangle)^H$ is nonempty by Theorem \[thm:fixpoint\]. Consequently, $(\Gamma_n^\blacktriangle)^H$ is contractible by Theorem \[thm:contractible\].
Dismantlability {#sec:proofs}
===============
The goal of this section is to prove Theorems \[thm:fixpoint\] and \[thm:contractible\], relying on the following.
Thin triangle theorem
---------------------
We state an essential technical result of the article, a thin triangles result for relatively-hyperbolic groups. We keep the notation from the Introduction, where $\Gamma$ is the Cayley graph of $G$ with respect to $S$ etc. By *geodesics* we mean geodesic edge-paths.
[@HK08 Definition 8.9] Let $p=(p_j)_{j=0}^\ell$ be a geodesic in $\Gamma$, and let $\epsilon, R$ be positive integers. A vertex $p_i$ of $p$ is *$(\epsilon, R)$-deep* in the peripheral left coset $w\in
W$ if $R\leq i \leq \ell- R$ and $|p_j, w|_S \leq \epsilon$ for all $|j-i|\leq R$. If there is no such $w\in W$, then $p_i$ is an *$(\epsilon, R)$-transition vertex* of $p$.
\[rem:atmostone\] \[lem:uniqueness\] [@HK08 Lemma 8.10] For each $\epsilon$ there is a constant $R$ such that for any geodesic $p$ in $\Gamma$, a vertex of $p$ cannot be $(\epsilon,
R)$-deep in two distinct peripheral left cosets.
For $a,b \in V\cup W$, a *geodesic from $a$ to $b$ in $\Gamma$* is a geodesic in $\Gamma$ of length $|a,b|_S$ such that its initial vertex equals $a$ if $a\in V$, or is an element of $a$ if $a\in W$, and its terminal vertex equals $b$ if $b\in V$, or is an element of $b$ if $b\in W$.
Throughout the article we adopt the following convention. For an edge-path $(p_j)_{j=0}^\ell$, if $i>\ell$ then $p_i$ denotes $p_\ell$.
\[thm:stronglythin\] There are positive integers $\epsilon, R$ and $D$, satisfying Lemma \[rem:atmostone\], such that the following holds. Let $a,b,c\in V\cup W$ with $a\neq b$, and let $p^{ab}, p^{bc},
p^{ac}$ be geodesics in $\Gamma$ from $a$ to $b$, from $b$ to $c$, and from $a$ to $c$, respectively. Let $\ell=|a,b|_S$ and let $0\leq i \leq \ell$.
If $p^{ab}_i$ is an $(\epsilon, R)$-deep vertex of $p^{ab}$ in the peripheral left coset $w$, then let $z=w$, otherwise let $z=p^{ab}_i$.
Then $|z, p^{ac}_i|_S\leq D$ or $|z, p^{bc}_{\ell-i}|_S \leq
D$.
Note that the condition $a\neq b$ is necessary, since for $a=b\in W$ we could take for $p^{ab}$ any element of $a$, leading to counterexamples.
While Theorem \[thm:stronglythin\] seems similar to various other triangle theorems in relatively hyperbolic groups, its proof is surprisingly involved, given that we rely on these previous results. We postpone the proof till Section \[sec:triangle\]. In the remaining part of the section, $\epsilon, R,D$ are the integers guaranteed by Theorem \[thm:stronglythin\]. We can and will assume that $D\geq \epsilon$.
Quasi-centres {#sub:quasi}
-------------
In this subsection we show how to deduce Theorem \[thm:fixpoint\] from thin triangle Theorem \[thm:stronglythin\]. This is done analogously as for hyperbolic groups, using quasi-centres (see [@BrHa99 Lemma III.$\Gamma$.3.3]).
Let $U$ be a finite subset of $V\cup W$. The *radius* $\rho(U)$ of $U$ is the smallest $\rho$ such that there exists $z\in V\cup W$ with $|z,u|_S\leq \rho$ for all $u\in U$. The *quasi-centre* of $U$ consists of $z\in V\cup W$ satisfying $|z,u|_S\leq \rho(U)$ for all $u\in U$.
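For a finite set of candidate vertices, the radius and the quasi-centre can be computed by brute force. The sketch below is our illustration only: `dist` stands for the $S$-distance $|\cdot,\cdot|_S$, and `candidates` is a finite stand-in for $V\cup W$.

```python
def quasi_centre(dist, candidates, U):
    # eccentricity of z with respect to U, using dist(z, u) for |z,u|_S
    eccentricity = {z: max(dist(z, u) for u in U) for z in candidates}
    rho = min(eccentricity.values())                    # the radius rho(U)
    centre = [z for z, e in eccentricity.items() if e == rho]
    return rho, centre
```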
\[lem:quasicentre\] Let $U$ be a finite subset of $V\cup W$ that is not a single vertex of $W$. Then for any two elements $a,b$ of the quasi-centre of $U$, we have $|a,b|_S\leq 4D$.
Assume first $\rho(U)\leq D$. If $U$ is a single vertex $v\in
V$, then $|a,b|_S\leq |a,v|_S+|v,b|_S\leq 2D$ and we are done. If there are $u\neq u'\in U$, then let $v$ be the first vertex on a geodesic from $u$ to $u'$ in $\Gamma$. Since the vertex $v$ is not $(\epsilon, R)$-deep, by Theorem \[thm:stronglythin\] applied to $u,u', a$ (respectively $u,u',b$), we obtain a vertex $v_a$ (respectively $v_b$) on a geodesic in $\Gamma$ from $a$ (respectively $b$) to $u$ or $u'$ satisfying $|v_a,v|_S\leq D$ (respectively $|v_b,v|_S\leq D$). Consequently $|a,b|_S\leq
|a,v_a|_S+|v_a,v|_S+|v,v_b|_S+|v_b,b|_S\leq 2\rho(U)+2D\leq
4D$, as desired.
Henceforth, we assume $\rho(U)\geq D+1$. Let $\ell=|a,b|_S$. If $\ell> 4D$, then choose any $2D\leq i\leq \ell-2D$, and any geodesic $p^{ab}$ from $a$ to $b$ in $\Gamma$. If $p^{ab}_i$ is an $(\epsilon, R)$-deep vertex of $p^{ab}$ in the peripheral left coset $w$, then let $z=w$, otherwise let $z=p^{ab}_i$. We claim that for any $c\in U$ we have $|z,c|_S\leq \rho(U)-1$, contradicting the definition of $\rho(U)$, and implying $\ell\leq 4D$.
Indeed, we apply Theorem \[thm:stronglythin\] to $a,b,c,$ and any geodesics $p^{bc},p^{ac}$. Without loss of generality assume that we have $|z,p^{ac}_{i}|_S\leq D$. Note that if $i>|a,c|_S$, then $p^{ac}_i$ lies in (or is equal to) $c$, and consequently $|z,c|_S\leq |z,p^{ac}_{i}|_S\leq D\leq\rho(U)-1$, as desired. If $i\leq|a,c|_S$, then $|p^{ac}_i,a|_S=i\geq 2D$, and hence $$|z,c|_S\leq
|z,p^{ac}_{i}|_S+|p^{ac}_{i},c|_S\leq
D+(|c,a|_S - |p^{ac}_i,a|_S)\leq D+ (\rho(U)-2D),$$ as desired.
Let $n\geq 4D$. Consider a finite orbit $U\subset V\cup W$ of $F$. If $U$ is a single vertex, then there is nothing to prove. Otherwise, by Lemma \[lem:quasicentre\] the quasi-centre of $U$ forms a fixed clique in $\Gamma_n$.
Convexity
---------
Let $\mu$ be a positive integer. A subset $U\subset V\cup W$ is *$\mu$-convex* with respect to $u\in U$ if for any geodesic $(p_j)_{j=0}^\ell$ in $\Gamma$ from $u$ to $u'\in U$, for any $j\leq \ell-\mu$, we have
(i) $p_j\in U$, and
(ii) for each $w\in W$ with $|w,p_j|_S\leq \epsilon$ we have $w\in U$.
\[def:conv\] Let $r$ be a positive integer and let $U\subset V\cup W$ be a finite subset. The *$r$-hull* $U_r$ of $U$ is the union of
(i) all the vertices $v\in V$ with $|v,u|_S\leq r$ for each $u\in U$, and
(ii) all the cone vertices $w\in W$ with $|w,u|_S\leq
r+\epsilon$ for each $u\in U$.
\[rem:finiteU\] If $|U|\geq 2$, then each $U_r$ is finite.
Choose $u\neq u'\in U$. Assume without loss of generality $r\geq |u,u'|_S$. Each vertex of $U_r$ distinct from $u$ and $u'$ forms with $u$ and $u'$ a circuit in $\Gamma_{r+\epsilon}$ of length $3$. There are only finitely many such circuits, since $\Gamma_{r+\epsilon}$ is fine by Lemma \[lem:GPgraph\].
\[lem:hull\] Let $U\subset V\cup W$ be a finite subset with $|u,u'|_S\leq \mu$ for all $u,u'\in U$. Then each $U_r$, with $r\geq \mu$, is $(\mu+2D)$-convex with respect to all $b\in U$.
Let $b\in U$, and $a\in U_r$. Let $(p^{ab}_j)_{j=0}^\ell$ be a geodesic from $a$ to $b$ in $\Gamma$, and let $\mu+2D\leq j\leq \ell$. By Definition \[def:conv\], we have $\ell\leq r+\epsilon$. To prove the lemma, it suffices to show that $p^{ab}_j\in U_r$.
Consider any $c\in U$ and apply Theorem \[thm:stronglythin\] with $i=\ell$. In that case $p_i^{ab}$ is not $(\epsilon,
R)$-deep and thus $z=p_i^{ab}$. Consequently, we have $|p_\ell^{ab} , p^{ac}_\ell|_S\leq D$ or $|p_\ell^{ab} ,
p^{bc}_{0} |_S\leq D.$
In the second case, using $\epsilon\leq D$, we have $$|p^{ab}_j,c|_S\leq |p^{ab}_j,p^{ab}_\ell|_S+|p^{ab}_\ell,
p^{bc}_0|_S+|p^{bc}_0, c|_S\leq
\big((r+\epsilon)-(\mu+2D)\big)+D+\mu \leq r,$$ as desired.
In the first case, if $\ell> |a,c|_S$, then $p^{ac}_\ell$ lies in (or is equal to) $c$ and hence $$|p^{ab}_j,c|_S\leq |p^{ab}_j,p^{ab}_\ell|_S+|p^{ab}_\ell, p^{ac}_\ell|_S\leq \big((r+\epsilon)-(\mu+2D)\big)+D\leq
r.$$ If in the first case $\ell\leq |a,c|_S$, then $$|p^{ab}_j,c|_S\leq |p^{ab}_j,p^{ab}_\ell|_S+|p^{ab}_\ell, p^{ac}_\ell|_S+|p^{ac}_\ell,
c|_S\leq \big(\ell-(\mu+2D)\big)+D+(r+\epsilon-\ell)\leq r.$$
Contractibility {#sub:contr}
---------------
In this subsection we prove Theorem \[thm:contractible\]. To do that, we use dismantlability.
We say that a vertex $a$ of a graph is *dominated* by an adjacent vertex $z\neq a$, if all the vertices adjacent to $a$ are also adjacent to $z$.
A finite graph is *dismantlable* if its vertices can be ordered into a sequence $a_1, \ldots, a_k$ so that for each $i<k$ the vertex $a_i$ is dominated in the subgraph induced on $\{a_i,\ldots,a_k\}$.
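Domination and dismantlability can be tested directly for a finite graph. The following sketch is ours (it makes no claim about the strategy used later in the paper): it greedily removes dominated vertices, reading the definition of domination with $z$ itself excluded from the neighbours of $a$, and returns the elimination order it finds, or `None` if it gets stuck.

```python
import networkx as nx

def is_dominated(graph, a, z):
    # a is dominated by the adjacent vertex z != a when every other neighbour
    # of a is also a neighbour of z.
    return z != a and graph.has_edge(a, z) and set(graph[a]) - {z} <= set(graph[z])

def dismantling_order(graph):
    """Greedily remove dominated vertices; return an order a_1,...,a_k,
    or None if at some stage no vertex is dominated."""
    g = graph.copy()
    order = []
    while g.number_of_nodes() > 1:
        dominated = next((a for a in g.nodes
                          if any(is_dominated(g, a, z) for z in g.nodes)), None)
        if dominated is None:
            return None
        order.append(dominated)
        g.remove_node(dominated)
    order.extend(g.nodes)
    return order

# Example: a path on three vertices is dismantlable.
print(dismantling_order(nx.path_graph(3)))   # [0, 1, 2]
```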
Polat proved that the automorphism group of a dismantlable graph fixes a clique [@Po93 Theorem A] (for the proof see also [@HOP14 Theorem 2.4]). We will use the following strengthening of that result.
\[thm:Damian\] Let $\Gamma$ be a finite dismantlable graph. Then for any subgroup $H\leq\mathrm{Aut}(\Gamma)$, the fixed-point set $(\Gamma^\blacktriangle)^H$ is contractible.
Our key result is the following dismantlability statement for subgraphs of the $n$-Rips graph.
\[lem:dismantlable\] Let $U\subset V\cup W$ be a finite subset that is $6D$-convex with respect to some $b\in U$. Then for $n\geq 7D$, the subgraph of $\Gamma_n$ induced on $U$ is dismantlable.
We order the vertices of $U$ according to $|\cdot,b|_S$, starting from $a\in U$ with maximal $|a,b|_S$, and ending with $b$.
We first claim that unless $U=\{b\}$, the set $U-\{a\}$ is still $6D$-convex with respect to $b$. Indeed, let $u\in
U-\{a\}$ and let $(p_j)_{j=0}^\ell$ be a geodesic from $b$ to $u$ in $\Gamma$. Let $j\leq \ell-6D$. Then $|p_j,b|_S\leq
\ell-6D<|a,b|_S$, so $p_j\neq a$ and hence $p_j\in U-\{a\}$ since $U$ was $6D$-convex. Similarly, if $w\in W$ and $|w,p_j|_S\leq \epsilon$, then $|w,b|\leq
\epsilon+(\ell-6D)<|a,b|_S$, so $w\neq a$ and hence $w\in
U-\{a\}$. This justifies the claim.
It remains to prove that $a$ is dominated in the subgraph of $\Gamma_n$ induced on $U$ by some vertex $z$. Let $(p^{ab}_j)_{j=0}^\ell$ be a geodesic from $a$ to $b$ in $\Gamma$. If $\ell\leq 7D$, then we can take $z=b$ and the proof is finished. We will henceforth suppose $\ell>7D$. If $p^{ab}_{6D}$ is an $(\epsilon, R)$-deep vertex of $p^{ab}$ in the peripheral left coset $w$, then let $z=w$, otherwise let $z=p^{ab}_{6D}$. Note that by definition of convexity, we have $z\in U$. We will show that $z$ dominates $a$.
Let $c\in U$ be adjacent to $a$ in $\Gamma_n$, which means $|a,c|_S\leq n$. We apply Theorem \[thm:stronglythin\] to $a,b,c,i=6D$ and any geodesics $(p^{bc}_j),(p^{ac}_j)$. Consider first the case where $|z,p^{ac}_{6D}|_S\leq D$. If $6D\leq |a,c|_S$, then $$|c,z|_S\leq |c,p^{ac}_{6D}|_S
+|p^{ac}_{6D},z|_S\leq (n-6D)+D<n,$$ so $c$ is adjacent to $z$ in $\Gamma_n$, as desired. If $|a,c|_S<6D$, then $p^{ac}_{6D}$ lies in (or is equal to) $c$ and hence $|c,z|_S\leq D<n$ as well.
Now consider the case where $|z, p^{bc}_{\ell-6D}|_S \leq D$. Since $a$ was chosen to have maximal $|a,b|_S$, we have $|a,b|_S\geq |c,b|_S$, and hence $|c,p^{bc}_{\ell-6D}|_S\leq
6D$. Consequently, $$|c,z|_S\leq |c,p^{bc}_{\ell-6D}|+|p^{bc}_{\ell-6D},z|_S+\leq
6D+D\leq n.$$
We are now ready to prove the contractibility of the fixed-point sets.
Let $n\geq 7D$. Suppose that the fixed-point set $\mathrm{Fix}=(\Gamma_n^\blacktriangle)^H$ is nonempty.
**Step 1.** *The fixed-point set $\mathrm{Fix'}=(\Gamma_{4D}^\blacktriangle)^H$ is nonempty.*
Let $U$ be the vertex set of a simplex in $\Gamma_n^\blacktriangle$ containing a point of $\mathrm{Fix}$ in its interior. Note that $U$ is $H$-invariant. If $U$ is a single vertex $u$, then $u\in\mathrm{Fix'}$ and we are done. Otherwise, by Lemma \[lem:quasicentre\], the quasi-centre of $U$ spans a simplex in $\Gamma_{4D}^\blacktriangle$. Consequently, its barycentre lies in $\mathrm{Fix}'$.
**Step 2.** *If $\mathrm{Fix}'$ contains at least $2$ points, then it contains a point outside $W$.*
Otherwise, choose $w\neq w'\in\mathrm{Fix}'$ with minimal $|w,w'|_S$. If $|w,w'|_S\leq 4D$, then the barycentre of the edge $ww'$ lies in $\mathrm{Fix}'$, which is a contradiction. If $|w,w'|_S> 4D$, then $\rho(\{w,w'\})\leq
\big\lceil\frac{|w,w'|_S}{2}\big\rceil<|w,w'|_S$. Let $U'$ be the quasi-centre of $\{w,w'\}$. By Lemma \[lem:quasicentre\], we have that $U'$ spans a simplex in $\Gamma_{4D}^\blacktriangle$, with barycentre in $\mathrm{Fix}'$. If $U'$ is not a single vertex, this is a contradiction. Otherwise, if $U'$ is a single vertex $w''\in
W$, then $|w,w''|_S\leq \rho(\{w,w'\})<|w,w'|_S$, which contradicts our choice of $w,w'$.
**Step 3.** *$\mathrm{Fix}$ is contractible.*
By Step 1, the set $\mathrm{Fix}'$ is nonempty. If $\mathrm{Fix}'$ consists of only one point, then so does $\mathrm{Fix}$, and there is nothing to prove. Otherwise, let $\Delta$ be the simplex in $\Gamma_{4D}^\blacktriangle$ containing in its interior the point of $\mathrm{Fix}'$ guaranteed by Step 2. Note that $\Delta$ is also a simplex in $\Gamma_n^\blacktriangle$ with barycentre in $\mathrm{Fix}$. Since $\Delta$ is not a vertex of $W$, by Lemma \[rem:finiteU\] all its $r$-hulls $\Delta_r$ are finite. By Lemma \[lem:hull\], each $\Delta_r$ with $r\geq
4D$ is $6D$-convex. Thus by Lemma \[lem:dismantlable\], the $1$-skeleton of the span $\Delta^\blacktriangle_r$ of $\Delta_r$ in $\Gamma_n^\blacktriangle$ is dismantlable. Hence by Theorem \[thm:Damian\], the set $\mathrm{Fix}\cap
\Delta^\blacktriangle_r$ is contractible. Note that the sets $\Delta_r$ exhaust the entire $V\cup W$. Consequently, the entire fixed-point set $\mathrm{Fix}$ is contractible, as desired.
Edge-dismantlability {#sub:edge-dis}
--------------------
Mineyev and Yaman introduced for a relatively hyperbolic group a complex $X(\mathcal{G},\mu)$, which they proved to be contractible [@MiYa07 Theorem 19]. However, analysing their proof, one sees that they exhaust the 1-skeleton of $X(\mathcal{G},\mu)$ by finite graphs that are not dismantlable but satisfy a slightly weaker property, which we can call *edge-dismantlability*.
An edge $(a,b)$ of a graph is *dominated* by a vertex $z$ adjacent to both $a$ and $b$, if all the other vertices adjacent to both $a$ and $b$ are also adjacent to $z$. A finite graph $\Gamma$ is *edge-dismantlable* if there is a sequence of subgraphs $\Gamma=\Gamma_1\supset \Gamma_2\supset
\cdots \supset\Gamma_k$, where for each $i<k$ the graph $\Gamma_{i+1}$ is obtained from $\Gamma_i$ by removing a dominated edge or a dominated vertex with all its adjacent edges, and where $\Gamma_k$ is a single vertex.
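To complete the illustration, the domination condition for an edge can be written analogously to the vertex test sketched earlier (the names are ours); a naive edge-dismantlability check would alternate this test with the removal of dominated vertices.

```python
def edge_is_dominated(graph, a, b, z):
    # The edge (a,b) is dominated by z, adjacent to both a and b, when every
    # other vertex adjacent to both a and b is also adjacent to z.
    if z in (a, b) or not (graph.has_edge(z, a) and graph.has_edge(z, b)):
        return False
    common_neighbours = (set(graph[a]) & set(graph[b])) - {z}
    return common_neighbours <= set(graph[z])
```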
In dimension 2 the notion of edge-dismantlability coincides with collapsibility, so by [@Se] the automorphism group of $\Gamma$ fixes a clique, just as for dismantlable graphs.
Does the automorphism group of an arbitrary edge-dismantlable graph $\Gamma$ fix a clique? For arbitrary $H\leq\mathrm{Aut}(\Gamma)$, is the fixed-point set $(\Gamma^\blacktriangle)^H$ contractible?
Proof of the thin triangle theorem {#sec:triangle}
==================================
Preliminaries
-------------
Given an edge-path $p=(p_i)_{i=0}^\ell$, we use the following notation. The length $\ell$ of $p$ is denoted by $l(p)$, the initial vertex $p_0$ of $p$ is denoted by $p_-$, and its terminal vertex $p_\ell$ is denoted by $p_+$. For integers $0\leq
j\leq k\leq l(p)$, we denote by $p[j,k]$ the subpath $(p_i)_{i=j}^k$, and by $p[k,j]$ we denote the inverse path.
The group $G$ is hyperbolic relative to $\mc P$ in the sense of Osin [@Os06 Definition 1.6, Theorem 1.5]. We first recall two results from [@Os06; @Os07]. Consider the alphabet $\mathscr{P}=S\sqcup \bigsqcup_\lambda P_\lambda$. Every word in this alphabet represents an element of $G$, and note that distinct letters might represent the same element. Let $\bar
\Gamma$ denote the Cayley graph of $G$ with respect to $\mathscr{P}$.
\[thm:OsinTriangle\] There is a constant $K>0$ with the following property. Consider a triangle whose sides $p$, $q$, $r$ are geodesics in $\bar \Gamma$. For any vertex $v$ of $p$, there exists a vertex $u$ of $q\cup r$ such that $|u,v|_S \leq K.$
Let $q$ be an edge-path in $\bar \Gamma$. Subpaths of $q$ with at least one edge are called *non-trivial*. A *$gP_\lambda$-component* of $q$ is a maximal non-trivial subpath $r$ such that the label of $r$ is a word in $P_\lambda
- \{1\}$ and a vertex of $r$ (and hence all its vertices) belong to $gP_\lambda$. We refer to $gP_\lambda$-components as *$\mc P$-components* if there is no need to specify $gP_\lambda$. A $gP_\lambda$-component of $q$ is *isolated* if $q$ has no other $gP_\lambda$-components. Note that $gP_\lambda$-components of geodesics in $\bar \Gamma$ are single edges and we call them *$gP_\lambda$-edges*.
\[thm:n-gon\] There is a constant $K>0$ satisfying the following condition. Let $\Delta$ be an $n$-gon in $\bar\Gamma$, which means that $\Delta$ is a closed path that is a concatenation of $n$ edge-paths $\Delta=q^1 q^2 \dots
q^n$. Suppose that $I \subset \{1, \dots , n\}$ is such that
1. for $i \in I$ the side $q^i$ is an isolated $\mc P$-component of $\Delta$, and
2. for $i \not \in I$ the side $q^i$ is a geodesic in $\bar\Gamma$.
Then $ \sum_{i \in I} |q^i_- , q^i_+|_S \leq Kn$.
We now recall a result of Hruska [@HK08] and another of Druţu–Sapir [@DrSa08] on the relation between the geometry of $\bar \Gamma$ and the Cayley graph $\Gamma$ of $G$ with respect to $S$.
Let $p$ be a geodesic in $\Gamma$, and let $\epsilon, R$ be positive integers. Let $w\in W$ be a peripheral left coset. An *$(\epsilon, R)$-segment in $w$* of $p$ is a maximal subpath such that all its vertices are [$(\epsilon, R)$-deep]{} in $w$. Note that an $(\epsilon, R)$-segment could consist of a single vertex.
Edge-paths $p$ and $q$ in $\bar \Gamma$ are *$K$-similar* if $|p_-,q_-|_S\leq K$ and $|p_+,q_+|_S\leq K$.
\[prop:transition\]\[cor:segments\] There are constants $\epsilon$, $R$ satisfying Lemma \[lem:uniqueness\], and a constant $K$ such that the following holds. Let $p$ be a geodesic in $\Gamma$ and let $\bar p$ be a geodesic in $\bar \Gamma$ with the same endpoints as $p$.
(i) The set of vertices of $\bar p$ is at Hausdorff distance at most $K$ from the set of $(\epsilon,
R)$-transition vertices of $p$, in the metric $|\cdot,\cdot|_S$.
(ii) If $p[j,k]$ is an $(\epsilon, R)$-segment in $w$ of $p$, then there are vertices $\bar p_m$ and $\bar p_n$ of $\bar p$ such that $|p_j, \bar p_m|_S\leq K$ and $|p_k, \bar p_n|_S \leq K$.
(iii) For any subpath $\bar p[m,n]$ of $\bar p$ with $m\leq n$ there is a $K$-similar subpath $p[j,k]$ of $p$ with $j\leq k$.
The existence of $\epsilon$, $R$, and $K$ satisfying Lemma \[lem:uniqueness\] and (i) is [@HK08 Proposition 8.13], except that Hruska considers the set of transition points instead of vertices. However, after increasing his $R$ by $1$, we obtain the current statement, and moreover each pair of distinct $(\epsilon, R)$-segments is separated by a transition vertex. Consequently, by increasing $K$ by $1$, we obtain (ii).
For the proof of (iii), increase $K$ so that it satisfies Theorem \[thm:OsinTriangle\]. We will show that $3K$ satisfies statement (iii). By (i), there is a vertex $p_{k}$ such that $|\bar p_n, p_{k}|_S\leq K$. Let $q,\bar{q}$ be geodesics in $\Gamma,\bar\Gamma$ from $\bar p_n$ to $p_{k}$. Let $\bar r$ be a geodesic in $\bar \Gamma$ from $p_0$ to $p_{k}$. Consider the geodesic triangle in $\bar \Gamma$ with sides $\bar p[0, n]$, $\bar q$, and $\bar r$. By Theorem \[thm:OsinTriangle\], there is a vertex $v$ of $\bar
q \cup \bar r$ such that $|\bar p_m, v|_S\leq K$.
Suppose first that $v$ lies in $\bar r$. By (i) there is a vertex $p_{j}$ of $p[0,k]$ such that $|p_{j},v|_S\leq K$. It follows that $|p_{j}, \bar p_m|_S\leq 2K$. Now suppose that $v$ lies in $\bar q$. By (i) the vertex $v$ is at $S$-distance $\leq K$ from a vertex of $q$. Since $l(q)\leq K$, it follows that $|p_{k}, \bar p_m|_S \leq 3K$, and we can assign $j=k$.
\[lem:quasiconvexity\] There is $K>0$ such that the following holds. Let $w\in W$ be a peripheral left coset, let $A$ be a positive integer, and let $p$ be a geodesic in $\Gamma$ with $|p_-, w|_S\leq A$ and $|p_+, w|_S\leq A$. Then any vertex $p_i$ of $p$ satisfies $|p_i, w|_S\leq KA$.
\[constants\] From here on, the constants $(\epsilon, R, K)$ are assumed to satisfy the statement of Proposition \[prop:transition\]. By increasing $K$, we also assume that $K$ satisfies the conclusions of Theorems \[thm:OsinTriangle\] and \[thm:n-gon\], the quasiconvexity Lemma \[lem:quasiconvexity\], and $K\geq \max\{\epsilon,
R\}$.
We conclude with the following application of Theorem \[thm:n-gon\].
\[lem:deep2\] Let $p$ be a geodesic in $\Gamma$, and let $\bar p$ be a geodesic in $\bar \Gamma$ with the same endpoints as $p$. Let $p[j,k]$ be an $(\epsilon, R)$-segment of $p$ in a peripheral left coset $w\in W$. If $k-j>8K^2$, then $\bar p$ contains a $w$-edge which is $5K^2$-similar to $p[j,k]$.
There are vertices $r_-$ and $r_+$ of $w$ such that $|p_j,r_-|_S\leq \epsilon\leq K$ and $|p_k, r_+|_S \leq
\epsilon\leq K.$ Let $r$ be a $w$-edge from $r_-$ to $r_+$. By Proposition \[prop:transition\](ii), there are vertices $\bar{p}_m,\bar{p}_n$ at $S$-distance at most $K$ from $p_j,p_k$, respectively. Let $[r_-, \bar p_m]$ and $[\bar p_n,
r_+]$ be geodesics in $\Gamma$ between the corresponding vertices; note that the labels of these paths are words in the alphabet $S$, and they both have length at most $2K$.
Suppose for contradiction that $\bar p$ does not contain a $w$-edge. Consider the closed path $[r_-, \bar p_m] \bar
p[m,n] [\bar p_n, r_+] r$, viewed as a polygon $\Delta$ obtained by subdividing $[r_-, \bar p_m]$ and $[\bar p_n, r_+]$ into edges. Since $\bar p[m,n]$ is a geodesic in $\bar \Gamma$, the number of sides of $\Delta$ is at most $2+|r_-,\bar
p_m|_S+|r_+,\bar p_n|_S \leq 6K$. We have that $r$ is an isolated $w$-component of $\Delta$. Then Theorem \[thm:n-gon\] implies that $|r_-, r_+|_S \leq 6K^2$. It follows that $|p_j, p_k|_S \leq 8K^2$. This is a contradiction, hence $\bar p[m,n]$ contains a $w$-edge $t$.
Now we prove that $|t_-, p_j|_S\leq 5K^2$. Let $[\bar p_m,
t_-]$ be the subpath of $\bar p[m,n]$ from $\bar p_m$ to $t_-$, and note that this is a geodesic in $\bar \Gamma$ without $w$-components. Let $[t_-,r_-]$ be a $w$-edge from $t_-$ to $r_-$. Consider the polygon $[\bar p_m, t_-] [t_-,r_-]
[r_-,\bar p_m]$, where the path $[r_-,\bar p_m]$ is subdivided into at most $2K$ edges. Observe that $[t_-,r_-]$ is an isolated $w$-component. Theorem \[thm:n-gon\] implies that $|t_-, p_j|_S \leq |t_-, r_-|_S+|r_-, p_j|_S \leq K(2K+2)+K\leq
5K^2$.
Analogously one proves that $|t_+, p_k|_S\leq 5K^2$.
Proof
-----
We are now ready to start the proof of Theorem \[thm:stronglythin\]. Let $D=53K^2$.
Let $a,b,c\in V\cup W$ with $a\neq b$, and let $p^{ab}, p^{bc},
p^{ac}$ be geodesics in $\Gamma$ from $a$ to $b$, from $b$ to $c$, and from $a$ to $c$, respectively. Let $\ell=|a,b|_S$ and let $0\leq i \leq \ell $. If $p^{ab}_i$ is an $(\epsilon,
R)$-deep vertex of $p^{ab}$ in the peripheral left coset $w$ then let $z=w$, otherwise let $z=p^{ab}_i$.
We define the following paths, illustrated in Figure \[fig:hexagon\].
![Paths in the proof of the thin triangle theorem[]{data-label="fig:hexagon"}](onigiri.pdf "fig:")
Let $\bar p^{ab}, \bar p^{bc}, \bar p^{ac}$ be geodesics in $\bar \Gamma$ with the same endpoints as $p^{ab}$, $p^{bc}$, $p^{ac}$, respectively. If $a\in W$ and $\bar{p}^{ab}$ starts with an $a$-edge, then we call this edge $s^a$; otherwise let $s^a$ be the trivial path. We define $s^b$ analogously. Then $\bar p^{ab}$ is a concatenation $$\bar p^{ab} = s^a q^{ab} s^b.$$ Similarly, the paths $\bar p^{ac},\bar p^{bc}$ can be expressed as concatenations $$\nonumber \bar p^{ac} = u^a q^{ac} u^c, \quad \quad \bar p^{bc} = t^b q^{bc} t^c.$$ Let $r^a$ be a path in $\bar{\Gamma}$ from $u^a_+$ to $s^a_+$ that is a single $a$-edge if $u^a_+\neq s^a_+$, or the trivial path otherwise. We define $r^b, r^c$ analogously. Let $\Pi$ be the geodesic hexagon in $\bar \Gamma$ given by $$\Pi = r^a q^{ab} r^b q^{bc} r^c q^{ca} .$$
\[step:first\] Paths $p^{ab}$ and $q^{ab}$ are $2K$-similar. The same is true for the pair $p^{ac}$ and $q^{ac}$, and the pair $p^{bc}$ and $q^{bc}$. \[eq:01\]
By Proposition \[prop:transition\](i), there is a vertex $p^{ab}_j$ such that $|s^a_+, p^{ab}_j|_S \leq K$. Since $p^{ab}$ is a geodesic from $a$ to $b$ in $\Gamma$, it follows that $j\leq K$, and consequently $|s^a_-, s^a_+|_S \leq 2K$. The remaining assertions are proved analogously.
In view of Proposition \[prop:transition\](i), there is a vertex of $\bar p^{ab}$ at $S$-distance $\leq K$ from $p^{ab}_i$. While this vertex might be $s_-^a$ or $s_-^b$, Step \[step:first\] guarantees that there is a vertex $q^{ab}_h$ at $S$-distance $\leq 3K$ from $p^{ab}_i$.
The following step should be considered as the bigon case of Theorem \[thm:stronglythin\].
\[Step:combined\] If $q^{ac}$ contains a vertex in $b$, in the case where $b\in W$, or equal to $b$, in the case where $b\in V$, then $|z , p^{ac}_i |_S< D$.
Similarly, if $q^{bc}$ contains a vertex in $a$, in the case where $a\in W$, or equal to $a$, in the case where $a\in V$, then $|z ,
p^{bc}_{\ell-i} |_S< D$.
Note that here we keep the convention that for $i>l(p^{ac})$ the vertex $p^{ac}_i$ denotes $p^{ac}_+$, and similarly if $\ell-i>l(p^{bc})$, then the vertex $p^{bc}_{\ell-i}$ denotes $p^{bc}_+$.
By symmetry, it suffices to prove the first assertion. We focus on the case $b\in W$, the case $b\in V$ follows by considering $x$ below as the trivial path.
Let $q_n^{ac}$ be the first vertex of $q^{ac}$ in $b$. Let $x$ be a $b$-edge joining $q^{ab}_+$ to $q_n^{ac}$. The edges $r^a$ and $x$ are isolated $\mc P$-components of the $4$-gon $r^aq^{ab}xq^{ac}[n,0]$. By Theorem \[thm:n-gon\], we have $|r^a_-,r^a_+|_S,|x_-,x_+|_S\leq 4K$. Consequently by Step \[step:first\] we have $|p_-^{ab},p_-^{ac}|_S\leq 8K$.
First consider the case where $p^{ab}_i$ is an $(\epsilon,
R)$-transition vertex of $p^{ab}$. Let $q^{ab}_h$ be the vertex defined after Step \[step:first\]. Subdividing the $4$-gon $r^aq^{ab}xq^{ac}[n,0]$ into two geodesic triangles in $\bar
\Gamma$, and applying twice Theorem \[thm:OsinTriangle\], gives $h^*$ such that $|q^{ab}_h, q^{ac}_{h^*}|_S \leq 6K$. By Proposition \[prop:transition\](i), there is $i^*$ such that $|q^{ac}_{h^*}, p^{ac}_{i^*}|_S \leq K$. It follows that $|p^{ab}_i, p^{ac}_{i^*}|_S \leq 3K+6K+K=10K$ and hence $|i-i^*| \leq 8K+10K=18K$. Therefore $|p^{ab}_i, p^{ac}_i|_S
\leq 10K+18K$.
Now consider the case where $p^{ab}_i$ is an $(\epsilon,
R)$-deep vertex of $p^{ab}$ in the peripheral left coset $w$. Then $p^{ab}_i$ lies in an $(\epsilon, R)$-segment $p^{ab}[j,k]$ of $p^{ab}$ in $w$. Thus $\max\{|p^{ab}_j, w|_S,
|p^{ab}_k,w|_S\} \leq \epsilon \leq K.$ By Proposition \[cor:segments\](ii) and Step \[step:first\], there is a vertex $q^{ab}_m$ such that $|p^{ab}_j,
q^{ab}_m|_S\leq 3K$. As in the previous case, we obtain $j^*$ such that $|p^{ab}_j, p^{ac}_{j^*}|_S \leq 10K$. Analogously, there is $k^*$ such that $|p^{ab}_k, p^{ac}_{k^*}|_S \leq 10K$. In particular, we have $\max\left\{|p^{ac}_{j^*},w|_S,
|p^{ac}_{k^*},w|_S \right\}\leq 11K$. By quasiconvexity of $w$, Lemma \[lem:quasiconvexity\], every vertex of $p^{ac}[j^*,k^*]$ is at $S$-distance $\leq 11K^2$ from $w$. Moreover, we have $|j-j^*|\leq 8K+10K$, and analogously, $|k-k^*|\leq 18K$. Hence $i$ is at distance $\leq 18K$ from the interval $[j^*,k^*]$. (Note that we might have $j^*>k^*$, but that does not change the reasoning.) It follows that $|w,p^{ac}_i|_S \leq 11K^2+18K.$
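A short check that both cases are compatible with the claim of the step (assuming $K\geq 1$, as the earlier estimates already do implicitly): $$10K+18K=28K\leq 28K^2<53K^2=D \qquad\text{and}\qquad 11K^2+18K\leq 29K^2<53K^2=D,$$ so in either case $|z , p^{ac}_i |_S< D$.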
By Step \[Step:combined\], we can assume that $a\neq c$ and $b\neq c$. Moreover, we can assume that there is no $b$-component in $q^{ac}$, nor an $a$-component in $q^{bc}$. Consequently, we can apply Theorem \[thm:n-gon\] to $\Pi$, viewing $r^a$ and $r^b$ as isolated components and the remaining four sides as geodesic sides. It follows that $$\label{eq:added}
\max\{|r^a_-, r^a_+|_S , |r^b_-, r^b_+|_S \} \leq 6K.$$ Together with Step \[eq:01\], this implies $$\label{eq:02} \max\{|p^{ab}_-, p^{ac}_-|_S, |p^{ab}_+, p^{bc}_-|_S\} \leq 10K.$$
\[lem:triangle2\] If $p^{ab}_i$ is an $(\epsilon, R)$-transition vertex of $p^{ab}$ or an endpoint of an $(\epsilon, R)$-segment of $p^{ab}$, then $|p^{ab}_i , p^{ac}_i |_S\leq 34K$ or $|p^{ab}_i , p^{bc}_{\ell-i} |_S \leq 34K$.
By Step \[step:first\], the vertex $p^{ab}_i$ is at $S$-distance $\leq 3K$ from some $q_h^{ab}$. We split the hexagon $\Pi$ into four geodesic triangles in $\bar \Gamma$ using diagonals. Repeated application of Theorem \[thm:OsinTriangle\] and inequality \[eq:added\] yields a vertex of $q^{ac}\cup q^{bc}$ at $S$-distance $\leq
8K$ from $q_h^{ab}$. Without loss of generality, suppose that this vertex is $q^{ac}_{h^*}$. By Proposition \[prop:transition\](i), the vertex $q^{ac}_{h^*}$ is at $S$-distance $\leq K$ from some $p^{ac}_{i^*}$. Consequently, $|p^{ab}_i,p^{ac}_{i^*}|_S\leq 12K$. By inequality \[eq:02\], it follows that $|i-i^*|\leq 22K$. Therefore $|p^{ab}_i, p^{ac}_i|_S\leq 34K$.
To prove Theorem \[thm:stronglythin\], it remains to consider the case where $p^{ab}_i$ is an $(\epsilon, R)$-deep vertex of $p$ in a peripheral left coset $w$. Let $p^{ab}[j,k]$ be the $(\epsilon, R)$-segment of $p^{ab}$ in $w$ containing $p^{ab}_i$.
Note that if $w=c$, then $|a,c|_S\leq j+\epsilon$. Consequently $p_i^{ac}$ is at $S$-distance $\leq \epsilon$ from $p_+^{ac}\in w$, and the theorem follows. Henceforth, we will assume $w\neq c$.
Also note that if $k-j \leq 18K^2$, then assuming without loss of generality that Step \[lem:triangle2\] yields $|p^{ab}_j,p^{ac}_j|_S\leq 34K$, we have: $$|w,p^{ac}_i|_S\leq
|w,p^{ab}_j|_S+|p^{ab}_j,p^{ac}_j|_S+|p^{ac}_j,p^{ac}_i|_S\leq
\epsilon+34K+18K^2\leq D,$$ and the theorem is proved. Henceforth, we assume $k-j> 18K^2$.
\[lem:lastcases\] The path $q^{ab}$ has a $w$-component $q^{ab}[m,m+1]$ which is $5K^2$-similar to $p^{ab}[j,k]$. Moreover, $q^{bc}$ or $q^{ac}$ has a $w$-component.
The first assertion follows from Lemma \[lem:deep2\]. In particular, $w$ is distinct from $a$ and $b$. For the second assertion, suppose for contradiction that $q^{bc}$ and $q^{ac}$ have no $w$-components. Consequently, since $w\neq c$, the $w$-edge $q^{ab}[m,m+1]$ is an isolated $w$-component of $\Pi$. We apply Theorem \[thm:n-gon\] with $\Pi$ interpreted as an 8-gon with a side $q^{ab}[m,m+1]$ as the only isolated component. This contradicts $|q^{ab}_m,
q^{ab}_{m+1}|_S> 18K^2-2\cdot 5K^2\geq 8K$.
\[lem:another-case\] Suppose that $q^{ac}$ has a $w$-component $q^{ac}[n,n+1]$ and $q^{bc}$ does not have a $w$-component. Then $|w , p^{ac}_i
|_S< D$. Similarly, if $q^{bc}$ has a $w$-component and $q^{ac}$ does not have a $w$-component, then $|w ,
p^{bc}_{\ell-i} |_S< D$.
By symmetry, it suffices to prove the first assertion. Let $x$ be a $w$-edge in $\bar{\Gamma}$ from $q^{ab}_m$ to $q^{ac}_n$. Consider the geodesic $4$-gon $r^aq^{ab}[0,m]xq^{ac}[n,0]$ in $\bar{\Gamma}$. Observe that $x$ is an isolated $w$-component and hence Theorem \[thm:n-gon\] implies that $|q^{ab}_m,q^{ac}_n|_S \leq 4K$. Analogously, by considering a geodesic $6$-gon, we obtain $|q^{ab}_{m+1},q^{ac}_{n+1}|_S \leq 6K$.
Proposition \[prop:transition\](iii) implies that $q^{ac}[n,n+1]$ is $K$-similar to a subpath $p^{ac}[j^*,k^*]$ of $p^{ac}$. By Step \[lem:lastcases\], the paths $p^{ab}[j,k]$ and $p^{ac}[j^*,k^*]$ are $(5K^2+6K+K)$-similar, hence $12K^2$-similar. By inequality \[eq:02\], it follows that $|j-j^*|\leq 10K+12K^2\leq 22K^2$ and similarly $|k-k^*|\leq 22K^2$. Hence $i\in [j^*-22K^2,k^*+22K^2]$. By Lemma \[lem:quasiconvexity\], we have $|p_i^{ac},w|_S\leq
K\cdot K+22K^2$.
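Again assuming $K\geq 1$, this bound is below $D$: $$K\cdot K+22K^2=23K^2<53K^2=D,$$ so $|w , p^{ac}_i |_S< D$ as claimed.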
It remains to consider the case where both $q^{ac}$ and $q^{bc}$ have $w$-components, which we denote by $q^{ac}[n,n+1]$ and $q^{bc}[\tilde n,\tilde n+1]$.
The argument in the proof of Step \[lem:another-case\] shows that $$\label{eq:06}\max\{ |q^{ab}_m,q^{ac}_n|_S, |q^{ac}_{n+1}, q^{bc}_{\tilde n+1}|_S,
|q^{ab}_{m+1},q^{bc}_{\tilde n}|_S \} \leq 4K.$$
By Proposition \[cor:segments\](iii) there are integers $0\leq\alpha\leq\beta\leq l(p^{ac})$ and $0\leq \gamma\leq \delta \leq l(p^{bc})$ such that $q^{ac}[n,n+1]$ and $p^{ac}[\alpha, \beta]$ are $K$-similar, and $q^{bc}[\tilde n,\tilde n+1]$ and $p^{bc}[\gamma, \delta]$ are $K$-similar.
\[lem:final\] We have $$\alpha -20K^2 \leq i \leq \beta +33K^2
\quad \text{ or } \quad \gamma-20K^2 \leq \ell-i \leq \delta+33K^2.$$
By inequality \[eq:06\] and Step \[lem:lastcases\], we have the following estimates: $$\begin{aligned}
|p^{ac}_\beta ,
p^{bc}_\delta|_S &\leq |p^{ac}_\beta, q^{ac}_{n+1}|_S +
|q^{ac}_{n+1}, q^{bc}_{\tilde n+1}|_S + |q^{bc}_{\tilde n+1},
p^{bc}_\delta|_S \leq 6K,\\
|p^{ab}_j, p^{ac}_\alpha|_S &\leq |p^{ab}_j, q^{ab}_m|_S +
|q^{ab}_{m}, q^{ac}_{n}|_S + |q^{ac}_{n}, p^{ac}_\alpha|_S
\leq 5K^2+4K+K\leq 10K^2,\\
|p^{ab}_k, p^{bc}_\gamma|_S &\leq |p^{ab}_k, q^{ab}_{m+1}|_S +
|q^{ab}_{m+1}, q^{bc}_{\tilde n}|_S + |q^{bc}_{\tilde n},
p^{bc}_\gamma|_S \leq 10K^2.\end{aligned}$$ These estimates and the triangle inequality imply $$\label{eq:07} k-j \leq (\beta-\alpha) + (\delta-\gamma) +26K^2.$$
From inequality \[eq:02\] it follows that $|j-\alpha|\leq
10K+10K^2\leq 20K^2$, and analogously, $|(\ell-k)-\gamma|\leq
20K^2$. In particular, $\alpha - 20K^2 \leq j\leq i$ and $\gamma - 20K^2 \leq \ell -k\leq \ell -i,$ as desired. To conclude the proof we argue by contradiction. Suppose that $i >
\beta +33K^2$ and $\ell-i > \delta+33K^2$. It follows that $$i-j\geq i-\alpha- 20K^2 > \beta -\alpha+13K^2$$ and $$k-i =(\ell-i)-(\ell-k) \geq (\ell-i)-\gamma-20K^2> \delta -\gamma+13K^2.$$ Adding these two inequalities yields $k-j> (\beta - \alpha) +
(\delta -\gamma) +26K^2$, which contradicts inequality \[eq:07\].
We now conclude the proof of Theorem \[thm:stronglythin\]. Since the endpoints of $q^{ac}[n,n+1]$ are in $w$, it follows that the endpoints of $p^{ac}[\alpha, \beta]$ are at $S$-distance $\leq K$ from $w$. By quasiconvexity of $w$, Lemma \[lem:quasiconvexity\], all the vertices of $p^{ac}[\alpha, \beta]$ are at $S$-distance $\leq K^2$ from $w$. Analogously, all the vertices of $p^{bc}[\gamma,
\delta]$ are at $S$-distance $\leq K^2$ from $w$. Then Step \[lem:final\] yields $|w , p^{ac}_i |_S\leq K^2+33K^2<
D$ or $|w , p^{bc}_{\ell-i} |_S \leq 34K^2< D$.
---
author:
- 'Hisashi Johno[^1]'
- 'Masahide Saito[^2]'
- Hiroshi Onishi
title: 'Prediction-based compensation for gate on/off latency during respiratory-gated radiotherapy[^3]'
---
[^1]: Department of Mathematical Sciences, University of Yamanashi ().
[^2]: Department of Radiology, University of Yamanashi (, ).
[^3]: Accepted by *Computational and Mathematical Methods in Medicine* on October 8, 2018.
---
abstract: 'We present the results of 45 transit observations obtained for the transiting exoplanet HAT-P-32b. The transits have been observed using several telescopes mainly throughout the YETI network. In 25 cases, complete transit light curves with a timing precision better than $1.4\:$min have been obtained. These light curves have been used to refine the system properties, namely inclination $i$, planet-to-star radius ratio $R_\textrm{p}/R_\textrm{s}$, and the ratio between the semimajor axis and the stellar radius $a/R_\textrm{s}$. First analyses by @Hartman2011 suggest the existence of a second planet in the system, thus we tried to find an additional body using the transit timing variation (TTV) technique. Taking also literature data points into account, we can explain all mid-transit times by refining the linear ephemeris by [$21\:$ms]{}. Thus we can exclude TTV amplitudes of more than [$\sim1.5\:$min]{}.'
author:
- |
M. Seeliger,$^{1}$[^1] D. Dimitrov,$^{2}$ D. Kjurkchieva,$^{3}$ M. Mallonn,$^{4}$ M. Fernandez,$^{5}$ M. Kitze,$^{1}$ V. Casanova,$^{5}$ G. Maciejewski,$^{6}$ J. M. Ohlert,$^{7,8}$ J. G. Schmidt,$^{1}$ A. Pannicke,$^{1}$ D. Puchalski,$^{6}$ E. Göğüş,$^{9}$ T. Güver,$^{10}$ S. Bilir,$^{10}$ T. Ak,$^{10}$ M. M. Hohle,$^{1}$ T. O. B. Schmidt,$^{1}$ R. Errmann,$^{1,11}$ E. Jensen,$^{12}$ D. Cohen,$^{12}$ L. Marschall,$^{13}$ G. Saral,$^{14,15}$ I. Bernt,$^{4}$ E. Derman,$^{15}$ C. Ga[ł]{}an,$^{6}$ and R. Neuhäuser$^{1}$\
$^{1}~$ Astrophysical Institute and University Observatory Jena, Schillergaesschen 2-3, 07745 Jena, Germany\
$^{2}~$ Institute of Astronomy and NAO, Bulg. Acad. Sc., 72 Tsarigradsko Chaussee Blvd., 1784 Sofia, Bulgaria\
$^{3}~$ Shumen University, 115 Universitetska str., 9700 Shumen, Bulgaria\
$^{4}~$ Leibniz-Institut für Astrophysik Potsdam, An der Sternwarte 16, 14482 Potsdam, Germany\
$^{5}~$ Instituto de Astrofisica de Andalucia, CSIC, Apdo. 3004, 18080 Granada, Spain\
$^{6}~$ Centre for Astronomy, Faculty of Physics, Astronomy and Informatics, N. Copernicus University, Grudziadzka 5, 87-100 Toruń, Poland\
$^{7}~$ Astronomie Stiftung Trebur, Michael Adrian Observatorium, Fichtenstraße 7, 65468 Trebur, Germany\
$^{8}~$ University of Applied Sciences, Technische Hochschule Mittelhessen, Friedberg, Germany\
$^{9}~$ Sabanci University, Orhanli-Tuzla 34956, İstanbul, Turkey\
$^{10}$ Istanbul University, Faculty of Sciences, Department of Astronomy and Space Sciences, 34119 University, Istanbul, Turkey\
$^{11}$ Abbe Center of Photonics, Friedrich Schiller Universität, Max-Wien-Platz 1, 07743 Jena, Germany\
$^{12}$ Dept. of Physics and Astronomy, Swarthmore College, Swarthmore, PA 19081-1390, USA\
$^{13}$ Gettysburg College Observatory, Department of Physics, 300 North Washington St., Gettysburg, PA 17325, USA\
$^{14}$ Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA\
$^{15}$ Ankara University, Astronomy and Space Sciences Department, 06100 Tandoǧan, Ankara, Turkey
title: 'Transit Timing Analysis in the HAT-P-32 system'
---
\[firstpage\]
stars: individual: HAT-P-32 – planets and satellites: individual: HAT-P-32b – planetary systems
Introduction {#sec:Introduction}
============
Since the first results of the [*Kepler*]{} mission were published, the number of known planet candidates has grown tremendously. Most [*hot Jupiters*]{} have been found in single planetary systems, and it was believed that these kinds of giant, close-in planets are not accompanied by other planets [see e.g. @Steffen2012]. This result was obtained by analysing 63 [*Kepler*]{} hot Jupiter candidates and is in good agreement with inward migration theories of massive outer planets, and with planet–planet scattering, both of which could explain the lack of additional close planets in hot Jupiter systems. Nonetheless, wide companions to hot Jupiters have been found, as shown e.g. in @Bakos2009 for the HAT-P-13 system. One has to note, though, that the formation of hot Jupiters is not yet fully understood (see @Steffen2012 and references therein for some formation scenarios, and e.g. @Lloyd2013 for possible tests). Recently @Szabo2013 reanalysed a larger sample of 159 [*Kepler*]{} candidates and in some cases found dynamically induced [*Transit Timing Variations (TTVs)*]{}. If the existence of additional planets in hot Jupiter systems can be confirmed, planet formation and migration theories can be constrained.
Since, according to @Szabo2013, only a small fraction of hot Jupiters is believed to be part of a multiplanetary system, it is important to analyse those systems where an additional body is expected. In contrast to e.g. the [*Kepler*]{} mission, where a fixed field on the sky is monitored over a long time span, our ongoing study of TTVs in exoplanetary systems performs follow-up observations only of specific, promising transiting planets where additional bodies are suspected. The targets are selected by the following criteria:
- The orbital solution of the known transiting planet shows non-zero eccentricity (though the circularization time-scale is much shorter than the system age) and/or deviant radial velocity (RV) data points – both indicating a perturber.
- The brightness of the host star is $V\leq13\:$mag to ensure sufficient photometric and timing precision at 1-2m telescopes.
- The target location on the sky is visible from the Northern hemisphere.
- The transit depth is at least 10 mmag to ensure a significant detection at medium-sized, ground-based telescopes.
- The target has not been studied intensively for TTV signals before.
Our observations make use of the YETI network [Young Exoplanet Transit Initiative; @YETI], a worldwide network of small to medium-sized telescopes, mostly on the Northern hemisphere, dedicated to exploring transiting planets in young open clusters. This way, we can observe consecutive transits, which are needed to enhance the prospects of modelling TTVs as described in [@Szabo2013] and [@PTmet]. Furthermore, we are able to obtain simultaneous transit observations to expose hidden systematics in the transit light curves, such as time synchronization errors or flat-fielding errors.
In the past, the transiting exoplanets WASP-12b [@Maciejewski2011a; @Maciejewski2013a], WASP-3b [@Maciejewski2010; @Maciejewski2013b], WASP-10b [@Maciejewski2011b; @Maciejewski2011c], WASP-14b [@Raetz2012] and TrES-2 (Raetz et al. 2014, submitted) have been studied by our group in detail. In most cases, except for WASP-12b, no TTVs could be confirmed. Recently, also @vonEssen2013 claimed to have found possible TTV signals around Qatar-1. However, all possible variations should be treated with reasonable care.
In this project we monitor the transiting exoplanet HAT-P-32b. The G0V type [@Pickles2010] host star HAT-P-32 was found to harbour a transiting exoplanet with a period of $P=2.15\:$d by @Hartman2011. With a host star brightness of $V=11.3\:$mag and a planetary transit depth of $21\:$mmag, the sensitivity of medium-sized telescopes is sufficient to achieve high timing precision; it is therefore an optimal target for the YETI telescopes. The RV signal of HAT-P-32 is dominated by a high jitter of $>60\:$m$\,$s$^{-1}$. @Hartman2011 claim that ’a possible cause of the jitter is the presence of one or more additional planets’. @Knutson2013 also analysed the RV signature of HAT-P-32 and found a long-term trend indicating a companion with a minimum mass of $5-500\:$M$_{jup}$ at separations of $3.5-12\:$AU. However, such a companion cannot explain the short time-scale jitter seen in the Hartman data.
Besides the circular orbit fit, an eccentric solution with $e=0.163$ also fits the observed data. Though @Hartman2011 mention that the probability of a real non-zero eccentricity is only $\sim3\%$, it could be mimicked or triggered by a second body in the system. Thus, HAT-P-32b is an ideal candidate for further monitoring to look for Transit Timing Variations induced by a planetary companion.
Data acquisition and reduction {#sec:DataAquisitionAndReduction}
==============================
Between 2011 October and 2013 January we performed 30 complete and 15 partial transit observations (see Tables \[tab:H32completeObservations\] and \[tab:H32partialObservations\]) from 10 different observatories: our own in Jena, as well as telescopes at Torun (Poland), Trebur (Germany), Gettysburg and Swarthmore (USA), Tenerife and Sierra Nevada (Spain), Antalya and Ankara (Turkey), and Rozhen (Bulgaria), mostly throughout the YETI network [@YETI]. In addition, three literature data points from @Sada and two observations from @Gibson2013 are available. The telescopes and abbreviations used hereafter are summarized in Table \[tab:H32Telescopes\]; a short description of each observing site, sorted by the number of observations, can be found below.
\# Observatory Telescope (abbreviation) $\oslash [m]$ N$_{tr}$
---- ----------------------------------------------------- ------------------------------------------- --------------- ----------
1 University Observatory Jena (Germany) Schmidt (Jena 0.6m) 0.6/0.9 5
Cassegrain (Jena 0.25m) 0.25 2
2 Teide Observatory, Canarian Islands (Spain) STELLA-1 (Tenerife 1.2m) 1.2 13
3 National Astronomical Observatory Rozhen (Bulgaria) Ritchey-Chrétien-Coudé (Rozhen 2.0m) 2.0 6
Cassegrain (Rozhen 0.6m) 0.6 1
4 Sierra Nevada Observatory (Spain) Ritchey-Chrétien (OSN 1.5m) 1.5 7
5 Michael Adrian Observatory Trebur (Germany) Trebur 1Meter Telescope (Trebur 1.2m) 1.2 3
6 TÜBİTAK National Observatory (Turkey) T100 (Antalya 1.0m) 1.0 1
7 Peter van de Kamp Observatory Swarthmore (USA) RCOS (Swarthmore 0.6m) 0.6 1
8 Toruń Centre for Astronomy (Poland) Cassegrain (Torun 0.6m) 0.6 4
9 Ankara University Observatory (Turkey) Schmidt (Ankara 0.4m) 0.4 1
10 Gettysburg College Observatory (USA) Cassegrain (Gettysburg 0.4m) 0.4 1
11 Kitt Peak National Observatory (USA) 2.1m KPNO Telescope (KPNO 2.1m) 2.1 2
KPNO Visitor Center Telescope (KPNO 0.5m) 0.5 1
12 Gemini Observatory (Hawaii, USA) Gemini North (GeminiN 8.0m) 8.2 2
Most of the observations have been acquired using a defocused telescope. As e.g. [@Southworth2009] show, this can be used to minimize flat-fielding errors, atmospheric effects, and random errors significantly. To model the transit and obtain precise transit mid-points, we ensure that enough data points are acquired during the ingress and egress phases by taking at least one data point per minute whenever possible. The duration of the ingress/egress phase $\tau$ [see @CarterWinn2010 their equation 9 assuming zero oblateness] is given by $$\tau=\left(\frac{R_\textrm{s}\cdot P_{\textrm{orb}}}{\pi\cdot a}\right)\cdot\frac{R_\textrm{p}}{R_\textrm{s}}\cdot\sqrt{1-\left(\cos i\cdot a/R_\textrm{s}\right)^2}\approx24\:\textrm{min}$$ using the system parameters of @Hartman2011 as listed in Table \[tab:JKTEBOPinput\]. Thus, our observing strategy yields at least 20 data points during the ingress/egress phase, which allows us to obtain mid-transit times as precise as possible. Using longer exposure times, e.g. because of smaller telescope sizes, has proven not to improve the fits: although atmospheric effects are averaged out better, time resolution is lost at the same time.
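As a quick cross-check, the expression above can be evaluated with a few lines of Python; the sketch below is purely illustrative (not part of the reduction pipeline) and uses the @Hartman2011 parameters of Table \[tab:JKTEBOPinput\].

```python
import math

# system parameters from the circular orbit fit of Hartman et al. (2011),
# as listed in Table [tab:JKTEBOPinput]
P_orb = 2.150008                      # orbital period [d]
k = 0.1508                            # R_p / R_s
a_Rs = 6.05                           # a / R_s
inc = math.radians(88.9)              # orbital inclination

# ingress/egress duration as given by the expression above
tau = (P_orb / (math.pi * a_Rs)) * k * math.sqrt(1.0 - (math.cos(inc) * a_Rs) ** 2)
print(f"tau = {tau * 24 * 60:.1f} min")   # ~24 min, matching the quoted value
```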
The data reduction is performed in a standard way using *IRAF[^2]*. For each respective transit, dark or bias frames as well as flat-field images (with the same focus point as the scientific data) in the same bands have been obtained during the same night.
Observing telescopes {#sec:ObservingTelescopes}
--------------------
*Jena, Germany*
The University Observatory Jena houses three telescopes. Two of them were used to observe transits of HAT-P-32b. The 0.6/0.9m Schmidt Telescope is equipped with an E2V CCD42-10 camera [Jena 0.6m; @STK], and the 0.25m Cassegrain Telescope has an E2V CCD47-10 camera (Jena 0.25m). With the first one, we observed three partial and two complete transits; the latter was used for two partial transit observations.
*Tenerife, Canarian Islands, Spain*
The robotic telescope STELLA-I, situated at the Teide Observatory and operated by the Leibniz-Institut für Astrophysik Potsdam (AIP), has a mirror diameter of 1.2m. It is equipped with the Wide Field Stella Imaging Photometer [WIFSIP; @Weber2012] and could be used to observe seven complete and six partial transit events. The observations have been carried out in two filters, $r_\textrm{S}$ and $B$. Since the $r_\textrm{S}$ data turned out to be of higher quality, and to compare these data to our other observations, we only used these filter data for further analysis.
*Rozhen, Bulgaria*
The telescopes of the National Astronomical Observatory of Rozhen contributed to this study using their 2m telescope with a Princ. Instr. VersArray:1300B camera (six complete transit observations), as well as the 0.6m telescope with a FLI ProLine 0900 camera (one complete observation).
*Sierra Nevada, Spain*
The 1.5m telescope of the Sierra Nevada Observatory, operated by the Instituto de Astrofísica de Andalucía, observed seven transits of HAT-P-32b (six complete transits, one partial transit) using a Roper Scientific VersArray 2048B.
*Trebur, Germany*
The Trebur One Meter Telescope (T1T, telescope diameter 1.2m) operated at the Michael Adrian Observatory houses an SBIG ST-L-6K 3 CCD camera. So far, three complete transits could be observed.
*Antalya, Turkey*
One transit observation of HAT-P-32b was attempted with the T100 Telescope of the TÜBİTAK National Observatory using a Spectral Instruments 1100 series CCD camera. However, due to technical problems the observation had to be discarded.
*Swarthmore, USA*
The 0.6m telescope at the Peter van de Kamp Observatory of Swarthmore College contributed one complete transit observation using an Apogee U16M KAF-16803 CCD camera.
*Toruń, Poland*
One partial and three complete transits have been observed using the 0.6m Cassegrain Telescope at the Toruń Centre for Astronomy with an SBIG ST-L-1001 CCD camera mounted.
*Ankara, Turkey*
On 2011-10-04 a complete transit of HAT-P-32b was observed using the 0.4m Schmidt-Cassegrain Meade LX200 GPS Telescope equipped with an Apogee ALTA U47 CCD camera located at and operated by the University of Ankara.
*Gettysburg, USA*
The 0.4m Cassegrain reflector of the Gettysburg College Observatory was used to observe one complete transit on 2012-01-15. Data were obtained in the $R$ band with the mounted CH350 CCD camera, which has a back-illuminated SiTE3b chip.
\# epoch[^3] Telescope filter exposure $[s]$
---- ----------- ----------------- ---------------- ----------------
1 673 Tenerife 1.2m $r_\textrm{S}$ 15
2 679 Jena 0.6m $R_\textrm{B}$ 40
3 686 Rozhen 2.0m $V_\textrm{C}$ 20
4 686 Tenerife 1.2m $r_\textrm{S}$ 15
5 687 Rozhen 2.0m $R_\textrm{C}$ 20
6 693 Rozhen 0.6m $R_\textrm{C}$ 60
7 699 Rozhen 2.0m $R_\textrm{C}$ 20
8 708 Swarthmore 0.6m $R_\textrm{C}$ 50
9 807 Rozhen 2.0m $R_\textrm{C}$ 25
10 808 Tenerife 1.2m $r_\textrm{S}$ 15
11 820 OSN 1.5m $R_\textrm{C}$ 30
12 820 Trebur 1.2m $R_\textrm{B}$ 50
13 821 OSN 1.5m $R_\textrm{C}$ 30
14 833 Trebur 1.2m $R_\textrm{B}$ 50
15 853 OSN 1.5m $R_\textrm{C}$ 30
16 873 Tenerife 1.2m $r_\textrm{S}$ 25
17 987 Jena 0.6m $R_\textrm{B}$ 40
18 987 Rozhen 2.0m $R_\textrm{C}$ 30
19 987 Torun 0.6m $clear $ 10
20 1001 OSN 1.5m $R_\textrm{C}$ 30
21 1013 Rozhen 2.0m $R_\textrm{C}$ 25
22 1014 OSN 1.5m $R_\textrm{C}$ 30
23 1027 OSN 1.5m $R_\textrm{C}$ 30
24 1040 Trebur 1.2m $R_\textrm{B}$ 50
25 0
26 662
27 663
28 663
: The list of complete and usable transit observations gathered within the TTV project for HAT-P-32b. The initial ephemeris from the discovery paper [@Hartman2011], and three data points from the literature obtained at KPNO [@Sada] are given at the bottom lines. For the latter ones, no exposure times are given. Filter indices B, C, and S denote the photometric systems Bessel, Cousins, and Sloan, respectively, used with the different instrumentations.[]{data-label="tab:H32completeObservations"}
\# epoch[^4] Telescope filter exposure $[s]$ remarks
---- ----------- ----------------- ---------------- ---------------- --------------------------------------
1 654 Jena 0.6m $R_\textrm{B}$ 40 only first half of transit observed
2 660 Ankara 0.4m $R_\textrm{C}$ 10 large fit errors
3 666 Torun 0.6m $R$ 30 bad observing conditions
4 673 Jena 0.6m $R_\textrm{B}$ 50 only first half of transit observed
5 693 Tenerife 1.2m $r_\textrm{S}$ 15 only ingress observed
6 700 Tenerife 1.2m $r_\textrm{S}$ 15 only second half of transit observed
7 708 Gettysburg 0.4m $R$ 50 bad observing conditions
8 713 Jena 0.6m $R_\textrm{B}$ 50 only first half of transit observed
9 807 Antalya 1.0m $R$ 3 technical problems
10 821 Tenerife 1.2m $r_\textrm{S}$ 10 bad weather during egress phase
11 833 Jena 0.25m $R_\textrm{B}$ 100 bad weather, gaps in the data
12 834 Jena 0.25m $R_\textrm{B}$ 100 only first half of transit observed
13 834 OSN 1.5m $R_\textrm{C}$ 20 upcoming bad weather during ingress
14 840 Tenerife 1.2m $r_\textrm{S}$ 10 large fit errors
15 848 Tenerife 1.2m $r_\textrm{S}$ 15 only egress phase observed
16 854 Tenerife 1.2m $r_\textrm{S}$ 20 only ingress phase observed
17 855 Tenerife 1.2m $r_\textrm{S}$ 20 jumps in data, no good fits possible
18 861 Tenerife 1.2m $r_\textrm{S}$ 25 bad observing conditions
19 867 Tenerife 1.2m $r_\textrm{S}$ 25 bad observing conditions
20 906 Torun 0.6m $clear$ 10 only ingress phase observed
21 1001 Torun 0.6m $clear$ 6 jumps in data, no good fits possible
Analysis {#sec:Analysis}
========
All light curves (except for the Ankara 0.4m observation) are extracted from the reduced images using the same aperture photometry routines (described in section \[sec:GettingTheLightcurve\]) in order to prevent systematic offsets between different transit observations due to different light curve extraction methods. Afterwards we model the data sets using the *JKTEBOP* algorithm [@JKTEBOP], as well as the Transit Analysing Package *TAP* [@TAP] as described in the following sections.
Obtaining the light curve {#sec:GettingTheLightcurve}
-------------------------
Before generating the light curve, we compute the Julian Date of each exposure midtime using the header information of each image. Hence, the quality of the final light curve fitting is not only dependent on the photometric precision, but also on a precise time synchronization of the telescope computers. One good method to reveal synchronization problems is to observe one transit from different telescope sites as done at epochs 673, 686, 693, 708, 807, 820, 821, 833, 834, and 987 (see Tables \[tab:H32completeObservations\] and \[tab:H32partialObservations\]). Unfortunately, due to bad weather conditions, some of the observations had to be aborted or rejected after a visual inspection of the light curve.
The brightness measurements are done with *IRAF* by performing aperture photometry on all bright stars in the field of view of each image obtained per transit. The aperture size is varied manually to find the best photometric precision. Typical aperture values are $\sim$1.5 times the full width at half-maximum. To generate the transit light curve (including the error), we use differential aperture photometry by computing a constant artificial standard star from all comparison stars down to 1 mag fainter than the target star, as introduced by @Broeg2005. The weight of each star is computed from its constancy and photometric precision, so that including additional fainter stars does not increase the precision of the artificial star and hence of the final light curve. The photometric error of each data point is rescaled using the *IRAF* measurement error and the standard deviation in the light curves of all comparison stars as scaling parameters (for a more detailed description, see @Broeg2005).
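The weighting scheme of @Broeg2005 is iterative; the following simplified, non-iterative Python sketch only illustrates the basic idea (inverse-variance weights for the comparison stars). The array names and shapes are illustrative assumptions, not the actual routines used here.

```python
import numpy as np

def artificial_standard_star(comp_fluxes):
    """Combine comparison-star light curves into one artificial standard star.

    comp_fluxes : array of shape (n_stars, n_images) containing the measured
                  fluxes of the comparison stars.

    Each star is normalised to its own median and weighted by the inverse
    variance of its normalised light curve, so that noisy or variable stars
    contribute little to the reference.
    """
    norm = comp_fluxes / np.median(comp_fluxes, axis=1, keepdims=True)
    weights = 1.0 / np.var(norm, axis=1)
    return np.average(norm, axis=0, weights=weights)

# differential light curve of the target (illustrative usage):
# target_lc = target_flux / artificial_standard_star(comp_fluxes)
```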
Due to the small field of view, the Ankara 0.4m observation has been treated differently. After applying the standard image reduction, differential aperture photometry was used to create the light curve. As comparison star, we used GSC 3280 781, i.e. the brightest unsaturated star in the field of view.
To prepare the final light curve for modelling, we fit a quadratic trend (second-order polynomial) to the normal light phases to adjust for secondary airmass effects. Therefore, it is required (and ensured) that a complete transit is observed together with one hour of normal light before and after the transit event itself. In addition to the original data, we also binned all light curves threefold using an error-weighted mean.
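A minimal sketch of the baseline correction and of the threefold binning could look as follows (the out-of-transit mask `oot` is an assumed input; this is an illustration, not the code actually used):

```python
import numpy as np

def detrend_quadratic(time, flux, oot):
    """Divide out a second-order polynomial fitted to the out-of-transit data.

    oot : boolean mask selecting the 'normal light' before and after transit.
    """
    coeffs = np.polyfit(time[oot], flux[oot], deg=2)
    return flux / np.polyval(coeffs, time)

def bin_threefold(time, flux, err):
    """Bin a light curve threefold using an error-weighted mean."""
    n = 3 * (len(time) // 3)                 # drop the last 1-2 points if needed
    t = time[:n].reshape(-1, 3)
    f = flux[:n].reshape(-1, 3)
    e = err[:n].reshape(-1, 3)
    w = 1.0 / e ** 2
    f_bin = np.sum(w * f, axis=1) / np.sum(w, axis=1)
    e_bin = 1.0 / np.sqrt(np.sum(w, axis=1))
    return t.mean(axis=1), f_bin, e_bin
```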
As a quality marker for our light curves we derive the photometric noise rate ($pnr$) as introduced by @Fulton2011. The $pnr$ is calculated using the root mean square ($rms$) of the model fit, as well as the number of data points per minute $\Gamma$.
$$pnr=\frac{rms}{\sqrt{\Gamma}}$$
The respective values for all our modelled light curves are given in Table \[tab:fitResults\] together with the fitted system parameters.
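For reference, the photometric noise rate can be computed directly from the fit residuals and the observing cadence, e.g. as in the following sketch (illustrative only; input names are assumptions):

```python
import numpy as np

def photometric_noise_rate(residuals, time_days):
    """pnr = rms / sqrt(Gamma), with Gamma the number of data points per minute."""
    residuals = np.asarray(residuals)
    rms = np.sqrt(np.mean(residuals ** 2))
    cadence_min = np.median(np.diff(time_days)) * 24.0 * 60.0   # median cadence [min]
    gamma = 1.0 / cadence_min                                    # data points per minute
    return rms / np.sqrt(gamma)

# e.g. an rms of 1.0 mmag at ~2.6 points per minute gives pnr ~ 0.62 mmag,
# consistent with the values listed for the best Rozhen 2.0m light curve
```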
Modelling the light curve with JKTEBOP {#sec:JKTEBOP}
--------------------------------------
The light curve model code *JKTEBOP* [see e.g. @JKTEBOP], based on the *EBOP* code [@Etzel1981; @Popper1981], fits a theoretical light curve to the data using the parameters listed in Table \[tab:JKTEBOPinput\]. Since we only deal with ground based data, we only take quadratic limb darkening (LD) into account to directly compare the results of the fitting procedure with those of *TAP* (see the next section). We employ the LD values from [@Claret2000] for the stellar parameters listed in Table \[tab:JKTEBOPinput\]. To get the values we use the *JKTLD* code that linearly interpolates between the model grid values of $T_{eff}$ and $\log g$. Since @Hartman2011 only list $\left[ Fe/H\right]$ instead of $\left[ M/H\right]$ that is needed to get the LD values, we converted it according to @Salaris1993. *JKTLD* does not interpolate for $\left[ M/H\right]$, hence a zero value was assumed, which is consistent with the known values within the error bars. Since most LD coefficients are only tabulated for $V_{micro}$ = 2 km s$^{-1}$, this value was adopted (as also suggested by J. Southworth).
parameter
------------------------------------------------------ ---------- ----------
sum of radii $r_\textrm{p}+r_\textrm{s}$\* 0.1902 0.0013
ratio of radii $R_\textrm{p}/R_\textrm{s}$\* 0.1508 0.0004
orbital inclination $i$ \[$^\circ$\]\* 88.9 0.4
inverse fractional stellar radius $a/R_\textrm{s}$\* 6.05 0.04
mass ratio of the system $M_\textrm{p}/M_\textrm{s}$ 0.0007 0.0002
orbital eccentricity $e$
orbital period $P$ \[d\] 2.150008 0.000001
T$_{\textrm{eff}}$ \[K\] 6207 88
$\log g$ \[cgs\] 4.33 0.01
$[Fe/H]$ \[dex\] -0.04 0.08
$[M/H]$ \[dex\] -0.03 0.11
$v \sin i$ \[km s$^{-1}$\] 20.7 0.5
$V_{micro}$\[km s$^{-1}$\]
limb darkening (LD) law of the star
linear LD coefficient R-band\*
nonlinear LD coefficient R-band\*
linear LD coefficient V-band\*
nonlinear LD coefficient V-band\*
: The input parameters for the JKTEBOP & TAP runs with values of the circular orbit fit from the discovery paper [@Hartman2011], as well as the derived limb darkening coefficients. The metalicity of $\left[ Fe/H\right]=\left(-0.04\pm0.08\right)\,$dex according to the circular orbit fit of @Hartman2011 was converted to $\left[ M/H\right]=\left(-0.03\pm0.11\right)\,$dex using the equations of @Salaris1993. Free-to-fit parameters are marked by an asterisk.[]{data-label="tab:JKTEBOPinput"}
We fit for the parameters mid-transit time $T_{\textrm{mid}}$, sum of the fractional radii $r_\textrm{p}+r_\textrm{s}$ ($r_\textrm{p}$ and $r_\textrm{s}$ being the radius of the planet $R_\textrm{p}$ and the star $R_\textrm{s}$ divided by the semimajor axis $a$, respectively), ratio of the radii $R_\textrm{p}/R_\textrm{s}$, and orbital inclination $i$. In the case of the limb-darkening coefficients we use two different configurations: one with the linear and nonlinear terms fixed, and one fitting them around the theoretical values. The eccentricity is assumed to be zero.
*JKTEBOP* allows us to apply different methods to estimate error bars. We used Monte Carlo simulations (10$^4$ runs), bootstrapping algorithms (10$^4$ data sets), and a residual-shift method to see if there are significant differences in the individual error estimation methods.
Modelling the light curve with TAP {#sec:TAP}
----------------------------------
The Transit Analysis Package *TAP* [@TAP] makes use of the EXOFAST routine [@Eastman2013] with the light curve model by [@MandelAgol2002] and the wavelet-based likelihood functions by [@Carter2009] to model a transit light curve and to estimate error bars. The input parameters are listed in Table \[tab:JKTEBOPinput\]. *TAP* only uses quadratic limb darkening. Instead of the sum of fractional radii $r_\textrm{p}+r_\textrm{s}$, that is used by *JKTEBOP*, *TAP* uses the inverse fractional stellar radius $a/R_\textrm{s}$, but those two quantities can be converted into each other using the ratio of radii.
$$a/R_\textrm{s}=\left(1+R_\textrm{p}/R_\textrm{s}\right)/\left(r_\textrm{p}+r_\textrm{s}\right)$$
With *TAP* we also model the light curve several times using the unbinned and binned data (see section \[sec:GettingTheLightcurve\]), as well as keeping the limb-darkening coefficients fixed and letting them vary around the theoretical values. To estimate error bars, *TAP* runs several (in our case 10) Markov Chain Monte Carlo (MCMC) simulations with 10$^5$ steps each.
Results {#sec:Results}
=======
![image](0660111004Ankara.eps){width="29.00000%"} ![image](0673111101Tenerife.eps){width="29.00000%"} ![image](0679111114Jena.eps){width="29.00000%"}
![image](0687111201Rozhen.eps){width="29.00000%"} ![image](0693111214Rozhen.eps){width="29.00000%"} ![image](0699111227Rozhen.eps){width="29.00000%"}
![image](0807120815Rozhen.eps){width="30.00000%"} ![image](0808120818Tenerife.eps){width="30.00000%"} ![image](0821120914OSN.eps){width="30.00000%"}
![image](0840121025Tenerife.eps){width="30.00000%"} ![image](0853121122OSN.eps){width="30.00000%"} ![image](0867121222Tenerife.eps){width="30.00000%"}
![image](0873130104Tenerife.eps){width="30.00000%"} ![image](1001131006OSN.eps){width="30.00000%"} ![image](1013131101Rozhen.eps){width="30.00000%"}
![image](1014131103OSN.eps){width="30.00000%"} ![image](1027131201OSN.eps){width="30.00000%"} ![image](1040131229Trebur.eps){width="30.00000%"}
![image](0686111129Rozhen.eps){width="42.00000%"} ![image](0686111129Tenerife.eps){width="42.00000%"}
![image](0708120115Swarthmore.eps){width="42.00000%"} ![image](0708120115Gettysburg.eps){width="42.00000%"}
![image](0820120912OSN.eps){width="42.00000%"} ![image](0820120912Trebur.eps){width="42.00000%"}
![image](0833121010Trebur.eps){width="42.00000%"} ![image](0833121010Jena.eps){width="42.00000%"}
![image](0987130906Rozhen.eps){width="32.00000%"} ![image](0987130906Jena.eps){width="32.00000%"} ![image](0987130906Torun.eps){width="32.00000%"}
![ The best transit light curve obtained for this program so far observed with the 2m Rozhen telescope. The rms of the fit is 1.0 and 0.7 mmag in the unbinned and binned light curve, respectively. The mid-transit time has a fitting precision of 14 s.[]{data-label="fig:BestTransitLightCurves"}](0686111129Rozhenfull.eps){width="1\columnwidth"}
After fitting an individual transit with both *JKTEBOP* and *TAP* there are (typically) eight different fit results (two programs with fixed and free limb-darkening using binned and unbinned data, respectively), and in total 28 individually derived error bars for the properties $i$, $R_\textrm{p}/R_\textrm{s}$, $T_{\textrm{mid}}$, and $r_\textrm{p}+r_\textrm{s}$ or $a/R_\textrm{s}$.
Regarding all properties, the fit results for the original light curves and the binned ones show no significant differences. Especially concerning the precision of the mid-transit time the results can not be improved by binning the light curve. Though one can reduce the error of an individual data point, one reduces the timing resolution and hence decreases the timing precision [as well as the overall fitting precision; discussed in detail in e.g. @Kipping2010]. Thus, to get one final result for each transit event, we first convert $r_\textrm{p}+r_\textrm{s}$ to $a/R_\textrm{s}$ and then take the average of all obtained values.
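A compact sketch of this bookkeeping step is given below; the conversion follows the relation given in section \[sec:TAP\], while the averaging shown in the comments is only indicative (the list of per-configuration results is an assumed input).

```python
import numpy as np

def sum_of_radii_to_a_over_rs(rp_plus_rs, k):
    """Convert JKTEBOP's (r_p + r_s) into TAP's a/R_s via the relation in the text."""
    return (1.0 + k) / rp_plus_rs

# quick check with the Hartman et al. (2011) values of Table [tab:JKTEBOPinput]:
print(sum_of_radii_to_a_over_rs(0.1902, 0.1508))    # ~6.05

# per transit, the final parameter value is then the average of all
# (typically eight) individual fit results, e.g.
# a_rs_final = np.mean(a_rs_from_all_configurations)
```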
In all cases the spread of the fitted values is smaller than the averaged error bar; hence the differences between the models are smaller than the fitting precision. This result is in good agreement with those of e.g. @Hoyer2012. Nevertheless, as previously discussed in e.g. @Maciejewski2013a and @Carter2009, *JKTEBOP* may underestimate the error bars. Although in this work the differences between the errors derived by *JKTEBOP* (Monte Carlo, residual-shift, and bootstrapping) and those of *TAP* (MCMC) are not as noticeable as in e.g. @Maciejewski2013a (especially for $i$, $R_\textrm{p}/R_\textrm{s}$, and $a/R_\textrm{s}$, but larger for $T_{\textrm{mid}}$), the mean errors derived by the *TAP* code are used as final errors.
Keeping the LD coefficients fixed at their theoretical values does not result in significant differences of the light curve fit. This is true for the value as well as the error estimations.
The final light curves and the model fits are shown in Fig. \[fig:TransitLightCurves1\] for the single site, and Fig. \[fig:TransitLightCurves2\], and \[fig:TransitLightCurves3\] for the multi site observations. Our most precise light curve has been obtained using the Rozhen 2.0m telescope and is shown in Fig. \[fig:BestTransitLightCurves\].
Transit timing {#sec:Timing}
--------------
![ The O–C diagram for HAT-P-32b assuming the circular orbit parameters from @Hartman2011. The open circle denotes literature data from @Hartman2011, open triangles denote data from @Sada and @Gibson2013, and the filled triangles denote our data (from Jena, Tenerife, Rozhen, Sierra Nevada, Swarthmore and Trebur). Refining the period of the linear ephemeris by [$21\:$ms]{} can explain almost all points (the black line denotes the fit, the black dotted line the fit error), except for some outliers.[]{data-label="fig:H32OC"}](H32OC.eps){width="1\columnwidth"}
The goal of this ongoing project is to look for transit timing variations of known transiting planets where an additional body might be present (see section \[sec:Introduction\]). In case of HAT-P-32b, we have obtained 24 transit light curves with precise mid-transit times out of 45 observations (see Tables \[tab:H32completeObservations\] and \[tab:H32partialObservations\]). Unfortunately, there are long observational gaps between epochs 720 and 800, and epochs 880 and 980, which is not only due to the non-observability in northern summertime, but also due to the bad weather during the last northern winter affecting most participating YETI telescopes.
After the fitting process, each obtained mid-transit time has to be converted from JD$_{\textrm{UTC}}$ to barycentric Julian dates in barycentric dynamical time (BJD$_{\textrm{TDB}}$) to account for the Earth’s movement. We use the online converter made available by Jason Eastman [see @Eastman2010] to do the corrections. Since the duration of the transit is just $\sim3\:$hours, converting the mid-transit time is sufficient. The difference in the Earth’s position between the beginning and the end of the transit event is negligible with respect to the overall fitting precision of the mid-transit time, which makes a prior time conversion of the whole light curve unnecessary.
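An offline computation equivalent to the online converter would also be possible with, e.g., the `astropy` package, as sketched below. The target is resolved by name (which requires internet access) and the observatory coordinates are placeholders to be replaced by the actual site position; the snippet is an illustration, not the tool used in this work.

```python
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation
import astropy.units as u

def jd_utc_to_bjd_tdb(jd_utc, target, site):
    """Convert a geocentric JD(UTC) mid-transit time into BJD(TDB)."""
    t = Time(jd_utc, format='jd', scale='utc', location=site)
    ltt = t.light_travel_time(target, kind='barycentric')   # light travel time to the barycentre
    return (t.tdb + ltt).jd

# illustrative inputs -- replace by the actual target and observatory position
target = SkyCoord.from_name('HAT-P-32')                     # name resolution needs internet access
site = EarthLocation.from_geodetic(lon=11.5 * u.deg,        # placeholder coordinates
                                   lat=50.9 * u.deg,
                                   height=350.0 * u.m)
print(jd_utc_to_bjd_tdb(2455895.35, target, site))
```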
As already mentioned in section \[sec:GettingTheLightcurve\], it is extremely useful for transit timing analysis to have simultaneous observations from different telescopes. Since these data typically are not correlated with each other regarding e.g. the start of observation, the observing cadence between two images, or the field of view (and hence the number of comparison stars), one can draw conclusions on the quality of the data and reveal e.g. synchronization errors. In our case we obtained simultaneous observations at 10 epochs. Unfortunately, one of the epoch 673, 693, 807 and 821 observations, and both epoch 834 observations, had to be aborted due to weather conditions or technical problems. The five remaining simultaneous observations, including the threefold-observed transit at epoch 987, are consistent within the error bars (see Table \[tab:TimingSimultaneous\]). Thus, significant systematic errors can be excluded.
\[tab:TimingSimultaneous\]
epoch telescope $T_{\textrm{mid}}$ \[BJD$_{\textrm{TDB}}-2450000$\] $\Delta T_{\textrm{mid}}\:$\[d\]
------- ----------------- -------------- ---------------------------------- --
686 Rozhen 2.0m $5895.35297$ $0.00016$
686 Tenerife 1.2m $5895.35248$ $0.00080$
708 Swarthmore 0.6m $5942.65287$ $0.00064$
708 Gettysburg 0.4m $5942.65179$ $0.00113$
820 OSN 1.5m $6183.45364$ $0.00085$
820 Trebur 1.2m $6183.45361$ $0.00049$
833 Trebur 1.2m $6211.40361$ $0.00056$
833 Jena 0.25m $6211.40267$ $0.00214$
987 Rozhen 2.0m $6542.50530$ $0.00018$
987 Jena 0.6m $6542.50538$ $0.00032$
987 Torun 0.6m $6542.50522$ $0.00052$
: The results of the transit time fits of the five successful simultaneous transit observations. The transit mid-times of the epoch 686, 708, 833, and 987 observations match within the error bars. The epoch 820 observations even match within $3.5\:$s, which is far below the error bars.
The resultant Observed-minus-Calculated (O–C) diagram is shown in Fig. \[fig:H32OC\]. In addition to our data, the originally published epoch from @Hartman2011, three data points from @Sada, and two data points from @Gibson2013 are included. We can explain almost all points by refining the linear ephemeris by [$\Delta P=\left(21\pm10\right)\:$ms]{}. Thus the newly determined period is
$$P_{\textrm{new}} = \left(2.15000825 \pm 0.00000012\right)\:\textrm{d},$$ compared to the previously published value $$P_{\textrm{old}} = \left(2.150008 \pm 0.000001\right)\:\textrm{d}.$$
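The refined ephemeris corresponds to an error-weighted straight-line fit of the mid-transit times versus epoch; a minimal sketch of such a fit (with `epochs`, `t_mid`, `t_err` as assumed input arrays) is given below.

```python
import numpy as np

def refine_ephemeris(epochs, t_mid, t_err):
    """Error-weighted least-squares fit of T_mid(E) = T0 + P * E.

    Returns (T0, sigma_T0), (P, sigma_P) and the O-C residuals.
    """
    epochs = np.asarray(epochs, dtype=float)
    t_mid = np.asarray(t_mid, dtype=float)
    w = 1.0 / np.asarray(t_err, dtype=float) ** 2
    A = np.vstack([np.ones_like(epochs), epochs]).T          # design matrix [1, E]
    cov = np.linalg.inv(A.T @ (w[:, None] * A))              # parameter covariance matrix
    T0, P = cov @ (A.T @ (w * t_mid))
    o_minus_c = t_mid - (T0 + P * epochs)
    T0_err, P_err = np.sqrt(np.diag(cov))
    return (T0, T0_err), (P, P_err), o_minus_c
```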
From the present O–C diagram we can conservatively rule out TTV amplitudes larger than [$\sim1.5\:$min]{}. Assuming a circular orbit for both the known planet and an unseen perturber, we can calculate the minimum perturber mass needed to create such a signal. In Fig. \[fig:possibleTTV\], all configurations above the solid black line can be ruled out, for they would produce a TTV amplitude larger than [$\sim1.5\:$min]{}. Taking a more restrictive signal amplitude of [$\sim1.0\:$min]{}, all masses above the dotted line can be ruled out. For our calculations we used the n-body integrator *Mercury6* [@Mercury6] to calculate the TTV signal for 73 different perturber masses between 1 Earth mass and 9 Jupiter masses placed at 1745 different distances to the host star ranging from 0.017$\:$AU to 0.1$\:$AU (i.e. from three times the radius of the host star to three times the semimajor axis of HAT-P-32b). These 127385 different configurations have been analysed to search for those systems that produce a TTV signal of at least [$\sim1.5\:$min]{} and [$\sim1.0\:$min]{}, respectively. In an area around the known planet (i.e. the grey shaded area in Fig. \[fig:possibleTTV\] corresponding to $\sim4$ Hill radii), most configurations were found to be unstable during the simulated time-scales. Within the mean motion resonances, especially the $1:2$ and $2:1$ resonances, only planets with masses up to a few Earth masses can still produce a signal comparable to (or lower than) the spread seen in our data. Such planets would be too small to be found by ground-based observations directly. For distances beyond 0.1$\:$AU, and accordingly period ratios above 3, even planets with masses up to a few Jupiter masses would be possible. But those planets would generate large, long-period transit or radial velocity (RV) signals. Hence one can rule out the existence of such perturbers.
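The exact gridding of the *Mercury6* runs is not spelled out beyond the numbers quoted above; the following sketch merely reproduces the size of the explored parameter space (linear spacing is an assumption made for this illustration) and is not the n-body integration itself.

```python
import numpy as np

M_EARTH_IN_MJUP = 1.0 / 317.8            # Earth mass expressed in Jupiter masses

# 73 perturber masses between 1 M_earth and 9 M_jup and 1745 orbital
# distances between 0.017 AU and 0.1 AU, as quoted in the text
masses = np.linspace(M_EARTH_IN_MJUP, 9.0, 73)        # [M_jup]
distances = np.linspace(0.017, 0.1, 1745)             # [AU]

configs = [(m, a) for m in masses for a in distances]
print(len(configs))                                    # 127385 configurations

# each (mass, distance) pair would then be handed to Mercury6 and the resulting
# mid-transit times of HAT-P-32b compared against a strictly linear ephemeris
```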
![The minimum mass of a perturber needed to create a TTV signal of [$\sim1.5\:$min]{} (solid line) and [$\sim1.0\:$min]{} (dotted line) as a function of the period fraction of the perturber $P_c$ and the known planet $P_b$. The dash-dotted line indicates the upper mass limit for additional planets, assuming that they produce an RV amplitude of the order of the observed RV jitter of $\sim63.5\:$m/s. Within the mean motion resonances perturbers with less than $2$ Earth masses cannot be excluded. The grey shaded area around $P_c/P_b=1$ denotes the dynamically unstable region around the known planet HAT-P-32b corresponding to $\sim4$ Hill radii.[]{data-label="fig:possibleTTV"}](PossibleTTVRV.eps){width="1\columnwidth"}
Inclination, transit duration and transit depth {#sec:InclinatinoDurationDepth}
-----------------------------------------------
The two transit properties duration and depth are directly connected to the physical properties $a/R_\textrm{s}$ and $R_\textrm{p}/R_\textrm{s}$, respectively. Assuming a stable single-planet system, we expect the transit duration to be constant. Even if an exo-moon is present, the resultant variations are of the order of a few seconds [for example calculations see @Kipping2009] and hence too small to be recognized using ground-based observations. Using the obtained fitting errors (as described above) as instrumental weights for a constant fit, we obtain $a/R_\textrm{s}=\,$[$6.056$ ]{}$\,\pm\,$[$0.009$ ]{} (see Fig. \[fig:H32aR\]). This value confirms the result of @Hartman2011 with $a/R_\textrm{s}=6.05\pm0.04$.
A similar result can be achieved for the transit depth, represented in Fig. \[fig:H32k\] by the value of $k=R_\textrm{p}/R_\textrm{s}$. Assuming a constant value, we get a fit result of $k=\,$[$0.1510$ ]{}$\,\pm\,$[$0.0004$ ]{} compared to the originally published value of $k=0.1508\pm0.0004$ by @Hartman2011. @Gibson2013 found an M-dwarf $\approx2.8''$ away from HAT-P-32. Though @Knutson2013 ruled out the possibility that this star is responsible for the long-term trend seen in the RV data, due to the size of our apertures this star always contributes to the brightness measurements of the main star and hence can affect the resulting planet-to-star radius ratio. Since actual brightness measurements of this star are not available but would be needed to correct for its influence on the parameters, we do not correct for the M-dwarf. This way, the results are comparable to those of previous authors, but underestimate the true planet-to-star radius ratio. The spread seen in Fig. \[fig:H32k\] can therefore be an effect of the close M-star, or may also be caused by the different filter curves of different observatories. Possible filter-dependent effects on the transit depth are discussed in Bernt et al. (2014, in prep.).
![ The obtained values for the ratio of the semimajor axis over the stellar radius $a/R_\textrm{s}$. Assuming a constant value, the formal best fit using the model fit errors as instrumental weights is found to be $a/R_\textrm{s}=\,$[$6.056$ ]{}$\,\pm\,$[$0.009$ ]{}$ $ (dotted line), which is in agreement with the published value of $a/R_\textrm{s}=6.05\pm0.04$ [@Hartman2011].[]{data-label="fig:H32aR"}](H32aR.eps){width="1\columnwidth"}
![ The obtained values for the ratio of the radii $k=R_\textrm{p}/R_\textrm{s}$. Assuming a constant value, the best fit using the model fit errors as instrumental weights is found to be $k=\,$[$0.1510$ ]{}$\,\pm\,$[$0.0004$ ]{}$ $ (dotted line), compared to the published value of $k=0.1508\pm0.0004$ by @Hartman2011.[]{data-label="fig:H32k"}](H32k.eps){width="1\columnwidth"}
![ Inclination versus epoch for the HAT-P-32 system. No change in inclination can be seen. The best fit using the model fit errors as instrumental weights is $i=($[$88.92$ ]{}$\,\pm\,$[$0.10$ ]{}$)^\circ$ (dotted line), compared to $\left(88.9\pm0.4\right)^\circ$ of @Hartman2011[]{data-label="fig:H32i"}](H32i.eps){width="1\columnwidth"}
As expected, the inclination is also found to be consistent with the originally published value. Moreover, due to the number of data points and assuming a constant inclination, we can improve the value to $i=($[$88.92$ ]{}$\pm$[$0.10$ ]{}$)^\circ$ (see Fig. \[fig:H32i\]). Table \[tab:VglHartman\] summarizes all our results and compares them to the system parameters taken from the circular orbit fit of @Hartman2011.
-------------- ---------------------- ---------------- ------------------- ------------------ -------------- -------------- --------------- --------------- -------------- -------------
               $T_0$ \[BJD\]                           $P$ \[d\]                               $a/R_\textrm{s}$             $R_\textrm{p}/R_\textrm{s}$     $i$ \[$^\circ$\]
-------------- ---------------------- ---------------- ------------------- ------------------ -------------- -------------- --------------- --------------- -------------- -------------
our analysis [$2454420.44645$ ]{} [$0.00009$ ]{} [$2.15000825$ ]{} [$0.00000012$]{} [$6.056$ ]{} [$0.009$ ]{} [$0.1510$ ]{} [$0.0004$ ]{} [$88.92$ ]{} [$0.10$ ]{}
@Hartman2011 $2454420.44637$ $0.00009$ $2.150008$ $0.000001\:$ $0.1508$ $0.0004$ $88.9$ $0.4$
@Sada $2454420.44637$ $0.00009$ $2.1500103$ $0.0000003$ $0.1531$ $0.0012$
@Gibson2013 $2454942.899220$ $0.000077$ $2.1500085$ $0.0000002$ $0.1515$ $0.0012$
-------------- ---------------------- ---------------- ------------------- ------------------ -------------- -------------- --------------- --------------- -------------- -------------
Further limitations {#sec:furtherLimits}
-------------------
Besides the transit observations, we are also reanalysing the published RV data [available at @Hartman2011] with the *systemic console* [@Meschiari2009]. It is indeed possible to increase the precision of the RV fit by adding further, even lower-mass or more distant, bodies to the system. However, due to the large observational gaps seen in Fig. \[fig:H32OC\], the large number of different possible scenarios, especially perturbers with larger periods, can hardly be restricted. Assuming a small perturber mass, an inclination of $\sim90^\circ$, and an eccentricity equal to zero, one can easily derive the expected RV amplitude to be $$K\simeq\frac{28.4\cdot M_P}{P_c^{1/3}\cdot M_s^{2/3}}\nonumber$$ using Kepler’s laws and the conservation of momentum, with the perturber mass $M_P$ in Jupiter masses, its period $P_c$ in days and the mass of the central star $M_s$ in solar masses. The jitter amplitude of $\sim63.5\:$m/s found by @Knutson2013 then corresponds to a specific maximum mass of a potential third body in the system, depending on its period. Hence, all objects above the dash-dotted line in Fig. \[fig:possibleTTV\] can be ruled out, since they would result in even larger RV amplitudes.
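Inverting this relation for $M_P$ gives the mass limit shown as the dash-dotted line in Fig. \[fig:possibleTTV\]; a small sketch is given below, where the host-star mass is an assumed input (taken to be $\sim1.16\:$M$_\odot$, close to the value of @Hartman2011, and to be verified before use).

```python
def max_perturber_mass(K_jitter, P_c, M_star):
    """Invert K ~ 28.4 * M_P / (P_c**(1/3) * M_s**(2/3)) for the perturber mass.

    K_jitter : RV semi-amplitude [m/s]
    P_c      : perturber period [d]
    M_star   : stellar mass [M_sun]  (assumed input)
    Returns the perturber mass in Jupiter masses.
    """
    return K_jitter * P_c ** (1.0 / 3.0) * M_star ** (2.0 / 3.0) / 28.4

# e.g. the ~63.5 m/s jitter (Knutson et al. 2013) at twice the period of
# HAT-P-32b, with an assumed stellar mass of ~1.16 M_sun:
print(max_perturber_mass(63.5, 2.0 * 2.150008, 1.16))   # roughly 4 M_jup
```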
In addition to the advantage of simultaneous observations concerning the reliability of the transit timing, one can also use them as quality markers for deviations in the light curve itself. As seen in Fig. \[fig:TransitLightCurves3\], there are no systematic differences between the three light curves obtained simultaneously. This is also true for the data shown in Fig. \[fig:TransitLightCurves2\], where no residual pattern is seen twice, though there is a small bump in the epoch 686 Tenerife data. Hence, one can rule out real astrophysical reasons, e.g. the crossing of a spotty area on the host star, as the cause of the brightness change. In a global context, no deviations seen in the residuals of our light curves are expected to be of astrophysical origin.
Folding the residuals of all light curves obtained within this project at a set of trial periods between $0.15\,$d and $2.15\,$d, i.e. period fractions of $P_c/P_b=\{0.1\ldots1\}$, we can analyse the resulting phase-folded light curves with regard to their orbital coverage. Thus, we can check whether we would have seen the transit of an inner perturber just by chance while observing transits of HAT-P-32b. Though the duration of a single transit observation is limited to a few hours, given the large number of observations spread over several months we are still able to cover a large percentage of the trial orbits. As seen in Fig. \[fig:Coverage\] we achieve an orbital coverage of more than $90\%$ for the majority of trial periods. For a few specific periods the coverage drops to $\sim80\%$, and within the resonances it drops to $\sim60\%$. Assuming that all transit-like signatures with amplitudes of more than $3\:$mmag are detectable (see residuals in Fig. \[fig:TransitLightCurves1\]), we can rule out the existence of any inner planet bigger than $\sim0.5\:$R$_{jup}$. Depending on the composition (rocky or gaseous) and bloating status of inner planets, this gives further constraints on the possible perturber mass.
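A minimal sketch of such a coverage estimate is given below; the observing windows used in it are purely hypothetical placeholders, whereas the actual analysis is of course based on the real time stamps of our light curve residuals.

```python
import numpy as np

def phase_coverage(t_obs, trial_period, n_bins=200):
    """Fraction of orbital-phase bins that contain at least one data point
    when the time stamps t_obs (in days) are folded at trial_period (days)."""
    phases = np.mod(t_obs, trial_period) / trial_period
    hits, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return np.count_nonzero(hits) / float(n_bins)

# Hypothetical example: 25 observing runs of ~4 hours each with 2-minute
# cadence, spread over ~700 days (placeholder epochs, not the real data).
rng = np.random.default_rng(1)
starts = np.sort(rng.uniform(0.0, 700.0, 25))
t_obs = np.concatenate([s + np.arange(0.0, 4.0 / 24.0, 2.0 / 1440.0)
                        for s in starts])

for P_trial in (0.25, 0.65, 1.05, 1.45, 1.85):
    print("P = %.2f d  coverage = %.0f%%"
          % (P_trial, 100.0 * phase_coverage(t_obs, P_trial)))
```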
![The orbital coverage for an inner transiting perturber as a function of the period fraction of the perturber $P_c$ and the known transiting planet $P_b$. The unstable area ($\sim4$ Hill radii) around the known transiting planet is marked in grey.[]{data-label="fig:Coverage"}](Coverage.eps){width="1\columnwidth"}
Finally, it is important to state that all findings assume a perturber on a circular orbit with the same inclination as the known planet. It is not unlikely that the inclination of a potential second planet differs significantly from that of the known transiting planet, as pointed out e.g. by @Steffen2012 and @PayneFord2011. This of course also increases the mass needed to create a certain TTV signal. Furthermore, the mass range of planets hidden in the RV jitter is also increased. While a possible eccentric orbit of an inner perturber is limited to small eccentricities for stability reasons, an outer – not necessarily transiting – perturber on an eccentric orbit is also possible, which in turn would also affect the TTV and RV signal. The companion candidate discovered by @Gibson2013, as well as the proposed originator of the RV long-term trend [@Knutson2013], can, however, be responsible neither for the RV jitter nor for the spread still present in the O–C diagram.
Summary
=======
date epoch telescope $T_\textrm{mid}$ \[BJD $-$ 2450000\] $\pm$ $a/R_\textrm{s}$ $\pm$ $R_\textrm{p}/R_\textrm{s}$ $\pm$ $i$ \[$^\circ$\] $\pm$ $rms\:$\[mmag\] $pnr\:$\[mmag\]
------------ ------- ----------------- -------------- ----------- --------- ---------- ----------------- ----------------- --------- -------- ------- --------
2011-11-01 673 Tenerife 1.2m $5867.40301$ $0.00073$ $2.3$ $3.20$
2011-11-14 679 Jena 0.6m $5880.30267$ $0.00033$ $6.13 $ $0.09 $ $0.1493$ $0.0016$ $89.3 $ $1.0 $ $1.9$ $1.97$
2011-11-29 686 Rozhen 2.0m $5895.35297$ $0.00016$ $6.09 $ $0.04 $ $0.1507$ $0.0010$ $89.5 $ $0.6 $ $1.0$ $0.62$
2011-11-29 686 Tenerife 1.2m $5895.35249$ $0.00080$ $1.7$ $2.36$
2011-12-01 687 Rozhen 2.0m $5897.50328$ $0.00033$ $6.08 $ $0.09 $ $0.1508$ $0.0016$ $89.3 $ $0.9 $ $1.5$ $0.94$
2011-12-14 693 Rozhen 0.6m $5910.40274$ $0.00043$ $6.11 $ $0.13 $ $0.1508$ $0.0023$ $89.5 $ $1.1 $ $2.0$ $3.40$
2011-12-27 699 Rozhen 2.0m $5923.30295$ $0.00031$ $6.03 $ $0.10 $ $0.1536$ $0.0018$ $88.5 $ $0.9 $ $1.2$ $0.96$
2012-01-15 708 Swarthmore 0.6m $5942.65287$ $0.00064$ $6.04 $ $0.18 $ $0.1544$ $0.0024$ $88.9 $ $1.4 $ $3.3$ $3.28$
2012-08-15 807 Rozhen 2.0m $6155.50385$ $0.00026$ $6.01 $ $0.11 $ $0.1496$ $0.0014$ $88.5 $ $1.0 $ $1.3$ $1.11$
2012-08-18 808 Tenerife 1.2m $6157.65470$ $0.00072$ $2.6$ $3.71$
2012-09-12 820 OSN 1.5m $6183.45364$ $0.00085$ $5.96 $ $0.18 $ $0.1524$ $0.0027$ $88.7 $ $1.4 $ $4.3$ $4.85$
2012-09-12 820 Trebur 1.2m $6183.45361$ $0.00049$ $6.05 $ $0.14 $ $0.1548$ $0.0022$ $88.5 $ $1.2 $ $2.1$ $2.33$
2012-09-14 821 OSN 1.5m $6185.60375$ $0.00033$ $6.01 $ $0.12 $ $0.1509$ $0.0016$ $88.2 $ $1.2 $ $1.3$ $1.07$
2012-10-10 833 Trebur 1.2m $6211.40361$ $0.00056$ $5.98 $ $0.25 $ $0.1554$ $0.0059$ $88.1 $ $1.5 $ $2.2$ $2.18$
2012-11-22 853 OSN 1.5m $6254.40404$ $0.00022$ $6.037$ $0.062$ $0.1507$ $0.0018$ $89.2 $ $0.8 $ $1.1$ $0.84$
2013-01-04 873 Tenerife 1.2m $6542.40397$ $0.00058$ $1.9$ $2.87$
2013-09-07 987 Jena 0.6m $6542.50538$ $0.00032$ $6.04 $ $0.11 $ $0.1497$ $0.0016$ $88.7 $ $1.1 $ $1.6$ $1.57$
2013-09-07 987 Rozhen 2.0m $6542.50530$ $0.00018$ $5.97 $ $0.09 $ $0.1535$ $0.0012$ $88.3 $ $0.8 $ $0.9$ $0.67$
2013-09-07 987 Torun 0.6m $6542.50522$ $0.00052$ $5.89 $ $0.20 $ $0.1515$ $0.0029$ $87.9 $ $1.4 $ $3.5$ $2.33$
2013-10-06 1001 OSN 1.5m $6572.60532$ $0.00018$ $6.11 $ $0.06 $ $0.1465$ $0.0013$ $89.2 $ $0.7 $ $0.9$ $0.63$
2013-11-01 1013 Rozhen 2.0m $6598.40539$ $0.00017$ $6.05 $ $0.06 $ $0.1511$ $0.0010$ $88.9 $ $0.8 $ $0.8$ $0.68$
2013-11-03 1014 OSN 1.5m $6600.55546$ $0.00017$ $6.02 $ $0.05 $ $0.1503$ $0.0009$ $89.2 $ $0.8 $ $1.3$ $1.33$
2013-12-01 1027 OSN 1.5m $6628.50585$ $0.00031$ $6.13 $ $0.09 $ $0.1475$ $0.0022$ $89.2 $ $0.9 $ $1.8$ $1.59$
2013-12-29 1040 Trebur 1.2m $6656.45533$ $0.00045$ $6.07 $ $0.13 $ $0.1509$ $0.0022$ $88.8 $ $1.1 $ $2.6$ $2.74$
2011-10-09 662 KPNO 2.1m $5843.75341$ $0.00019$ $0.1531$ $0.0012$ – –
2011-10-11 663 KPNO 2.1m $5845.90287$ $0.00024$ – –
2011-10-11 663 KPNO 0.5m $5845.90314$ $0.00040$ – –
2012-09-06 817 [@Gibson2013] $6177.00392$ $0.00025$ – –
2012-10-19 837 [@Gibson2013] $6220.00440$ $0.00019$ – –
2011-10-04 660 Ankara 0.4m $5839.45347$ $0.00101$ $5.9 $ $0.2 $ $0.1448$ $0.0021$ $88.1 $ $1.4 $ $4.7$ $2.22$
2012-01-15 708 Gettysburg 0.4m $5942.65179$ $0.00113$ $5.79 $ $0.36 $ $0.1493$ $0.0054$ $87.3 $ $1.7 $ $2.5$ $4.25$
2012-10-10 833 Jena 0.25m $6211.40267$ $0.00214$ $6.04 $ $0.65 $ $0.1514$ $0.0089$ $86.5 $ $2.5 $ $5.3$ $6.96$
2012-10-25 840 Tenerife 1.2m $6226.45618$ $0.00102$ $3.8$ $5.05$
2012-12-22 867 Tenerife 1.2m $6284.50460$ $0.00100$ $3.3$ $5.10$
We presented our observations of HAT-P-32b planetary transits obtained during a timespan of 24 months (2011 October until 2013 October). The data were collected using telescopes all over the world, mainly within the YETI network. Out of 44 started observations we obtained 24 light curves that could be used for further analysis; another 21 light curves were obtained but could not be used, mostly because of bad weather. In addition to our data, literature data from @Hartman2011, @Sada, and @Gibson2013 were also taken into account (see Figs. \[fig:H32OC\], \[fig:H32aR\], \[fig:H32k\], and \[fig:H32i\] and Table \[tab:fitResults\]).
The published system parameters $a/R_\textrm{s}$, $R_\textrm{p}/R_\textrm{s}$ and $i$ from the circular orbit fit of @Hartman2011 were confirmed. In the case of the semimajor axis scaled by the stellar radius and the inclination, we were able to improve the results owing to the large number of observations. As for the planet-to-star radius ratio, we did not achieve a better solution because a spread in the data makes constant fits difficult. In addition, @Gibson2013 found an M dwarf $\approx2.8''$ away from HAT-P-32, which is a possible cause of this spread.
Regarding the transit timing, a redetermination of the planetary ephemeris, shifting the orbital period by $21\:$ms, can explain the obtained mid-transit times, although there are still some outliers. Of course, with 1$\sigma$ error bars one would expect some of the data points to be off the fit. Nevertheless, due to the spread of data seen in the O–C diagram, observations are planned to further monitor HAT-P-32b transits using the YETI network. This spread, of the order of $\sim1.5\:$min, does not exclude certain system configurations: assuming circular orbits, even an Earth-mass perturber in a mean-motion resonance could still produce such a signal.
Acknowledgements {#acknowledgements .unnumbered}
================
MS would like to thank the referee for the helpful comments on the paper draft. All the participating observatories appreciate the logistic and financial support of their institutions and in particular their technical workshops. MS would also like to thank all participating YETI telescopes for their observations. MMH, JGS, AP, and RN would like to thank the Deutsche Forschungsgemeinschaft (DFG) for support in the Collaborative Research Center Sonderforschungsbereich SFB TR 7 “Gravitationswellenastronomie”. RE, MK, and RN would like to thank the DFG for support in the Priority Programme SPP 1385 on the *First ten Million years of the Solar System* in projects NE 515/34-1 & -2. GM and DP acknowledge the financial support from the Polish Ministry of Science and Higher Education through the Iuventus Plus grant IP2011 031971. RN would like to acknowledge financial support from the Thuringian government (B 515-07010) for the STK CCD camera (Jena 0.6m) used in this project. The research of DD and DK was supported partly by funds of projects DO 02-362, DO 02-85 and DDVU 02/40-2010 of the Bulgarian Scientific Foundation, as well as project RD-08-261 of Shumen University. We also wish to thank the TÜBİTAK National Observatory (TUG) for supporting this work through project number 12BT100-324-0 using the T100 telescope.
[99]{}
Bakos G. [Á]{}., et al., 2009, ApJ, 707, 446
Broeg C., Fern[á]{}ndez M., Neuh[ä]{}user R., 2005, AN, 326, 134
Carter J. A., Winn J. N., 2009, ApJ, 704, 51
Carter J. A., Winn J. N., 2010, ApJ, 716, 850
Chambers J. E., 1999, MNRAS, 304, 793
Claret A., 2000, A&A, 363, 1081
Eastman J., Gaudi B. S., Agol E., 2013, PASP, 125, 83
Eastman J., Siverd R., Gaudi B. S., 2010, PASP, 122, 935
Etzel P. B., 1981, psbs.conf, 111
Fulton B. J., Shporer A., Winn J. N., Holman M. J., P[á]{}l A., Gazak J. Z., 2011, AJ, 142, 84
Gazak J. Z., Johnson J. A., Tonry J., Dragomir D., Eastman J., Mann A. W., Agol E., 2012, AdAst, 2012, 30
Gibson N. P., Aigrain S., Barstow J. K., Evans T. M., Fletcher L. N., Irwin P. G. J., 2013, arXiv, arXiv:1309.6998
Hartman J. D., et al., 2011, ApJ, 742, 59
Hoyer S., Rojo P., L[ó]{}pez-Morales M., 2012, ApJ, 748, 22
Kipping D. M., 2009, MNRAS, 392, 181
Kipping D. M., 2010, MNRAS, 408, 1758
Knutson H. A., et al., 2013, arXiv, arXiv:1312.2954
Lloyd J. P., et al., 2013, arXiv, arXiv:1309.1520
Maciejewski G., et al., 2010, MNRAS, 407, 2625
Maciejewski G., Errmann R., Raetz S., Seeliger M., Spaleniak I., Neuh[ä]{}user R., 2011, A&A, 528, A65
Maciejewski G., et al., 2011, MNRAS, 411, 1204
Maciejewski G., et al., 2011, A&A, 535, 7
Maciejewski G., et al., 2013a, A&A, 551, A108
Maciejewski G., et al., 2013b, AJ, 146, 147
Mandel K., Agol E., 2002, ApJ, 580, L171
Meschiari S., Wolf A. S., Rivera E., Laughlin G., Vogt S., Butler P., 2009, PASP, 121, 1016
Mugrauer M., Berthold T., 2010, AN, 331, 449
Neuh[ä]{}user R., et al., 2011, AN, 332, 547
Nesvorn[ý]{} D., Morbidelli A., 2008, ApJ, 688, 636
Payne M. J., Ford E. B., 2011, ApJ, 729, 98
Pickles A., Depagne [É]{}., 2010, PASP, 122, 1437
Popper D. M., Etzel P. B., 1981, AJ, 86, 102
Raetz St., 2012, PhD thesis, University Jena
Sada P. V., et al., 2012, PASP, 124, 212
Salaris M., Chieffi A., Straniero O., 1993, ApJ, 414, 580
Southworth J., et al., 2009, MNRAS, 396, 1023
Southworth J., 2008, MNRAS, 386, 1644
Steffen J. H., et al., 2012, PNAS, 109, 7982
Szab[ó]{} R., Szab[ó]{} G. M., D[á]{}lya G., Simon A. E., Hodos[á]{}n G., Kiss L. L., 2013, A&A, 553, A17
von Essen C., Schr[ö]{}ter S., Agol E., Schmitt J. H. M. M., 2013, A&A, 555, A92
Weber M., Granzer T., Strassmeier K. G., 2012, SPIE, 8451
\[lastpage\]
[^1]: E-mail:martin.seeliger@uni-jena.de
[^2]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
[^3]: The epoch is calculated from the originally published ephemeris by [@Hartman2011].
[^4]: The epoch is calculated from the originally published ephemeris by [@Hartman2011].
|
---
abstract: 'We present exact analytical solutions for the zero-energy modes of two-dimensional massless Dirac fermions fully confined within a smooth one-dimensional potential $V(x)=-\alpha/\cosh(\beta{}x)$, which provides a good fit for potential profiles of existing top-gated graphene structures. We show that there is a threshold value of the characteristic potential strength $\alpha/\beta$ for which the first mode appears, in striking contrast to the non-relativistic case. A simple relationship between the characteristic strength and the number of modes within the potential is found. An experimental setup is proposed for the observation of these modes. The proposed geometry could be utilized in future graphene-based devices with high on/off current ratios.'
author:
- 'R. R. Hartmann'
- 'N. J. Robinson'
- 'M. E. Portnoi'
date: 21 June 2010
title: Smooth electron waveguides in graphene
---
Introduction
============
Klein proposed that relativistic particles do not experience exponential damping within a barrier like their non-relativistic counterparts, and that as the barrier height tends towards infinity, the transmission coefficient approaches unity.[@Klein] This inherent property of relativistic particles makes confinement non-trivial. Carriers within graphene behave as two-dimensional (2D) massless Dirac fermions, exhibiting relativistic behavior at sub-light speed[@DiracFermions; @CastroNetoReview] owing to their linear dispersion, which leads to many optical analogies. [@Lens; @LevitovParabolic; @ZBZcombined; @Beenakker_PRL_102_2009; @Chen_APL_94_2009] Klein tunneling through p-n junction structures in graphene has been studied both theoretically [@LevitovParabolic; @KleinCombined; @Cheianov_Falko_PRB(R)_74_2006; @Peeters_PRB_74_2006; @Chaplik_JETP_84_2006; @Peeters_APL_90_2007; @Fogler_PRL_100_2008; @Fogler_PRB_77_2008; @BeenakkerRMP08; @ChineseShape] and experimentally. [@Transport; @PN; @TopGateCombined; @Kim_PRL_99_2007; @Savchenko_NanoLett_8_2008; @Liu_APL_92_2008; @GG_PRL_102_2009; @Kim_NatPhys_5_2009] Quasi-bound states were considered in order to study resonant tunneling through various sharply terminated barriers. [@LevitovParabolic; @Peeters_PRB_74_2006; @Chaplik_JETP_84_2006; @Peeters_APL_90_2007; @ChineseShape] We propose to change the geometry of the problem in order to study the propagation of fully confined modes along a smooth electrostatic potential, much like photons moving along an optical fiber.
So far quasi-one-dimensional channels have been achieved within graphene nanoribbons; [@CastroNetoReview; @Nanoribbons; @RibbonTheoryCombined; @Efetov_PRL_98_2007; @Peres_JPhysCondMat_21_2009] however, controlling their transport properties requires precise tailoring of the edge termination,[@RibbonTheoryCombined] which is currently unachievable. In this paper we claim that truly bound modes can be created within bulk graphene by top-gated structures, [@Kim_PRL_99_2007; @Savchenko_NanoLett_8_2008; @Liu_APL_92_2008; @GG_PRL_102_2009; @Kim_NatPhys_5_2009] such as the one shown in Fig. \[fig:Cosh\_1/2\](a). In an ideal graphene sheet at half-filling, the Fermi level is at the Dirac point and the density of states for a linear 2D dispersion vanishes. In realistic graphene devices the Fermi level can be set using the back gate. This is key to the realization of truly bound modes within a graphene waveguide, since zero-energy modes cannot escape into the bulk as there are no states to tunnel into. Moreover, the electrostatic confinement isolates carriers from the sample edges, which are considered a major source of intervalley scattering in clean graphene.[@SavchenkoSSC09]
![(a) A schematic diagram of a Gedankenexperiment for the observation of localized modes in graphene waveguides, created by the top gate (V$_{\textrm{\scriptsize{TG}}}$). The Fermi level is set using the back gate (V$_{\textrm{\scriptsize{BG}}}$) to be at the Dirac point ($\varepsilon_{\textrm{\scriptsize{F}}}=0$). (b) The electrostatic potential created by the applied top gate voltage. The plane shows the Fermi level position at $\varepsilon_{\textrm{\scriptsize{F}}}=0$.[]{data-label="fig:Cosh_1/2"}](fig1a "fig:"){width="7.5cm"} ![(a) A schematic diagram of a Gedankenexperiment for the observation of localized modes in graphene waveguides, created by the top gate (V$_{\textrm{\scriptsize{TG}}}$). The Fermi level is set using the back gate (V$_{\textrm{\scriptsize{BG}}}$) to be at the Dirac point ($\varepsilon_{\textrm{\scriptsize{F}}}=0$). (b) The electrostatic potential created by the applied top gate voltage. The plane shows the Fermi level position at $\varepsilon_{\textrm{\scriptsize{F}}}=0$.[]{data-label="fig:Cosh_1/2"}](fig1b "fig:"){width="7.5cm"}
In this paper we obtain an exact analytical solution for bound modes within a smooth electrostatic potential in pristine graphene at half-filling, count the number of modes and calculate the conductance of the channel. The conductance carried by each of these modes is comparable to the minimal conductivity of a realistic disordered graphene system. [@DiracFermions; @Min.; @Con1; @MinConTheoryCombined; @DasSarma_PNASUSA_104_2007] For the considered model potential we show that there is a threshold potential characteristic strength (the product of the potential strength with its width), for which bound modes appear. Whereas a symmetric quantum well always contains a bound mode for non-relativistic particles, we show that it is not the case for charge carriers in graphene.
Fully confined modes in a model potential
=========================================
The Hamiltonian of graphene for a two-component Dirac wavefunction in the presence of a one-dimensional potential $U(x)$ is $$\hat{H}=v_{\textrm{\scriptsize{F}}}\left(\sigma_{x}\hat{p}_{x}+\sigma_{y}\hat{p}_{y}\right)+U(x),
\label{eq:Hamiltonian}$$ where $\sigma_{x,y}$ are the Pauli spin matrices, $\hat{p}_{x}=-i\hbar\frac{\partial}{\partial x}$ and $\hat{p}_{y}=-i\hbar\frac{\partial}{\partial y}$ are the momentum operators in the $x$ and $y$ directions respectively and $v_{\textrm{\scriptsize{F}}}\approx1\times10^{6}$m/s is the Fermi velocity in graphene. In what follows we will consider smooth confining potentials, which do not mix the two non-equivalent valleys. All our results herein can be easily reproduced for the other valley. When Eq. (\[eq:Hamiltonian\]) is applied to a two-component Dirac wavefunction of the form: $$\mbox{e}^{iq_{y}y}\left({\Psi_{A}(x) \atop \Psi_{B}(x)}\right),$$ where $\Psi_{A}(x)$ and $\Psi_{B}(x)$ are the wavefunctions associated with the $A$ and $B$ sublattices of graphene respectively and the free motion in the $y$-direction is characterized by the wavevector $q_{y}$ measured with respect to the Dirac point, the following coupled first-order differential equations are obtained: $$\left(V(x)-\varepsilon\right)\Psi_{A}(x)-i\left(\frac{\mbox{d}}{\mbox{d}x}+q_{y}\right)\Psi_{B}(x)=0,\label{eq:basic1}$$ $$-i\left(\frac{\mbox{d}}{\mbox{d}x}-q_{y}\right)\Psi_{A}(x)+\left(V(x)-\varepsilon\right)\Psi_{B}(x)=0.\label{eq:basic2}$$ Here $V(x)=U(x)/\hbar v_{\textrm{\scriptsize{F}}}$ and energy $\varepsilon$ is measured in units of $\hbar v_{\textrm{\scriptsize{F}}}$.
For the treatment of confined modes within a symmetric electron waveguide, $V(x)=V(-x)$, it is convenient to consider symmetric and anti-symmetric modes. One can see from Eqs. (\[eq:basic1\]-\[eq:basic2\]) that $\Psi_{A}(x)$ and $\Psi_{B}(x)$ are neither even nor odd, so we transform to symmetrized functions: $$\Psi_{1}=\Psi_{A}(x)-i\Psi_{B}(x),\quad\ensuremath{\Psi_{2}}=\ensuremath{\Psi_{A}(x)+i\Psi_{B}(x)}.$$ The wavefunctions $\Psi_{1}$ and $\Psi_{2}$ satisfy the following system of coupled first-order differential equations:$$\left[V(x)-\left(\varepsilon-q_{y}\right)\right]\Psi_{1}-\frac{\mbox{d}\Psi_{2}}{\mbox{d}x}=0,\label{eq:sym1}$$ $$\frac{\mbox{d\ensuremath{\Psi_{1}}}}{\mbox{d}x}+\left[V(x)-\left(\varepsilon+q_{y}\right)\right]\Psi_{2}=0.
\label{eq:sym2}$$ It is clear from Eqs. (\[eq:sym1\]-\[eq:sym2\]) that $\Psi_1$ and $\Psi_2$ have opposite parity.
For an ideal graphene sheet at half-filling the conductivity is expected to vanish due to the vanishing density of states. When the Fermi energy is at the Dirac point ($\varepsilon=0$) there are no charge carriers within the system, so graphene is a perfect insulator. However all available experiments demonstrate non-vanishing minimal conductivity [@DiracFermions; @CastroNetoReview; @Min.; @Con1] of the order of $e^{2}/{h}$ which is thought to be due to disorder within the system [@CastroNetoReview; @MinConTheoryCombined; @DasSarma_PNASUSA_104_2007; @PercolationNetwork] or finite-size effects.[@SizeEffectCombined; @Beenakker_PRL_96_2006] In order to study confined states within and conductance along an electron waveguide it is necessary to use the back gate to fix the Fermi energy ($\varepsilon_{\textrm{\scriptsize{F}}}$) at zero, as shown in Fig. \[fig:Cosh\_1/2\](b). Note that Fig. \[fig:Cosh\_1/2\](a) is just a schematic of the proposed experimental geometry and that side contacts may be needed to maintain the ‘bulk’ Fermi level at zero energy. The conductivity of the graphene sheet is a minimum and for a square sample the conductance is of the order of the conductance carried by a single mode within a waveguide. Thus the appearance of confined modes within the electron waveguide will drastically change the conductance of a graphene flake. Indeed each mode will contribute $4e^{2}/{h}$, taking into account valley and spin degeneracy. For device applications the sample should be designed in such a way that the contribution to the conductance from the confined modes is most prominent. In the ideal case, the conductivity of the channel would be the only contribution to that of the graphene sheet. When $\varepsilon_{\textrm{\scriptsize{F}}}\neq0$, the conductivity will be dominated by the 2D Fermi sea of electrons throughout the graphene sheet. Henceforth, we shall consider the modes for $\varepsilon=0$.
We shall consider truly smooth potentials, allowing us to avoid the statement of “sharp but smooth” potentials, which is commonly used to neglect intervalley mixing for the tunneling problem. [@KleinCombined; @Cheianov_Falko_PRB(R)_74_2006; @Peeters_PRB_74_2006; @Chaplik_JETP_84_2006; @Peeters_APL_90_2007] Furthermore, we are interested in potentials that vanish at infinity and have at least two fitting parameters, characterizing their width and strength, in order to fit experimental potential profiles. [@TopGateCombined; @Kim_PRL_99_2007; @Savchenko_NanoLett_8_2008; @Liu_APL_92_2008; @GG_PRL_102_2009; @Kim_NatPhys_5_2009] Let us consider the following class of potentials, which satisfy the aforementioned requirements:$$V(x)=-\frac{\alpha}{\cosh^{\lambda}(\beta
x)},\label{eq:potential}$$ where $\alpha$, $\beta$ and $\lambda$ are positive parameters. The negative sign in Eq. (\[eq:potential\]) reflects a potential well for electrons, and similar results can easily be obtained for holes by changing the sign of $V(x)$. Notably $\lambda=2$ is the familiar case of the Pöschl-Teller potential, which has an analytic solution for the non-relativistic case.[@PoschlTeller] It is shown in the Appendix that the model potential (\[eq:potential\]) with $\lambda=1$ provides an excellent fit to a graphene top-gate structure.
Eliminating $\Psi_{2}$ ($\Psi_{1}$) reduces the system of Eqs. (\[eq:sym1\]-\[eq:sym2\]) to a single second order differential equation for $\Psi_{1}$ ($\Psi_{2}$), which for the potential given by Eq. (\[eq:potential\]) and $\varepsilon=0$ becomes $$\frac{\mbox{d}^{2}\Psi_{1,2}}{\mbox{d}z^{2}}+\lambda\tanh(z)
\frac{\omega\cosh^{-\lambda}(z)}{\omega\cosh^{-\lambda}(z) \pm
\Delta}\frac{\mbox{d\ensuremath{\Psi_{1,2}}}}{\mbox{d}z}
+\left(\omega^{2}\cosh^{-2\lambda}(z)-\Delta^{2}\right)
\Psi_{1,2}=0, \label{eq:FinalDiff}$$ where we use the dimensionless variables $z=\beta x$, $\Delta= q_{y}/{\beta}$ and $\omega=\alpha/\beta$. For $\lambda=1$, the change of variable $\xi=\tanh(z)$ allows Eq. (\[eq:FinalDiff\]) to be reduced to a set of hypergeometric equations yielding the following non-normalized bound solutions for $\Delta>0$: $$\begin{aligned}
\Psi_{1,2} &=&
\pm{}\left(1+\xi\right)^{p}\left(1-\xi\right)^{q}\:_{2}\mbox{F}_{1}\left(p+q-\omega\mbox{, }
p+q+\omega\mbox{; }2p+\frac{1}{2};\frac{1+\xi}{2}\right)
\nonumber
\\
&+&
(-1)^{n}\left(1+\xi\right)^{q}\left(1-\xi\right)^{p}\,_{2}
\mbox{F}_{1}\left(p+q-\omega,\, p+q+\omega;\,2p+\frac{1}{2};\frac{1-\xi}{2}\right),
\label{eq:wfn1}\end{aligned}$$ where in order to terminate the hypergeometric series it is necessary to satisfy $p=\frac{1}{2}\left(\omega-n\right)+\frac{1}{4}$, $q=\frac{1}{2}\left(\omega-n\right)-\frac{1}{4}$ and $\Delta=\omega-n-\frac{1}{2}$, where $n$ is a positive integer. Though we have assumed that $\omega$ is positive, one can see that the structure of the solutions in Eq. (\[eq:wfn1\]) remains unchanged with the change of sign of $\omega$, reflecting electron-hole symmetry. In order to avoid a singularity at $\xi=\pm1$ we require that both $p>0$ and $q>0$ and obtain the condition that $\omega-n>\frac{1}{2}$. It should be noted that this puts an upper limit on $n$, the order of termination of the hypergeometric series. Notably the first mode occurs at $n=0$, thus there is a lower threshold of $\omega>\frac{1}{2}$ for which bound modes appear. Hence within graphene, quantum wells are very different to the non-relativistic case; bound states are not present for any symmetric potential, they are only present for significantly strong or wide potentials, such that $\omega=\alpha/\beta>\frac{1}{2}$.
Let us consider the first mode ($n=0$) in Eq. (\[eq:wfn1\]) which appears within the electronic waveguide with increasing $\omega$. In this case the hypergeometric function is unity, and the normalized wavefunctions are: $$\Psi_{1,2}= A_{1,2}\left[ \left(1+\xi\right)^{\frac{\omega}{2}-
\frac{1}{4}}\left(1-\xi\right){}^{\frac{\omega}{2}+\frac{1}{4}}\pm
\left(1+\xi\right)^{\frac{\omega}{2}+
\frac{1}{4}}\left(1-\xi\right){}^{\frac{\omega}{2}-
\frac{1}{4}}\right], \label{eq:wfnnorm}$$ where $A_{1,2}$ is given by: $$A_{1,2}=\left\{ \frac{\beta\,
\left(2\omega-1\right)\Gamma\left(\omega\right)\Gamma\left(\omega+\frac{1}{2}\right)}
{4\sqrt{\pi}\left[2\Gamma^{2}\left(\omega+\frac{1}{2}\right)
\pm\left(2\omega-1\right)\Gamma^{2}\left(\omega\right)\right]}
\right\}^{\frac{1}{2}},$$ where $\Gamma(z)$ is the Gamma function. As expected, the two functions given by Eq. (\[eq:wfnnorm\]) are of different parity, thus unlike the non-relativistic case there is an odd function corresponding to the first confined mode. This leads to a threshold in the characteristic potential strength $\omega$ at which the first confined mode appears; much like in the conventional quantum well, where the first odd state appears only for a sufficiently deep or wide potential well. In Fig. \[fig:Cosh\_3/4\] we present $\Psi_{1}$, $\Psi_{2}$ and the corresponding electron density profiles for the first and second bound modes for the case of $\omega=2$. The shape of the confinement potential is shown for guidance within the same figure. The charge density profile for these modes differs drastically from the non-relativistic case. The first mode ($n=0$) has a dip in the middle of the potential well, whereas the second mode ($n=1$) has a maximum. This is a consequence of the complex two-component structure of the wavefunctions.
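For illustration, Eq. (\[eq:wfnnorm\]) can be evaluated numerically with a few lines of Python; the sketch below (with $\beta$ set to unity, i.e. lengths measured in units of $1/\beta$) simply reproduces the $n=0$ wavefunctions and checks their normalization, and is not part of the derivation.

```python
import numpy as np
from scipy.special import gamma

def first_mode(z, omega):
    """Normalized zero-energy wavefunctions (Psi_1, Psi_2) of the n = 0 mode
    for V(x) = -alpha/cosh(beta x) with omega = alpha/beta > 1/2.
    Lengths are measured in units of 1/beta (i.e. beta is set to 1 here)."""
    xi = np.tanh(z)
    f = lambda s: (1.0 + xi)**(omega / 2.0 - s) * (1.0 - xi)**(omega / 2.0 + s)
    num = (2.0 * omega - 1.0) * gamma(omega) * gamma(omega + 0.5)
    A1 = np.sqrt(num / (4.0 * np.sqrt(np.pi)
                 * (2.0 * gamma(omega + 0.5)**2 + (2.0 * omega - 1.0) * gamma(omega)**2)))
    A2 = np.sqrt(num / (4.0 * np.sqrt(np.pi)
                 * (2.0 * gamma(omega + 0.5)**2 - (2.0 * omega - 1.0) * gamma(omega)**2)))
    return A1 * (f(0.25) + f(-0.25)), A2 * (f(0.25) - f(-0.25))

z = np.linspace(-15.0, 15.0, 4001)
psi1, psi2 = first_mode(z, omega=2.0)
# |psi1|^2 + |psi2|^2 is proportional to the electron density profile of the
# mode; its integral should come out close to 1 in these units.
print(np.sum(psi1**2 + psi2**2) * (z[1] - z[0]))
```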
![The wavefunctions $\Psi_1$ (solid line) and $\Psi_2$ (dashed line) are shown for $\omega=2$ for (a) the first ($n=0$) mode and (b) the second ($n=1$) mode. A potential profile is provided as a guide for the eye (dotted line). The insets show the electron density profile for the corresponding modes.[]{data-label="fig:Cosh_3/4"}](fig2a "fig:"){width="6.0cm"} ![The wavefunctions $\Psi_1$ (solid line) and $\Psi_2$ (dashed line) are shown for $\omega=2$ for (a) the first ($n=0$) mode and (b) the second ($n=1$) mode. A potential profile is provided as a guide for the eye (dotted line). The insets show the electron density profile for the corresponding modes.[]{data-label="fig:Cosh_3/4"}](fig2b "fig:"){width="6.0cm"}
For negative values of $\Delta$ , $\Psi_1$ and $\Psi_2$ switch parity such that $\Psi_{1,2}(-\Delta)=\Psi_{2,1}(\Delta)$. This means backscattering within a channel requires a change of parity of the wavefunctions in the $x$-direction. Notably, when another non-equivalent Dirac valley is considered, one finds that there are modes of the same parity propagating in the opposite direction. However intervalley scattering requires a very short-range potential or proximity to the sample edges. Thus for smooth scattering potentials backscattering should be strongly suppressed. Such suppression should result in an increase in the mean free path of the channel compared to that of graphene. This is similar to the suppression of backscattering in carbon nanotubes,[@Ando] where the ballistic regime is believed to persist up to room temperature with a mean free path exceeding one micrometer.[@CNTCombined; @Porntoi_NanoLett_7_2007] In some sense the considered waveguide can be thought of as a carbon nanotube-like structure with parameters controlled by the top gate.
Notably, our results are also applicable for the case of a one-dimensional massive particle which is confined in the same potential. This can be achieved by the substitution $|q_y|\rightarrow m v_{\textrm{\scriptsize{F}}}/\hbar$ into Eqs. (\[eq:sym1\]-\[eq:sym2\]), where the gap is given by $2m v_{\textrm{\scriptsize{F}}}^{2}$. Hence in the massless 2D case the momentum along the waveguide plays the same role as the gap in the massive one-dimensional case. Therefore, in a massive one-dimensional Dirac system (such as a narrow gap carbon nanotube) there exists a bound state in the middle of the gap for certain values of the characteristic strength of the potential.
The number of modes $N_{\omega}$ at a fixed value of $\omega$ is the integer part of $\omega+\frac{1}{2}$ herein denoted $N_{\omega}=\left\lfloor \omega+\frac{1}{2}\right\rfloor $. The conductance of an ideal one-dimensional channel characterized by $\omega$ is found using the Landauer formula to be $G_{\omega}=4
N_{\omega} e^{2}/ h$. By modulating the parameters of the potential, one can increase the conductance of the channel from zero in jumps of $4e^{2}/{h}$. The appearance of the first and further confined modes within the conducting channel modifies both the strength and the profile of the potential. This nonlinear screening effect [@Fogler_PRL_100_2008; @Fogler_PRB_77_2008] is neglected in the above expression for $G_{\omega}$ and shall be a subject of future investigation.
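The two relations above translate directly into code; the following sketch merely tabulates $N_{\omega}$ and the corresponding ideal channel conductance in units of $e^{2}/h$, neglecting the nonlinear screening discussed above.

```python
import math

def n_modes(omega):
    """Number of confined zero-energy modes, N = floor(omega + 1/2),
    for V(x) = -alpha/cosh(beta x) with omega = alpha/beta."""
    return int(math.floor(omega + 0.5))

def channel_conductance(omega):
    """Ideal channel conductance G = 4*N*e^2/h (spin and valley degeneracy),
    returned in units of e^2/h and neglecting nonlinear screening."""
    return 4 * n_modes(omega)

for omega in (0.4, 0.6, 1.2, 2.0, 3.7):
    print("omega = %.1f  N = %d  G = %d e^2/h"
          % (omega, n_modes(omega), channel_conductance(omega)))
```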
Exact solutions for confined modes can also be found for the Pöschl-Teller potential, $V(x)=-\alpha/\cosh^2(\beta x)$, which corresponds to $\lambda=2$ in Eq. (\[eq:FinalDiff\]). Wavefunctions can be expressed via Heun polynomials in variable $\xi=\tanh(\beta x)$.
Discussion and conclusions
==========================
All the results obtained in this paper have been for a specific potential. However, general conclusions can be drawn from these results for any symmetric potential. Namely, the product of the potential strength and its width dictates the number of confined modes within the channel.[@comment] Moreover, this product has a threshold value for which the first mode appears. The width of the potential is defined by the geometry of the top gate structure, and the strength of the potential is defined by the voltage applied to the top gate. The mean free path of electrons within graphene is of the order of 100nm and sub-100nm width gates have been reported in the literature, [@TopGateCombined; @Kim_PRL_99_2007; @Savchenko_NanoLett_8_2008; @Liu_APL_92_2008; @GG_PRL_102_2009; @Kim_NatPhys_5_2009] making quantum effects relevant. The number of modes within such top-gated structures is governed by the strength of the potential, with new modes appearing with increasing potential strength. In a top gate structure modeled by our potential with a width at half maximum of 50nm, the first bound mode should appear for a potential strength of approximately 17meV and further modes should appear with increasing potential strength in steps of 34meV, which corresponds to 395K. Therefore a noticeable change in conductivity should be observed in realistic structures even at room temperature. This is similar to the quantum Hall effect which is observed in graphene at room temperature.[@QHE] A change of geometry, from normal transmission to propagation along a potential, allows graphene to be used as a switching device.
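The estimate above can be reproduced with a short calculation; in the sketch below the quoted $50\,$nm is interpreted as the full width at half maximum of the $1/\cosh$ profile (an assumption on our part) and $\hbar v_{\textrm{\scriptsize{F}}}\simeq0.66\:$eV$\,$nm is used, so the few-percent differences with respect to the quoted numbers come only from rounding.

```python
import numpy as np

hbar_vF = 0.658   # eV*nm  (hbar = 6.58e-16 eV s, v_F = 1e6 m/s)
k_B = 8.617e-5    # eV/K

fwhm = 50.0                               # nm, assumed full width at half maximum
# |V| = alpha/cosh(beta*x) drops to half its depth where cosh(beta*x) = 2.
beta = np.arccosh(2.0) / (fwhm / 2.0)     # 1/nm
step = hbar_vF * beta                     # a new mode appears every Delta(alpha) = hbar*v_F*beta
threshold = 0.5 * step                    # first mode at omega = alpha/(hbar*v_F*beta) = 1/2

print("threshold depth ~ %.0f meV" % (1e3 * threshold))
print("mode spacing    ~ %.0f meV ~ %.0f K" % (1e3 * step, step / k_B))
```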
In summary, we show that contrary to the widespread belief, truly confined (non-leaky) modes are possible in graphene in a smooth electrostatic potential vanishing at infinity. Full confinement is possible for zero-energy modes due to the vanishing density of states at the charge neutrality point. We present exact analytical solutions for fully confined zero-energy modes in the potential $V(x)=-\alpha/\cosh(\beta x)$, which provides a good fit (see Appendix) to experimental potential profiles in existing top gate structures. [@TopGateCombined; @Kim_PRL_99_2007; @Savchenko_NanoLett_8_2008; @Liu_APL_92_2008; @GG_PRL_102_2009; @Kim_NatPhys_5_2009] Within such a potential there is a threshold value of $\omega={\alpha}/{\beta}$ for which bound modes first appear, which is different to conventional non-relativistic systems. We found a simple relation between the number of confined modes and the characteristic potential strength $\omega$. The threshold potential strength enables on/off behavior within the graphene waveguide, and suggests future device applications. The existence of bound modes within smooth potentials in graphene may provide an additional argument in favor of the mechanism for minimal conductivity, where charge puddles lead to a percolation network of conducting channels.[@PercolationNetwork]
There are experimental challenges which need to be resolved in order to observe confined modes in graphene waveguides. These include creating narrow gates and thin dielectric layers as well as optimizing the geometry of the sample to reduce the background conductance. Our work also poses further theoretical problems, including the study of non-linear screening, many-body effects, parity changing backscattering and inter-valley scattering within the channel.
The study of quasi-one-dimensional channels within conventional semiconductor systems has led to many interesting effects. Many problems are still outstanding, including the 0.7 anomaly in the ballistic conductance, which is a subject of extensive experimental and theoretical study.[@1DSystem] We envisage that the ability to produce quasi-one-dimensional channels within graphene will reveal new and non-trivial physics.
We are grateful to A.V. Shytov, A.S. Mayorov, Y. Kopelevich, J.C. Inkson, D.C. Mattis and N. Hasselmann for valuable discussions and we thank ICCMP Brasília and UNICAMP for hospitality. This work was supported by EPSRC (RRH and NJR), EU projects ROBOCON (FP7-230832) and TerACaN (FP7-230778), MCT, IBEM and FINEP (Brazil).
Potential due to a wire above a graphene sheet
==============================================
To illustrate the relevance of our model potential to realistic top-gate structures we provide a simple calculation of the potential distribution in the graphene plane for a simplified top-gate structure. Fig. \[fig:setup\] shows the model that we use for our estimate. We consider a wire of radius $r_{0}$, separated by a distance $h$ from a metallic substrate (e.g. doped Si), and calculate the potential profile in the graphene plane separated from the same substrate by a distance $d$. We are interested in the case when the Fermi level in graphene is at zero energy (the Dirac point), hence we assume the absence of free carriers in graphene. We also assume the absence of any dielectric layers, but the problem can easily be generalized to include them.
![The simplified geometry used to obtain the model potential. The image charge is shown for convenience.[]{data-label="fig:setup"}](fig3){width="11cm"}
One can easily show that the potential energy for an electron in the graphene plane is given by $$U(x)=\frac{e\widetilde{\phi_{0}}}{2}\ln\left(\frac{x^{2}+(h-d)^{2}}{x^{2}+(h+d)^{2}}\right),
\label{model1}$$ where $\widetilde{\phi_{0}}=\phi_{0}/\ln((2h-r_{0})/r_{0})$, $\phi_{0}$ is the voltage applied between the top electrode and metallic substrate and $e$ is the absolute value of the electron charge. One can see that this potential behaves as: $$\begin{aligned}
U(x) \approx -U_{0}+{e\widetilde{\phi_{0}}}\;\frac{2hd}{(h^{2}-d^{2})^{2}}\,
x^{2},
\qquad
x\ll(h-d);
\nonumber
\\
U(x) \approx -{e\widetilde{\phi_{0}}}\;\frac{2hd}{x^{2}},
\nonumber
\qquad x\gg(h+d).\end{aligned}$$ The depth of the potential well is given by $$U_{0}=e\phi_{0}\left[\frac{\ln\left(\frac{h+d}{h-d}\right)}{\ln\left(\frac{2h-r_{0}}{r_{0}}\right)}\right]
\nonumber$$ and the half width at half maximum (HWHM) is given by $x_{0}=\sqrt{h^{2}-d^{2}}$. In Fig. \[fig:potentials\] we show a comparison between the potential given by Eq. (\[model1\]) and the potential considered in our paper with the same HWHM and potential strength. Clearly, the potential given by $-U_{0}/{\cosh\left(\beta x\right)}$ provides a significantly better approximation than a square well potential.
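A minimal numerical sketch of this comparison, for an arbitrary illustrative geometry rather than a fitted device, reads:

```python
import numpy as np

def wire_potential(x, h, d):
    """U(x)/(e*phi0_tilde) of Eq. (model1): potential energy in the graphene
    plane at height d above the back plane, below a wire at height h."""
    return 0.5 * np.log((x**2 + (h - d)**2) / (x**2 + (h + d)**2))

h, d = 3.0, 1.0                         # illustrative geometry (arbitrary length units)
x0 = np.sqrt(h**2 - d**2)               # half width at half maximum of U(x)
U0 = np.log((h + d) / (h - d))          # well depth, in units of e*phi0_tilde
beta = np.arccosh(2.0) / x0             # 1/cosh model with the same HWHM

for x in np.linspace(0.0, 4.0 * x0, 9):
    print("%6.2f  %9.4f  %9.4f" % (x, wire_potential(x, h, d), -U0 / np.cosh(beta * x)))
```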
![A comparison between the potential created by a wire suspended above the graphene plane (solid line) and the $-U_0/\cosh{(\beta x)}$ potential with the same half width at half maximum (dashed line).[]{data-label="fig:potentials"}](fig4){width="9cm"}
[100]{}
O. Klein, Z. Phys. **53,** 157 (1929).
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature (London) **438**, 197 (2005).
A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. **81**, 109 (2009).
V. V. Cheianov, V. I. Fal’ko, and B. L. Altshuler, Science **315**, 5816 (2007).
A. V. Shytov, M. S. Rudner, and L. S. Levitov, Phys. Rev. Lett. **101**, 156804 (2008).
L. Zhao and S. F. Yelin, arXiv:0804.2225; Phys. Rev. B **81**, 115441 (2010).
C. W. J. Beenakker, R. A. Sepkhanov, A. R. Akhmerov, and J. Tworzydło, Phys. Rev. Lett. **102**, 146804 (2009).
F. M. Zhang, Y. He, and X. Chen, Appl. Phys. Lett. **94**, 212105 (2009).
M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nature Phys. **2**, 620 (2006).
V. V. Cheianov and V. I. Fal’ko, Phys. Rev. B **74**, 041403(R) (2006).
J. M. Pereira Jr., V. Mlinar, F. M. Peeters, and P. Vasilopoulos, Phys. Rev. B **74**, 045424 (2006).
T. Ya. Tudorovskiy and A. V. Chaplik, JETP Lett. **84**, 619 (2006).
J. M. Pereira Jr., P. Vasilopoulos, and F. M. Peeters, Appl. Phys. Lett. **90**, 132122 (2007).
L. M. Zhang and M. M. Fogler, Phys. Rev. Lett. **100**, 116804 (2008).
M. M. Fogler, D. S. Novikov, L. I. Glazman, and B. I. Shklovskii, Phys. Rev. B **77**, 075420 (2008).
C. W. J. Beenakker, Rev. Mod. Phys. **80**, 1337 (2008).
H. C. Nguyen, M. T. Hoang, and V. L. Nguyen, Phys. Rev. B **79**, 035411 (2009).
J. R. Williams, L. DiCarlo, and C. M. Marcus, Science **317**, 638 (2007).
B. Huard, J. A. Sulpizio, N. Stander, K. Todd, B. Yang, and D. Goldhaber-Gordon, Phys. Rev. Lett. **98**, 236803 (2007).
B. Özyilmaz, P. Jarillo-Herrero, D. Efetov, D. A. Abanin, L. S. Levitov, and P. Kim, Phys. Rev. Lett. **99**, 166804 (2007).
R. V. Gorbachev, A. S. Mayorov, A. K. Savchenko, D. W. Horsell, and F. Guinea, Nano Lett. **8**, 1995 (2008).
G. Liu, J. Velasco Jr., W. Bao, and C. N. Lau, Appl. Phys. Lett. **92**, 203103 (2008).
N. Stander, B. Huard, and D. Goldhaber-Gordon, Phys. Rev. Lett. **102**, 026807 (2009).
A. F. Young and P. Kim, Nature Phys. **5**, 222 (2009).
M. Y. Han, B. Özylimaz, Y. Zhang, and P. Kim, Phys. Rev. Lett. **98**, 206805 (2007).
L. Brey and H. A. Fertig, Phys. Rev. B **73**, 235411 (2006).
P. G. Silvestrov and K. B. Efetov, Phys. Rev. Lett. **98**, 016802 (2007).
N. M. R. Peres, J. N. B. Rodrigues, T. Stauber, and J. M. B. Lopes dos Santos, J. Phys.: Condens. Matter **21**, 344202 (2009).
D, W. Horsell, A. K. Savchenko, F. V. Tikhonenko, K. Kechedzhi, I. V. Lerner, and V. I. Fal’ko, Solid State Commun. **149**, 1041 (2009).
Y. -W. Tan, Y. Zhang, K. Bolotin, Y. Zhao, S. Adam, E. H. Hwang, S. Das Sarma, H. L. Stormer, and P. Kim, Phys. Rev. Lett. **99**, 246803 (2007).
P. M. Ostrovsky, I. V. Gornyi, and A. D. Mirlin, Phys. Rev. Lett. **98**, 256801 (2007). S. Adam, E. H. Hwang, V. M. Galitski, and S. Das Sarma, Proc. Natl. Acad. Sci. USA **104**, 18392 (2007).
V. V. Cheianov, V. I. Fal’ko, B. L. Altshuler, and I. L. Aleiner, Phys. Rev. Lett. **99**, 176801 (2007).
M. I. Katsnelson, Eur. Phys. J. B **51**, 157 (2006).
J. Tworzydło, B. Trauzettel, M. Titov, A. Rycerz, and C. W. J. Beenakker, Phys. Rev. Lett. **96**, 246802 (2006).
G. Pöschl and E. Teller, Z. Phys. **83**, 143 (1933).
T. Ando, T. Nakanishi and R. Saito, J. Phys. Soc. Japan **67**, 2857 (1998).
M. P. Anantram and F. Léonard, Rep. Prog. Phys. **69**, 507 (2006).
O. V. Kibis, M. Rosenau da Costa, and M. E. Portnoi, Nano Lett. **7**, 3414 (2007) and references therein.
For example, in a finite square well of width $d$ and depth $U_0$, new zero-energy states appear when $U_0 d/\hbar v_{\textrm{\scriptsize{F}}}~=~(2n+1){}\pi/{}2$, where $n=0,1,2,\ldots$
K. S. Novoselov, Z. Jiang, Y. Zhang, S. V. Morozov, H. L. Stormer, U. Zeitler, J. C. Maan, G. S. Boebinger, P. Kim, and A. K. Geim, Science **315**, 1379 (2007).
I. A. Shelykh, M. Rosenau da Costa, and A. C. Seridonio, J. Phys.: Condens. Matter **20**, 164214 (2007) and references therein.
|
---
abstract: 'We present the results of a theoretical study of the current-phase relation (CPR) $J_{S}(\varphi )$ in Josephson junctions of the SIsFS type, where ’S’ is a bulk superconductor and ’IsF’ is a complex weak link consisting of a superconducting film ’s’, a metallic ferromagnet ’F’ and an insulating barrier ’I’. At temperatures close to critical, $T\lesssim T_{C}$, calculations are performed analytically in the frame of the Ginzburg-Landau equations. At low temperatures a numerical method is developed to solve the Usadel equations in the structure self-consistently. We demonstrate that SIsFS junctions have several distinct regimes of supercurrent transport and we examine the spatial distribution of the pair potential across the structure in the different regimes. We study the crossover between these regimes which is caused by shifting the location of the weak link from the tunnel barrier ’I’ to the F layer. We show that strong deviations of the CPR from the sinusoidal shape occur even in the vicinity of $T_{C}$, and these deviations are strongest in the crossover regime. We demonstrate the existence of a temperature-induced crossover between the 0 and $\pi$ states of the contact and show that the smoothness of this transition strongly depends on the CPR shape.'
author:
- 'S. V. Bakurskiy'
- 'N. V. Klenov'
- 'I. I. Soloviev'
- 'M. Yu. Kupriyanov'
- 'A. A. Golubov'
title: Theory of supercurrent transport in SIsFS Josephson junctions
---
Introduction
============
Josephson structures with a ferromagnetic layer have become a very active field of research because of the interplay between superconducting and magnetic order in a ferromagnet, leading to a variety of new effects including the realization of a $\pi $-state with phase difference $\pi $ in the ground state of a junction, as well as long-range Josephson coupling due to the generation of an odd-frequency triplet order parameter [@RevG; @RevB; @RevV].
Further interest in Josephson junctions with a magnetic barrier is due to emerging possibilities of their practical use as elements of a superconducting memory [@Oh]$^{-}$ [@APL], on-chip $\pi$-phase shifters for self-biasing various electronic quantum and classical circuits [@Rogalla]$^{-}$ [@Ustinov], as well as $\varphi$-batteries, structures having in the ground state a phase difference $\varphi _{g}=\varphi $, $(0<|\varphi |<\pi )$, between the superconducting electrodes [@Buzdin; @Koshelev; @Pugach; @Gold1; @Gold2; @Bakurskiy; @Heim; @Linder; @Chan]. In standard experimental implementations SFS Josephson contacts are sandwich-type structures [@ryazanov2001]$^{-}$ [@Ryazanov2006a]. The characteristic voltage $V_{C}=J_{C}R_{N}$ ($J_{C}$ is the critical current of the junction, $R_{N}$ is the resistance in the normal state) of these SFS devices is typically quite low, which limits their practical applications. In SIFS structures [@Kontos]$^{-}$ [@Weides3] containing an additional tunnel barrier I, the $J_{C}R_{N}$ product in the $0$-state is increased [@Ryazanov3]; however, in the $\pi $-state $V_{C}$ is still too small [@Vasenko; @Vasenko1] due to the strong suppression of the superconducting correlations in the ferromagnetic layer.
Recently, a new SIsFS type of magnetic Josephson junction was realized experimentally [@Ryazanov3; @Larkin; @Vernik; @APL]. This structure represents a series connection of an SIs tunnel junction and an sFS contact. Properties of SIsFS structures are controlled by the thickness $d_{s}$ of the s layer and by the relation between the critical currents $J_{CSIs}$ and $J_{CsFS}$ of their SIs and sFS parts, respectively. If the thickness of the s layer is much larger than its coherence length $\xi _{S}$ and $J_{CSIs}\ll J_{CsFS}$, then the characteristic voltage of an SIsFS device is determined by its SIs part and may reach the maximum corresponding to a standard SIS junction. At the same time, the phase difference $\varphi $ in the ground state of an SIsFS junction is controlled by its sFS part. As a result, both $0$- and $\pi $-states can be achieved depending on the thickness of the F layer. This opens the possibility to realize controllable $\pi $ junctions having a large $J_{C}R_{N}$ product. At the same time, when placed in an external magnetic field $H_{ext}$, an SIsFS structure behaves as a single junction, since the s layer is typically too thin to screen $H_{ext}$. This provides the possibility to switch $J_{C}$ by an external field.
However, a theoretical analysis of SIsFS junctions has not been performed up to now. The purpose of this paper is to develop a microscopic theory providing the dependence of the characteristic voltage on the temperature $T$, the exchange energy $H$ in the ferromagnet, the transport properties of the FS and sF interfaces, and the thicknesses of the s and F layers. Special attention will be given to determining the current-phase relation (CPR) between the supercurrent $J_{S}$ and the phase difference $\varphi $ across the structure.
![ a) Schematic design of SIsFS Josephson junction. b), c) Typical distribution of amplitude $|\Delta (x)|$ and phase difference $\protect\chi (x)$ of pair potential along the structure. []{data-label="design"}](design.eps){width="8.5cm"}
Model of SIsFS Josephson device \[Model\]
=========================================
We consider the multilayered structure presented in Fig.\[design\]a. It consists of two superconducting electrodes separated by a complex interlayer including a tunnel barrier I, an intermediate superconducting film s and a ferromagnetic film F. We assume that the conditions of the dirty limit are fulfilled for all materials in the structure. In order to simplify the problem, we also assume that all superconducting films are identical and can be described by a single critical temperature $T_{C}$ and coherence length $\xi _{S}.$ Transport properties of both the sF and FS interfaces are also assumed identical and are characterized by the interface parameters $$\gamma =\frac{\rho _{S}\xi _{S}}{\rho _{F}\xi _{F}},\quad \gamma _{B}=\frac{R_{BF}\mathcal{A}_{B}}{\rho _{F}\xi _{F}}. \label{gammas}$$Here $R_{BF}$ and $\mathcal{A}_{B}$ are the resistance and area of the sF and FS interfaces, $\xi _{S}$ and $\xi _{F}$ are the decay lengths of the S and F materials, while $\rho _{S}$ and $\rho _{F}$ are their resistivities.
Under the above conditions the problem of calculation of the critical current in the SIsFS structure reduces to solution of the set of the Usadel equations[@Usadel]. For the S layers these equations have the form [@RevG; @RevB; @RevV] $$\frac{\xi _{S}^{2}}{\Omega G_{m}}\frac{d}{dx}\left( G_{m}^{2}\frac{d}{dx}\Phi _{m}\right) -\Phi _{m}=-\Delta _{m},~G_{m}=\frac{\Omega }{\sqrt{\Omega
^{2}+\Phi _{m}\Phi _{m}^{\ast }}}, \label{fiS}$$$$\Delta _{m}\ln \frac{T}{T_{C}}+\frac{T}{T_{C}}\sum_{\omega =-\infty
}^{\infty }\left( \frac{\Delta _{m}}{\left\vert \Omega \right\vert }-\frac{\Phi _{m}G_{m}}{\Omega }\right) =0, \label{delta}$$where $m=S$ for $x\leq -d_{s}~$and$~x\geq d_{F};$ $m=s$ in the interval $-d_{s}\leq x\leq 0.$ In the F film $(0\leq x\leq d_{F})$ they are $$\xi _{F}^{2}\frac{d}{dx}\left( G_{F}^{2}\frac{d}{dx}\Phi _{F}\right) -\widetilde{\Omega }\Phi _{F}G_{F}=0. \label{FiF}$$Here $\Omega =T(2n+1)/T_{C}$ are Matsubara frequencies normalized to $\pi
T_{C}$, $\widetilde{\Omega }=\Omega +iH/\pi T_{C},$ $G_{F}=\widetilde{\Omega
}/(\widetilde{\Omega }^{2}+\Phi _{F,\omega }\Phi _{F,-\omega }^{\ast
})^{1/2},$ $H$ is the exchange energy, $\xi _{S,F}^{2}=(D_{S,F}/2\pi T_{C})$ and $D_{S,F}$ are the diffusion coefficients in the S and F metals, respectively. The pair potential $\Delta _{m}$ and the Usadel functions $\Phi _{m}$ and $\Phi _{F}$ in (\[fiS\]) - (\[FiF\]) are also normalized to $\pi T_{C}.$ To write equations (\[fiS\]) - (\[FiF\]), we have chosen the $x$ axis in the direction perpendicular to the SI, FS and sF interfaces and put the origin at the sF interface. Equations (\[fiS\]) - (\[FiF\]) must be supplemented by the boundary conditions [@KL]. At $x=-d_{s}$ they can be written as $$\begin{gathered}
G_{S}^{2}\frac{d}{dx}\Phi _{S}=G_{s}^{2}\frac{d}{dx}\Phi _{s}, \label{BCIF}
\\
\gamma _{BI}\xi _{S}G_{s}\frac{d}{dx}\Phi _{s}=-G_{S}\left( \Phi _{S}-\Phi
_{s}\right) , \notag\end{gathered}$$where $\gamma _{BI}=R_{BI}\mathcal{A}_{B}/\rho _{S}\xi _{S}$, $R_{BI}$ and $\mathcal{A}_{B}$ are resistance and area of SI interface. At $x=0$ the boundary conditions are$$\begin{gathered}
\frac{\xi _{S}}{\Omega }G_{s}^{2}\frac{d}{dx}\Phi _{s}=\gamma \frac{\xi _{F}}{\widetilde{\Omega }}G_{F}^{2}\frac{d}{dx}\Phi _{F}, \label{BCsF} \\
\gamma _{B}\xi _{F}G_{F}\frac{d}{dx}\Phi _{F}=-G_{s}\left( \frac{\widetilde{\Omega }}{\Omega }\Phi _{s}-\Phi _{F}\right) \notag\end{gathered}$$and at $x=d_{F}$ they have the form $$\begin{gathered}
\frac{\xi _{S}}{\Omega }G_{S}^{2}\frac{d}{dx}\Phi _{S}=\gamma \frac{\xi _{F}}{\widetilde{\Omega }}G_{S}^{2}\frac{d}{dx}\Phi _{F}, \label{BCSF} \\
\gamma _{B}\xi _{F}G_{F}\frac{d}{dx}\Phi _{F}=G_{S}\left( \frac{\widetilde{\Omega }}{\Omega }\Phi _{S}-\Phi _{F}\right) , \notag\end{gathered}$$Far from the interfaces the solution should cross over to a uniform current-carrying superconducting state[@Ivanov]$^{-}$[@KuperD]$$\Phi _{S}(\mp \infty )=\Phi _{\infty }\exp \left\{ i(\chi (\mp \infty
)-ux/\xi _{S})\right\} ,~ \label{BCInf1}$$$$\Delta _{S}(\mp \infty )=\Delta _{0}\exp \left\{ i(\chi (\mp \infty )-ux/\xi
_{S})\right\} , \label{BCINFD}$$$$\Phi _{\infty }=\frac{\Delta _{0}}{1+u^{2}/\sqrt{\Omega ^{2}+|\Phi _{S}|^{2}}}, \label{BCinf2}$$resulting in order parameter phase difference across the structure equal to$$\varphi =\varphi (\infty )-2ux/\xi _{S},~\varphi (\infty )=\chi (\infty
)-\chi (-\infty ). \label{PhDifInf}$$Here $\varphi (\infty )$ is the asymptotic phase difference across the junction, $\Delta _{0}$ is the modulus of the order parameter far from the boundaries of the structure at a given temperature, $u=2mv_{s}\xi _{S},$ $m$ is the electron mass and $v_{s}$ is the superfluid velocity. Note that since the boundary conditions (\[BCIF\]) - (\[BCsF\]) include the Matsubara frequency $\Omega $, the phases of the $\Phi _{S}$ functions depend on $\Omega $ and are different from the phase of the pair potential $\Delta _{S}$ at the FS interfaces, $\chi (d_{F})$ and $\chi (0).$ Therefore it is the value $\varphi (\infty )$, rather than $\varphi =\chi (d_{F})-\chi (0)$, that can be measured experimentally by using a scheme compensating the part linear in $x$ in Eq. (\[PhDifInf\]).
The boundary problem (\[fiS\])-(\[PhDifInf\]) can be solved numerically making use of (\[BCInf1\]), (\[BCinf2\]). Accuracy of calculations can be monitored by equality of currents $J_{S}$ $$\frac{2eJ_{S}(\varphi )}{\pi T\mathcal{A}_{B}}=\sum\limits_{\omega =-\infty
}^{\infty }\frac{iG_{m,\omega }^{2}}{\rho _{m}\widetilde{\Omega }^{2}}\left[
\Phi _{m,\omega }\frac{\partial \Phi _{m,-\omega }^{\ast }}{\partial x}-\Phi
_{m,-\omega }^{\ast }\frac{\partial \Phi _{m,\omega }}{\partial x}\right] ,
\label{current}$$calculated at the SI and FS interfaces and in the electrodes.
In the further analysis carried out below we limit ourselves to the most relevant case of a low-transparency tunnel barrier at the SI interface$$\gamma _{BI}\gg 1. \label{LargeGammaBI}$$In this approximation, the junction resistance $R_{N}$ is fully determined by the barrier resistance $R_{BI}$. Furthermore, the current flowing through the electrodes can lead to the suppression of superconductivity only in the vicinity of the sF and FS interfaces. This means that, up to terms of the order of $\gamma _{BI}^{-1}$, we can neglect the suppression of superconductivity in the region $x\leq -d_{s}~$ and write the solution in the form $$\Phi _{S}(x)=\Delta _{S}(x)=\Delta _{0}. \label{Sol_Left}$$Here, without any loss of generality, we put $\chi (-\infty )=\chi (-d_{s}-0)=0 $ (see Fig. \[design\]c).
Substitution of (\[Sol\_Left\]) into boundary conditions (\[BCIF\]) gives$$\gamma _{BI}\xi _{S}G_{s}\frac{d}{dx}\Phi _{s}=-\frac{\Omega }{\sqrt{\Omega
^{2}+\Delta _{0}^{2}}}\left( \Delta _{0} -\Phi _{s}\right) . \label{BCIFmod}$$Further simplifications are possible in several limiting cases.
The high temperature limit $T\approx T_{C}$\[SecHighTC\]
========================================================
In the vicinity of the critical temperature the Usadel equations in the F layer can be linearized. Writing down their solution in analytical form and using the boundary conditions (\[BCsF\]), (\[BCSF\]) at the sF and FS interfaces, we can reduce the problem to the solution of the Ginzburg-Landau (GL) equations in the s and S layers. We limit our analysis to the most interesting case when the following condition is fulfilled: $$\Gamma _{BI}=\frac{\gamma _{BI}\xi _{S}}{\xi _{S}(T)}\gg 1, \label{GammaBB1}$$and when there is strong suppression of superconductivity in the vicinity of the sF and FS interfaces. The latter takes place if the parameter $\Gamma $ $$\Gamma =\frac{\gamma \xi _{S}(T)}{\xi _{S}},~\xi _{S}(T)=\frac{\pi \xi _{S}}{2\sqrt{1-T/T_{C}}} \label{GammasB1}$$satisfies the conditions $$\Gamma p\gg 1,~\Gamma q\gg 1. \label{condSup}$$Here $$\begin{aligned}
p^{-1} &=&\frac{8}{\pi ^{2}}\operatorname{\mathop{\mathrm{Re}}}\sum_{\omega =0}^{\infty }\frac{1}{\Omega ^{2}\sqrt{\widetilde{\Omega }}\coth \frac{d_{F}\sqrt{\widetilde{\Omega }}}{2\xi
_{F}}}, \label{Sums11} \\
q^{-1} &=&\frac{8}{\pi ^{2}}\operatorname{\mathop{\mathrm{Re}}}\sum_{\omega =0}^{\infty }\frac{1}{\Omega ^{2}\sqrt{\widetilde{\Omega }}\tanh \frac{d_{F}\sqrt{\widetilde{\Omega }}}{2\xi
_{F}}}. \label{Sums12}\end{aligned}$$
Note that in the limit $h=H/\pi T_{C}\gg 1$ and $d_{F}\gg \sqrt{2/h}\xi _{F}$ the sums in (\[Sums11\]), (\[Sums12\]) can be evaluated analytically resulting in $$\beta =\frac{p-q}{p+q}=\sqrt{8}\sin \left( \frac{d_{F}}{\xi _{F}}\sqrt{\frac{h}{2}}+\frac{3\pi }{4}\right) \exp \left( -\frac{d_{F}}{\xi _{F}}\sqrt{\frac{h}{2}}\right) , \label{pq1}$$$$p+q=2\sqrt{2h}\left( T/T_{C}\right) ^{2},~\ \ \ pq=2h\left( T/T_{C}\right)
^{4}.\ \ \label{pq2}$$
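As a cross-check, the sums (\[Sums11\])-(\[Sums12\]) are easily evaluated numerically; the sketch below (with illustrative parameter values and the sums truncated at a large but finite number of Matsubara frequencies) compares the resulting $\beta =(p-q)/(p+q)$ with the asymptotic expression (\[pq1\]).

```python
import numpy as np

def p_q(t, h, dF_over_xiF, n_max=2000):
    """Numerical evaluation of the Matsubara sums (Sums11)-(Sums12):
    t = T/T_C, h = H/(pi T_C), dF_over_xiF = d_F/xi_F."""
    n = np.arange(n_max)
    Om = t * (2 * n + 1)                    # Omega, in units of pi T_C
    Omt = Om + 1j * h                       # Omega-tilde
    arg = 0.5 * dF_over_xiF * np.sqrt(Omt)
    inv_p = (8.0 / np.pi**2) * np.sum((np.tanh(arg) / (Om**2 * np.sqrt(Omt))).real)
    inv_q = (8.0 / np.pi**2) * np.sum((1.0 / (np.tanh(arg) * Om**2 * np.sqrt(Omt))).real)
    return 1.0 / inv_p, 1.0 / inv_q

t, h, dF = 0.9, 10.0, 1.0                   # illustrative values only
p, q = p_q(t, h, dF)
y = dF * np.sqrt(h / 2.0)
beta_asym = np.sqrt(8.0) * np.sin(y + 3.0 * np.pi / 4.0) * np.exp(-y)
print("beta (sums) = %.3f   beta (Eq. pq1) = %.3f" % ((p - q) / (p + q), beta_asym))
print("p+q (sums)  = %.3f   2*sqrt(2h)*t^2 = %.3f" % (p + q, 2.0 * np.sqrt(2.0 * h) * t**2))
```

The two estimates of $\beta$ should agree up to corrections which are small in $1/h$.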
In general, the phases of the order parameters in the s and S films are functions of the coordinate $x$. In the considered approximation the terms that take into account the coordinate dependence of the phases are proportional to the small parameters $(\Gamma q)^{-1}$ and $(\Gamma p)^{-1}$ and therefore provide only small corrections to the current. For this reason, in the first approximation we can assume that the phases in the superconducting electrodes are constants independent of $x$. In the further analysis we denote the phase of the s film by $\chi $ and that of the right S electrode by $\varphi $ (see Fig.\[design\]c).
The details of the calculations are summarized in the Appendix \[Appendix\]. These calculations show that the considered SIsFS junction has two modes of operation depending on the relation between the s-layer thickness $d_{s}$ and the critical thickness $d_{{sc}}=(\pi /2)\xi _{S}(T).$ For $d_{s}$ larger than $d_{sc}$, the s-film keeps its intrinsic superconducting properties (*mode (1)*), while for $d_{s}\leq d_{sc}$ superconductivity in the s-film exists only due to the proximity effect with the bulk S electrodes (*mode (2)*).
Mode (1): SIs $+$ sFS junction $d_{s}\geq d_{sc}$
-------------------------------------------------
We begin our analysis with the regime when the intermediate s-layer is intrinsically superconducting. In this case it follows from the solution of the GL equations that the supercurrent flowing across the SIs, sF and FS interfaces ($J(-d_{s}),$ $J(0)$ and $J(d_{F}),$ respectively) can be represented in the form (see Appendix \[Appendix\]) $$\frac{J_{S}(-d_{s})}{J_{G}}=\frac{\delta _{s}(-d_{s})}{\Gamma _{BI}\Delta
_{0}}\sin \left( \chi \right) ,~J_{G}=\frac{\pi \Delta _{0}^{2}\mathcal{A}_{B}}{4e\rho _{S}T_{C}\xi _{S}(T)}, \label{JatIS1}$$$$\frac{J_{S}(0)}{J_{G}}=\frac{J_{S}(d_{F})}{J_{G}}=\frac{\Gamma (p-q)}{2\Delta _{0}^{2}}\delta _{s}(0)\delta _{S}(d_{F})\sin \left( \varphi -\chi
\right) , \label{JatSF1}$$where $\Delta _{0}=\sqrt{8\pi ^{2}T_{C}(T_{C}-T)/7\zeta (3)}$ is the bulk value of the order parameter in the S electrodes, $\mathcal{A}_{B}$ is the cross-sectional area of the structure, and $\zeta (z)$ is the Riemann zeta function. Here $$\delta _{s}(0)=\frac{2b\left( p-q\right) \cos \left( \varphi -\chi \right)
-2a\left( p+q\right) }{\Gamma \left[ \left( p+q\right) ^{2}-\left(
p-q\right) ^{2}\cos ^{2}\left( \varphi -\chi \right) \right] },
\label{delta1(0)}$$$$\delta _{S}(d_{F})=\frac{2b\left( p+q\right) -2a\left( p-q\right) \cos
\left( \varphi -\chi \right) }{\Gamma \left( \left( p+q\right) ^{2}-\left(
p-q\right) ^{2}\cos ^{2}\left( \varphi -\chi \right) \right) },
\label{delta1(df)}$$are the order parameters at the sF and FS interfaces, respectively (see Fig.\[design\]b) and $$a=-\delta _{s}(-d_{s})\sqrt{1-\frac{\delta _{s}^{2}(-d_{s})}{2\Delta _{0}^{2}}},~b=\frac{\Delta _{0}}{\sqrt{2}}, \label{derivaties1}$$where $\delta _{s}(-d_{s})$ is the solution of the transcendental equation $$K\left( \frac{\delta _{s}(-d_{s})}{\Delta _{0}\eta }\right) =\frac{d_{s}\eta
}{\sqrt{2}\xi _{s}(T)},~\eta =\sqrt{2-\frac{\delta _{s}^{2}(-d_{s})}{\Delta
_{0}^{2}}}. \label{EqDsmall1}$$Here $K(z)$ is the complete elliptic integral of the first kind. Substitution of $\delta _{s}(-d_{s})=0$ into Eq. (\[EqDsmall1\]) leads to the expression for the critical s-layer thickness, $d_{{sc}}=(\pi /2)\xi _{S}(T)$, which was used above.
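For completeness, the transcendental equation (\[EqDsmall1\]) is easy to solve numerically. The sketch below uses SciPy and assumes that $K(z)$ is parametrized by the modulus $z$, so that `scipy.special.ellipk`, which takes the parameter $m=z^{2}$, is called with the squared argument; lengths are measured in units of $\xi _{s}(T)$ and $\Delta _{0}=1$.

```python
import numpy as np
from scipy.special import ellipk
from scipy.optimize import brentq

def delta_s_edge(ds_over_xisT, Delta0=1.0):
    """Solve the transcendental equation (EqDsmall1) for delta_s(-d_s).

    d_s is given in units of xi_s(T); the order parameter is returned in units
    of Delta0.  Below the critical thickness d_sc = (pi/2) xi_s(T) the only
    solution is delta_s(-d_s) = 0.
    """
    def residual(delta):
        eta = np.sqrt(2.0 - (delta / Delta0) ** 2)
        # ellipk expects the parameter m = k**2 (assumption about the K(z) convention)
        return ellipk((delta / (Delta0 * eta)) ** 2) - ds_over_xisT * eta / np.sqrt(2.0)

    if ds_over_xisT <= np.pi / 2.0:
        return 0.0
    return brentq(residual, 1e-9, Delta0 * (1.0 - 1e-9))

print(delta_s_edge(2.0))   # d_s = 2 xi_s(T), slightly above d_sc
```

Setting $\delta _{s}(-d_{s})=0$ in the residual indeed reproduces $d_{sc}=(\pi /2)\xi _{s}(T)$, since $K(0)=\pi /2$ and $\eta =\sqrt{2}$.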
For the calculation of the CPR we need to exclude the phase $\chi $ of the intermediate s layer from the expressions for the currents (\[JatIS1\]), (\[JatSF1\]). The value of this phase is determined from the condition that the currents flowing across the Is and sF interfaces must be equal to each other.
For a large thickness of the middle s-electrode ($d_{s}\gg d_{sc}$) the magnitude of the order parameter $\delta _{s}(-d_{s})$ is close to its bulk value $\Delta _{0}$ and we may put $a=-b$ in Eqs.(\[delta1(0)\]) and (\[delta1(df)\]) $$\delta _{S}(d_{F})=\delta _{s}(0)=\frac{\sqrt{2}\Delta _{0}}{\Gamma \left(
\left( p+q\right) -\left( p-q\right) \cos \left( \varphi -\chi \right)
\right) }, \label{Aeqb1}$$resulting in $$J_{S}(0)=J_{S}(d_{F})=\frac{J_{G}\beta \sin \left( \varphi -\chi \right) }{\Gamma \left( 1-\beta \cos \left( \varphi -\chi \right) \right) }
\label{Jsymm1}$$together with the equation to determine $\chi $ $$\frac{\Gamma }{\Gamma _{BI}}\sin \left( \chi \right) =\frac{\beta \sin
\left( \varphi -\chi \right) }{1-\beta \cos \left( \varphi -\chi \right) },~\beta =\frac{p-q}{p+q}. \label{EqForKhi1}$$
From (\[Aeqb1\]), (\[Jsymm1\]) and (\[EqForKhi1\]) it follows that in this mode the SIsFS structure can be considered as a pair of SIs and sFS junctions connected in series. Therefore, the properties of the structure are almost independent of the thickness $d_{s}$ and are determined by the junction with the smallest critical current.
Indeed, we can conclude from (\[EqForKhi1\]) that the phase $\chi $ of the s-layer order parameter depends on the ratio of the critical current $I_{CSIs}\varpropto \Gamma _{BI}^{-1}$ of its SIs part to that, $I_{CsFS}\varpropto |\beta |\Gamma ^{-1},$ of the sFS junction. The coefficient $\beta $ in (\[EqForKhi1\]) is a function of the F-layer thickness, which becomes close to unity in the limit of small $d_{F}$ and exhibits damped oscillations as $d_{F}$ increases (see the analytical expression (\[pq1\]) for $\beta $). This means that there is a set of thicknesses $d_{Fn},$ determined by the equation $\beta =0$, at which $J_{S}\equiv 0$ and a transition from the $0$ to the $\pi $ state takes place in the sFS part of the SIsFS junction. In other words, crossing the value $d_{Fn}$ with an increase of $d_{F}$ provides a $\pi $ shift of $\chi $ relative to the phase of the S electrode.
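The series-circuit picture can be made explicit numerically: for every $\varphi $ one solves (\[EqForKhi1\]) for $\chi $ and inserts the result into (\[Jsymm1\]). A minimal Python sketch (restricted to $\beta >0$, i.e. the $0$-state, and to $0\leq \varphi \leq \pi $, where the relevant root of (\[EqForKhi1\]) lies between $0$ and $\varphi $) is given below; the parameter values are only illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def cpr_series(phi, beta, Gamma, Gamma_BI):
    """CPR of the SIs + sFS series circuit: solve (EqForKhi1) for chi and
    evaluate (Jsymm1).  Valid here for beta > 0 and 0 <= phi <= pi; the
    supercurrent is returned in units of J_G."""
    if abs(np.sin(phi)) < 1e-12:           # phi = 0 or pi: chi = phi and J_S = 0
        return 0.0

    def balance(chi):                      # current conservation, Eq. (EqForKhi1)
        return (Gamma / Gamma_BI) * np.sin(chi) \
            - beta * np.sin(phi - chi) / (1.0 - beta * np.cos(phi - chi))

    chi = brentq(balance, 0.0, phi)        # the root lies between 0 and phi for beta > 0
    return beta * np.sin(phi - chi) / (Gamma * (1.0 - beta * np.cos(phi - chi)))

phis = np.linspace(0.0, np.pi, 51)
js = [cpr_series(p, beta=0.3, Gamma=5.0, Gamma_BI=200.0) for p in phis]
```

The maximum of `js` over $\varphi $ gives the critical current; for these illustrative parameters ($\Gamma _{BI}^{-1}\ll |\beta |\Gamma ^{-1}$) it is set by the SIs part, in line with the discussion above.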
In Fig.\[Phase\] we clarify the classification of the operation modes and present the phase diagram in the $(d_{s},d_{F})$ plane, which follows from our analytical results (\[pq1\])-(\[EqDsmall1\]). The calculations have been done at $T=0.9T_{C}$ for $h=H/\pi T_{C}=10$, $\Gamma _{BI}=200$ and $\Gamma =5.$ The structures with an s-layer thinner than the critical thickness $d_{sc}=\pi \xi _{S}(T)/2$ correspond to the *mode (2)* with fully suppressed superconductivity in the s layer. Conversely, the top part of the diagram corresponds to an s-layer in the superconducting state (*mode (1)*). This area is divided into two parts depending on whether the weak place is located at the tunnel barrier I (*mode (1a)*) or at the ferromagnetic F-layer (*mode (1b)*). The separating black solid vertical lines in the upper part of Fig.\[Phase\] represent the locus of points where the critical currents of the SIs and sFS parts of the SIsFS junction are equal. The dashed lines give the locations of the points of the $0$ to $\pi $ transitions, $d_{Fn}=\pi (n-3/4)\xi _{F}\sqrt{2/h},~n=1,2,3...,$ at which $J_{s}=0.$ In the vicinity of these points there are valleys of *mode (1b)* of width $\Delta d_{Fn}\approx \xi _{F}\Gamma \Gamma _{BI}^{-1}h^{-1/2}\exp \{\pi (n-3/4)\},$ embedded in the areas occupied by *mode (1a)*. For the set of parameters used for the calculation of the phase diagram presented in Fig.\[Phase\], there is only one valley, of width $\Delta d_{F1}\approx \xi _{F}\Gamma \Gamma _{BI}^{-1}h^{-1/2}\exp \{\pi /4\},$ located around the point $d_{F1}=(\pi /4)\xi _{F}\sqrt{2/h}$ of the first $0$ to $\pi $ transition.
### Mode (1a): Switchable $0-\protect\pi $ SIs junction
In the experimentally realized case [@Bolginov; @Ryazanov3; @Larkin; @Vernik] the condition $\Gamma _{BI}^{-1}\ll |\beta |\Gamma ^{-1}$ is fulfilled and the weak place in the SIsFS structure is located at the SIs interface. In this approximation it follows from (\[EqForKhi1\]) that $$\chi \approx \varphi -\frac{2q\Gamma }{(p-q)\Gamma _{BI}}\sin \left( \varphi
\right)$$in 0-state ($d_{F}<d_{F1}$) and $$\chi \approx \pi +\varphi -\frac{2q\Gamma }{(p-q)\Gamma _{BI}}\sin \left(
\varphi \right)$$in $\pi $-state ($d_{F}>d_{F1}$). Substitution of these expressions into (\[Jsymm1\]) results in $$J_{S}(\varphi )=\pm \frac{J_{G}}{\Gamma _{BI}}\left[ \sin \varphi -\frac{\Gamma }{\Gamma _{BI}}\frac{1\mp \beta }{2\beta }\sin \left( 2\varphi
\right) \right] \label{SISIotFI3}$$for the $0$- and $\pi $-states, respectively. It is seen that for $d_{F}<d_{F1}$ the CPR (\[SISIotFI3\]) has the sinusoidal shape typical of SIS tunnel junctions, with a small correction that takes into account the suppression of superconductivity in the s layer due to proximity with the FS part of the composite sFS electrode. Its negative sign is typical for tunnel Josephson structures with composite NS or FS electrodes [@KuperD; @CurPh]. For $d_{F}>d_{F1}$ the supercurrent changes its sign, thus signaling the transition of the SIsFS junction into the $\pi $ state. It is important to note that in this mode the SIsFS structure may have almost the same value of the critical current in both the $0$ and $\pi $ states. This is a unique property, which cannot be realized in previously studied SFS devices. For this reason we have identified this mode as “Switchable $0-\pi $ SIS junction”.
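The statement that the critical currents in the $0$ and $\pi $ states are almost equal can be checked directly from (\[SISIotFI3\]); the short sketch below evaluates the CPR for both sign choices and maximizes over $\varphi $ (the parameter values are illustrative only).

```python
import numpy as np

def j_mode1a(phi, beta, Gamma, Gamma_BI, pi_state=False):
    """Approximate CPR (SISIotFI3) of the SIsFS junction in mode (1a),
    in units of J_G.  pi_state selects the lower signs (d_F > d_F1)."""
    sign = -1.0 if pi_state else 1.0
    second = (1.0 - sign * beta) / (2.0 * beta)
    return sign / Gamma_BI * (np.sin(phi) - (Gamma / Gamma_BI) * second * np.sin(2.0 * phi))

phi = np.linspace(0.0, 2.0 * np.pi, 400)
jc_0 = np.max(j_mode1a(phi, 0.3, 5.0, 200.0))
jc_pi = np.max(np.abs(j_mode1a(phi, 0.3, 5.0, 200.0, pi_state=True)))
# jc_0 and jc_pi differ only through the small second-harmonic correction
```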
### Mode (1b): sFS junction
Another limiting case is realized under the condition $\Gamma _{BI}^{-1}\gg |\beta |\Gamma ^{-1}.$ It is fulfilled in the vicinity of the points of the $0$ to $\pi $ transitions, $d_{Fn},$ as well as for large $d_{F}$ values and high exchange fields $H.$ In this mode (see Fig. \[Phase\]) the weak place shifts to the sFS part of the SIsFS device and the structure transforms into a conventional SFS junction with a composite SIs electrode.
In the first approximation, for $\Gamma /(\beta \Gamma _{BI})\gg 1$, it follows from (\[Jsymm1\]), (\[EqForKhi1\]) that $$\chi =\frac{\Gamma _{BI}}{\Gamma }\frac{\beta \sin \left( \varphi \right) }{1-\beta \cos \left( \varphi \right) },$$resulting in $$J_{S}(\varphi )=\frac{J_{G}\beta }{\Gamma \left( 1-\beta \cos \varphi
\right) }\left( \sin \varphi -\frac{\Gamma _{BI}}{2\Gamma }\frac{\beta \sin
(2\varphi )}{\left( 1-\beta \cos \varphi \right) }\right) .
\label{SInFSotFI}$$The shape of the CPR for $\chi \rightarrow 0$ coincides with that previously found in SNS and SFS Josephson devices [@Ivanov]. It transforms to the sinusoidal form for sufficiently large thickness of the F layer. For small thickness of the F-layer, as well as in the vicinity of the $0-\pi $ transitions, significant deviations from the sinusoidal form may occur.
The transition between the *mode (1a)* and the *mode (1b)* is also demonstrated in Fig.\[Ic\_ds\]. It shows the dependence of the critical current $J_{C}$ across the SIsFS structure on the F-layer thickness $d_{F}.$ The inset in Fig.\[Ic\_ds\] shows the magnitude of the order parameter at the Is interface as a function of $d_{F}$. The solid lines in Fig.\[Ic\_ds\] give the shape of $J_{C}(d_{F})$ and $\delta _{0}(-d_{s})$ calculated from (\[SISIotFI3\])-(\[SInFSotFI\]). These equations are valid in the limit $d_{s}\gg d_{sc}$ and do not take into account the possible suppression of superconductivity in the vicinity of the tunnel barrier due to proximity with the FS part of the device. The dashed lines are the result of calculations using the analytical expressions (\[JatIS1\])-(\[EqDsmall1\]) for the thickness of the s-layer $d_{s}=2\xi _{s}(T),$ which slightly exceeds the critical one, $d_{sc}=(\pi /2)\xi _{s}(T).$ These analytical dependencies are calculated at $T=0.9~T_{C}$ for $H=10\pi T_{C},$ $\Gamma _{BI}=200,~\Gamma =5,$ $\gamma _{B}=0.$ The short-dashed curves are the results of numerical calculations performed self-consistently in the frame of the Usadel equations (2)-(11) for the corresponding set of parameters, $T=0.9~T_{C}$, $H=10\pi T_{C},$ $\gamma _{BI}=1000,$ $\gamma =1,$ $\gamma _{B}=0.3$, and the same thickness of the s layer $d_{sc}=(\pi /2)\xi _{s}(T)$. The interface parameters $\gamma _{BI}=1000,$ $\gamma =1$ are chosen the same as for the analytical case. The choice of $\gamma _{B}=0.3$ allows one to take into account the influence of the mismatch which generally occurs at the sF and FS boundaries.
It can be seen that there is a qualitative agreement between the shapes of the three curves. For small $d_{F}$ the structure is in the $0$-state *mode (1a)* regime. The difference between the dashed and short-dashed lines in this area is due to the fact that the inequalities (\[condSup\]) are not fulfilled for very small $d_{F}.$ The solid and short-dashed curves start from the same value since for $d_{F}=0$ the sFS electrode becomes a single spatially homogeneous superconductor. For $d_{s}=2\xi _{s}(T)$ the intrinsic superconductivity in the s layer is weak and is partially suppressed as $d_{F}$ increases (see the inset in Fig.\[Ic\_ds\]). This suppression is accompanied by a rapid drop of the critical current. It can be seen that starting from the value $d_{F}\approx 0.4\xi _{F}$ our analytical formulas (\[JatIS1\])-(\[EqDsmall1\]) are accurate enough. The larger $d_{s},$ the better the agreement between the numerical and analytical results, due to the better applicability of the GL equations in the s layer. With further increase of $d_{F}$ the structure passes through the valley of the *mode (1b)* state, located in the vicinity of the $0$ to $\pi $ transition, and comes into the $\pi $-state of the *mode (1a)*. Finally, for $d_{F}\gtrsim 1.6\xi _{F}$ there is a transition from *mode (1a)* to *mode (1b)*, which is accompanied by damped oscillations of $J_{C}(d_{F})$ with increasing $d_{F}$.
Mode (2): SInFS junction $d_{s}\leq d_{sc}$
-------------------------------------------
For $d_{s}\leq d_{sc}$ the intrinsic superconductivity in the $s$ layer is completely suppressed, resulting in the formation of a complex -InF- weak-link area, where ’n’ marks the intermediate s film in the normal state. In this parameter range the weak place is always located at the tunnel barrier and the CPR has a sinusoidal shape $$J_{S}(\varphi )=\frac{J_{G}}{\sqrt{2}}\frac{\left( p-q\right) \sin \varphi }{2pq\Gamma \Gamma _{BI}\cos \frac{d_{s}}{\xi _{s}(T)}+\left[ 2pq\Gamma
+\left( p+q\right) \Gamma _{BI}\right] \sin \frac{d_{s}}{\xi _{s}(T)}}.
\label{I_ot_fi_small_ds1}$$In the vicinity of the critical thickness, $d_{s}\lesssim d_{sc},$ the factor $\cos (d_{s}/\xi _{S}(T))$ in (\[I\_ot\_fi\_small\_ds1\]) is small and the supercurrent is given by the expression $$J_{S}(\varphi )=\frac{J_{G}}{2\sqrt{2}}\frac{\left( p-q\right) \sin \varphi
}{2pq\Gamma +\left( p+q\right) \Gamma _{BI}}. \label{curInter}$$ A further decrease of $d_{s}$, down to the limit $d_{s}\ll d_{sc}$, leads to $$J_{S}(\varphi )=\frac{J_{G}}{\sqrt{2}}\frac{\left( p-q\right) \sin \varphi }{2pq\Gamma \Gamma _{BI}}. \label{curSInFS}$$The magnitude of the critical current in (\[curSInFS\]) is close to that of the well-known SIFS junctions in the corresponding regime.
Current-Phase Relation \[SecCPR\]
---------------------------------
In the previous section we have demonstrated that a variation of the thickness of the ferromagnetic layer should lead to a transformation of the CPR of the SIsFS structure. Fig.\[CPR\]a illustrates the $J_{C}(d_{F})$ dependencies calculated from the expressions (\[JatIS1\])-(\[EqDsmall1\]) at $T=0.9T_{C}$ for $H=10\pi T_{C},$ $\gamma _{B}=0,$ $\Gamma _{BI}\approx 200$ and $\Gamma \approx 5$ for two thicknesses of the s layer, $d_{s}=5\xi _{S}(T)$ (solid line) and $d_{s}=0.5\xi _{S}(T)$ (dashed line). In Figs.\[CPR\]b-d we enlarge the parts of the $J_{C}(d_{F})$ dependence enclosed in the rectangles labeled by the letters b, c and d in Fig.\[CPR\]a and mark by digits the points where the $J_{S}(\varphi )$ curves have been calculated. These curves are marked by the same digits as the points in the enlarged parts of the $J_{C}(d_{F})$ dependencies. The dashed lines in Figs.\[CPR\]b-d are the loci of the critical points at which the $J_{S}(\varphi )$ dependence reaches its maximum value, $J_{C}(d_{F})$.
Figure \[CPR\]b presents the *mode (1b)* valley, which divides the *mode (1a)* domain into the $0$- and $\pi $-state regions. In the *mode (1a)* domain the SIsFS structure behaves as SIs and sFS junctions connected in series. Its critical current equals the smaller of the critical currents of the SIs $(J_{CSIs})$ and sFS $(J_{CsFS})$ parts of the device. In the considered case the thickness of the s film is sufficiently large to prevent suppression of superconductivity. Therefore, $J_{CSIs}$ does not change when moving from the point $1$ to the point $2$ along the $J_{C}(d_{F})$ dependence. At the point $2$, where $J_{CSIs}=J_{CsFS},$ we arrive at the border between the *mode (1a)* and the *mode (1b)*. It is seen that at this point there is the maximum deviation of $J_{S}(\varphi )$ from the sinusoidal shape. A further increase of $d_{F}$ leads to the $0$-$\pi $ transition, where the parameter $\beta $ in (\[SInFSotFI\]) becomes small and $J_{S}(\varphi )$ practically recovers its sinusoidal shape. Beyond the area of the $0$ to $\pi $ transition, the critical current changes its sign and the CPR starts to deform again. The deformation achieves its maximum at the point $7$ located at the other border between the *modes (1a)* and *(1b)*. The displacement from the point 7 to the point 8 along the $J_{C}(d_{F})$ dependence leads to the recovery of the sinusoidal CPR.
Figure \[CPR\]c presents the transition from the $\pi $-state of *mode (1a)* to *mode (1b)* with increasing $d_{F}$. It is seen that the displacement from the point $1$ to the points $2-5$ along $J_{C}(d_{F})$ results in a transformation of the CPR similar to that shown in Fig.\[CPR\]b during the displacement from the point $1$ to the points $2-6.$ The only difference is the initially negative sign of the critical current. However, this behavior of the CPR, as well as the close transition between the modes, leads to the formation of a well-pronounced kink in the $J_{C}(d_{F})$ dependence. Furthermore, contrary to Fig.\[CPR\]b, at the point $6$ the junction is still in the *mode (1b)* and remains in this mode with a further increase of $d_{F}.$ At the point $6$ the critical current achieves its maximum value, and it decreases along the dashed line for larger $d_{F}.$
Figure \[CPR\]d shows the transformation of the CPR in the vicinity of the next $0$ to $\pi $ transition in *mode (1b)*. There is a small deviation from the sinusoidal shape at the point $1$, which vanishes exponentially with increasing $d_{F}$.
In the *mode (2)* (the dashed curve in Fig.\[CPR\]a) the intrinsic superconductivity in the s layer is completely suppressed, resulting in the formation of a complex -InF- weak-link region, and the CPR becomes sinusoidal (\[I\_ot\_fi\_small\_ds1\]).
Arbitrary temperature
=====================
At arbitrary temperatures the boundary problem (\[fiS\])-(\[PhDifInf\]) goes beyond the assumptions of the GL formalism and requires a self-consistent solution. We have performed it numerically in terms of the nonlinear Usadel equations in an iterative manner. All calculations were performed for $T=0.5T_{C},$ $\xi _{S}=\xi _{F},$ $\gamma _{BI}=1000,$ $\gamma _{BFS}=0.3$ and $\gamma =1.$
The calculations show that at the selected transparency of the tunnel barrier $(\gamma _{BI}=1000)$ the suppression of superconductivity in the left electrode is negligibly small. This allows one to select the thickness of the left S electrode as $d_{SL}=2\xi _{S}$ without any loss of generality. On the contrary, the proximity of the right S electrode to the F layer results in a strong suppression of superconductivity at the FS interface. Therefore the pair potential of the right S electrode reaches its bulk value only at a thickness $d_{SR}\gtrsim 10\xi _{S}.$ It is for this reason that we have chosen $d_{SR}=10\xi _{S}$ for the calculations.
Furthermore, the presence of a low-transparency tunnel barrier in the considered SIsFS structures limits the magnitude of the critical current $J_{C}$ to a value much smaller than the depairing current of the superconducting electrodes. This allows one to neglect nonlinear corrections to the coordinate dependence of the phase in the S banks.
The results of the calculations are summarized in Fig.\[Del\_ArbT\]. Figure \[Del\_ArbT\]a shows the dependence of $J_{C}$ of the SIsFS structure on the F-layer thickness $d_{F}$ for a relatively large, $d_{s}=5\xi _{S}$ (solid), and a small, $d_{s}=0.5\xi _{S}$ (dashed), s-film thickness. The letters on the curves indicate the points at which the coordinate dependencies of the magnitude of the order parameter, $|\Delta (x)|,$ and of the phase difference across the structure, $\chi ,$ have been calculated for the phase difference $\varphi = \pi /2$. These curves are shown in the panels b)-f) of Fig.\[Del\_ArbT\] as the upper and bottom plots, respectively. There is a direct correspondence between the letters b, c, d, e, f on the $J_{C}(d_{F})$ curves and the labels b), c), d), e), f) of the panels.
It is seen that the qualitative behavior of the $J_{C}(d_{F})$ dependence at $T=0.5T_{C}$ remains similar to that obtained in the frame of the GL equations for $T=0.9T_{C}$ (see Fig.\[CPR\]a). Furthermore, the modes of operation discussed above remain relevant too. The panels b)-f) in Fig.\[Del\_ArbT\] make this statement clearer.
At the point marked by the letter ’b’, the s-film is sufficiently thick, $d_{s}=5\xi _{S},$ while the F film is rather thin, $d_{F}=0.3\xi _{F},$ and therefore the structure is in the $0$-state of the *mode (1a)*. In this regime the phase drops mainly across the tunnel barrier, while the phase shifts in the s-film and in the S electrodes are negligibly small (see the bottom plot in Fig.\[Del\_ArbT\]b).
At the point marked by the letter ’c’ $(d_{s}=5\xi _{S},$ $d_{F}=\xi _{F})$, the structure is in the $\pi $-state of the *mode (1a)*. It is seen from Fig.\[Del\_ArbT\]c that there is a phase jump at the tunnel barrier and an additional $\pi $-shift occurs between the phases of the S and s layers.
For $d_{F}=3\xi _{F}$ (Fig.\[Del\_ArbT\]d) the position of the weak place shifts from the SIs to the sFS part of the SIsFS junction. Then the structure starts to operate in the *mode (1b)*. It is seen that the phase drop across the SIs part is small, while $\varphi -\chi \approx $ $\pi /2$ across the F layer, as it should be in SFS junctions with SIs and S electrodes.
At the points marked by the letters ’e’ and ’f’, the thickness of the s-layer, $d_{s}=0.5\xi _{S},$ is less than its critical value. Then superconductivity in the s-spacer is suppressed due to the proximity with the F film and the SIsFS device operates in the *mode (2)*. At $d_{F}=\xi _{F}$ (the dot ’e’ in Fig.\[Del\_ArbT\]a and the panel Fig.\[Del\_ArbT\]e) the weak place is located at the SIs part of the structure and there is an additional $\pi $-shift of the phase across the F film. As a result, the SIsFS structure behaves like an SInFS tunnel $\pi $-junction. The unsuppressed residual value of the pair potential is due to the proximity with the right S-electrode, and it disappears with the growth of the F-layer thickness, which weakens this proximity effect. At $d_{F}=3\xi _{F}$ (Fig.\[Del\_ArbT\]f) the weak place is located at the F part of the IsF trilayer. Despite the strong suppression of the pair potential in the s-layer, the distribution of the phase inside the IsF weak place has a rather complex structure, which depends on the thicknesses of the s and F layers.
Temperature crossover from 0 to $\protect\pi $ states
-----------------------------------------------------
The temperature-induced crossover from the 0 to the $\pi $ state in SFS junctions was discovered in [@ryazanov2001] in structures with sinusoidal CPR. It was found that the transition takes place in a relatively broad temperature range.
Our analysis of the SIsFS structure (see Fig.\[TempTrans\]a) shows that the smoothness of the 0 to $\pi $ transition strongly depends on the CPR shape. This phenomenon was not analyzed before, since almost all previous theoretical results were obtained within a linear approximation leading to a sinusoidal CPR. To prove the statement, we have calculated numerically a set of $J_{C}(T)$ curves for a number of F-layer thicknesses $d_{F}.$ We have chosen the thickness of the intermediate superconductor $d_{S}=5\xi _{S}$ in order to have the SIsFS device in the *mode (1a)*, and we have examined the parameter range $0.3\xi _{F}\leq d_{F}\leq \xi _{F},$ in which the structure exhibits the first $0$ to $\pi $ transition. The borders of the $d_{F}$ range are chosen in such a way that the SIsFS contact is either in the $0$- ($d_{F}=0.3\xi _{F}$) or the $\pi $- ($d_{F}=\xi _{F}$) state in the whole temperature range. The corresponding $J_{C}(T)$ dependencies (dashed lines in Fig. \[TempTrans\]a) provide the envelope of the set of $J_{C}(T)$ curves calculated for the considered range of $d_{F}$. It is clearly seen that in the vicinity of $T_{C}$ the decrease of $d_{F}$ results in the creation of a temperature range where the $0$-state exists. The point of the $0$ to $\pi $ transition shifts to lower temperatures with decreasing $d_{F}$. For $d_{F}\gtrsim 0.5\xi _{F}$ the transition is rather smooth since for $T\geq 0.8T_{C}$ the junction remains in the *mode (2)* (with suppressed superconductivity) and the deviations of the CPR from $\sin (\varphi )$ are small. Thus the behavior of the $J_{C}(T)$ dependencies in this case can be easily described by the analytic results of Sec.\[SecCPR\].
The situation drastically changes at $d_{F}=0.46\xi _{F}$ (short-dashed line in Fig.\[TempTrans\]a). For this thickness the point of the $0$ to $\pi $ transition shifts to $T\approx 0.25T_{C}.$ This shift is accompanied by an increase of the amplitudes of the higher harmonics of the CPR (see Fig.\[TempTrans\]b). As a result, the shape of the CPR is strongly modified, so that in the interval $0\leq \varphi \leq \pi $ the CPR curves are characterized by two values, $J_{C1}$ and $J_{C2}$, as is known from the case of SFcFS constrictions [@SFcFS]. In general, $J_{C1}$ and $J_{C2}$ differ both in sign and magnitude, and $J_{C}=\max (\left\vert J_{C1}\right\vert ,\left\vert J_{C2}\right\vert ).$ For $T>0.25T_{C}$ the junction is in the $0$-state and $J_{C}$ grows with decreasing $T$ down to $T\approx 0.5T_{C}.$ A further decrease of $T$ is accompanied by a suppression of the critical current. In the vicinity of $T\approx 0.25T_{C}$ the difference between $\left\vert J_{C1}\right\vert $ and $\left\vert J_{C2}\right\vert $ becomes negligible and the system starts to develop the instability that eventually shows up as a sharp jump from the $0$ to the $\pi $ state. After the jump, $\left\vert J_{C}\right\vert $ continuously increases as $T$ goes to zero.
It is important to note that this behavior should always be observed in the vicinity of the $0-\pi $ transition, i.e. in the range of parameters in which the amplitude of the first harmonic is small compared to the higher harmonics. However, the closer the temperature is to $T_c$, the less pronounced the higher CPR harmonics are and the smaller the magnitude of the jump is. This fact is illustrated by the dash-dotted line showing $J_{C}(T)$ calculated for $d_{F}=0.48\xi _{F}.$ The jump in the curves calculated for $d_{F}\geq 0.5\xi _{F}$ also exists, but it is small and cannot be resolved on the scale used in Fig. \[TempTrans\]a.
At $d_{F}=0.45\xi _{F}$ (dash-dot-dotted line in Fig. \[TempTrans\]) the junction is always in the $0$-state and there is only a small suppression of the critical current at low temperatures despite the realization of a non-sinusoidal CPR.
Thus the calculations clearly show that it is possible to realize a set of parameters of SIsFS junctions for which a thermally-induced $0$-$\pi $ crossover can be observed and controlled by temperature variation.
0 to $\protect\pi $ crossover by changing the effective exchange energy in an external magnetic field
--------------------------------------------------------------------------------------------------
The exchange field is an intrinsic microscopic parameter of a ferromagnetic material which cannot be controlled directly by the application of an external field. However, the spin splitting in F-layers can be provided by both the internal exchange field and an external magnetic field [@Tedrow; @Koshina], resulting in the generation of an effective exchange field equal to their sum. The practical realization of this effect is a challenge, however, since it is difficult to fulfill the special requirements [@Tedrow; @Koshina] on the thickness of the S electrodes and on the SFS junction geometry.
Another opportunity can be realized in soft diluted ferromagnetic alloys like Fe$_{0.01}$Pd$_{0.99}$. Investigations of the magnetic properties [@Uspenskaya] of these materials have shown that below 14 K they exhibit ferromagnetic order due to the formation of weakly coupled ferromagnetic nanoclusters. In the clusters, the effective spin polarization of the Fe ions is about $4\mu _{B}$, corresponding to that in the bulk Pd$_{3}$Fe alloy. It was demonstrated that the hysteresis loops of Fe$_{0.01}$Pd$_{0.99}$ films have the form typical of nanostructured ferromagnets with weakly coupled grains (the absence of domains; a small coercive force; a small interval of the magnetization reversal, where the magnetization changes its direction following the changes in the applied magnetic field; and a prolonged part, where the component of the magnetization vector along the applied field grows gradually).
The smallness of the concentration of Pd$_{3}$Fe clusters and their ability to follow variations of the applied magnetic field may result in the generation of $H_{eff},$ which is of the order of $$H_{eff}\approx H\frac{n_{\uparrow }V_{\uparrow }-n_{\downarrow }V_{\downarrow }}{n_{\downarrow }V_{\downarrow }+n_{\uparrow }V_{\uparrow }+(n-n_{\uparrow }-n_{\downarrow })(V-V_{\downarrow }-V_{\uparrow })}. \label{Heff}$$Here $n$ is the concentration of electrons within a physically small volume $V$, over which the Green's functions are averaged in the transition to a quasiclassical description of superconductivity, while $n_{\uparrow ,\downarrow }$ and $V_{\uparrow ,\downarrow }$ describe the spin-polarized parts of $n$ and the parts of the volume $V$ which they occupy, respectively. A similar kind of $H_{eff}$ arises in NF or SF proximity structures composed of thin layers [@BerVE; @BChG; @Karminskaya; @Golikova]. There is an interval of applied magnetic fields $H_{ext}$ where the alloy magnetization changes its direction and the concentrations $n_{\uparrow ,\downarrow }$ depend on the pre-history of the applied field [@Larkin; @APL], providing the possibility to control $H_{eff}$ by an external magnetic field.
The derivation of possible relationships between $H_{eff}$ and $H_{ext}$ is outside the scope of this paper. Below we concentrate only on an assessment of the intervals over which $H_{eff}$ should be changed to ensure the transition of the SIsFS device from the $0$ to the $\pi $ state. To do this, we calculate the $J_{C}(H)$ dependencies presented in Fig.\[JC\_H\]. The calculations have been done for a set of structures with $d_{F}=2\xi _{F}$ and s-film thicknesses ranging from a thick film, $d_{S}=5\xi _{S}$ (solid line), through an intermediate value, $d_{S}=2\xi _{S}$ (dash-dotted line), down to a thin film with $d_{S}=0.5\xi _{S}$ (dashed line). It is clearly seen that these curves have the same shape as the $J_{C}(d_{F})$ dependencies presented in Sec.\[SecHighTC\]. For $d_{S}=5\xi _{S}$ and $H\lesssim 7\pi T_{C}$ the magnitude of $J_{C}$ is practically independent of $H,$ but it changes sign at $H\approx 1.25\pi T_{C}$ due to the $0$ - $\pi $ transition. It is seen that the transition, while maintaining the normalized current at a level close to unity, requires changes of $H$ of the order of $0.1\pi T_{C}$, i.e. about $10\%$. For $d_{S}=2\xi _{S}$ and $H\lesssim 3\pi T_{C}$, it is necessary to change $H$ by $20\%$ to realize such a transition. In this case the value of the normalized current is at the level of $0.4$. In *mode (2)* the transition requires a $100\%$ change of $H,$ which is not practical.
Discussion
==========
We have performed a theoretical study of magnetic SIsFS Josephson junctions. At $T\lesssim T_{C}$ the calculations have been performed analytically in the frame of the GL equations. For arbitrary temperatures we have developed a numerical code for the self-consistent solution of the Usadel equations. We have outlined several modes of operation of these junctions. For the s-layer in the superconducting state they are S-I-sFS or SIs-F-S devices with the weak place located at the insulator (*mode (1a)*) or at the F-layer (*mode (1b)*), respectively. For small s-layer thickness, the intrinsic superconductivity in it is completely suppressed, resulting in the formation of an InF weak place (*mode (2)*). We have examined the shape of $J_{S}(\varphi )$ and the spatial distributions of the modulus of the pair potential and of its phase difference across the SIsFS structure in these modes.
For *mode (1)* the shape of the CPR can differ substantially from the sinusoidal one even in the vicinity of $T_{C}$. The deviations are largest when the structure is close to the crossover between the *modes (1a)* and *(1b)*. This effect results in kinks in the dependencies of $J_{C}$ on temperature and on the parameters of the structure (the thicknesses of the layers $d_{F},$ $d_{s}$ and the exchange energy $H$), as illustrated by the $J_{C}(d_{F})$ curves in Fig.\[CPR\]. The transformation of the CPR is even more important at low temperatures. For $T\lesssim 0.25T_{C}$ a sharp $0-\pi $ transition induced by a small temperature variation can be realized (Fig.\[TempTrans\]). This instability must be taken into account when using the structures as memory elements. On the other hand, this effect can be used in detectors of electromagnetic radiation, where the absorption of a photon in the F layer will provide local heating leading to the development of the instability and subsequent phonon registration.
We have shown that the suppression of the order parameter in the thin s-film due to the proximity effect leads to a decrease of the $J_{C}R_{N}$ product in both the $0$- and $\pi $-states. On the other hand, the proximity effect may also support the s-layer superconductivity due to the influence of the S electrodes. In *mode (1a)* the $J_{C}R_{N}$ product in the $0$- and $\pi $-states can achieve values typical of SIS tunnel junctions.
In *mode (2)* a sinusoidal CPR is realized. Despite that, the distribution of the phase difference $\chi (x)$ in the IsF weak place may have a complex structure, which depends on the thicknesses of the s and F layers. These effects should influence the dynamics of the junction in its *ac*-state and deserve further study.
Further, we have also shown that in *mode (1a)* a nearly $10\%$ change of the exchange energy can cause a $0-\pi $ transition, i.e. change the sign of the $J_{C}R_{N}$ product while maintaining its absolute value. This unique feature can be implemented in *mode (1a)* since, in this mode, the changes of the exchange energy only determine the presence or absence of a $\pi$ shift between the s and S electrodes and do not affect the magnitude of the critical current of the SIs part of the SIsFS junction.
In *mode (1b)*, the F layer becomes a part of the weak-link area. In this case the $\pi $ shift initiated by the change of $H$ must be accompanied by changes of the $J_{C}$ magnitude due to the oscillatory nature of the superconducting correlations in the F film. The latter may lead to a very complex and irregular dependence $J_{C}(H_{ext}),$ which has been observed in Nb-PdFe-Nb SFS junctions (see Fig.3 in [@Bolginov]). Contrary to that, the $J_{C}(H_{ext})$ curves of SIsFS structures with the same PdFe metal do not demonstrate these irregularities [@Larkin; @Vernik].
To characterize the junction stability with respect to variations of $H$, it is convenient to introduce the parameter $\eta =(dJ_{C}/J_{C})/(dH/H),$ which relates the relative change of the critical current to the relative change of the exchange energy. The larger the magnitude of $\eta ,$ the more intense the irregularities expected in an SFS junction under variations of $H$. In Fig.\[Compare\] we compare the SIsFS devices with conventional SFS, SIFS and SIFIS junctions, making use of the two most important parameters: the instability parameter $\eta $ and the $J_{C}R_{N}$ product, the value which characterizes the high-frequency properties of the structures. The calculations have been done in the frame of the Usadel equations for the same set of junction parameters, namely $T=0.5 T_C$, $H=10\pi T_{C},$ $d_{F}=\xi _{F},$ $\gamma _{BI}=1000,$ $\gamma _{B}=0.3,$ $\gamma =1.$ It can be seen that the presence of two tunnel barriers in the SIFIS junction results in the smallest $J_{C}R_{N}$ and a strong instability. The SIFS and SIsFS structures in the *mode (2)* demonstrate better results with almost the same parameters. Conventional SFS structures have a two times smaller $J_{C}R_{N} $ product, having a higher critical current but a lower resistance. At the same time, SFS junctions are more stable due to the lack of a low-transparency tunnel barrier. The latter is the main source of instability due to the sharp phase discontinuities at the barrier ’I’.
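In practice $\eta $ can be estimated by finite differences from a computed or measured $J_{C}(H)$ curve; a minimal sketch is shown below with purely hypothetical tabulated values (the actual curves are those of Fig.\[JC\_H\]).

```python
import numpy as np

def instability_parameter(H, Jc):
    """Finite-difference estimate of eta = (dJc/Jc)/(dH/H) along a tabulated
    Jc(H) curve; H and Jc are arrays of equal length."""
    return np.gradient(Jc, H) * H / Jc

# hypothetical data for illustration only (not taken from the paper)
H = np.linspace(0.5, 3.0, 6)
Jc = np.array([1.00, 0.98, 0.90, 0.60, 0.20, -0.30])
print(instability_parameter(H, Jc))
```

Note that, by its definition, $\eta $ diverges where $J_{C}$ crosses zero, i.e. at the $0$-$\pi $ transition itself.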
Contrary to the standard SFS, SIFS and SIFIS junctions, SIsFS structures achieve $J_{C}R_{N} $ and stability characteristics comparable to those of SIS tunnel junctions. This unique property is favorable for the application of SIsFS structures in superconducting electronic circuits.
We thank V.V. Ryazanov, V.V. Bol’ginov, I.V. Vernik and O.A. Mukhanov for useful discussions. This work was supported by the Russian Foundation for Basic Research, Russian Ministry of Education and Science, Dynasty Foundation, Scholarship of the President of the Russian Federation, IARPA and Dutch FOM.
Boundary problem at $T\lesssim T_{C}$\[Appendix\]
=================================================
In the limit of high temperature $$G_{S}=G_{s}=G_{F}=\operatorname{\mathop{\mathrm{sgn}}}(\Omega ) \label{GTTc}$$and the boundary problem reduces to the system of linearized equations. Their solution in the F layer, $(0\leq x\leq d_{F}),$ has the form$$\Phi _{F}=C\sinh \frac{\sqrt{\Theta }\left( x-d_{F}/2\right) }{\xi _{F}}+D\cosh \frac{\sqrt{\Theta }\left( x-d_{F}/2\right) }{\xi _{F}},
\label{SolinF}$$where $\Theta =\widetilde{\Omega }\operatorname{\mathop{\mathrm{sgn}}}(\Omega ).$ For transparent FS and sF interfaces $(\gamma _{B}=0)$, from the boundary conditions (\[BCsF\]), (\[BCSF\]) and (\[SolinF\]) it is easy to get that$$\frac{\xi _{s}}{\gamma \sqrt{\Theta }}\frac{d}{dx}\Phi _{s}(0)=-\Phi
_{s}(0)\coth \frac{d_{F}\sqrt{\Theta }}{\xi _{F}}+\frac{\Phi _{S}(d_{F})}{\sinh \frac{d_{F}\sqrt{\Theta }}{\xi _{F}}}, \label{Rel1}$$$$\frac{\xi _{S}}{\gamma \sqrt{\Theta }}\frac{d}{dx}\Phi _{S}(d_{F})=\Phi
_{S}(d_{F})\coth \frac{d_{F}\sqrt{\Theta }}{\xi _{F}}-\frac{\Phi _{s}(0)}{\sinh \frac{d_{F}\sqrt{\Theta }}{\xi _{F}}}. \label{Rel2}$$and thus to reduce the problem to the solution of the Ginzburg-Landau (GL) equations in the s and S films.$$\xi _{S}^{2}(T)\frac{d^{2}}{dx^{2}}\Delta _{k}-\Delta _{k}(\Delta
_{0}^{2}-\left\vert \Delta _{k}\right\vert ^{2})=0,~\Delta _{0}^{2}=\frac{8\pi ^{2}T_{C}(T_{C}-T)}{7\zeta (3)}, \label{GLeq}$$$$J=\frac{J_{G}}{\Delta _{0}^{2}}\operatorname{\mathop{\mathrm{Im}}}\left( \Delta _{k}^{^{\ast }}\xi _{S}(T)\frac{d}{dx}\Delta _{k}\right) ,~J_{G}=\frac{\pi \Delta _{0}^{2}}{4e\rho
_{S}T_{C}\xi _{S}(T)}, \label{GLcurrent}$$where $\xi _{S}(T)=\pi \xi _{S}/2\sqrt{1-T/T_{C}}$ is the GL coherence length and $k$ equals $s$ or $S$ for $-d_{s}\leq x\leq 0$ and $x\geq d_{F},$ respectively. At the Is, sF and FS interfaces the GL equations should be supplemented by the boundary conditions in the form [@Ivanov]$$\xi _{S}(T)\frac{d}{dx}\Delta _{k}(z)=b(z)\Delta _{k}(z),~b(z)=\frac{\Sigma
_{1}(z)}{\Sigma _{2}(z)}, \label{BCGL}$$$$\Sigma _{1}(z)=\sum_{\omega =-\infty }^{\infty }\xi _{S}(T)\frac{d}{dx}\frac{\Phi _{k}(z)}{\Omega ^{2}},~\Sigma _{2}(z)=\sum_{\omega =-\infty }^{\infty }\frac{\Phi _{k}(z)}{\Omega ^{2}}, \label{SigmaGL}$$where $z=-d_{s},~0,~d_{F}.$ In a typical experimental situation $\gamma
_{BI}\gg 1,$ $\gamma \sqrt{H}\gg 1$ and $d_{F}\sqrt{H}\gtrsim \xi _{F}.$ In this case in the first approximation $$\Phi _{S}(d_{F})=0,~\Phi _{s}(0)=0,\frac{d}{dx}\Phi _{s}(-d_{s})=0$$and in the vicinity of interfaces $$\Phi _{S}(x)=\Delta _{S}(x)=B_{S}\frac{(x-d_{F})}{\xi _{S}(T)},~d_{F}\lesssim x\ll \xi _{S}(T), \label{FiSas}$$$$\Phi _{s}(x)=\Delta _{s}(x)=-B_{s}\frac{x}{\xi _{s}(T)},~-\xi _{S}(T)\ll
x\lesssim 0, \label{FisAs1}$$$$\Phi _{s}(x)=\Delta _{s}(x)=\Delta _{s}(-d_{s}),~-d_{s}\lesssim x\ll
-d_{s}+\xi _{S}(T), \label{Fisas2}$$where $B_{S},$ $B_{s},$ and $\Delta _{s}(-d_{s})$ are constants independent of $x$. Substitution of the solutions (\[FiSas\]) - (\[Fisas2\]) into (\[BCIFmod\]), (\[Rel1\]), (\[Rel2\]) gives$$\Gamma _{BI}\xi _{S}(T)\frac{d}{dx}\Phi _{s}(-d_{s})=\Delta
_{s}(-d_{s})-\Delta _{0}, \label{next1}$$$$\Phi _{S}(d_{F})=\frac{B_{s}}{\Gamma \sqrt{\Theta }\sinh \frac{d_{F}\sqrt{\widetilde{\Omega }}}{\xi _{F}}}+\frac{B_{S}\cosh \frac{d_{F}\sqrt{\Theta }}{\xi _{F}}}{\Gamma \sqrt{\Theta }\sinh \frac{d_{F}\sqrt{\Theta }}{\xi _{F}}},~
\label{next2}$$$$\Phi _{s}(0)=\frac{B_{s}\cosh \frac{d_{F}\sqrt{\Theta }}{\xi _{F}}}{\Gamma
\sqrt{\widetilde{\Omega }}\sinh \frac{d_{F}\sqrt{\Theta }}{\xi _{F}}}+\frac{B_{S}}{\Gamma \sqrt{\Theta }\sinh \frac{d_{F}\sqrt{\Theta }}{\xi _{F}}},
\label{next3}$$$$\Gamma _{BI}=\frac{\gamma _{BI}\xi _{S}}{\xi _{s}(T)},~\Gamma =\frac{\gamma
_{BI}\xi _{s}(T)}{\xi _{S}}. \label{GammasB}$$From the definition (\[BCGL\]), (\[SigmaGL\]) of the coefficients $b(z)$ and the expressions (\[next1\]) - (\[next3\]) it follows that $$\Gamma _{BI}\xi _{s}(T)\frac{d}{dx}\Delta _{s}(-d_{s})=-\left( \Delta
_{0}-\Delta _{s}(-d_{s})\right) , \label{BCGLF1}$$$$\xi _{s}(T)\frac{d}{dx}\Delta _{s}(0)=-\frac{q+p}{2}\Gamma \Delta _{s}(0)-\frac{q-p}{2}\Gamma \Delta _{S}(d_{F}), \label{BCGLF2}$$$$\xi _{S}(T)\frac{d}{dx}\Delta _{S}(d_{F})=\frac{q+p}{2}\Gamma \Delta
_{S}(d_{F})+\frac{q-p}{2}\Gamma \Delta _{s}(0), \label{BCGLF3}$$where$$\begin{aligned}
p^{-1} &=&\frac{8}{\pi ^{2}}\operatorname{\mathop{\mathrm{Re}}}\sum_{\omega =0}^{\infty }\frac{1}{\Omega ^{2}\sqrt{\widetilde{\Omega }}\coth \frac{d_{F}\sqrt{\widetilde{\Omega }}}{2\xi
_{F}}}, \label{Sums} \\
q^{-1} &=&\frac{8}{\pi ^{2}}\operatorname{\mathop{\mathrm{Re}}}\sum_{\omega =0}^{\infty }\frac{1}{\Omega ^{2}\sqrt{\widetilde{\Omega }}\tanh \frac{d_{F}\sqrt{\widetilde{\Omega }}}{2\xi
_{F}}}. \label{Sums1}\end{aligned}$$
In the considered limit both suppression parameters are large, $\Gamma _{BI}\gg 1$ and $\Gamma \gg 1$, and from the relations (\[BCIFmod\]), (\[Rel1\]), (\[Rel2\]), in the first approximation in these parameters, we get that the boundary conditions (\[BCGLF1\]) - (\[BCGLF3\]) can be simplified to $$\xi _{S}(T)\frac{d}{dx}\Delta _{s}(-d_{s})=0,~\Delta _{s}(0)=0,~\Delta
_{S}(d_{F})=0. \label{BCGLs}$$Taking into account that in this approximation the supercurrent $j=0$ and $\Delta _{S}(\infty )=\Delta _{0}$, from (\[GLeq\]), (\[BCGLs\]) it follows that $$\Delta _{S}(x)=\delta _{S}(x)\exp \left\{ i\varphi \right\} ,~\delta
_{S}(x)=\Delta _{0}\tanh \frac{x-d_{F}}{\sqrt{2}\xi _{S}(T)}, \label{SolGL1}$$while $$\Delta _{s}(x)=\delta _{s}(x)\exp \left\{ i\chi \right\} , \label{SolGL2}$$where $\delta _{s}(x)$ is the solution of transcendental equation $$F\left( \frac{\delta _{s}(x)}{\delta _{s}(-d_{s})},\frac{\delta _{s}(-d_{s})}{\Delta _{0}\eta }\right) =-\frac{x\eta }{\sqrt{2}\xi _{s}(T)},~\eta =\sqrt{2-\frac{\delta _{s}^{2}(-d_{s})}{\Delta _{0}^{2}}} \label{EqDsmall}$$and $\delta _{s}(-d_{s})$ is a solution of the same equation at the SIs boundary $x=-d_{s}$ $$K\left( \frac{\delta _{s}(-d_{s})}{\Delta _{0}\eta }\right) =\frac{d_{s}\eta
}{\sqrt{2}\xi _{s}(T)}. \label{EqDsmallB}$$Here $F(y,z)$ and $K(z)$ are the incomplete and complete elliptic integrals of the first kind, respectively.
Substitution of (\[SolGL1\]), (\[SolGL2\]) into (\[BCGLF1\]) - (\[BCGLF3\]) gives, in the next approximation in $\Gamma _{BI}^{-1}$ and $\Gamma ^{-1}$,$$J(-d_{s})=J_{G}\frac{\delta _{s}(-d_{s})}{\Gamma _{BI}\Delta _{0}}\sin
\left( \chi \right) \label{JatIS}$$$$J(0)=J(d_{F})=J_{G}\frac{\Gamma (p-q)}{2\Delta _{0}^{2}}\delta _{s}(0)\delta
_{S}(d_{F})\sin \left( \varphi -\chi \right) , \label{JatSF}$$where $$\delta _{s}(0)=-\frac{2b\left( q-p\right) \cos \left( \varphi -\chi \right)
+2a\left( q+p\right) }{\Gamma \left[ \left( q+p\right) ^{2}-\left(
q-p\right) ^{2}\cos ^{2}\left( \varphi -\chi \right) \right] },
\label{delta(0)}$$$$\delta _{S}(d_{F})=\frac{2b\left( q+p\right) +2a\left( q-p\right) \cos
\left( \varphi -\chi \right) }{\Gamma \left( \left( q+p\right) ^{2}-\left(
q-p\right) ^{2}\cos ^{2}\left( \varphi -\chi \right) \right) },
\label{delta(df)}$$are the magnitudes of the order parameters at the sF and FS interfaces, respectively, and $$a=-\delta _{s}(-d_{s})\sqrt{1-\frac{\delta _{s}^{2}(-d_{s})}{2\Delta _{0}^{2}}},~b=\frac{\Delta _{0}}{\sqrt{2}}. \label{derivaties}$$The phase $\chi $ of the order parameter of the s layer is determined from the equality of the currents (\[JatIS\]), (\[JatSF\]).
A. A. Golubov, M. Yu. Kupriyanov, E. Il’ichev, Rev. Mod. Phys. **76**, 411 (2004).
A. I. Buzdin, Rev. Mod. Phys. **77**, 935 (2005).
F. S. Bergeret, A. F. Volkov, K. B. Efetov, Rev. Mod. Phys. **77**, 1321 (2005).
S. Oh, D. Youm, and M. Beasley, Appl. Phys. Lett. **71** 2376, (1997).
R. Held, J. Xu, A. Schmehl, C.W. Schneider, J. Mannhart, and M. Beasley, Appl. Phys. Lett. **89**, 163509 (2006).
C. Bell, G. Burnell, C. W. Leung, E. J. Tarte, D.-J. Kang, and M. G. Blamire, Appl. Phys. Lett. **84**, 1153 (2004).
E. Goldobin , H. Sickinger , M. Weides , N. Ruppelt , H. Kohlstedt , R. Kleiner , D. Koelle, arXiv:1306.1683 (2013).
V. V. Bol’ginov, V. S. Stolyarov, D. S. Sobanin, A. L. Karpovich, and V. V. Ryazanov, Pis’ma v ZhETF **95**, 408 (2012) \[JETP Lett. **95** 366 (2012)\].
V. V. Ryazanov, V. V. Bol’ginov, D. S. Sobanin, I. V. Vernik, S. K. Tolpygo, A. M. Kadin, O. A. Mukhanov, Physics Procedia **36**, 35 (2012).
T. I. Larkin, V. V. Bol’ginov, V. S. Stolyarov, V. V. Ryazanov, I. V. Vernik, S. K. Tolpygo, and O. A. Mukhanov, Appl. Phys. Lett. **100**, 222601 (2012).
I. V. Vernik, V. V. Bol’ginov, S. V. Bakurskiy, A. A. Golubov, M. Yu. Kupriyanov, V. V. Ryazanov and O. A. Mukhanov, IEEE Tran. Appl. Supercond., **23**, 1701208 (2013).
S. V. Bakurskiy, N. V. Klenov, I. I. Soloviev, V. V. Bol’ginov, V. V. Ryazanov, I. I. Vernik, O. A. Mukhanov, M. Yu. Kupriyanov, and A. A. Golubov, Appl. Phys. Lett. **102**, 192603 (2013).
T. Ortlepp, Ariando, O. Mielke, C. J. M. Verwijs, K. F. K. Foo, H. Rogalla, F. H. Uhlmann, and H. Hilgenkamp, Science **312**, 1495 (2006).
V. Ryazanov, Uspechi Fizicheskich Nauk **169**, 920 (1999) \[Physics-Uspekhi **42**, 825 (1999)\].
A.K. Feofanov, V.A. Oboznov, V.V. Bol’ginov, *et. al*., Nature Physics **6**, 593 (2010).
A. V. Ustinov and V. K. Kaplunenko, J. Appl. Phys. **94**, 5405 (2003).
A. Buzdin and A. E. Koshelev, Phys. Rev. B **67**, 220504(R) (2003).
A. E. Koshelev, Phys. Rev. B **86**, 214502 (2012).
N. G. Pugach, E. Goldobin, R. Kleiner, and D. Koelle, Phys. Rev. B **81**, 104513 (2010).
E. Goldobin, D. Koelle, R. Kleiner, and R.G. Mints, Phys. Rev. Lett, **107**, 227001 (2011).
H. Sickinger, A. Lipman, M Weides, R.G. Mints, H. Kohlstedt, D. Koelle, R. Kleiner, and E. Goldobin, Phys. Rev. Lett, **109**, 107002 (2012).
S. V. Bakurskiy, N. V. Klenov, T. Yu. Karminskaya, M. Yu. Kupriyanov, and A. A. Golubov, Supercond. Sci. Technol. **26**, 015005 (2013).
D. M. Heim, N. G. Pugach, M. Yu. Kupriyanov, E. Goldobin, D. Koelle, and R. Kleiner, J. Phys.Cond. Mat. **25**, 215701 (2013).
M. Alidoust, J. Linder, Phys. Rev. B **87**, 060503 (2013).
Jun-Feng Liu, K.S. Chan, Phys. Rev. B **82**, 184533 (2010).
V. V. Ryazanov, V. A. Oboznov, A. Yu. Rusanov, A. V. Veretennikov, A. A. Golubov, and J. Aarts, Phys. Rev. Lett. **86**, 2427 (2001).
V. A. Oboznov, V. V. Bol’ginov, A. K. Feofanov, V. V. Ryazanov, and A. I. Buzdin, Phys. Rev. Lett. **96**, 197003 (2006).
T. Kontos, M. Aprili, J. Lesueur, F. Genet, B. Stephanidis, and R. Boursier, Phys. Rev. Lett. **89**, 137007 (2002).
M. Weides, M. Kemmler, H. Kohlstedt, A. Buzdin, E. Goldobin, D. Koelle, R. Kleiner, Appl. Phys. Lett. **89**, 122511 (2006).
M. Weides, M. Kemmler, H. Kohlstedt, R. Waser, D. Koelle, R. Kleiner, and E. Goldobin Physical Review Letters **97**, 247001 (2006).
F. Born, M. Siegel, E. K. Hollmann, H. Braak, A. A. Golubov, D. Yu. Gusakova, and M. Yu. Kupriyanov, Phys. Rev. B. **74**, 140501 (2006).
J. Pfeiffer, M. Kemmler, D. Koelle, R. Kleiner, E. Goldobin, M. Weides, A. K. Feofanov, J. Lisenfeld, and A. V. Ustinov, Physical Review B **77**, 214506 (2008).
A. S. Vasenko, A. A. Golubov, M. Yu. Kupriyanov, and M. Weides, Phys.Rev.B, **77**, 134507 (2008).
A. S. Vasenko, S. Kawabata, A. A. Golubov, M. Yu. Kupriyanov, C. Lacroix, F. W. J. Hekking, Phys. Rev. B **84**, 024524 (2011).
K. D. Usadel, Phys. Rev. Lett. **25**, 507 (1970).
M. Yu. Kuprianov and V. F. Lukichev, Zh. Eksp. Teor. Fiz.**94**, 139 (1988) \[Sov. Phys. JETP **67**, 1163 (1988)\].
Z. G.Ivanov, M. Yu. Kupriyanov, K. K. Likharev, S. V. Meriakri, and O. V. Snigirev, Fiz. Nizk. Temp. **7**, 560 (1981). \[Sov. J. Low Temp. Phys. **7**, 274 (1981)\].
A. A.Zubkov, and M. Yu. Kupriyanov, Fiz. Nizk. Temp. **9**, 548 (1983) \[Sov. J. Low Temp. Phys. **9**, 279 (1983)\].
M. Yu. Kupriyanov, Pis’ma v ZhETF **56**, 414 (1992) \[JETP Lett. **56**, 399 (1992)\].
A.A. Golubov and M.Y. Kupriyanov, Pis’ma v ZhETF **81**, 419 (2005) \[JETP Lett. **81**, 335 (2005)\].
A.A. Golubov, M.Y. Kupriyanov, and Y.V. Fominov,.Pis’ma v ZhETF **75**, 709 (2002). \[JETP Lett. **75**, 588 (2002)\].
R. Meservey and P. M. Tedrow, Phys. Rep. **238**, 173 (1994).
E. A. Koshina, V. N. Krivoruchko, Metallofizika i Noveishie Tekhnologii, **35**, 45 (2013).
L. S. Uspenskaya, A. L. Rakhmanov, L. A. Dorosinskiy, A. A. Chugunov, V. S. Stolyarov, O. V. Skryabina, and S. V. Egorov, Pis’ma v ZhETF **97**, 176 (2013) \[JETP Lett. **97**, 155 (2013)\].
F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Phys. Rev.Lett. **86**, 3140 (2001).
Ya. V. Fominov, N. M. Chtchelkatchev, and A. A. Golubov, Phys. Rev. B **66**, 014507 (2002).
T. Yu. Karminskaya and M. Yu. Kupriyanov, Pis’ma v ZhETF **85**, 343 (2007) \[JETP Lett. **85**, 286 (2007)\].
T. E. Golikova, F. Hübler, D. Beckmann, I. E. Batov, T. Yu. Karminskaya, M. Yu. Kupriyanov, A. A. Golubov and V. V. Ryazanov, Phys. Rev. B **86**, 064416 (2012).
Introduction
============
The understanding, description and control of structures at the nanometer scale is a subject of interest from both the fundamental and applied points of view [@generale; @revue]. From the fundamental point of view, there is a large literature [@noziere; @pimpinelli] concerning the growth of crystals and their shape. Yet, while the description of the equilibrium shape is rather clear, the dynamic description of crystal growth is still not well understood. In particular, we lack a complete understanding of the time scales involved in the relaxation process, and of the mechanisms which irreversibly drive the island to its equilibrium shape.
In this work, we study the shape relaxation of two-dimensional islands by boundary diffusion at low temperatures. The typical size of the islands we will be concerned with is a few thousand atoms or molecules, corresponding to islands of a few nanometers. The model we consider is the same as the one studied in [@eur_physB], where two mechanisms of relaxation, depending on temperature, were pointed out: at high temperatures, the classical theory developed by Herring, Mullins and Nichols [@nichols] appears to describe the relaxation process adequately. In particular, it predicts that the relaxation time scales as the number of atoms to the power $2$. However, at low temperatures, the islands spend long times in fully faceted configurations, suggesting that the limiting step of the relaxation in this situation is the nucleation of a new row on a facet. This assumption leads to the correct scaling behavior of the relaxation time with the size of the island, as well as to the correct temperature dependence. Yet, it is unclear what drives the island towards equilibrium in this scenario.
In this paper we propose a detailed description of this low temperature relaxation mechanism, and identify the event that drives the island towards its equilibrium shape. Based on our description, we construct a Markov process from which we can estimate the duration of each stage of the relaxation process. Finally, we use our result to determine the relaxation time of the islands and compare with simulation results.
The specific model under consideration consists of 2D islands having a perfect triangular crystalline structure. A very simple energy landscape for activated atomic motion was chosen, the aim being to point out the basic mechanisms of relaxation, and not to fit the specific behavior of a particular material. The potential energy $E_p$ of an atom is assumed to be proportional to the number $i$ of neighbors, and the [*kinetic barrier*]{} $E_{act}$ for diffusion is also proportional to the number of [*initial*]{} neighbors before the jump, regardless of the [*final*]{} number of neighbors: $E_{act}=- E_p = i*E $ where $E$ sets the energy scale ($E=0.1$ eV throughout the paper). Therefore, the probability $p_i$ per unit time that an atom with $i$ neighbors moves is $p_i = \nu_0 \exp[-i*E/k_bT]$, where $\nu_0= 10^{13} s^{-1}$ is the Debye frequency, $k_B$ is the Boltzmann constant and $T$ the absolute temperature. Hence, the average time after which a particle with $i$ neighbors moves is given by: $$\tau_i= \nu_0^{-1} \exp[i*E/k_bT] \label{taui}$$ The complete description of the model and of the simulation algorithm can be found in [@eur_physB], where it was studied using standard Kinetic Monte Carlo simulations. This simple kinetic model has only [*one*]{} parameter, the ratio $E/k_B T$. The temperature was varied from $83$ K to $500$ K, and the number of atoms in the islands from $90$ up to $20000$. The initial configurations of the islands were elongated (same initial aspect ratio of about 10), and the simulations were stopped when the islands were close to equilibrium, with an aspect ratio of 1.2. The time required for this to happen was defined as the relaxation time corresponding to that island size and temperature. Concerning the dependence of the relaxation time on the size of the island, two different behaviors depending on temperature were distinguished [@eur_physB]. At high temperature, the relaxation time scaled as the number of atoms to the power $2$, but this exponent decreased when the temperature was decreased. A careful analysis showed that the exponent tends towards $1$ at low temperature. The dependence of the relaxation time on temperature also changes: the activation energy was 0.3 eV at high temperature and 0.4 eV at low temperature. In this context, it is important to define what we call a low temperature: following [@eur_physB], we denote by $L_c$ the average distance between kinks on an infinite facet, and we define the low temperature regime as that in which $L_c \gg L$, where $L$ is the typical size of our island; large facets are then visible on the island. It was shown that $L_c=\frac{a}{2} \exp(\frac{E}{2k_bT})$ where $a$ is the lattice spacing.\
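To give an idea of the orders of magnitude involved, the elementary times (\[taui\]) and the kink distance $L_c$ can be evaluated directly; the short Python sketch below uses the model parameters quoted above ($E=0.1$ eV, $\nu_0=10^{13}$ s$^{-1}$) and the standard value of the Boltzmann constant in eV/K.

```python
import numpy as np

kB = 8.617e-5        # Boltzmann constant in eV/K
E = 0.1              # bond energy in eV, as in the text
nu0 = 1.0e13         # Debye frequency in s^-1

def tau(i, T):
    """Average time (in seconds) before an atom with i neighbors moves, Eq. (taui)."""
    return np.exp(i * E / (kB * T)) / nu0

def L_c(T, a=1.0):
    """Average distance between kinks on an infinite facet, L_c = (a/2) exp(E/(2 kB T))."""
    return 0.5 * a * np.exp(E / (2.0 * kB * T))

for T in (83.0, 250.0, 500.0):
    print(f"T={T:5.0f} K  tau_2={tau(2, T):9.2e} s  tau_3={tau(3, T):9.2e} s  L_c={L_c(T):8.1f} a")
```

At the lowest temperatures of the simulations $L_c$ reaches several hundred lattice spacings, larger than the typical island sizes considered here, which corresponds to the low temperature regime defined above.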
The behavior of the relaxation time as a function of the temperature and $N$, the number of particles of the island, can be summed up with two equations corresponding to the high and low temperature regimes: $$\begin{aligned}
t^{HT}_{relaxation}& \propto & \exp[3E/k_bT] N^2 \; \mbox{for} \; N \gg L_c^2 \label{teqHT} \\
t^{LT}_{relaxation}& \propto & \exp[4E/k_bT] N \; \mbox{for} \; N \ll L_c^2 \label{teqLT}\end{aligned}$$ Replacing the temperature-dependent factors by a function of $N_c$, the crossover island size (where $N_c=L_c^2 \propto \exp(E/k_bT)$), these two laws can be expressed as a unique scaling function depending on the rescaled number of particles $N/N_c$: $$t_{relaxation } \propto \left\{
\begin{array}{ll}
N_c^{5} \left(\frac{N}{N_c}\right)^2 \;
& \mbox{for} \;\frac{N}{N_c} \gg 1 \\
N_c^{5} \frac{N}{N_c} \;
& \mbox{for} \;\frac{N}{N_c} \ll 1
\end{array} \right.$$ Thus the relaxation time [@note] is a simple monotonic function of $N/N_c$, and the temperature dependence is contained in $N_c$.\
We will now focus on the precise microscopic description of the limiting step for relaxation in the low temperature regime.
Description of the limiting process at low temperature
======================================================
Qualitative description
-----------------------
During relaxation at low temperature, islands are mostly in fully faceted configurations. Let us, for instance, consider an island in a simple configuration given by Fig. \[island\]. When $L$ is larger than $l$, the island is not in its equilibrium shape (which should be more or less a regular hexagon). To reach the equilibrium shape, matter has to flow from the “tips” of the island (facets of length $l$ in this case) to the large facets $L$. In this low temperature regime there are very few mobile atoms at any given time; therefore this mass transfer must be done step by step: the initial step is the nucleation of a “germ” of two bound atoms on a facet of length $L$, followed by the growth of this germ up to a size $L-1$ due to the arrival of particles emitted from the kinks and corners of the boundary of the island. Thus the germ grows, and eventually completes a new row on the facet.
This simple picture still leaves a basic question unanswered: the relatively faster formation of a new row on a small facet would lead the island further away from its equilibrium shape, and yet, we observe that this never happens. Indeed, sometimes a germ appears on a small facet but it eventually disappears afterwards, whereas the appearance of a germ on a large facet frequently leads to the formation of a new row, taking the island closer to its equilibrium shape.
These observations are at the root of the irreversible nature of the relaxation: germs only grow and become stable on the large facets, so the island can only evolve towards a shape closer to equilibrium. Yet, there is clearly no local drive for growth on large facets nor any mechanism inhibiting growth on small ones. In order to explain how this irreversibility comes about, we propose the following detailed description of the mechanism of nucleation and growth of a germ.
First, to create a germ, 2 atoms emitted from the corners of the island have to meet on a facet. The activation energy required for this event is obviously independent of whether it occurs on a large facet or on a small facet. Once there is a germ of 2 atoms on a facet, the total energy of the island [*does not*]{} change when a particle is transferred from a kink to the germ (3 bonds are broken, and 3 are created), see Fig. \[island2\]. Clearly the same is true if a particle from the germ is transferred to its site of emission or any other kink. Thus, germs can grow or shrink randomly without energy variations driving the process. An exception to this occurs if the particle that reaches the germ is the last one of a row on a facet; in that case, the energy of the system decreases by $1$ binding energy $E$. The island is then in a configuration from which it is extremely improbable to return to the previous configuration. For this to occur, a new germ would have to nucleate (and grow) on the original facet. This event is almost impossible in the presence of the kinks of the first growing germ, which act as traps for mobile atoms. Thus, when a germ nucleates on a facet, it can grow or shrink without changing the energy of the island, except if a complete row on a facet disappears, in which case it “stabilizes”.
The scenario above explains why no new rows appear on small facets: when a germ grows on a small facet, since atoms come either from a small or a large facet, no complete row of a facet can disappear during the germ’s growth, and thus the island never decreases its energy. On the other hand, when a germ grows on a large facet, it may grow or shrink, but if its size reaches the size of the small facet, the energy of the system decreases and the system has almost no chance to go back to its previous shape. We believe that this is the microscopic origin of irreversibility in the relaxation of this system. It should be stressed that this scenario for the growth and stabilization of germs is different from usual nucleation theory, where the germ has to overcome a free energy barrier [@nucleation] to become stable.
This microscopic description also shows that the limiting step for this “row by row” relaxation mechanism is actually the formation, on a large facet, of a germ of the size of the small facet. This fact allows us to estimate the duration of the limiting step at each stage of the relaxation process.
Quantitative description {#quanti_description}
--------------------------
Based on our description of the process, we propose a scheme for calculating the time required to form a stable germ, i.e. a germ of size $l$, on a facet of size $L$. As mentioned above, the appearance of this stable germ is the limiting step for the formation of a new row on that facet.
The idea is to describe the growth of the germ as a succession of different island states, and to calculate the probability and the time to go from one state to another in terms of the actual diffusive processes occurring on the island surface. These states form a Markov chain, the future evolution of the system being essentially determined by its current state, independently of its previous history.
As a further simplification, we consider a simple fully faceted island with an elongated hexagonal shape whose facets are of length $L$ and $l$, see Fig. \[island\]; moreover, we normalize every length by the lattice spacing $a$.
The different states we consider are the following (see also Fig. \[diff\_state\]):\
- state $0$ : there is no particle on the facets.\
- state $1$ : one particle is on one end of a facet $L$.\
- state $2$ : 2 particles are on the facet $L$: one of them is on one end of the facet, and the other one has diffused from an end.\
- state $3$ : 2 particles are on the facet $L$ but they are bonded together.\
- state $4$ : 3 bonded particles are on the facet $L$.\
…\
- state $n$ : $n-1$ bonded particles are on the facet $L$.\
The goal of this calculation is to estimate the time to go from state $0$ to state $l+1$. We treat the problem as a discrete time Markov chain, the unit of time being $\tau_2$, the typical time for a particle with 2 neighbors to move. This time is in fact the smallest relevant time of the system, so that the discretization does not affect the results. In the following, the time $\tau_i$ is the average time for a particle with $i$ neighbors to move: $\tau_i=\nu_0^{-1} e^{\frac{iE}{kT}}$. For clarity, we will use the term [*time*]{} for the discrete time of the Markov chain, and the term [*real time*]{} for the time of the physical process. The obvious relation between the two time scales is: $time= \frac{real\ time}{\tau_2}$.\
We define the parameter $\rho = \frac{\tau_2}{\tau_3}$. In the limit of small temperature, $\rho$ is a very small quantity. Moreover, one can easily check that $\rho= \exp(-\frac{E}{kT})= \frac{2}{L_c^2}$, where $L_c$ is the average distance between kinks defined in the first section. Thus the condition $L_c \gg L$ (low temperature regime) can be written as $\sqrt{\rho} L \ll 1$, or $\rho L \ll 1/L < 1$.\
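As a small illustration of these definitions (not part of the original work; the values of $E/kT$ and $\nu_0$ below are arbitrary assumptions chosen for illustration), the characteristic times and the parameter $\rho$ can be evaluated as follows:

```python
import numpy as np

# Illustrative sketch: characteristic times tau_i = nu_0^{-1} exp(i E / kT)
# and the small parameter rho = tau_2 / tau_3 = exp(-E/kT) = 2 / L_c^2.
# E_over_kT and nu0 are arbitrary illustrative values, not fitted parameters.

def tau(i, E_over_kT, nu0=1.0):
    return np.exp(i * E_over_kT) / nu0

E_over_kT = 8.0
rho = tau(2, E_over_kT) / tau(3, E_over_kT)
L_c = np.sqrt(2.0 / rho)          # average distance between kinks
print(rho, L_c)                   # the low temperature regime requires L_c >> L
```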
Denote by $\alpha_i$ the probability for the system in state $i$ to stay in state $i$ at the following step, $p_i$ the transition probability for the system in state $i$ to go to state $i+1$, and $q_i$ the transition probability for the system to go to state $i-1$.\
We now have to evaluate the different quantities $\alpha_i,~p_i$ and $q_i$ in terms of the diffusive processes that take place on the island’s boundary.
We first evaluate the quantities $p_1,q_1$ and $\alpha_1$. Let us assume that the states $0$ and $2$ are absorbing; the average time $n_1$ needed to leave state $1$ then corresponds to the average real time a particle stays on a facet after starting at one of its ends, which act as traps. Since the particle performs a random walk, this time is readily calculated to be $L \tau_2$. So we should have : $$n_1=\frac{1}{1-\alpha_1}=L$$
Moreover, the probability to go to state $2$ is the probability that a new particle leaves a kink and reaches the facet while the first particle is still on the facet; we calculate this probability in Appendix \[appen\_1\], where we find: $$P = 1 - \left[\frac {2 \sinh(2 \sqrt{\rho} (L-1)) }{\sinh(2\sqrt{\rho}L) } -
\frac {\sinh(2\sqrt{\rho} (L/2-1)) }{\sinh(2\sqrt{\rho}L/2) } \right]
\label{P_exact_text}.$$ We expand expression Eq. \[P\_exact\_text\] for small $\rho$ keeping the first term : $$\begin{aligned}
P = 2 \rho (L-1) + o(\rho)\label{appen_1_final}\end{aligned}$$ We could also calculate the probability $b_1$, that the system in state $1$ eventually reaches state $2$ as: $$b_1=\alpha_1 b_1 + p_1.$$ Thus $b_1 = \frac{p_1}{1-\alpha_1}$, and using $p_1+\alpha_1+q_1 = 1 $ we find: $$\begin{aligned}
p_1&=& \frac{P}{L} \simeq 2 \rho \frac{L-1}{L} + o(\rho) \label{p1}\\
\alpha_1 &=& 1 - 1/L \label{a1}\\
q_1 &=& \frac{1}{L} - \frac{P}{L} \simeq \frac{1}{L} - 2 \rho \frac{L-1}{L} + o(\rho) \label{q1} \end{aligned}$$ We can do the same with state $2$, knowing that the probability for two particles to stick (state $3$) is $\lambda/L$, where $ \lambda =
\left(\frac{\cosh \pi +1 } { \sinh \pi} \right)\pi$ (the calculation of this probability is carried out in appendix \[appen\_11\]). The time to leave state $2$ is $\kappa L$, where $\kappa$ is a numerical constant given by $ \kappa = \frac{4}{\pi^2} \sum_{k=1}^{\infty}
\frac{1}{1+4k^2} \left[ 2 - \frac{1}{(2k+1)} \right]$ (see appendix \[appen\_12\]). Thus, we obtain: $$\begin{aligned}
p_2 &=& \frac{\lambda}{\kappa L^2} \label{p2}\\
\alpha_2 &=& 1 - \frac{1}{\kappa L} \label{a2}\\
q_2 &=& \frac{1}{\kappa L} - \frac{\lambda}{\kappa L^2} \label{q2}\end{aligned}$$
In order to obtain a chain which can be treated analytically, we assume that the probability to go from state 3 to state 4 is the same as the probability to go from state 4 to state 5 and, in general, that the probabilities to go from state $i$ to state $i+1$ are $p_i =
p_{i+1}=p$ for $i \ge 3$. Similarly, we assume that $\alpha_i=\alpha_{i+1}= \alpha$ for $i \ge 3$ and $q_i=q_{i+1} = q$ for $i > 3$.
To calculate the probabilities $p,q,\alpha$, we have calculated in appendix \[appen\_2\] the average real time $t_{p,q,\alpha}$ to go from state $i$ to state $i+1$, assuming the average distance between the kinks and the germ is $L/2$: $t_{p,q,\alpha}=\frac{L}{4}\tau_3+ \frac{L(L-2)}{4}\tau_2$. Moreover, since $p=q$, we can calculate $p$, $q$ and $\alpha$ : $$\begin{aligned}
p &=& 2 \rho / L - \frac{2(L-2)}{L}\rho^2 + o(\rho^2)\label{pk}\\
\alpha &=& 1 - \frac{4 \rho}{L}+ \frac{4(L-2)}{L}\rho^2 + o(\rho^2) \label{ak} \\
q &=& 2 \rho / L - \frac{2(L-2)}{L}\rho^2 + o(\rho^2)\label{qk}\end{aligned}$$
When 2 particles are bonded on the facet (state 3), the probability to go to state $2$ should practically be equal to $q$; we will assume this to be the case.
So far, we have omitted the possibility that the germ can also nucleate on a small facet $l$. To take this into account, we have to consider new states :\
- state $-1$: one particle is on the facet $l$ on one of its edges.\
- state $-2$: 2 particles are on the facet $l$: one of them is on the edge of the facet, and the other has diffused from an edge\
- state $-3$: 2 particles are on the facet $l$ but they are bonded.\
- state $-4$: 3 bonded particles are on the facet $l$.\
…\
- state $-l$: $l-1$ bonded particles are on the facet $l$.\
As discussed above, if the system arrives at state $-l$, a row on the small facet is completed; this is not an absorbing state, and since this row cannot grow any further, the system can only go back to state $-l+1$ or stay in state $-l$.
The different probabilities of transition from one state to another in this branch of the chain are the same as the ones calculated before, replacing $L$ by $l$. Thus we have $q_{-i}=p_{i}(L \Rightarrow l)$, $p_{-i}=q_{i}(L \Rightarrow l)$ and $\alpha_{-i}=\alpha_{i}(L \Rightarrow l)$ for $i \geq 1$, except for the state $-l$, where we still have $\alpha_{-l}=\alpha(L \Rightarrow l)$ but $p_{-l}=2q(L \Rightarrow l)$. In the following, we will use the notation $p_i^*=p_{i}(L \Rightarrow l)$ for the values of $p_i$ where we have replaced $L$ by $l$.
To complete the calculation, we now have to determine $p_0,q_0$ and $\alpha_0$. The average real time the system stays in state $0$, assuming states $1$ and $-1$ are absorbing, is almost $\tau_3/2$ if we take into account that there are [*two*]{} kinks, one at each end of the facet. The probability that the germ nucleates on the facet $L$ is simply $\frac{L}{L+2l}$. From this we deduce: $$\begin{aligned}
p_0&=&2\frac{L}{L+2l} \rho \label{p0}\\
\alpha_0&=&1-2\rho \label{a0} \\
q_0&=&2\frac{2l}{L+2l} \rho \label{q0}\end{aligned}$$
The diagram of the entire Markov chain is then given in Fig. \[chaine2\]; we now calculate the time to go from state $0$ to state $l+1$.\
The state $l+1$ is absorbing (as discussed above, when the size of the germ reaches [*l*]{} on a large facet, the system cannot go back to the initial state). Let us call $n_i$ the average time to go from state $i$ to state $l+1$. We can write : $$\begin{aligned}
n_{-l}&=& 1+ 2q^* n_{-l+1} + \alpha^* n_{-l} \label{eq-l} \\
n_{-k} &=& 1 + q^* n_{-k-1} + \alpha^* n_{-k} + p^* n_{-k+1} \label{eq-k}\\
with & & 3 \le k \le l-1 \\
&...& \nonumber \\
n_{-2} &=& 1 + q_2^* n_{-1} + \alpha_2^* n_{-2} + p_2^* n_{-3} \label{eq-2}\\
n_{-1} &=& 1 + q_1^* n_0 + \alpha_1^* n_{-1} + p_1^* n_{-2} \label{eq-1} \\
n_0 &=& 1 + q_0 n_{-1} + \alpha_0 n_0 + p_0 n_1 \label{eq0} \\
n_1 &=& 1 + q_1 n_0 + \alpha_1 n_1 + p_1 n_2 \label{eq1} \\
n_2 &=& 1 + q_2 n_1 + \alpha_2 n_2 + p_2 n_3 \label{eq2}\\
&...& \nonumber \\
n_k &=& 1 + q n_{k-1} + \alpha n_k + p n_{k+1} \label{eqk}\\
with & & 3 \le k \le l \end{aligned}$$ The boundary condition for this process is $n_{l+1}=0$. The calculation of $n_0$ is straightforward but tedious; it is carried out in appendix \[appen\_3\]. We find that in the limit of small temperatures, the typical real time needed to nucleate a germ of size $l$ on a facet of size $L$ is given by: $$\tau(L,l) \approx \frac{\tau_3^2}{\tau_2} \frac{(L+2l)}{4L(L-1)}
\left[ (\frac{L}{\lambda}-1)(l-1)+1 \right]$$ where we have kept only the most relevant term at low temperatures.
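To make this result concrete, the following numerical sketch (Python; not part of the original work) assembles the leading-order transition probabilities of Eqs. \[p1\]-\[q0\] for an illustrative choice of $L$, $l$ and $\rho$, solves the linear system for the mean absorption times $n_i$, and compares $n_0$ with the asymptotic expression above. The constants $\lambda$ and $\kappa$ are those of appendices \[appen\_11\] and \[appen\_12\].

```python
import numpy as np

# Sketch (not from the original work): build the Markov chain of Fig. [chaine2]
# with the leading-order probabilities Eqs. [p1]-[q0] and solve for the mean
# absorption times n_i (in units of tau_2).  L, l and rho are illustrative.

lam = np.pi * (np.cosh(np.pi) + 1.0) / np.sinh(np.pi)
kap = (4.0 / np.pi**2) * sum((2.0 - 1.0 / (2 * k + 1)) / (1.0 + 4 * k**2)
                             for k in range(1, 20000))

def probs(L, rho):
    """(p_1,q_1,a_1), (p_2,q_2,a_2), (p,q,a) for germ growth on a facet of length L."""
    p1 = 2 * rho * (L - 1) / L
    a1 = 1 - 1.0 / L
    q1 = 1.0 / L - p1
    p2 = lam / (kap * L**2)
    a2 = 1 - 1.0 / (kap * L)
    q2 = 1.0 / (kap * L) - p2
    p = 2 * rho / L - 2 * (L - 2) * rho**2 / L
    a = 1 - 4 * rho / L + 4 * (L - 2) * rho**2 / L
    return (p1, q1, a1), (p2, q2, a2), (p, p, a)      # p = q for i >= 3

def n0(L, l, rho):
    (p1, q1, a1), (p2, q2, a2), (p, q, a) = probs(L, rho)
    (p1s, q1s, a1s), (p2s, q2s, a2s), (ps, qs, as_) = probs(l, rho)
    p0 = 2 * rho * L / (L + 2 * l)
    q0 = 2 * rho * 2 * l / (L + 2 * l)
    a0 = 1 - 2 * rho
    states = list(range(-l, l + 1))                    # state l+1 is absorbing
    idx = {s: j for j, s in enumerate(states)}
    A = np.zeros((len(states), len(states)))
    b = np.ones(len(states))
    for s in states:
        if s == 0:
            pi, qi, ai = p0, q0, a0
        elif s == 1:
            pi, qi, ai = p1, q1, a1
        elif s == 2:
            pi, qi, ai = p2, q2, a2
        elif s >= 3:
            pi, qi, ai = p, q, a
        elif s == -1:                                  # small facet: p and q swapped
            pi, qi, ai = q1s, p1s, a1s
        elif s == -2:
            pi, qi, ai = q2s, p2s, a2s
        elif s == -l:                                  # row completed on the small facet
            pi, qi, ai = 2 * qs, 0.0, as_
        else:                                          # -l < s <= -3
            pi, qi, ai = qs, ps, as_
        A[idx[s], idx[s]] = 1 - ai
        if s - 1 in idx:
            A[idx[s], idx[s - 1]] = -qi
        if s + 1 in idx:
            A[idx[s], idx[s + 1]] = -pi                # n_{l+1} = 0 drops out for s = l
    return np.linalg.solve(A, b)[idx[0]]

L, l, rho = 20, 10, 1e-6
asymptotic = (L + 2 * l) / (4 * L * (L - 1)) * ((L / lam - 1) * (l - 1) + 1) / rho**2
print(n0(L, l, rho), asymptotic)   # should agree to leading order in rho
```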
Estimation of the Relaxation time
=================================
In this section we calculate the typical time required for an island to relax from an initial out of equilibrium shape. We assume that at all times, the instantaneous shape of the island can be characterized by the lengths $L$ and $l$ of its long and short facets respectively. Then, following the discussion in the previous sections, a new row of particles will appear on a long facet after a time $\tau(L,l)$. Thus, calling $v(L)$ the normal speed of the large facet and taking the particle size as our unit distance, we have:
$$v(L) \approx \frac{1}{\tau(L,l)} \approx \frac{4 \tau_2 L
(L-1)}{\tau_3^2 (L+2l)\left[ (\frac{L}{\lambda}-1)(l-1)+1 \right]}
\label{v(l)}$$
The scaling properties of the relaxation time can be deduced by noticing that the length scales involved scale as $N^{1/2}$, where N is the number of atoms of the island. Thus we renormalize the lengths by $x \rightarrow x'= N^{-1/2} x$. Then, to scale out the size dependence as $N$ grows, one must rescale time by $ t \rightarrow t'=t
N^{-1}$. This is the result obtained in [@eur_physB]: at low temperature, the relaxation time is proportional to the number of the atoms of the island. But, as we expect our results for the time required to complete a row at each stage, as given in Eq. \[n0\_exact\], to be relatively accurate, we can go beyond the scaling properties and use it to calculate numerically the time required for the complete relaxation process, including the corrections arising from the lower order terms.
In what follows, we establish the differential equations which permit the calculation of the full relaxation time of an island. As mentioned above, we still consider the simple island of Fig. \[island\], where $v(L)$ is the normal speed of the facet $L$, and $v(l)$ that of the facet $l$. We now consider $L$ and $l$ as continuous variables, which considerably simplifies the calculation.
Conservation of matter imposes the relation: $$Lv(L)+2lv(l)=0$$ Moreover, we can find geometric relations between $L,l,v(L),v(l)$: $$\begin{aligned}
v(L)&=&\sqrt{3}/2 \frac{dl}{dt} \\
v(l)&=&\sqrt{3}/4(\frac{dl}{dt} + \frac{dL}{dt} )\end{aligned}$$ So finally we find: $$\begin{aligned}
\frac{dl}{dt}&=&\frac{2}{\sqrt{3}} \frac{1}{\tau(L,l)} \label{eq_diff1}\\
\frac{dL}{dt}& =& - \frac{2}{\sqrt{3}} \left( \frac{L}{l}+1 \right) \frac{1}{\tau(L,l)}
\label{eq_diff2}\end{aligned}$$ To integrate these equations numerically, we use Eq. \[n0\_exact\], with the exact estimate of $p_1$, $q_1$ and $\alpha_1$ from Eq. \[P\_exact\], as well as the explicit values of the quantities $q_i,\alpha_i$ and $p_i$ found above, Eqs. \[p2\]-\[q0\]. We start the integration from an island of aspect ratio $R=10$, and stop it when the aspect ratio reaches $R=1.2$. Aspect ratios are explicitly calculated as: $$\begin{aligned}
R &=& \frac{r_x}{r_y} \\
r_x^2 &=&\frac{1}{S} \int \!\!\!\! \int_{Island \ Surface} (x-x_G)^2 dx dy \\
r_y^2 &=&\frac{1}{S}
\int \!\!\!\! \int_{Island \ Surface} (y-y_G)^2 dx dy \\
S &=& \int \!\!\!\! \int_{Island \ Surface} dx dy\end{aligned}$$ where $x_G$ and $y_G$ give the position of the center of gravity of the island. We report in Fig. \[integ\] the relaxation time as a function of $N$, the number of particles of the island, in a log-log plot.
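As an illustration, a minimal integration sketch is given below (Python/SciPy; not part of the original work). It uses only the leading-order expression for $\tau(L,l)$, sets $\tau_2=1$ and $\tau_3=1/\rho$, and, for simplicity, uses the facet-length ratio $L/l$ as a crude stand-in for the aspect ratio $R$ defined above; the initial facet lengths and the value of $\rho$ are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch (not from the original work): integrate Eqs. [eq_diff1]-[eq_diff2]
# with the leading-order nucleation time tau(L,l).  Units: tau_2 = 1, so
# tau_3 = 1/rho.  The ratio L/l is used as a crude proxy for the aspect ratio.

lam = np.pi * (np.cosh(np.pi) + 1.0) / np.sinh(np.pi)

def tau(L, l, rho):
    return (L + 2 * l) / (4 * L * (L - 1)) * ((L / lam - 1) * (l - 1) + 1) / rho**2

def rhs(t, y, rho):
    L, l = y
    dldt = 2.0 / np.sqrt(3.0) / tau(L, l, rho)
    dLdt = -2.0 / np.sqrt(3.0) * (L / l + 1.0) / tau(L, l, rho)
    return [dLdt, dldt]

def relaxed(t, y, rho):                 # stop when L/l reaches 1.2
    return y[0] / y[1] - 1.2
relaxed.terminal = True

rho = 1e-4
L0, l0 = 200.0, 20.0                    # elongated island, initial ratio L0/l0 = 10
sol = solve_ivp(rhs, (0.0, 1e14), [L0, l0], args=(rho,),
                events=relaxed, rtol=1e-8)
print("relaxation time ~", sol.t_events[0][0], "(in units of tau_2)")
```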
We find a good quantitative agreement between the simulations and our predictions except at the highest temperatures (see below).
Summary and Discussion
======================
We have considered the shape evolution of two dimensional islands as a result of the nucleation of a germ on a facet which then grows or shrinks due to single particle processes. We then recognized that the disappearance of a complete row is responsible for the stabilization of the germ, and that this is only feasible for germs growing on the large facets. This gives rise to an overall flux of particles from the small facets to the large facets which leads the shape of the island irreversibly towards equilibrium. Based on this description, we have recast the formation of stable germs, which is the limiting step for relaxation, into a Markov chain in which the transition probabilities are calculated in terms of the underlying diffusive processes taking place on the island’s boundary. Solving this chain yields an estimate of the time of formation of a new row at each stage of the relaxation. Integrating our results we can obtain the relaxation time for the evolution of an island from an aspect ratio of 10 to an aspect ratio of 1.2, as a function of temperature and island size. Our results are in rather good quantitative agreement with those obtained from direct simulations of the system. At higher temperatures, multiple nucleation processes, the presence of many mobile particles on the island’s boundary and the failure of our hypothesis $L_c \gg L$ (Sect. \[quanti\_description\]) invalidate our picture and the relaxation becomes driven by the coarse-grained curvature of the boundary, which leads to Mullins’ classical theory.
The description of the simple model considered in this work is certainly not exact and there are other effects that could be taken into account. Perhaps the most important effect we have overlooked at low temperatures is related to our assumption that after the germ stabilizes a single new row is formed on the large facet. The differential equations for the evolution of the island were derived from this assumption. It is clear that this is not correct: our estimation of the time required to stabilize a germ starts from a fully faceted configuration, and once a complete row on a small facet disappears the germ becomes stable and a full row on the large facet can be formed. Once this row is finished, it is very unlikely that the island will be in a fully faceted configuration again, leaving at least an extra kink on the boundary. This gives rise to extra sources and traps for mobile particles, which may affect the relaxation rate. Another issue is our characterization of the faceted island with only two facet sizes: a more detailed characterization may be relevant especially in the early stages of relaxation. It is clear that more accurate models for specific systems can also be constructed. These could take into account the dependence of the edge-diffusion coefficients on the orientation of the facet, as well as the dependence of emission rates on the local geometry. Such dependences have been studied, for example, in [@ferrando]. In terms of the elements of description we use, inclusion of these effects would be achieved by changing the values of $\tau_2$ (diffusion time) and $\tau_3$ (emission time from kinks and corners) depending on the orientation of the facets involved in each event. Thus, the nucleation time would depend on the facet upon which it happens. Such a dependence of the nucleation time may drive the island toward a non-regular hexagonal equilibrium shape [@michely], and would reproduce the phenomenology of a larger variety of materials. These changes will affect the temperature-dependent prefactors in our results, as these depend on the temperature through $\tau_2$ and $\tau_3$. However, the size dependence of the nucleation time and of the relaxation time, which is where the departure from Mullins’ theory is evidenced, would stay the same.\
From a more general point of view, only the diffusion of particles along the perimeter of islands has been taken into account in this work. In real systems, other mechanisms can contribute to the transport of matter which leads to relaxation: volume diffusion and transport through the two dimensional gas of particles surrounding the island. Volume diffusion is usually a much slower process than the other two, and can usually be neglected safely. On the other hand, it is well known that edge diffusion is more efficient for short trips whereas transport through the 2D gas is faster over long distances. Following Pimpinelli and Villain [@pimpinelli] (p. 132), a characteristic length $r_1$ beyond which edge diffusion is less efficient than transport through the gas can be evaluated as $r_1 \approx \sqrt{D_s \tau_v}$, where $D_s$ is the edge diffusion coefficient and $1/\tau_v$ is the probability per unit time that a given particle leaves the island. Our assumptions should thus be valid for islands with a number $N$ of particles such that $N \ll D_s \tau_v$. Since the activation energy for edge diffusion is smaller than the activation energy for evaporation, $r_1$ decreases with increasing temperature, and we expect $r_1$ to be very large at low temperature. Thus, this mechanism is essentially irrelevant in the description of the evolution of nanometer structures at low temperatures. Moreover, recent experimental results [@stoldt] have shown that supported Ag two dimensional islands relax via atomic diffusion on the island perimeter, without significant contribution from exchange with the two dimensional gas.\
Our results can also be compared with a recent theoretical study [@PRL] concerning the relaxation of three dimensional crystallites. This study also points out two relaxation regimes as a function of temperature. At high temperature the relaxation scales in accordance with the results derived from Mullins’ theory, whereas at low temperature the relaxation time becomes an exponential function of the size of the crystallites. Thus the effects of lowering the temperature are qualitatively different for two dimensional and three dimensional crystallites: in two dimensions, lowering the temperature weakens the dependence of the relaxation time on the size of the crystallites (as it crosses over from a $N^2$ dependence to a $N$ dependence), whereas it strengthens this dependence in three dimensions. In both cases, the limiting step is the nucleation of a germ on a facet: a one-dimensional germ in two dimensions, and a two dimensional germ in three dimensions. The difference stems from the fact that in the two dimensional case, the activation energy for the creation of the germ does not depend on the size of the island; it is always equal to $4E$, and the germ stabilizes when a row on a small facet has been removed. In the three dimensional case, this activation energy depends on the size of the crystallite. The transfer of a particle from a tip of the crystallite to the germ involves a gain in volume energy (depending on the size of the island) and a loss in edge energy of the germ (depending on the size of the germ). Summing these two terms, an energy barrier proportional to the size of the crystallite appears for the creation of a stable germ. The exponential behavior of the relaxation time as a function of $N$ is a consequence of this energy barrier.\
Finally, we believe that the description presented here, while still oversimplified, is complete enough to provide a general picture of the processes leading to the shape relaxation of two dimensional islands at low temperatures.\
We acknowledge useful discussions with P. Jensen, J. Wittmer and F. Nicaise. We are grateful to an anonymous referee for many useful comments. H.L. also acknowledges partial financial support from CONACYT and DGAPA-UNAM, and is thankful to Univ. Claude Bernard, Lyon 1, for the invitation during which part of this work was done.
Calculation of the probability to have 2 particles on the facet {#appen_1}
===============================================================
We calculate the probability $P$ of having 2 particles on a facet with absorbing boundaries, knowing that at time $t=0$, one particle is on one edge of the facet (abscissa $1$), and that the other particle can appear on the facet with a probability per unit time $1/\tau$. In terms of our Markov chain, this is the probability that the system in state $1$ eventually reaches state $2$.
We denote by $S(x,t)$ the probability that a particle on the facet at position x at time $t=0$ is still on the facet at time $t$. Then, $S(x,t)$ satisfies the usual diffusion equation: $$\frac{\partial S(x,t) } {\partial t} = D \frac{\partial^2 S(x,t) }
{\partial x^2}
\label{equa_diffu}$$ This equation is to be solved with the conditions: $S(x,0) = 1$ for every $x\in]0,L[$ (i.e. we are sure to find the particle on the facet at time $t=0$), and $S(0,t) = S(L,t) = 0$ for every $t$ (i.e. the two ends of the facet are absorbing). The solution of Equation \[equa\_diffu\] is $$S(x,t) = \sum_{n=0}^{+\infty} \frac{4}{(2n+1) \pi} \sin(
\frac{(2n+1)\pi x}{L} ) e^{- \frac{D \pi^2 (2n+1)^2t }{L^2}}.$$
To take into account the appearance of particles on the facet, we assume the process to be Poissonian so that the probability to have a particle appearing at time $t$ is $\frac{1}{\tau} e^{-\frac{t}{\tau}}$.
Thus the probability $P$ that two particles are on the facet is: $$P = \int_{0}^{\infty} \frac{1}{\tau} e^{-\frac{t}{\tau}} S(x,t) dt$$ To take into account that particles can appear on the facet from both of its ends, we take $\tau = \tau_3/2$, which holds at low temperatures. This leads to the expression: $$P = \frac{4\delta^2}{\pi} \sum_{n=0}^{+\infty} \frac{1}{2n+1} \:
\frac{\sin((2n+1)\chi)}{\delta^2 + (2n+1)^2}
\label{appen_Po}$$ where: $$\begin{aligned}
\delta^2 = \frac{2L^2}{D \pi^2 \tau_3} \label{delta}\\
\chi = \frac{\pi x}{L} \label{chi}\end{aligned}$$
Using formula [@gradstein] : $$\begin{aligned}
\zeta(\delta,\chi) &=& \sum_{k=1}^{+\infty} \frac{\cos(k\chi)}{\delta^2 + k^2}
\nonumber \\
& = & \frac{\pi}{2 \delta} \frac{ \cosh (\delta(\pi-\chi))}{\sinh
(\delta \pi)} - \frac{1}{2 \delta^2}\end{aligned}$$ we have : $$\begin{aligned}
\sum_{n=0}^{+\infty} \frac{1}{2n+1} \frac{\sin( (2n+1)\chi )}
{\delta^2 + (2n+1)^2} &=& \nonumber \\ \int_{0}^{\chi} \left[
\zeta(\delta,\chi') - \frac{1}{4} \zeta(\delta/4,2\chi') \right]
d\chi'\end{aligned}$$ So that, using Eqs. \[delta\], \[chi\], we finally find the following expression for $P$: $$P = 1 - \left[\frac {2 \sinh(2 \sqrt{\rho} (L-1)) }{\sinh(2\sqrt{\rho}L) } -
\frac {\sinh(2\sqrt{\rho} (L/2-1)) }{\sinh(2\sqrt{\rho}L/2) } \right]
\label{P_exact}$$ where $\rho=\frac{\tau_2}{\tau_3}$, and we have taken $x=1$ (the initial particle on the facet is at position $1$ at time $t=0$). We can expand Eq. \[P\_exact\] for small $\rho$, keeping the first term: $$\begin{aligned}
P = 2 \rho (L-1) + o(\rho)\end{aligned}$$
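A quick numerical check of this expansion (Python sketch, not part of the original work; the values of $L$ and $\rho$ are illustrative):

```python
import numpy as np

# Sketch: check that the closed form Eq. [P_exact] reduces to 2*rho*(L-1)
# when rho is small (L and rho are illustrative values).

def P_exact(L, rho):
    u = 2.0 * np.sqrt(rho)
    return 1.0 - (2.0 * np.sinh(u * (L - 1)) / np.sinh(u * L)
                  - np.sinh(u * (L / 2.0 - 1.0)) / np.sinh(u * L / 2.0))

L = 30.0
for rho in (1e-3, 1e-4, 1e-5):
    print(rho, P_exact(L, rho), 2 * rho * (L - 1))
```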
Probability that two particles stick on the face {#appen_11}
================================================
In this part we evaluate the probability $P_{\triangle}$ that two particles stick on a facet with absorbing boundaries, knowing that at time $t=0$, one particle is on one end of the facet, and the other is at a position $x_0$ on the facet. This problem can be mapped onto a 2-d problem in which, at time $t$, the first particle is at position $y$ and the second particle is at position $x$. The pair of coordinates then defines a virtual particle that moves diffusively in a square of side $L$, starting from position $(x_0,a)$ ($a$ is the lattice spacing). The quantity we are looking for is the probability for this virtual particle to reach the diagonal $y=x$ of the square. Thus we can consider the motion of the virtual particle in the triangle $ 0\leq x \leq y\leq L$. We call $D(x,y)$ the probability that a virtual particle starting at time $t=0$ from $(x,y)$ leaves the triangle by the diagonal, $V(x,y)$ the probability that this particle leaves the triangle by its vertical side, and $H(x,y)$ the probability that this particle leaves the triangle by its horizontal side. We use here a continuous description, the discrete problem being far too difficult. It can be easily seen that $D$, $V$ and $H$ satisfy Laplace equations: $$\begin{aligned}
\Delta D(x,y) =0 \quad \Delta V(x,y) =0 \quad \Delta H(x,y) =0 \end{aligned}$$ With the conditions : $$\begin{aligned}
D(x,x)=1 \quad D(x,0) = 0 \quad D(0,y) = 0 \\
V(x,x)=0 \quad V(x,0) = 0 \quad V(L,y) = 1 \\
H(x,x)=0 \quad H(x,0) = 1 \quad H(0,y) = 0 \\
\mbox{for} \quad\forall x \in [0,L] \quad \mbox{and}\quad \forall y \in [0,L] \nonumber\end{aligned}$$ Moreover, we should have: $$D(x,y) + V(x,y) + H(x,y) =1,
\label{probeq1}$$ which states that the particle is sure to leave the triangle since all sides are absorbing. Instead of calculating $D(x,y)$ directly, we will calculate $V(x,y)$ and $H(x,y)$.
We first calculate $V_{\Box}(x,y)$ and $H_{\Box}(x,y)$ which are the probability that one brownian particle in a square with absorbing sides, respectively leaves the square by the vertical side $x=L$ and by the horizontal side $y=0$. So we have : $$\Delta V_{\Box}(x,y) =0 \quad \Delta H_{\Box}(x,y) =0,
\label{VHbox}$$ with boundary conditions : $$\begin{aligned}
V_{\Box}(x,L)=0 \quad V_{\Box}(x,0)=0 \\
V_{\Box}(0,y)=0 \quad V_{\Box}(L,y)=1 \\
H_{\Box}(L,y)=0 \quad H_{\Box}(0,y)=0 \\
H_{\Box}(x,0)=1 \quad H_{\Box}(x,L)=0 \\
\mbox{for} \quad\forall x \in [0,L] \quad \mbox{and}\quad \forall y \in [0,L] \nonumber\end{aligned}$$ The solution of equations Eq. \[VHbox\], with these conditions is:
$$\begin{aligned}
H_{\Box}(x,y)& =& \frac{4}{\pi} \sum_{m=0}^{\infty} \frac{
\sin\frac{(2m+1)\pi x}{L}}{2m+1} \; \frac{ \sinh
\frac{(2m+1)\pi(L-y)}{L} } { \sinh (2m+1)\pi } \label{Hbox} \\
V_{\Box}(x,y)& = & \frac{4}{\pi} \sum_{m=0}^{\infty} \frac{
\sin\frac{(2m+1)\pi y}{L}}{2m+1} \; \frac{ \sinh \frac{(2m+1)\pi x}{L}
} { \sinh (2m+1)\pi } \label{Vbox}\end{aligned}$$
One can now deduce the values of $V(x,y)$ and $H(x,y)$ of our initial problem with a superposition of solutions imposing the conditions $V(x,x)=0$ and $H(x,x)=0$: $$\begin{aligned}
V(x,y) = V_{\Box}(x,y) - V_{\Box}(y,x) \label{vsol} \\
H(x,y) = H_{\Box}(x,y) - H_{\Box}(y,x) \label{hsol} \end{aligned}$$
Using Eq. \[probeq1\], \[vsol\], \[hsol\], \[Hbox\], \[Vbox\], we finally find an expression for $D(x_0,a)$ : $$\begin{aligned}
D(x_0,a) = 1& \label{D}\nonumber \\
- \frac{4}{\pi}\sum_{m=0}^{\infty} & \frac{ \sin\frac{(2m+1)\pi a}{L}}{2m+1} \left[ \frac{\sinh \frac{(2m+1)\pi x_0}{L} - \sinh \frac{(2m+1)\pi(L-x_0)}{L} } {\sinh (2m+1)\pi} \right]\nonumber \\
- \frac{4}{\pi}\sum_{m=0}^{\infty} &\frac{ \sin\frac{(2m+1)\pi x_0}{L}}{2m+1} \left[ \frac{\sinh \frac{(2m+1)\pi (L-a)}{L} - \sinh \frac{(2m+1)\pi a}{L} } {\sinh (2m+1)\pi} \right] \nonumber\end{aligned}$$
We now have to calculate the probability that the particle already present on the facet is at position $x_0$ when the second one appears. We denote this probability $P(x_0)$. Then the probability we are looking for is simply: $$P_{\triangle} = \int_{0}^{L} D(x_0,a) P(x_0) dx_0 \label{probDtot}$$ We are actually able to calculate the exact probability $P(x_0)$, but then we are not able to find a simple expression for $P_{\triangle}$, so we prefer to make the following approximation: in the limit of small temperature, the typical time needed for a particle to appear on the facet is about $\tau_3$, which is long compared to the typical time a particle remains on the facet (about $ L \tau_2$ for a particle that starts near the edge). Thus, to a good approximation, the probability $P(x_0)$ has reached its stationary value. Taking into account only the first term of the series, we have: $$P(x_0) = \frac{\pi}{2L} \sin \frac{\pi x_0}{L}
\label{P(x0)}$$ With this expression Eq. \[probDtot\] becomes easy to calculate and we find : $$\begin{aligned}
P_{\triangle} & = & 1 - \frac{ \sinh \frac{\pi (L-a)}{L} - \sinh
\frac{\pi a}{L} }{\sinh \pi} \label{Pprems}\\ & = & \left(\frac{\cosh
\pi +1 } { \sinh \pi} \right) \frac{\pi a}{L} - \frac{\pi^2 a^2}{2L^2}
+ o(1/L^2) \label{Pdeux}\end{aligned}$$ Thus the leading term of $P_{\triangle}$ is proportional to $1/L$, and we write it as: $$\begin{aligned}
P_{\triangle} = \lambda \frac{1}{L} + o(1/L^2) \label{appen11_final}\\
\mbox{with} \; \lambda = \left(\frac{\cosh \pi +1 } { \sinh \pi} \right)\pi\end{aligned}$$
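Numerically, the constant $\lambda$ and the large-$L$ behaviour of $P_{\triangle}$ can be checked with a few lines (Python sketch, not part of the original work, taking $a=1$):

```python
import numpy as np

# Sketch: the constant lambda of Eq. [appen11_final] and a check that
# P_triangle ~ lambda / L at large L (with a = 1).

lam = np.pi * (np.cosh(np.pi) + 1.0) / np.sinh(np.pi)

def P_triangle(L, a=1.0):
    return 1.0 - (np.sinh(np.pi * (L - a) / L) - np.sinh(np.pi * a / L)) / np.sinh(np.pi)

print("lambda =", lam)
for L in (10.0, 100.0, 1000.0):
    print(L, P_triangle(L), lam / L)
```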
Calculation of the time to leave state 2 {#appen_12}
========================================
We calculate in this part the average time $<\tau>$ for two particles on the facet either to bond or for one of them to reach the boundary. This time corresponds to the time the system stays in state $2$ in the Markov chain.\
Using the description of appendix \[appen\_11\] with the virtual particle in the triangle, we are looking for the average time this particle needs to leave the triangle.
Let us call $S(x_0,y_0,t)$ the survival probability: i.e. the probability a particle starting at $(x_0,y_0)$ at time $t=0$ is still in the triangle at time t. The average time $<\tau (x_0,y_0) >$ the particle stays in the triangle is given by : $$<\tau (x_0,y_0) > = \int_{0}^{\infty} S(x_0,y_0,t) dt
\label{detau}$$ Moreover calling $P_{\triangle}(x,y,x_0,y_0,t)$ the probability that one particle starting at $(x_0,y_0)$ at time $t=0$ is at position $(x,y)$ at time $t$, we have : $$S(x_0,y_0,t) = \int \!\!\!\! \int_{\triangle}
P_{\triangle}(x,y,x_0,y_0,t) dxdy
\label{defS}$$ Finally, we are only interested in $<\tau (x_0,1)>$ since we know that our virtual particle starts at $(x_0,a)$, so we have to average the time $<\tau (x_0,1)>$ over all values $x_0$. To do this we use the approximate distribution given in Eq. \[P(x0)\] : $$<\tau> = \int_{0}^{L} P(x_0) <\tau (x_0,1) > dx_0
\label{deftau}$$ So finally, using Eq. \[detau\], \[defS\], \[deftau\], we need to evaluate: $$<\tau> = \int_{0}^{L} \int_{0}^{\infty} \int \!\!\!\! \int_{\triangle}
P(x_0) P_{\triangle}(x,y,x_0,1,t) dxdy\; dt \; dx_0
\label{tautotal}$$
We now have to calculate $ P_{\triangle}(x,y,x_0,1,t)$. As before, we find the solution in a square, and then by superposition, we deduce the solution in the triangle : $$\begin{aligned}
&P_{\triangle}(x,y,x_0,1,t) = P_{\Box}(x,y,x_0,1,t) -
P_{\Box}(x,y,1,x_0,t) \label{Ptri}\\ &P_{\Box}(x,y,x_0,1,t) =
\nonumber \\ &\frac{4}{L^2} \sum_{m,n=1}^{\infty}
\sin\frac{m \pi x_0}{L}\sin\frac{n \pi}{L} \sin\frac{m \pi x}{L} \sin\frac{n \pi y}{L} e^{-\frac{D(m^2+n^2) \pi^2 t}{L^2}} \label{Pbox}\end{aligned}$$ Integrating Eq. \[tautotal\] with Eqs. \[Ptri\] and \[Pbox\], over $x_0$ first, then over $x$ and $y$, and finally over $t$, we find: $$<\tau> = \frac{L^2}{D \pi^3} \sum_{k=1}^{\infty} \frac{1}{1+4k^2} \; \sin\frac{2k \pi}{L} \; \left[ \frac{2}{k} - \frac{1}{k(2k+1)} \right] \label{ztau}$$ We are interested in the leading term as $L \rightarrow \infty$; let us define a function $f(u)$ by: $$f(u) = \sum_{k=1}^{\infty} \frac{1}{1+4k^2} \; \sin(ku) \; \left[ \frac{2}{k} - \frac{1}{k(2k+1)} \right]$$ $f(u)$ is a normally convergent series, so that: $$\begin{aligned}
\frac{df(u)}{du} &=& \sum_{k=1}^{\infty} \frac{1}{1+4k^2} \; \cos(ku)
\; \left[2 - \frac{1}{(2k+1)} \right] \\
&\stackrel{u\rightarrow 0}{\longmapsto}&
\sum_{k=1}^{\infty} \frac{1}{1+4k^2} \; \left[2 - \frac{1}{(2k+1)}
\right] \label{premterm}\end{aligned}$$ which is also convergent. The second term of the development is of order $u^2$ or smaller, so that integrating Eq. \[premterm\], and using it in Eq. \[ztau\], we find: $$<\tau> = \frac{2L}{D \pi^2} \sum_{k=1}^{\infty} \frac{1}{1+4k^2} \; \;
\left[ 2 - \frac{1}{(2k+1)} \right] + O(1/L)$$
Thus $<\tau>$ is proportional to L, and we write: $$\begin{aligned}
<\tau> = \kappa L \tau_2 + O(1/L) \label{appen12_final}\\
\mbox{ with} \; \kappa = \frac{4}{\pi^2} \sum_{k=1}^{\infty} \frac{1}{1+4k^2} \; \; \left[ 2 - \frac{1}{(2k+1)} \right]\end{aligned}$$
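The constant $\kappa$ can be evaluated numerically by truncating the series (Python sketch, not part of the original work):

```python
import numpy as np

# Sketch: numerical value of the constant kappa of Eq. [appen12_final],
# obtained by truncating the series at a large cutoff.

kap = (4.0 / np.pi**2) * sum((2.0 - 1.0 / (2 * k + 1)) / (1.0 + 4 * k**2)
                             for k in range(1, 100000))
print("kappa ~", kap)
```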
Calculation of the time a particle needs to go to center of the facet {#appen_2}
=====================================================================
We now calculate the average real time $t_{p,q,\alpha}$ to go from state $i$ to state $i+1$ assuming the average distance between the kinks or corners from which particles are emitted and the germ is $L/2$. This again can be posed as a Markov chain where :\
- state $0$ : no particle is on the facet;\
- state $1$ : one particle coming from a kink is in position 1 on the facet;\
- state $2$ : the particle is at position $2$;\
…\
- state $k$ : the particle is at position $k$; and state $L/2$ is absorbing.
Again we use $\tau_2$ as the unit time. Using the same definition for the quantities $p_i$, $q_i$ and $\alpha_i$ as in the main text, we have: $$\begin{aligned}
p_i &=& 1/2 \\
\alpha_i&=&0 \\
q_i &=& 1/2 \\
with & & i \ge 1 \nonumber\end{aligned}$$ To calculate $p_0$ and $\alpha_0$ we know that the real time an atom needs to leave a kink is $\tau_3$, and as there are two kinks at the ends of the facet, $\tau_3/2$ is, to a good approximation, the average time to leave state $0$. Thus we find: $$\begin{aligned}
p_0 &=& 2\rho \\
\alpha_0 &=& 1 - 2\rho \end{aligned}$$ Calling $n_i$ the average time to go from state $i$ to the absorbent state $L/2$, we have the equations : $$\begin{aligned}
n_0 &=& 1 + \alpha_0 n_0 + p_0 n_1 \label{appen_eq0} \\
n_k &=& 1 + \frac{n_{k-1}}{2} + \frac{ n_{k+1}}{2} \label{appen_eqk} \\
with & & k \ge 1 \nonumber\\
n_{L/2}&=& 0 \label{appen_eqL}\end{aligned}$$ Summing equations \[appen\_eqk\] from $k=1$ to $j$, then from $j=1$ to $L/2-1$, and using Eq. \[appen\_eqL\], one finds: $$n_1 = \frac{L-2}{L} n_0 + \frac{L-2}{2}$$ Eq. \[appen\_eq0\] then gives $n_0$: $$n_0 = \frac{L}{2p_0} + \frac{L(L-2)}{4}$$ Going back to real time, one finds that the average real time a particle needs to leave a kink and reach the center of the facet is: $$t_{p,q,\alpha} = \frac{L}{4}\tau_3 + \frac{L(L-2)}{4} \tau_2
\label{appen_2_final}$$
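The closed form for $n_0$ can also be checked directly by solving the small linear system of this auxiliary chain (Python sketch with illustrative values; not part of the original work):

```python
import numpy as np

# Sketch: direct check of n_0 = L/(2*p_0) + L*(L-2)/4 for the auxiliary chain
# (transient states 0 .. L/2-1, absorbing state L/2), with illustrative L, rho.

def n0_chain(L, rho):
    p0 = 2.0 * rho
    m = L // 2                      # transient states 0 .. m-1, state m absorbing
    A = np.zeros((m, m))
    b = np.ones(m)
    A[0, 0], A[0, 1] = p0, -p0      # p_0 (n_0 - n_1) = 1
    for k in range(1, m):
        A[k, k] = 1.0
        A[k, k - 1] = -0.5
        if k + 1 < m:
            A[k, k + 1] = -0.5      # n_m = 0 drops out of the last equation
    return np.linalg.solve(A, b)[0]

L, rho = 20, 1e-3
print(n0_chain(L, rho), L / (4.0 * rho) + L * (L - 2) / 4.0)
```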
Calculation of $n_0$ {#appen_3}
====================
To carry out the calculation, we can first calculate the $n_k$ for $k
\ge 1$. Noting that $p=q$, we have from Eq. \[eqk\]: $$(n_{k+1}-n_k) = (n_k - n_{k-1}) - 1/p,$$ so that summing from $k=4$ to $j$, and then from $j=4$ to $l$, and using $n_{l+1}=0$, we find: $$(l-1) n_3 = (l-2) n_2 + 1/p( \frac{(l-2)(l-1)}{2})$$ Inserting this result in Eq. \[eq2\] we find $n_2$ as a function of $n_1$; in the same way, using Eq. \[eq1\], one can deduce the equation for $n_1$: $$n_1 \left( 1- \alpha_1 - p_1 B \right) = 1+p_1 A + q_1 n_0 \label{n1}$$ where : $$\begin{aligned}
A &= &\frac{ 1 +\frac{p_2}{p} \frac {l-2}{2} } { 1 - \alpha_2 - p_2
\frac{l-2}{l-1} } \label{A}\\ B &=& \frac { q_2} { 1 - \alpha_2 - p_2
\frac{l-2}{l-1} } \label{B}\\\end{aligned}$$
We now calculate the case $k < 0$.\
Calling $m_j=n_{-j}$ for every $j$, and summing Eq. \[eq-k\] from j to $z-1$, one finds : $$m_z-m_{z-1}=m_{j}-m_{j-1}-\frac{z-j}{p^*} \label{mz},$$ and Eq. \[eq-l\] yields: $m_l-m_{l-1}=\frac{1}{2p^*}$. Using this in Eq. \[mz\] and taking $j=3$, one finds : $$m_3-m_2=\frac{l-3}{p^*} + \frac{1}{2p^*}$$ Using Eqs. \[eq-2\], \[eq-1\], one finds : $$n_{-1}= \frac{1}{q_1^*} + \frac{p_1^*}{q_1^* q_2^*} + \frac
{p_1^*p_2^*}{q_1^* q_2^* p^*} (l-5/2) + n_0 \label{n-1}.$$
One can now obtain the value of $n_0$ from Eqs. \[n1\], \[n-1\] and \[eq0\]: $$\begin{aligned}
\lefteqn{ p_0(1-\frac{q_1}{1- \alpha_1 - p_1 B}) n_0 =} \nonumber \\
& & 1+ p_0 \frac{1+p_1 A}{1- \alpha_1 - p_1 B} \nonumber \\ & & + q_0
\left( \frac{1}{q_1^*} + \frac{p_1^*}{q_1^* q_2^*} + \frac
{p_1^*p_2^*}{q_1^* q_2^* p^*} (l-5/2) \right)
\label{n0_exact}\end{aligned}$$
Then, using Eqs. \[p1\]-\[q0\] we can calculate $A$ and $B$: $$\begin{aligned}
A & = & \frac{L}{4 \rho} \frac{(l-1)(l-2)}{(\frac{L}{\lambda}-1)(l-1)+1} \nonumber \\
& & + \frac{ \frac{\kappa}{\lambda} L^2(l-1)+
\frac{L(L-2)(l-2)(l-1)}{4} }{(\frac{L}{\lambda}-1)(l-1)+1} + o (1)\\
B & = & 1 -
\frac{1}{(\frac{L}{\lambda}-1)(l-1)+1}\end{aligned}$$ And finally, the expression of $n_0$ is: $$\begin{aligned}
n_0 &=& \frac{1}{\rho^2}\frac{(L+2l)}{4L(L-1)} \left[
(\frac{L}{\lambda}-1)(l-1)+1 \right] \nonumber \\ & & + O(1/\rho)
\label{no_total}\end{aligned}$$
We report here only the leading term: the expressions of the quantities $p_i,q_i$ and $\alpha_i$ allow us to calculate $n_0$ up to order $1$, but these terms are quite cumbersome and it does not seem relevant to give them here.
Going back to real time, we find that the time $\tau(L,l)$ to nucleate a new row on a facet is given by : $$\begin{aligned}
\tau(L,l) &=& \frac{\tau_3^2}{\tau_2} \frac{(L+2l)}{4L(L-1)} \left[ (\frac{L}{\lambda}-1)(l-1)+1 \right] \nonumber \\
& & + O(\tau_3)
\label{tau_total}\end{aligned}$$
[99]{} M. Lagally, Physics Today [**46**]{}(11), 24 (1993) and references therein; H. Gleiter, Nanostructured Materials [**1**]{} 1 (1992); Z. Zhang and M. G. Lagally, [*Science*]{} [**276**]{}, 377 (1997)
P. Jensen, Rev. Mod. Phys. [**71**]{}, 1695 (1999) and references therein
P. Nozières, in: Solids far from equilibrium, Ed. C. Godrèche (Cambridge Univ. Press, Cambridge, 1992)
A. Pimpinelli and J. Villain [*Physics of Crystal Growth*]{} (Cambridge University Press, 1998)
P. Jensen, N. Combe, H. Larralde, J. L. Barrat, C. Misbah and A. Pimpinelli, Eur. Phys. J. B [**11**]{}, 497-504 (1999)
C. Herring, [*Physics of Powder Metallurgy*]{} (McGraw-Hill Book Company, Inc., New York, 1951), Ed. W. E. Kingston; C. Herring, Phys. Rev. [**82**]{}, 87 (1951); W.W. Mullins, J. Appl. Phys. [**28**]{}, 333 (1957) and [**30**]{}, 77 (1959); F.A. Nichols and W.W. Mullins, J. Appl. Phys. [**36**]{}, 1826 (1965); F.A. Nichols, J. Appl. Phys. [**37**]{}, 2805 (1966)
This formulation of the relaxation time as a function of $N/N_c$ makes it possible to resolve the apparent paradox mentioned in [@eur_physB], namely, that extrapolating the low temperature regime to higher temperatures, one could infer that large islands would relax faster at high temperature than at low temperature. The curve $\log(t_{relaxation})$ as a function of $\log(N)$ at a given temperature $T_1$ is obtained by translation of the curve at temperature $T_0$ by a vector $\frac{N_c(T_1)}{N_c(T_0)} \left(\begin{array}{c}1\\5 \end{array}
\right)$. Since the slope of this translation is greater than the slopes of $\log(t_{relaxation})$ as a function of $\log(N)$, two curves at different temperatures cannot cross each other.
see for example : J. Langer in : Solids far from equilibrium, Ed. C. Godrèche (Cambridge Univ. Press, Cambridge, 1992)
R. Ferrando and G. Tréglia, Phys. Rev. B [**50**]{}, 12104 (1994).
T. Michely, M. Hohage, M. Bott and G. Comsa, Phys. Rev. Lett. [**70**]{}, 3943 (1993).
C. R. Stoldt, A. M. Cadilhe, C. J. Jenks, J. -M. Wen and, J. W. Evans and P. A. Thiel, Phys. Rev. Lett. [**81**]{}, 2950 (1998).
N. Combe, P. Jensen and A. Pimpinelli, Phys. Rev. Lett. [**85**]{}, 110 (2000).
I.S. Gradshteyn and I.M. Ryzhik [*Table of Integrals, series, and Products*]{}(Academic Press San Diego) Ed. Alan Jeffrey p. 47
---
author:
- 'Makoto Naka, and Sumio Ishihara [^1]'
title: Electronic Ferroelectricity in a Dimer Mott Insulator
---
Novel dielectric and magneto-dielectric phenomena are among the recent central issues in solid state physics. Beyond the conventional picture based on classical dipole moments, ferroelectricity in which the electronic contribution plays a crucial role is termed electronic ferroelectricity. [@brink; @ishihara] Recently discovered multiferroics are known as systems where ferroelectricity is driven by a spin ordering. In a Mott insulator with frustrated exchange interactions, an electric polarization is induced by exchange-striction effects under a non-collinear spin structure.
There is another class of electronic ferroelectricity: charge-order driven ferroelectricity, where the electric polarization is caused by an electronic charge order (CO) without inversion symmetry. This class of ferroelectricity is observed in transition-metal oxides and charge-transfer type organic salts, for example LuFe$_2$O$_4$, [@ikeda; @nagano; @naka] (TMTTF)$_2$X (X: a monovalent cation), [@monceau; @yoshioka; @otsuka] and $\alpha$-(BEDT-TTF)$_2$I$_3$. [@yamamoto] A large magneto-dielectric coupling and fast polarization switching are expected in charge-order driven ferroelectricity, since the electric polarization is governed by electrons.
A dielectric anomaly recently discovered in the quasi-two-dimensional organic salt $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$ suggests a possibility of electronic ferroelectricity in this compound. The crystal structure consists of an alternate stacking of the BEDT-TTF donor layers and the Cu$_2$(CN)$_3$ acceptor layers. In a quasi-two-dimensional BEDT-TTF layer, pairs of dimerized molecules are located on an almost equilateral triangular lattice. When two dimerized molecules are considered as a unit, the average hole number per dimer is one, and this material is identified as a Mott insulator. One noticeable property observed experimentally is the low temperature spin state: there is no evidence of long-range magnetic order down to 32 mK. [@yamashita; @shimizu] A possibility of quantum spin-liquid states has been proposed. In recent experiments [@abel], the temperature dependence of the dielectric constant has a broad maximum around 25 K and shows relaxor-like dielectric relaxation. Some anomalies are also seen in the lattice expansion coefficient and specific heat around 6 K. [@yamashita2; @manna] These data prompt us to reexamine the electronic structure of dimer Mott (DM) insulators.
In this Letter, motivated by the recent experimental results in $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$, we study dielectric and magnetic properties in a DM insulating system. From a Hubbard-type Hamiltonian, we derive the effective Hamiltonian in the subspace where the number of electrons per dimer is one. By using the mean-field (MF) approximation and classical Monte-Carlo (MC) simulations, we examine spin and charge structures at finite temperature. It is shown that the ferroelectric and magnetic phases are mutually exclusive. A reentrant feature of the DM phase enhances the dielectric fluctuation near the CO phase. Implications of the present results for $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$ are discussed.
We start from a model Hamiltonian describing the electronic structure in a DM insulator. A molecular dimer is regarded as a unit and is placed at each site of a two-dimensional triangular lattice. The average electron number per dimer is assumed to be one. The Hamiltonian consists of two terms: $$\begin{aligned}
{\cal H}_0={\cal H}_{\rm intra}+{\cal H}_{\rm inter} .
\label{eq:h0}\end{aligned}$$ The first term is for the intra-dimer part given by $$\begin{aligned}
{\cal H}_{\rm intra}
&= \varepsilon\sum_{i \mu s} c_{i \mu s}^\dagger c_{i \mu s}^{}
-t_0 \sum_{i s} \left ( c_{i a s}^\dagger c_{i b s}^{}+ H.c. \right )
\nonumber \\
&+ U_0 \sum_{i \mu } n_{i \mu \uparrow} n_{i \mu \downarrow}
+V_0 \sum_{i } n_{i a} n_{i b} ,
\label{eq:hintra}\end{aligned}$$ where two molecules are identified by a subscript $\mu(=a,b)$. We introduce the electron annihilation operator $c_{i \mu s}$ for molecule $\mu$, spin $s(=\uparrow, \downarrow)$ at site $i$, and the number operator $n_{i \mu}=\sum_{s} n_{i \mu s}=\sum_{s} c_{i \mu s}^\dagger c_{i \mu s}$. We consider a level energy $\varepsilon$, the inter-molecule electron transfer $t_0(>0)$ in a dimer, the intra-molecule Coulomb interaction $U_0$ and the inter-molecule Coulomb interaction $V_0$ in a dimer. In addition, the inter-dimer part in the second term of Eq. (\[eq:h0\]) is given by $$\begin{aligned}
{\cal H}_{\rm inter}
&=-\sum_{\langle ij \rangle \mu \mu' s}
t_{ij}^{\mu \mu'}
\left ( c_{i \mu s}^\dagger c_{j \mu' s}+H.c. \right )
\nonumber \\
&+ \sum_{\langle ij \rangle \mu \mu'} V_{i j}^{\mu \mu'} n_{i \mu } n_{j \mu'},
\label{eq:hinter} \end{aligned}$$ where $t_{ij}^{\mu \mu'}$ and $V_{ij}^{\mu \mu'}$ are the electron transfer and the Coulomb interaction between an electron in a molecule $\mu$ at site $i$ and that in a molecule $\mu'$ at site $j$, respectively. The first and second terms in ${\cal H}_{\rm inter}$ are denoted by ${\cal H}_t$ and ${\cal H}_V$, respectively.
We briefly introduce the electronic structure in an isolated dimer. In the case where one electron occupies a dimer, the bonding and anti-bonding states are given by $|\beta_s \rangle =(|a_s \rangle + | b_s \rangle)/\sqrt{2}$ and $|\alpha_s \rangle =(|a_s \rangle - | b_s \rangle)/\sqrt{2}$ with energies $E_\beta = \varepsilon - t_0$ and $E_\alpha = \varepsilon + t_0$, respectively. In these bases, we introduce the electron operator $\hat c_{i \gamma s}$ for $\gamma=(\alpha, \beta)$ and the electron transfer integral ${\hat t}^{\gamma \gamma'}_{ij}$ between the NN molecular orbitals $\gamma$ and $\gamma'$. These are obtained by the unitary transformation from $c_{i \mu s}$ and $t_{ij}^{\mu \mu'}$. Two-electron states in a dimer are the following six states: the spin-triplet states $\{ |T_{\uparrow} \rangle, | T_\downarrow \rangle , | T_0 \rangle \} = \{
|\alpha_\uparrow \beta_\uparrow \rangle,
|\alpha_\downarrow \beta_\downarrow \rangle,
(|\alpha_\uparrow \beta_\downarrow \rangle+|\alpha_\downarrow \beta_\uparrow \rangle)/\sqrt{2} \} $ with the energy $E_T=2\varepsilon+V_0$, the spin-singlet state $|S \rangle =(|\alpha_\uparrow \beta_\downarrow \rangle -|\alpha_\downarrow \beta_\uparrow \rangle) /\sqrt{2} $ with $E_{S}=2\varepsilon+U_0$, and the doubly-occupied states $|D_+ \rangle=C_1| \alpha_\uparrow \alpha_\downarrow \rangle + C_2|\beta_\uparrow \beta_\downarrow \rangle$ and $|D_- \rangle=C_2| \alpha_\uparrow \alpha_\downarrow \rangle - C_1|\beta_\uparrow \beta_\downarrow \rangle$ with $E_{D\pm}=(4\varepsilon+U_0+V_0 \pm \sqrt{(U_0-V_0)^2+16t_0^2} )/2$ and coefficients $C_2/C_1=(U_0-V_0)/[2E_{D+}-4(\varepsilon-t_0)-(U_0-V_0)]$. The lowest eigen state is $|D_- \rangle $. The effective Coulomb interaction in the lowest eigen state is $U_{eff} \equiv E_{D_-}-2E_\beta \sim V_0+2t_0$ in the limit of $U_0, V_0 >> t_0$.
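The two-electron spectrum quoted above is easy to verify by exact diagonalization of a single dimer. The following Python sketch (not from the paper; the parameter values $U_0/t_0=6$ and $V_0/t_0=4.5$ are simply the illustrative case II scale used later) diagonalizes the singlet sector in the molecular basis and reproduces $E_S=2\varepsilon+U_0$ and $E_{D\pm}$; the sign convention of the hopping matrix elements does not affect the spectrum.

```python
import numpy as np

# Sketch: exact diagonalization of the two-electron singlet sector of one dimer
# in the molecular basis {|a_up a_dn>, |b_up b_dn>, (|a_up b_dn>-|a_dn b_up>)/sqrt(2)}.
# Parameters are illustrative; eps is the level energy.

eps, t0, U0, V0 = 0.0, 1.0, 6.0, 4.5

H = np.array([[2*eps + U0, 0.0,             -np.sqrt(2)*t0],
              [0.0,        2*eps + U0,      -np.sqrt(2)*t0],
              [-np.sqrt(2)*t0, -np.sqrt(2)*t0, 2*eps + V0]])
E = np.sort(np.linalg.eigvalsh(H))

ED_m = (4*eps + U0 + V0 - np.sqrt((U0 - V0)**2 + 16*t0**2)) / 2
ED_p = (4*eps + U0 + V0 + np.sqrt((U0 - V0)**2 + 16*t0**2)) / 2
print(E)                                 # sorted: [E_{D-}, 2*eps+U0, E_{D+}]
print(ED_m, 2*eps + U0, ED_p)
print("U_eff =", ED_m - 2*(eps - t0))    # approaches V0 + 2*t0 when U0, V0 >> t0
```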
![(Color online) Pseudo-spin directions in the $Q^x-Q^z$ plane and electronic structures in a dimer. []{data-label="fig:ps"}](ps){width="0.7\columnwidth"}
From the Hamiltonian in Eq. (\[eq:h0\]), we derive an effective model in the subspace where each dimer is occupied by one electron. Charge structure in each dimer is represented by the pseudo-spin (PS) operator with amplitude of 1/2 defined by $
{\bf Q}_{i}=\frac{1}{2} \sum_{s \mu \mu'} {\hat c}_{i \mu s}^\dagger {\bf \sigma}_{\mu \mu'} {\hat c}_{i \mu' s}^{}
$ with the Pauli matrices $\bf \sigma$. The eigenstates of $Q^x_i$ with eigenvalues $1/2$ and $-1/2$ are the charge polarized states where an electron occupies the $a$ and $b$ molecules, respectively, and those of $Q^z_i$ with $1/2$ and $-1/2$ correspond to the covalent states where an electron occupies the $\beta$ and $\alpha$ orbitals, respectively (see Fig. \[fig:ps\]).
Here we derive the effective Hamiltonian. We assume $U_0, V_0 >> t_{ij}^{\mu \mu'}, V_{ij}^{\mu \mu'}$, and treat the inter-dimer term ${\cal H}_{\rm inter}$ in Eq. (\[eq:h0\]) as the perturbation. The effective Hamiltonian up to the orders of $O({\cal H}_t^2)$ and $O({\cal H}_V^1)$ is given by $
{\cal H}={\widetilde {\cal H}}_{\rm intra}+{\widetilde {\cal H}}_{V}+{\cal H}_J .
$ The first and second terms correspond to ${\cal H}_{\rm intra}$ in Eq. (\[eq:hintra\]) and the second term of ${\cal H}_{\rm inter}$ in Eq. (\[eq:hinter\]), respectively, where the doubly occupied states in a dimer are prohibited. By using the PS operator, these terms are given by $$\begin{aligned}
{\widetilde {\cal H}}_{\rm intra}+{\widetilde {\cal H}}_{V}=-
2t_0\sum_{i} Q^z_{i}+\sum_{\langle ij \rangle} W_{ij} Q_i^x Q_j^x,
\label{eq:tising}\end{aligned}$$ where $W_{ij}(=V_{ij}^{aa}+V_{ij}^{bb}-V_{ij}^{ab}-V_{ij}^{ba})$ is the effective inter-dimer Coulomb interaction. This part is nothing but the transverse-field Ising model.
The third term in ${\cal H}$ is the exchange-interaction term derived from second-order perturbation theory with respect to ${\cal H}_{t}$ in Eq. (\[eq:hinter\]). This is given by a sum of terms classified by the two-electron intermediate states in a dimer, denoted by $m$, as $
{\cal H}_J=\sum_{m} {\cal H}_J^{(m)} ,
$ where the suffix $m=\{ T_\uparrow, T_\downarrow, T_0, S, D_+, D_- \}$. The dominant term in ${\cal H}_J$ is ${\cal H}_J^{(D_-)}$, which has the lowest intermediate-state energy. It is explicitly given by $$\begin{aligned}
{\cal H}_{J}^{(D-)}=-\sum_{\langle ij \rangle}
\left ( \frac{1}{4} - {\bf S}_i \cdot {\bf S}_j \right )
h_{ij}^{(D-)} ,
\label{eq:Hd-}\end{aligned}$$ with $$\begin{aligned}
h_{ij}^{(D-)}
&=\sum_{\gamma_1, \gamma_2=(\alpha, \beta)}
J^{\gamma_1 \gamma_2}_{ij} n_{i \gamma_1} n_{j \gamma_2}
+\sum_{\nu_1, \nu_2=(+, -)}
J^{\nu_1 \nu_2}_{ij} Q_i^{\nu_1} Q_j^{\nu_2}
\nonumber \\
&+\sum_{\gamma=(\alpha, \beta)}
\left ( J^{x \gamma}_{ij} Q_i^x n_{j \gamma} +J^{\gamma x}_{ij} n_{i \gamma}Q_{j}^x \right ) .
\label{eq:hd-}\end{aligned}$$ We define $Q^{\pm}_i=Q_i^x \pm iQ_i^y$ and $n_{i \alpha(\beta)}=1/2-(+) Q_i^z$. Expressions of the exchange constants are given in Ref. [@exch]. The other terms in ${\cal H}_J$ take forms similar to Eqs. (\[eq:Hd-\]) and (\[eq:hd-\]). Spin and charge degrees of freedom are coupled with each other in the Hamiltonian, although the SU(2) symmetry is preserved only in the spin sector. This type of Hamiltonian is similar to the so-called Kugel-Khomskii model [@kugel] for an orbitally degenerate Mott insulator, and was also proposed in studies of the electronic state in $\alpha$-NaV$_2$O$_5$. [@thalmeier; @mostovoy; @sa]
This Hamiltonian is analyzed by the MF approximation and the classical MC method. In the MF calculations, triangular lattices of 12 unit cells along the $\langle 110 \rangle$ direction with periodic boundary conditions are used. We adopt the following 15 MFs, $\langle S^\mu \rangle$, $\langle Q^\mu \rangle$ and $\langle S^\mu Q^\nu \rangle$ where $(\mu, \nu)=(x,y,z)$, in each $ \langle 1{\bar 1}0 \rangle$ line. The multi-canonical MC simulations are performed on finite-size clusters of $L$ sites ($L\le 96$) with periodic boundary conditions. We use 10$^7$ MC steps to obtain histograms and $2\times 10^7$ steps for measurements.
![(Color online) Finite-temperature phase diagrams obtained by MF method. (a) and (b) are for case I and case II, respectively (see text). Symbols Q:DM and Q:CO+DM represent the DM phase, and the coexistence phase of CO and DM states, respectively, and S:Para, S:120, S:3-fold I, S:3-fold II, and S:6-fold represent the paramagnetic phase, the 120$^\circ$ structure phase, the collinear 3-fold spin ordered phase, the coplanar 3-fold spin ordered phase, and the coplanar 6-fold spin order phase, respectively. We chose $W/t_0=1$ in (a), and $U_0/t_0=6$, $V_0/t_0=4.5$, $W_1/t_0=-1$, and $W_2/t_0=0.07$ in (b) where $W_1$ and $W_2$ are the NN inter-dimer Coulomb interactions along $\langle 100 \rangle$ and $\langle 110 \rangle$, respectively. The insets are schematic pictures of the inter-dimer transfer integrals. Vertical broken lines in (b) represent cases a), b), and c) (see text). []{data-label="fig:phase"}](phase){width="1.1\columnwidth"}
Finite-temperature phase diagrams obtained by the MF method are shown in Fig. \[fig:phase\]. As for $W_{ij}$ in Eq. (\[eq:tising\]) and $t_{ij}^{\mu \nu}$ in Eq. (\[eq:hinter\]), we consider the following two cases in order to identify general and specific features in the results: case I for a simple model, and case II for $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$. In case I, we chose $t_{ij}^{\mu \nu}=\delta_{\mu \nu} t$ $(t>0)$ and $W_{ij}=W(>0)$. In case II, the three dominant $t_{ij}^{\mu \nu}$’s estimated in $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$ are represented by a parameter $t$ as shown in the inset of Fig. \[fig:phase\](b), and $W_{ij}$ are considered based on the crystal structural data. [@mori; @seo; @hotta] Similar phase diagrams are obtained in the two cases shown in Figs. \[fig:phase\](a) and (b) except for detailed spin structures. The CO phase is realized in the region of large $t$. This phase is caused by ${\widetilde {\cal H}}_V$ and the term proportional to $Q_i^xQ_j^x$ in ${\cal H}_J$. At low temperatures, magnetic phases are stabilized by the exchange interactions. One noticeable point is that when the magnetic phase appears, the CO phase is suppressed \[see around $t/t_0=0.6$ in Fig. \[fig:phase\](a) and around $t/t_0=0.8$ in Fig. \[fig:phase\](b)\]. As a result, a reentrant feature is observed in a DM phase. The reverse is also seen: as shown around $T/t_0=0.1$ and $t/t_0>0.9$ in Fig. \[fig:phase\](b), the slope of the magnetic transition temperature versus $t$ curve decreases with increasing $t$ when the CO sets in. That is, the charge and magnetic phases are mutually exclusive. This is caused by a competition between the terms proportional to $Q_i^xQ_j^x$ and $Q_i^xQ_j^x{\bf S}_i \cdot {\bf S}_j$ in ${\cal H}_J$; the former favors a ferro-type configuration of $Q^x$, but the latter favors an antiferro-type one under the antiferromagnetic spin correlation $\langle {\bf S}_i \cdot {\bf S}_j \rangle<0$.
![(Color online) (a) Temperature dependences of the charge susceptibility $\chi_{C}$ along $y$ in case a), and (b) those in cases b) and c) obtained by the MF method (see text). The inset of (b) is a schematic CO pattern in a triangular lattice. Ellipses and circles represent molecules and electrons, respectively. []{data-label="fig:suscept"}](suscept2){width="\columnwidth"}
Let us focus on case II in more detail. Temperature dependences of the charge susceptibility are presented in Fig. \[fig:suscept\] for several values of $t/t_0$. We define the charge susceptibility along the $y$ direction as $\chi_C=(TL)^{-1} (\sum_{i \in A} - \sum_{i \in B} ) (\sum_{j \in A} - \sum_{j \in B} ) $ $[\langle Q_i^x Q_j^x \rangle-\langle Q_i^x \rangle \langle Q_j^x \rangle ]$ where the two sublattices are identified by $A$ and $B$ \[see the inset of Fig. \[fig:suscept\](b)\]. We pay attention to the region $t/t_0=0.8-1$. In Fig. \[fig:phase\](b), there are three types of sequential phase changes with decreasing $T$: a) (Q:DM, S:Para)$\rightarrow$ (Q:DM, S:120), b) (Q:DM, S:Para)$\rightarrow$ (Q:CO+DM, S:Para) $\rightarrow$ (Q:DM, S:120), and c) (Q:DM, S:Para)$\rightarrow$ (Q:CO+DM, S:Para) $\rightarrow$ (Q:CO+DM, S:6-fold), where the abbreviations are defined in the caption of Fig. \[fig:phase\]. In case a), although the CO phase is not realized, $\chi_C$ increases in the paramagnetic phase and is suddenly reduced in the low temperature magnetic phase. This enhancement of $\chi_C$ is remarkable in the vicinity of the phase boundary between the CO and DM phases (see the curve for $t/t_0=0.828$ in Fig. \[fig:suscept\]). In cases b) and c), the CO phase is realized and $\chi_C$ diverges at the phase boundary. The obtained CO pattern is schematically shown in the inset of Fig. \[fig:suscept\](b). This is a ferri-electric structure and is similar to the one proposed in Ref. [@abel]. Within a chain along the $x$ axis, the dipole moments align uniformly. The directions of the dipole moments in sublattices $A$ and $B$ are almost perpendicular to each other. As a result, the electric polarization appears along the $y$ axis. This ordered pattern is energetically favored by the inter-site Coulomb interaction $W_{ij}$ and the term proportional to $Q_i^x Q_j^x$ in ${\cal H}_J$.
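As an illustration of how such a susceptibility can be estimated from sampled configurations, a minimal post-processing sketch is given below (Python; not the authors' code). It assumes, purely for illustration, that the sublattices $A$ and $B$ are alternating chains along the $x$ axis, as suggested by the inset of Fig. \[fig:suscept\](b), and that the pseudo-spin component $Q^x_i$ has been stored for each site and each MC sample.

```python
import numpy as np

# Sketch (not the authors' code): estimate T*chi_C from sampled pseudo-spin
# configurations Qx of shape (n_samples, L1, L2) on an L1 x L2 triangular
# lattice, assuming sublattices A and B are alternating chains along x.

def T_chi_C(Qx):
    n, L1, L2 = Qx.shape
    sign = np.where(np.arange(L2) % 2 == 0, 1.0, -1.0)   # +1 on A chains, -1 on B chains
    M = (Qx * sign).sum(axis=(1, 2))                      # staggered moment per sample
    return (np.mean(M**2) - np.mean(M)**2) / (L1 * L2)    # = T*chi_C

# Example with uncorrelated pseudo-spins (|Q^x| <= 1/2), giving a small value:
rng = np.random.default_rng(0)
print(T_chi_C(rng.uniform(-0.5, 0.5, size=(200, 12, 12))))
```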
![(Color online) Temperature dependences of the charge susceptibility $T\chi_C$ obtained by the MC simulation in case a). The inset shows the susceptibilities divided by $L$ in case c). The spin susceptibility $T \chi_S$ is also plotted. []{data-label="fig:mc"}](mc){width="\columnwidth"}
We also examine the dielectric and magnetic structures in case II by the MC method. Temperature dependences of the charge susceptibilities $T\chi_C$ are presented in Fig. \[fig:mc\] for several $t/t_0$. It is shown that, with increasing $L$, $T\chi_C$ at $t/t_0=0.88$ and 0.92 tends to a constant. On the other hand, the susceptibility divided by the system size, $T\chi_C/L$, at $t/t_0=1$ is almost independent of $L$ for $T/t_0<0.05$. Therefore, the CO phase is realized below $T/t_0=0.05$ at $t/t_0=1$, while the other cases remain in the DM phase. The results at $t/t_0=0.88$ and 0.92, and those at $t/t_0=1$, correspond to case a) and case c), respectively, although the corresponding values of $t/t_0$ and the ordering temperatures are different from those in the MF calculations. With decreasing $T$ at $t/t_0=0.92$, $T\chi_C$ increases and suddenly goes down around $T/t_0=0.03$, where the spin susceptibility $\chi_S[\equiv (TL)^{-1} \sum_{i j} e^{i {\bf q} \cdot {\bf r}_{ij}} \langle {\bf S}_i \cdot {\bf S}_j \rangle]$ at ${\bf q}=(1/3,1/3)$, corresponding to the 120$^\circ$ structure, develops. The results are qualitatively consistent with the MF calculation results. Finally, we discuss implications of the present results for the dielectric and magnetic properties in $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$. We first speculate that the transition temperature for the 120$^\circ$ spin structure seen in Fig. \[fig:phase\](b) is to be interpreted as the temperature at which the AFM spin correlation develops in this compound. This is reasonable since this spin structure appears in the DM phase, where spin frustration survives and the classical ordering temperature is expected to be largely reduced. On the other hand, the calculated spin orders in the CO phase and their transition temperatures are substantial, because the anisotropic exchange interactions in the CO pattern shown in Fig. \[fig:suscept\](b) release spin frustration.
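Incidentally, the finite-size criterion used in the MC analysis above ($T\chi_C$ saturating with $L$ in the DM phase versus $T\chi_C/L$ becoming $L$-independent in the CO phase) can be phrased as a crude automated check. The sketch below is only an illustration of that reasoning, with toy numbers and a function name of our own choosing.

```python
import numpy as np

def phase_from_scaling(L_list, T_chi_list):
    """Compare how flat T*chi_C is across sizes (disordered phase)
    with how flat T*chi_C / L is (ordered phase) and pick the flatter one."""
    L = np.asarray(L_list, dtype=float)
    t_chi = np.asarray(T_chi_list, dtype=float)
    spread = lambda y: np.ptp(y) / np.mean(np.abs(y))  # relative spread
    return "CO (ordered)" if spread(t_chi / L) < spread(t_chi) else "DM (disordered)"

print(phase_from_scaling([12, 24, 48], [0.9, 1.0, 1.05]))   # ~constant    -> DM
print(phase_from_scaling([12, 24, 48], [6.0, 12.5, 24.8]))  # grows like L -> CO
```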
Based on the present study, we propose two possible scenarios which are relevant to $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$. In the first scenario, the dipole moments freeze at low temperatures. This corresponds to the sequential phase change of case c) introduced previously; (Q:DM, S:Para)$\rightarrow$ (Q:CO+DM, S:Para) $\rightarrow$ (Q:CO+DM, S:6-fold). The relaxor-like dielectric dispersion observed experimentally requires random freezing or a dipolar-glass state below the temperature where the dielectric constant takes a peak. This may be consistent with the experimental results suggesting inhomogeneous magnetic moments at low temperatures [@shimizu2; @kawamoto]. We propose from the present calculations that the characteristic magnetic temperature and the spin correlation functions in the CO phase are smaller than those in the DM phase, because the CO and magnetic phases are mutually exclusive, as shown previously. One problem in this scenario is that the random/uniform polarization freezing makes the exchange interactions strongly anisotropic and releases spin frustration. This may be unfavorable for the experimental fact that neither a long-range magnetic order nor a spin glass transition appears down to 32 mK. [@yamashita; @shimizu]
Another possible scenario is that the dipole moments do not freeze at low temperatures, and the dielectric anomaly observed experimentally is caused by charge fluctuations in the vicinity of the phase boundary. This corresponds to the sequential phase change of case a); (Q:DM, S:Para)$\rightarrow$ (Q:DM, S:120). Because of the reentrant feature of the DM phase shown in Fig. \[fig:phase\](b), the system approaches the phase boundary at certain temperatures, and the charge susceptibility is enhanced as shown in Fig. \[fig:suscept\](a) and Fig. \[fig:mc\]. With further decreasing temperature, the development of the spin correlation stabilizes the DM phase and suppresses the dielectric fluctuation. In this scenario, spin frustration remains alive in the low temperature DM phase. Further examinations will provide a clue to unveiling the microscopic picture of the low temperature magnetic and dielectric structures.
The authors would like to thank T. Sasaki, I. Terasaki, S. Iwai, T. Watanabe, H. Seo and J. Nasu for valuable discussions. This work was supported by JSPS KAKENHI, TOKUTEI from MEXT, CREST, and Grand Challenges in Next-Generation Integrated Nanoscience. MN is supported by the global COE program of MEXT Japan.
[99]{}
J. van den Brink, and D. I. Khomskii: J. Phys.: Condens. Matter [**20**]{} (2008) 434217.
S. Ishihara: J. Phys. Soc. Jpn. [**79**]{} (2010) 011010.
N. Ikeda, H. Ohsumi, K. Ohwada, K. Ishii, T. Inami, K. Kakurai, Y. Murakami, K. Yoshii, S. Mori, Y. Horibe and H. Kito: Nature [**436**]{} (2005) 1136.
A. Nagano, M. Naka, J. Nasu, and S. Ishihara: Phys. Rev. Lett. [**99**]{} (2007) 217202.
M. Naka, A. Nagano, and S. Ishihara: Phys. Rev. B [**77**]{}, (2008) 224441.
P. Monceau, F. Ya Nad, S. Brazovskii: Phys. Rev. Lett. [**86**]{} (2001) 2080.
H. Yoshioka, M. Tsuchiizu, and H. Seo: J. Phys. Soc. Jpn. [**76**]{} (2007) 105701.
Y. Otsuka, H. Seo, Y. Motome, and T. Kato: J. Phys. Soc. Jpn. [**77**]{} (2008) 113705.
K. Yamamoto, S. Iwai, S. Boyko, A. Kashiwazaki, F. Hiramatsu, C. Okabe, N. Nishi, and K. Yakushi: J. Phys. Soc. Jpn. [**77**]{} (2008) 074709.
S. Yamashita, Y. Nakazawa, M. Oguni, Y. Oshima, H. Nojiri, Y. Shimizu, K. Miyagawa, and K. Kanoda: Nature Phys. [**4**]{} (2008) 459.
Y. Shimizu, K. Miyagawa, K. Kanoda, M. Maesato, and G. Saito: Phys. Rev. Lett. [**91**]{} (2003), 107001.
M. Abdel-Jawad, I. Terasaki, T. Sasaki, N. Yoneyama, N. Kobayashi, Y. Uesu, and C. Hotta: (unpublished).
M. Yamashita, N. Nakata, Y. Kasahara, T. Sasaki, N. Yoneyama, N. Kobayashi, S. Fujimoto, T. Shibauchi, and Y. Matsuda: Nature Phys. [**5**]{} (2009) 44.
R. S. Manna, M. de Souza, A. Br$\rm \ddot u$hl, J. A. Schlueter, and M. Lang: arXiv:0909.0718.
The exchange constants in Eq. (\[eq:hd-\]) are $J^{\gamma \gamma}_{ij}=4 {\hat t}^{\gamma \gamma 2}_{ij} D_{\bar \gamma}^2 \Delta_{\gamma \gamma}^{-1}$, $J^{\gamma {\bar \gamma}}_{ij}=2 {\hat t}^{\gamma {\bar \gamma} 2}_{ij} \Delta_{\gamma {\bar \gamma}}^{-1}$, $J^{\nu \nu}_{ij}=-2{\hat t}^{\alpha \alpha}_{ij} {\hat t}^{\beta \beta}_{ij}
C_1 C_2 (\Delta_{\alpha \alpha}^{-1}+\Delta_{\beta \beta}^{-1})$, $J^{\nu {\bar \nu}}_{ij}=-4{\hat t}^{\alpha \beta}_{ij} {\hat t}^{\beta \alpha}_{ij}
C_1 C_2 \Delta_{\alpha \beta}^{-1}$, $J^{x \gamma}_{ij}=2{\hat t}^{\gamma \gamma}_{ij} {\hat t}^{{\bar \gamma} \gamma}_{ij}
D_{{\bar \gamma}} (D_{\bar \gamma}-D_{\gamma}) (\Delta_{\gamma \gamma}^{-1}+\Delta_{\alpha \beta}^{-1})$ and $J^{\gamma x}_{ij}=2{\hat t}^{\gamma \gamma}_{ij} {\hat t}^{\gamma {\bar \gamma}}_{ij}
D_{{\bar \gamma}} (D_{\bar \gamma}-D_{\gamma}) (\Delta_{\gamma \gamma}^{-1}+\Delta_{\alpha \beta}^{-1})$ with the energy difference $\Delta_{\gamma_1 \gamma_2}=E_{D_-}-E_{\gamma_1}-E_{\gamma_2} $. We define ${\bar \gamma}=(\alpha, \beta)$ for $\gamma=(\beta, \alpha)$, ${\bar \mu}=(+,-)$ for $\mu=(-,+)$, and $D_\gamma=(C_1, C_2)$ for $\gamma=(\alpha, \beta)$ where $C_1$ and $C_2$ are the coefficients in the wave function $|D_- \rangle$.
K. I. Kugel and D. I. Khomskii: Sov. Phys. JETP [**37**]{} (1973) 725.
P. Thalmeier, and P. Fulde: Eur. Phys. Lett. [**44**]{}(2), (1998) 242.
M. V. Mostovoy, and D. I. Khomskii: Sol. Stat. Comm. [**113**]{}, (2000) 159.
D. Sa, and C. Gros: Eur. Phys. J. B [**18**]{}, (2000), 421.
T. Mori, H. Mori, and S. Tanaka: Bull. Chem. Soc. Jpn. [**72**]{} (1999) 179. H. Seo: J. Phys. Soc. Jpn. [**69**]{} (2000), 805. C. Hotta: J. Phys. Soc. Jpn. [**72**]{} (2003) 840.
Y. Shimizu, K. Miyagawa, K. Kanoda, M. Maesato, and G. Saito: Phys. Rev. B [**73**]{} (2006) 140407.
A. Kawamoto, Y. Honma, K. Kumagai, N. Matsunaga, and K. Nomura: Phys. Rev. B [**74**]{} (2006) 212508.
[^1]: E-mail address:ishihara@cmpt.phys.tohoku.ac.jp
---
abstract: 'We report the critical current density ($J_c$) of tetragonal FeS single crystals, which is similar to that of iron based superconductors with much higher superconducting critical temperatures ($T_{c}$’s). The $J_c$ is enhanced 3 times by 6% Se doping. We observe scaling of the normalized vortex pinning force as a function of the reduced field at all temperatures. Vortex pinning in FeS and FeS$_{0.94}$Se$_{0.06}$ shows a contribution of core-normal surface-like pinning. The reduced temperature dependence of $J_c$ indicates that the dominant interaction of vortex cores with pinning centers is via scattering of charge carriers with reduced mean free path ($\delta$$l$), in contrast to K$_x$Fe$_{2-y}$Se$_2$ where spatial variations in $T_{c}$ ($\delta$$T_{c}$) prevail.'
author:
- 'Aifeng Wang,$^{1}$ Lijun Wu,$^{1}$ V. N. Ivanovski,$^{2}$ J. B. Warren,$^{3}$ Jianjun Tian,$^{1,4}$ Yimei Zhu$^{1}$ and C. Petrovic$^{1}$'
title: 'Critical current density and vortex pinning in tetragonal FeS$_{1-x}$Se$_{x}$ ($x=0,0.06$)'
---
INTRODUCTION
============
Fe-based superconductors have been attracting considerable attention since their discovery in 2008.[@Kamihara] Due to their rich structural variety and signatures of high-temperature superconductivity similar to or above that of iron arsenides, iron chalcogenide materials with Fe-Ch (Ch=S,Se,Te) building blocks are of particular interest.[@WangQY; @HeS; @GeJF; @ShiogaiJ] Recently, superconductivity below 5 K was found in tetragonal FeS synthesized by hydrothermal reaction.[@LaiXF] The superconducting state is multiband with a nodal gap and a large upper critical field anisotropy.[@Borg; @LinH; @XingJ; @YingTP] Local probe $\mu$SR measurements indicate two $s$-wave gaps but also a disordered impurity magnetism with a small moment that microscopically coexists with bulk superconductivity below the superconducting transition temperature.[@Holenstein] This is similar to FeSe at high pressures, albeit with weaker coupling and a larger coherence length.[@Khasanov1; @Khasanov2]
Binary iron chalcogenides show potential for high field applications.[@SiW; @SunY; @LeoA; @JungSG] Since FeCh tetrahedra could be incorporated in different superconducting materials, it is of interest to study the critical currents and the vortex pinning mechanism in tetragonal FeS.[@HosonoS; @ForondaFR; @LuXF] Moreover, vortex pinning and dynamics are strongly related to the coherence length and the superconducting pairing mechanism.
Here we report the critical current density and the vortex pinning mechanism in FeS and FeS$_{0.94}$Se$_{0.06}$. In contrast to the point defect pinning in Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ and K$_x$Fe$_{2-y}$Se$_2$,[@YangH; @LeiHC2; @LeiHC1] the scattering of charge carriers with reduced mean free path $l$ ($\delta$$l$ pinning) is important in the vortex interaction with pinning centers.
EXPERIMENTAL DETAILS
====================
FeS and FeS$_{0.94}$Se$_{0.06}$ single crystals were synthesized by de-intercalation of potassium from K$_x$Fe$_{2-y}$(Se,S)$_2$ single crystals, using the hydrothermal reaction method.[@LaiXF; @LeiHC1] First, 8 mmol Fe powder, 5 mmol Na$_2$S$\cdot$9H$_2$O, 5 mmol NaOH, and 10 ml deionized water were mixed together and put into a 25 ml Teflon-lined steel autoclave. After that, $\sim$0.2g K$_x$Fe$_{2-y}$S$_2$ and K$_x$Fe$_{2-y}$S$_{1.6}$Se$_{0.4}$ single crystals were added. The autoclave was tightly sealed and annealed at 120 $^{\circ}$C for three days. Silver colored FeS single crystals were obtained by washing the powder with de-ionized water and alcohol, and finally by drying in vacuum overnight. X-ray diffraction (XRD) data were taken with Cu K$_{\alpha}$ ($\lambda=0.15418$ nm) radiation of a Rigaku Miniflex powder diffractometer. The element analysis was performed using energy-dispersive x-ray spectroscopy (EDX) in a JEOL LSM-6500 scanning electron microscope. High-resolution TEM imaging and electron diffraction were performed using the double aberration-corrected JEOL-ARM200CF microscope with a cold-field emission gun operated at 200 kV. The Mössbauer spectrum was taken in a transmission geometry with a $^{57}$Co(Rh) source at room temperature. Single crystals were aligned on the sample holder plane with some overlap but without stacking. The spectrum was analyzed with the WinNormos software.[@Brand] Calibration of the spectrum was performed by laser, and isomer shifts are given with respect to $\alpha$-Fe. Magnetization measurements on rectangular bar samples were performed in a Quantum Design Magnetic Property Measurement System (MPMS-XL5).
RESULTS AND DISCUSSIONS
=======================
![(Color online). (a) Powder x-ray diffraction pattern of tetragonal FeS (bottom) and FeS$_{0.94}$Se$_{0.06}$ (top). Vertical ticks mark reflections of the *P4/nmm* space group. Electron diffraction pattern for FeS (b), and FeS$_{0.94}$Se$_{0.06}$ (c) and (d). High angle annular dark field scanning transmission electron microscopy (HAADF-STEM) image viewed along \[001\] direction of FeS (e) and FeS$_{0.94}$Se$_{0.06}$ (f) single crystal. \[001\] atomic projection of FeS is embedded in (b) with red and green spheres representing Fe and S/Se, respectively. The reflection condition in (b), (c) and (d) is consistent with P4/nmm space group. While the spots with h+k=odd are extinct in FeS, they are more \[in (d)\] or less \[in (c)\] observed here, indicating possible ordering of Se.[]{data-label="magnetism"}](fig1.eps)
Figure 1(a) shows the powder X-ray diffraction patterns of FeS and FeS$_{0.94}$Se$_{0.06}$. The lattice parameters of FeS$_{0.94}$Se$_{0.06}$ are a=0.3682(2) nm and c=0.5063(3) nm, suggesting Se substitution on the S atomic site in FeS \[a=0.3673(2) nm, c=0.5028(2) nm\]. High-resolution TEM imaging is consistent with the *P4/nmm* unit cell and indicates possible ordering of Se atoms.
![(Color online). (a) Mössbauer spectrum at 294 K of the tetragonal FeS. The observed data are presented by the gray solid circles, fit is given by the red solid line, and difference is shown by the blue solid line. Vertical arrow denotes relative position of the experimental point with respect to the background. (b) Superconducting transition of FeS and FeS$_{0.94}$Se$_{0.06}$ measured by magnetic susceptibility in magnetic field 10 Oe.[]{data-label="magnetism"}](fig2.eps)
The Mössbauer fit for FeS at room temperature shows a singlet line \[Fig.2(a)\] and the absence of long range magnetic order. The isomer shift is $\delta$ = 0.373(1) mm/s whereas the Lorentz line width is 0.335(3) mm/s, in agreement with previous measurements.[@MulletM; @VaughanDJ; @BertautEF] Since the FeS$_{4}$ tetrahedra are nearly ideal, one would expect axial symmetry of the electric field gradient (*EFG*) and small values of the largest component of its diagonalized tensor $V_{zz}$. The linewidth is somewhat enhanced and is likely the consequence of a small quadrupole splitting. If the Lorentzian singlet were split into two lines, their centroids would be 0.06 mm/s apart, which is a measure of the quadrupole splitting ($\Delta$). The measured isomer shift is consistent with Fe$^{2+}$, in agreement with X-ray absorption and photoemission spectroscopy studies.[@KwonK] There is a very mild discrepancy between the theoretical Mössbauer curve and the observed values near 0.2 mm/s, most likely due to texture effects and small deviations of the incident $\gamma$ rays from the c-axis of the crystal. The point defect corrections to the Mössbauer fitting curve are negligible. Fig. 2(b) presents the zero-field-cooling (ZFC) magnetic susceptibility taken at 10 Oe applied perpendicular to the $c$ axis for FeS and FeS$_{0.94}$Se$_{0.06}$ single crystals. A superconducting transition temperature $T_c$ = 4.4 K (onset of the diamagnetic signal) is observed in FeS, consistent with the previous report.[@LaiXF] There is almost no change of $T_c$ in FeS$_{0.94}$Se$_{0.06}$. Both samples exhibit bulk superconductivity, with somewhat enhanced diamagnetic shielding upon Se substitution.
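For reference, the Mössbauer singlet analysis described above amounts to fitting a single Lorentzian absorption line to the transmission spectrum. The sketch below is purely illustrative: the actual analysis used the WinNormos package, and the synthetic data here merely mimic a singlet with the quoted isomer shift and linewidth.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_singlet(v, baseline, depth, delta, gamma):
    """Transmission spectrum: baseline minus a Lorentzian centered at the
    isomer shift delta (mm/s) with full width at half maximum gamma."""
    return baseline - depth * (gamma / 2)**2 / ((v - delta)**2 + (gamma / 2)**2)

# synthetic spectrum mimicking a singlet near 0.37 mm/s with ~0.33 mm/s width
v = np.linspace(-4, 4, 512)
rng = np.random.default_rng(1)
data = lorentzian_singlet(v, 1.0, 0.05, 0.373, 0.335) \
    + 1e-3 * rng.standard_normal(v.size)

popt, _ = curve_fit(lorentzian_singlet, v, data, p0=[1.0, 0.04, 0.3, 0.3])
print("isomer shift = %.3f mm/s, linewidth = %.3f mm/s" % (popt[2], popt[3]))
```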
![(Color online). Magnetization hysteresis loops at various temperatures for FeS (a), and FeS$_{0.94}$Se$_{0.06}$ (b), respectively. Magnetic field was applied parallel to the $c$ axis. Magnetic field dependence of the in-plane critical current density $J_c^{ab}$ for FeS (c), and FeS$_{0.94}$Se$_{0.06}$ (d), respectively.[]{data-label="magnetism"}](fig3.eps)
Figures 3(a) and 3(b) show the magnetization hysteresis loops (MHL) for FeS and FeS$_{0.94}$Se$_{0.06}$, respectively. Both MHLs show a symmetric field dependence and the absence of a paramagnetic background, suggestive of dominant bulk pinning.[@LinH] No fishtail effect is observed in either sample. The critical current density $J_c$ can be calculated from the MHL using the Bean model.[@Bean] When the field is applied along the $c$ axis, the in-plane critical current density $J_c(\mu_0H)$ is given by[@Bean; @Gyorgy] $${J}_{c} = \frac{20\Delta M(\mu_0H)}{a(1-a/3b)}$$ where $\Delta M(\mu_0H)$ is the width of the magnetization loop at a specific applied field value, measured in emu/cm$^{3}$. Here $a$ and $b$ are the width and length of the sample ($a$$\leq$$b$), measured in cm. The $J_c$ used in the formula is the unrelaxed critical current density. The practically measured critical current density, however, is $J_s$ (the relaxed value). Because magnetization relaxation is not very strong in iron-based superconductors, $J_s$$\approx$$J_c$.[@ShenB] The paramagnetic (linear) $M(\mu_{0}H)$ background has no effect on the calculation of $\Delta M(\mu_{0}H)$, and consequently on the critical currents, since it cancels in the subtraction. The inclusion of ferromagnetic impurities[@Holenstein] is unlikely due to the highly symmetric $M(H)$ loops (Fig. 3). Therefore we attribute the somewhat reduced volume fraction in pure tetragonal FeS to the presence of the unreacted paramagnetic hydrothermal solvent on the surface of our crystal, similar to what has been observed before.[@LaiXF]
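As a numerical illustration of the Bean-model formula above, the following sketch evaluates $J_c$ from the loop width and the in-plane sample dimensions (units as stated in the text: $\Delta M$ in emu/cm$^3$, $a$ and $b$ in cm, giving $J_c$ in A/cm$^2$). The input numbers are placeholders of a plausible order of magnitude, not the measured values.

```python
def bean_jc(delta_M, a, b):
    """Bean-model in-plane critical current density in A/cm^2.

    delta_M : width of the magnetization loop at a given field (emu/cm^3)
    a, b    : in-plane width and length of the sample in cm, with a <= b
    """
    if a > b:
        a, b = b, a  # the formula assumes a <= b
    return 20.0 * delta_M / (a * (1.0 - a / (3.0 * b)))

# placeholder numbers for a ~1 mm-sized crystal; the result is of order
# 1e4 A/cm^2, comparable to the zero-field values quoted below
print(bean_jc(delta_M=80.0, a=0.10, b=0.15))
```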
Figs. 3(c) and 3(d) show the field dependence of $J_s$. The calculated $J_c$ at 1.8 K and 0 T reaches 6.9 $\times$ 10$^3$ A/cm$^2$ and 2.1 $\times$ 10$^4$ A/cm$^2$ for FeS and FeS$_{0.94}$Se$_{0.06}$, respectively, i.e. $J_c$ increases about 3 times for the small Se substitution. It is interesting to note that the critical current densities of FeS and FeS$_{0.94}$Se$_{0.06}$ are comparable to those of K$_x$Fe$_{2-y}$Se$_2$ and FeSe, which feature much higher superconducting transition temperatures (32 K and 8 K, respectively).[@GaoZS; @LeiHC2; @LeiHC3; @Hafiez] However, the $J_c$ of FeS and FeS$_{0.94}$Se$_{0.06}$ is lower when compared to iron pnictide superconductors, where typical critical current densities are above 10$^5$ A/cm$^2$ at 2 K.[@YangH]
![(Color online). Normalized flux pinning force as a function of the reduced field for FeS (a), and FeS$_{0.94}$Se$_{0.06}$ (b), respectively. Solid lines are the fitting curves using $f_p$ = $Ah^p(1-h)^q$. (c) Maximum pinning force ($F_p^{max}$) vs irreversibility field curves for FeS and FeS$_{0.94}$Se$_{0.06}$. The inset shows the reduced temperature dependence of the irreversibility field. (d) Reduced temperature dependence of the normalized critical current at zero field for FeS and FeS$_{0.94}$Se$_{0.06}$; blue and cyan solid lines correspond to theoretical curves for $\delta$T$_c$ pinning and $\delta$$l$ pinning.[]{data-label="magnetism"}](fig4.eps)
The pinning force density ($F_p = \mu_0HJ_c$) can provide useful insight into vortex dynamics. There is a peak in the pinning force density as a function of the reduced magnetic field for all hard high-field superconductors.[@Kramer] According to the Dew-Hughes model,[@Hughes] the normalized vortex pinning force $f_p$ = $F_p/F_p^{max}$ should be proportional to $h^p(1-h)^q$, where $h = H/H_{irr}$ is the normalized field, and $H_{irr}$ is the irreversibility field obtained by an extrapolation of $J_c(T, \mu_0H)$ to zero. The parameters $p$ and $q$ are determined by the pinning mechanism. As shown in Figs. 4(a) and 4(b), the curves of $f(h)$ at different temperatures overlap well with each other, indicating that the same pinning mechanism dominates in the temperature range we study. Fitting with the scaling law $h^p(1-h)^q$ gives $p$ = 0.42, $q$ = 1.65, and $h_{max}^{fit}$ =0.21 for FeS and $p$ = 0.63, $q$ = 2.34, and $h_{max}^{fit}$ =0.21 for FeS$_{0.94}$Se$_{0.06}$, roughly consistent with the theoretical values $p$ = 0.5, $q$ = 2, $h_{max}$ = 0.2 for core-normal surface-like pinning. Core-normal surface-like pinning describes the pinning centers from the microstructure and geometry point of view: the free energy of the flux lines in the pinning centers is different from that in the superconducting matrix, the pinning centers are normal, and their geometry is two dimensional. Weak and widely spaced pins induce a small peak in $f(h)$ at high $h$, while strong closely spaced pins produce a large peak at low $h$.[@Kramer] The similar $h_{max}$ indicates that the strength and spacing of the pins are similar in the two samples.
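The scaling analysis quoted above can be reproduced with a standard nonlinear least-squares fit of $f_p = Ah^p(1-h)^q$, whose peak sits at $h_{max} = p/(p+q)$. The Python sketch below is only schematic: the "data" are synthetic points generated from the FeS exponents quoted in the text, not the measured pinning-force curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def dew_hughes(h, A, p, q):
    """Pinning-force scaling f_p = A * h^p * (1 - h)^q."""
    return A * h**p * (1.0 - h)**q

# synthetic normalized data built from the exponents quoted for FeS
h = np.linspace(0.02, 0.95, 60)
rng = np.random.default_rng(2)
f_data = dew_hughes(h, 1.0, 0.42, 1.65) * (1 + 0.02 * rng.standard_normal(h.size))

(A, p, q), _ = curve_fit(dew_hughes, h, f_data, p0=[1.0, 0.5, 2.0])
h_max = p / (p + q)      # peak position of h^p (1-h)^q
print("p = %.2f, q = %.2f, h_max = %.2f" % (p, q, h_max))
```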
Figure 4(c) presents the irreversibility field ($\mu_0H_{irr}$) dependence of $F_p^{max}$. Both the pinning force and $\mu_0H_{irr}$ are enhanced by Se doping. The curves can be scaled by $F_p^{max} \propto \mu_0H_{irr}^{\alpha}$ with $\alpha$ = 2.0 for FeS and 2.1 for FeS$_{0.94}$Se$_{0.06}$, close to the theoretical prediction ($\alpha$ = 2).[@Hughes] Since the tetragonal FeS and FeS$_{0.94}$Se$_{0.06}$ were synthesized by de-intercalation of potassium using the hydrothermal method and are cleaved along the $c$ axis much more easily than other iron based superconductors, it is likely that weakly connected surfaces are important in the flux pinning, as opposed to K$_{x}$Fe$_{2-y}$Se$_{2}$, FeSe$_{0.5}$Te$_{0.5}$ thin films, and Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ where point-like pinning prevails.[@LeiHC2; @YangH; @YuanPS]
In addition, as shown in the inset of Fig. 4(c), the reduced temperature dependence of $\mu_0H_{irr}$ can be fitted with $\mu_0H_{irr}(T) = \mu_0H_{irr}(0)(1-t)^{\beta}$, where $t = T/T_c$, which gives $\beta = 1.07$ for FeS and $\beta = 1.12$ for FeS$_{0.94}$Se$_{0.06}$. There are two primary mechanisms of core pinning arising from the spatial variation of the Ginzburg-Landau (GL) coefficient $\alpha$ in type $\rm\uppercase\expandafter{\romannumeral2}$ superconductors, corresponding to spatial variation in the transition temperature $T_c$ ($\delta$T$_c$ pinning) or in the charge carrier mean free path $l$ near lattice defects ($\delta$$l$ pinning). In the case of $\delta$T$_c$ pinning, $J_c(t)$/$J_c(0)$ $\propto$ (1-$t^2$)$^{7/6}$(1+$t^2$)$^{5/6}$, whereas $J_c(t)$/$J_c(0)$ $\propto$ (1-$t^2$)$^{5/2}$(1+$t^2$)$^{-1/2}$ for $\delta$$l$ pinning, where $t$ = $T/T_c$.[@Griessen] As shown in Fig. 4(d), the reduced temperature dependence of the reduced critical current density is nearly the same for FeS and FeS$_{0.94}$Se$_{0.06}$ and lies between the theoretical curves for $\delta$T$_c$ pinning and $\delta$$l$ pinning. This suggests the presence of both microscopic mechanisms. Each contribution can be estimated by $J_{c, H=0}(t) = xJ_{c, H=0}^{\delta{T_c}}(t) + (1-x)J_{c, H=0}^{\delta{l}}(t)$. The fitting gives $x$ = 0.15 for FeS, and $x$ = 0.17 for FeS$_{0.94}$Se$_{0.06}$, indicating that $\delta$$l$ pinning plays the major role in FeS and FeS$_{0.94}$Se$_{0.06}$. The pinning mechanism is different from that in K$_x$Fe$_{2-y}$Se$_2$, where $\delta$$T_c$ pinning is prevalent, and is similar to that in YBa$_2$Cu$_3$O$_7$ and NaFe$_{0.97}$Co$_{0.03}$As.[@LeiHC2; @Griessen; @Shabbir] Due to the intrinsic nanoscale phase separation in K$_x$Fe$_{2-y}$Se$_2$,[@Ryan; @Wang; @Liu; @Ricci; @Li] $\delta$$T_c$ pinning could play a major role there, in contrast to tetragonal iron sulfide, even though the FeS and FeS$_{0.94}$Se$_{0.06}$ single crystals are obtained from K$_x$Fe$_{2-y}$(Se,S)$_2$ by de-intercalation. Moreover, because FeS is a typical type $\rm\uppercase\expandafter{\romannumeral2}$ superconductor with a multiband character, the multiple gaps might have an effect on the fitting result. Nevertheless, the analysis based on the Dew-Hughes model still gives insight into the pinning mechanism.
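The decomposition described above is a one-parameter least-squares fit of the normalized $J_c(t)/J_c(0)$ to a weighted sum of the two model curves. The sketch below illustrates how such a fit can be done; the data points are synthetic (generated to mimic $x \approx 0.15$) rather than the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def j_dTc(t):
    """delta-Tc pinning: Jc(t)/Jc(0) = (1-t^2)^(7/6) (1+t^2)^(5/6)."""
    return (1 - t**2)**(7.0 / 6.0) * (1 + t**2)**(5.0 / 6.0)

def j_dl(t):
    """delta-l pinning: Jc(t)/Jc(0) = (1-t^2)^(5/2) (1+t^2)^(-1/2)."""
    return (1 - t**2)**2.5 * (1 + t**2)**(-0.5)

def mixture(t, x):
    """Normalized Jc as a weighted sum of the two pinning contributions."""
    return x * j_dTc(t) + (1 - x) * j_dl(t)

t = np.linspace(0.05, 0.9, 40)
rng = np.random.default_rng(3)
data = mixture(t, 0.15) * (1 + 0.02 * rng.standard_normal(t.size))  # x ~ 0.15

(x_fit,), _ = curve_fit(mixture, t, data, p0=[0.5], bounds=(0, 1))
print("delta-Tc weight x = %.2f, delta-l weight = %.2f" % (x_fit, 1 - x_fit))
```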
CONCLUSIONS
===========
In summary, we report the increase in the critical current density in tetragonal FeS single crystals by Se doping. Core-normal surface-like pinning is present in the vortex dynamics. The pinning is dominated by the variation of the charge carrier mean free path $l$ near lattice defects ($\delta$$l$ pinning). The critical current density is comparable to that of iron based superconductors with much higher superconducting transition temperatures. This suggests that FeS-based superconducting structures with higher $T_{c}$’s could exhibit high performance, potentially attractive for low temperature high magnetic field applications.
Acknowledgements {#acknowledgements .unnumbered}
================
The work carried out at Brookhaven was primarily supported by the Center for Emergent Superconductivity, an Energy Frontier Research Center funded by the U.S. DOE, Office of Basic Energy Sciences (A.W and C.P) but also in part by the U.S. DOE under Contract No. DEAC02-98CH10886. This work has also been supported by Grant No. 171001 from the Serbian Ministry of Education and Science. Jianjun Tian was in part supported by a scholarship of Faculty Training Abroad Program of Henan University.
Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, J. Am. Chem. Soc. **130**, 3296 (2008). Q. Y. Wang, Z. Li, W. H. Zhang, Z. C. Zhang, J. S. Zhang, W. Li, H. Ding, Y. B. Ou, P. Deng, K.Chang, J. Wen, C. L. Song, K. He, J. F. Jia, S. H. Ji, Y. Y.Wang, L. L. Wang, X. Chen,X. C. Ma and Q. K. Xue , Chin. Phys. Lett. **29**, 037402 (2012). Shaolog He, Junfeng He,Wenhao Zhang, Lin Zhao, Defa Liu, Xu Liu, Daixiang Mou, Yun-Bo Ou, Qing-YanWang, Zhi Li, LiliWang, Yingying Peng, Yan Liu, Chaoyu Chen, Li Yu, Guodong Liu, Xiaoli Dong, Jun Zhang, Chuangtian Chen, Zuyan Xu, Xi Chen, Xucun Ma, Qikun Xue\* and X. J. Zhou, Nature Materials **12**, 605 (2013). Jian-Feng Ge, Zhi-Long Liu, Canhua Liu, Chun-Lei Gao, Dong Qian, Qi-Kun Xue, Ying Liu and Jin-Feng Jia, Nature Mater. **14**, 285 (2015). J. Shiogai, Y. Ito, T. Mitsuhashi, T. Nojima and A. Tsukazaki, Nature Phys. **12**, 42 (2016). Xiaofang Lai, Hui Zhang, Yingqi Wang, Xin Wang, Xian Zhang, Jianhua Lin and Fuqiang Huang, J. Am. Chem. Soc. **137**, 10148 (2015). Hechang Lei, Milinda Abeykoon, Emil S. Bozin and C. Petrovic, Phys. Rev. B **83**, 180503 (2011). C. K. H. Borg, X. Zhou, C. Eckberg, D. J. Campbell, S. R. Saha, J. Paglione, and E. E. Rodriguez, Phys. Rev. B **93**, 094522 (2016). H. Lin, Y. Li, Q. Deng, J. Xing, J. Liu, X. Zhu, H. Yang, and H. H. Wen, Phys. Rev. B **93**, 144505 (2016). J. Xing, H. Lin, Y. F. Li, S. Li, X. Y. Zhu, H. Yang, and H. H. Wen, Phys. Rev. B **93**, 104520 (2016). T. P. Ying, X. F. Lai, X. C. Hong, Y. Xu, L. P. He, J. Zhang, M. X. Wang, Y. J. Yu, F. Q. Huang, and S. Y. Li, arXiv: 1511.07717 S. Holenstein, U. Pachmayr, Z. Guguchia, S. Kamusella, R. Khasanov, A. Amato, C. Baines, H. H. Klauss, E. Morenzoni, D. Johrendt and H. Luetkens, Phys. Rev. B **93**, 140506 (2016). R. Khasanov, K. Conder, E. Pomjakushina, A. Amato, C. Baines, Z. Bukowski, J. Karpinski, S. Katrych, H.-H. Klauss, H. Luetkens, A. Shengelaya, and N. D. Zhigadlo, Phys. Rev. B **78**, 220510 (2008). R. Khasanov, M. Bendele, A. Amato, K. Conder, H. Keller, H.-H. Klauss, H. Luetkens and E. Pomjakushina, Phys. Rev. Lett. **104**, 087004 (2010). Weidong Si, Su Jung Han, Xiaoya Shi, Steven N. Ehrlich, J. Jaroszynski, Amit Goyal and Qiang Li, Nature Communications **4**, 1347 (2013). Yue Sun, Toshihiro Taen, Yuji Tsuchiya, Qingping Ding, Sunsen Pyon, Zhixiang Shi and Tsuyoshi Tamegai Appl. Phys. Express **6**, 043101 (2013). A Leo, G Grimaldi, A Guarino, F Avitabile, A Nigro, A Galluzzi1, D Mancusi, M Polichetti, S Pace, K Buchkov, E Nazarova, S Kawale, E Bellingeri and C Ferdeghini, Supercond. Sci. Technol. **28**, 125001 (2015). Soon-Gil Jung, Ji-Hoon Kang, Eunsung Park, Sangyun Lee, Jiunn-Yuan Lin, Dmitriy A. Chareev, Alexander N. Vasiliev and Tuson Park, Sci. Rep. **5**, 16385 (2015). Shohei Hosono, Takashi Noji, Takehiro Hatakeda, Takayuki Kawamata, Masatsune Kato, and Yoji Koike, J. Phys. Soc. Jpn. **85**, 013702 (2010). F. R. Foronda, S. Ghannadzadeh, S. J. Sedlmaier, J. D. Wright, K. Burns, S. J. Cassidy, P. A. Goddard, T. Lancaster, S. J. Clarke and S. J. Blundell, Phys. Rev. B **92**, 134517 (2015). X. F. Lu, N. Z.Wang, H.Wu, Y. P.Wu, D. Zhao, X. Z. Zeng, X. G. Luo, T.Wu, W. Bao, G. H. Zhang, F. Q. Huang, Q. Z. Huang and X. H. Chen, Nature Materials **14**, 325 (2015). Hechang Lei, and C. Petrovic, Phys. Rev. B **84**, 212502 (2011). H. C. Lei, M. Abeykoon, E. S. Bozin, and C. Petrovic, Phys. Rev. B **83**, 180503 (2011). H. Yang, H. Q. Luo, Z. S. Wang, and H. H. Wen, Appl. Phys. Lett. **93**, 142506 (2008). R. A. 
Brand, WinNormos Mössbauer fitting program, Universität Duisburg 2008 M. Mullet, S. Boursiquot, M. Abdelmoula, J. M. Genin and J. J. Ehrhardt, Geochimica et Cosmochimica Acta **66**, 829 (2002). D.J. Vaughan and M.S. Ridout, Journal of Inorganic and Nuclear Chemistry **33**, 741 (1971). E.F. Bertaut, P. Burlet, and J. Chappert, Solid State Communications 3, 335 (1965). Kideok D. Kwon, Keith Refson, Sharon Bone, Ruimin Qiao, Wan-li Yang, Zhi Liu, and Garrison Sposito, Phys. Rev. B **83**, 064402 (2011). C. P. Bean, Phys. Rev. Lett. **8**, 250 (1962). E. M. Gyorgy, R. B. van Dover, K. A. Jackson, L. F. Schneemeyer, and J. V. Waszczak, Appl. Phys. Lett. **55**, 283 (1989). B. Shen, P. Cheng, Z. S. Wang, L. Fang, C. Ren, L. Shan, and H. H. Wen, Phys. Rev. B **81**, 014503 (2010). Z. S. Gao, Y. P. Qi, L. Wang, C. Yao, D. L. Wang, X. P. Zhang, Y. W. Ma, Physica C **492**, 18 (2013). Hechang Lei, R. W. Hu, and C. Petrovic, Phys. Rev B **84**, 014520 (2011). M. Abdel-Hafiez, Y. Y. Zhang, Z. Y. Cao, C. G. Duan, G. Karapetrov, V. M. Pudalov, V. A. Vlasenko, A. V. Sadakov, D. A. Knyazev, T. A. Romanova, D. A. Chareev, O. S. Volkova, A. N. Vasiliev, and X. J. Chen, Phys. Rev. B **91**, 165109 (2015). E. J. Kramer, J. Appl. Phys. **44**, 1360 (1973). D. Dew-Hughes, Philos. Mag. **30**, 293 (1974). P. S. Yuan, Z. T. Xu, Y. W. Ma, Y. Sun, and T. Tamegai, Supercond. Sci. Technol. **29**, 035013 (2016). R. Griessen, Wen Hai-hu, A. J. J. van Dalen, B. Dam, J. Rector, H. G. Schnack, S. Libbrecht, E. Osquiguil, and Y. Bruynseraede, Phys. Rev. Lett. **72**, 1910 (1994). B. Shabbir, X. L. Wang, S. R. Ghorbani, A. F. Wang, S. X. Dou, and X. H. Chen, Sci. Rep. **5**, 10606. D. H. Ryan, W. N. Rowan-Weetaluktuk, J. M. Cadogan, R. Hu, W. E. Straszheim, S. L. Budko, and P. C. Canfield, Phys. Rev. B **83**, 104526 (2011). Z. Wang, Y. J. Song, H. L. Shi, Z. W. Wang, Z. Chen, H. F. Tian, G. F. Chen, J. G. Guo, H. X. Yang, and J. Q. Li, Phys. Rev. B **83**, 140505 (2011). Y. Liu, Q. Xing, K. W. Dennis, R. W. McCallum, and T. A. Lograsso, Phys. Rev. B **86**, 144507 (2012). A. Ricci, N. Poccia, B. Josesph, G. Arrighetti, L. Barba, J. Plaiser, G. Campi, Y. Mizuguchi, H. Takeya, Y. Takano, N. L. Saini and A. Bianconi, Supercond. Sci. Technol. **24**, 082002 (2011). Wei Li, H. Ding, P. Deng, K. Chang, C. Song, Ke He, L. Wang, X. Ma, J. P. Hu, X. Chen and Q.K. Xue, Nature Physics **8**, 126 (2012).
---
abstract: 'We consider possibilities of observing CP-violation effects in neutrino oscillation experiments with low energy ($\sim$ several hundreds MeV).'
address: ' Department of Physics, University of Tokyo, Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan'
author:
- Joe Sato
title: ' CP and T violation in (long) long baseline neutrino oscillation experiments '
---
Introduction
============
Many experiments and observations have shown evidence for neutrino oscillation one after another. The solar neutrino deficit has long been observed[@Ga1; @Ga2; @Kam; @Cl; @SolSK]. The atmospheric neutrino anomaly has been found[@AtmKam; @IMB; @SOUDAN2; @MACRO] and recently almost confirmed by SuperKamiokande[@AtmSK]. There is also another suggestion given by LSND[@LSND]. All of them can be understood by neutrino oscillation and hence indicate that neutrinos are massive and that there is mixing in the lepton sector[@FukugitaYanagida].
Since there is mixing in the lepton sector, it is quite natural to imagine that CP violation occurs in the lepton sector. Several physicists have considered whether we may see CP-violation effects in the lepton sector through long baseline neutrino oscillation experiments. First it has been studied in the context of currently planned experiments[@Tanimoto; @ArafuneJoe; @AKS; @MN; @BGG] and recently in the context of a neutrino factory[@BGW; @Tanimoto2; @Romanino; @GH].
The use of neutrinos from a muon beam has great advantages compared with those from a pion beam[@Geer]. Neutrinos from a $\mu^+$($\mu^-$) beam consist of pure $\nu_{\rm e}$ and $\bar\nu_\mu$ ($\bar\nu_{\rm e}$ and $\nu_\mu$) and will contain no contamination from other kinds of neutrinos. Also, their energy distribution will be determined very well.
In these proceedings, we consider how large a CP-violation effect we may see in oscillation experiments with low energy neutrinos from a muon beam. Such neutrinos with high intensity will be available in the near future[@PRISM]. We consider three active neutrinos without any sterile one, attributing the solar neutrino deficit and the atmospheric neutrino anomaly to neutrino oscillation.
CP violation in long baseline neutrino oscillation experiments
==============================================================
Here we consider neutrino oscillation experiments with baseline $L\sim$ several hundred km.
Oscillation probability and its approximated formula
----------------------------------------------------
First we derive approximate formulas[@AKS] for neutrino oscillation to clarify our notation.
We assume three generations of neutrinos which have mass eigenvalues $m_{i} (i=1, 2, 3)$ and MNS mixing matrix $U$[@MNS] relating the flavor eigenstates $\nu_{\alpha} (\alpha={\rm e}, \mu, \tau)$ and the mass eigenstates in the vacuum $\nu\,'_{i} (i=1, 2, 3)$ as $$\nu_{\alpha} = U_{\alpha i} \nu\,'_{i}.
\label{Udef}$$ We parameterize $U$[@ChauKeung; @KuoPnataleone; @Toshev] as
$$\begin{aligned}
& &
U
=
{\rm e}^{{\rm i} \psi \lambda_{7}} \Gamma {\rm e}^{{\rm i}
\phi \lambda_{5}} {\rm e}^{{\rm i} \omega \lambda_{2}} \nonumber
\\
&=&
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & c_{\psi} & s_{\psi} \\
0 & -s_{\psi} & c_{\psi}
\end{array}
\right)
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & {\rm e}^{{\rm i} \delta}
\end{array}
\right)
\left(
\begin{array}{ccc}
c_{\phi} & 0 & s_{\phi} \\
0 & 1 & 0 \\
-s_{\phi} & 0 & c_{\phi}
\end{array}
\right)
\left(
\begin{array}{ccc}
c_{\omega} & s_{\omega} & 0 \\
-s_{\omega} & c_{\omega} & 0 \\
0 & 0 & 1
\end{array}
\right)
\nonumber \\
&=&
\left(
\begin{array}{ccc}
c_{\phi} c_{\omega} &
c_{\phi} s_{\omega} &
s_{\phi}
\\
-c_{\psi} s_{\omega}
-s_{\psi} s_{\phi} c_{\omega} {\rm e}^{{\rm i} \delta} &
c_{\psi} c_{\omega}
-s_{\psi} s_{\phi} s_{\omega} {\rm e}^{{\rm i} \delta} &
s_{\psi} c_{\phi} {\rm e}^{{\rm i} \delta}
\\
s_{\psi} s_{\omega}
-c_{\psi} s_{\phi} c_{\omega} {\rm e}^{{\rm i} \delta} &
-s_{\psi} c_{\omega}
-c_{\psi} s_{\phi} s_{\omega} {\rm e}^{{\rm i} \delta} &
c_{\psi} c_{\phi} {\rm e}^{{\rm i} \delta}
\end{array}
\right),
\label{UPar2}\end{aligned}$$
where $c_{\psi} = \cos \psi, s_{\phi} = \sin \phi$, etc.
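For concreteness, the explicit form in the last line of Eq. (\[UPar2\]) can be assembled and checked numerically; the short Python sketch below does only that (it is an illustration of the notation, with arbitrarily chosen angles).

```python
import numpy as np

def mns_matrix(psi, phi, omega, delta):
    """MNS mixing matrix in the parametrization of Eq. (UPar2)."""
    cps, sps = np.cos(psi), np.sin(psi)
    cph, sph = np.cos(phi), np.sin(phi)
    com, som = np.cos(omega), np.sin(omega)
    ed = np.exp(1j * delta)
    return np.array([
        [cph * com, cph * som, sph],
        [-cps * som - sps * sph * com * ed,
         cps * com - sps * sph * som * ed, sps * cph * ed],
        [sps * som - cps * sph * com * ed,
         -sps * com - cps * sph * som * ed, cps * cph * ed],
    ])

U = mns_matrix(psi=np.pi / 4, phi=0.1, omega=np.pi / 4, delta=np.pi / 2)
print(np.allclose(U @ U.conj().T, np.eye(3)))   # unitarity check -> True
```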
The evolution equation of neutrino with energy $E$ in matter is expressed as $${\rm i} \frac{{\rm d} \nu}{{\rm d} x}
= H \nu,
\label{MatEqn}$$ where $$H
\equiv
\frac{1}{2 E}
\tilde U
{\rm diag} (\tilde m^2_1, \tilde m^2_2, \tilde m^2_3)
\tilde U^{\dagger},
\label{Hdef}$$ with a unitary mixing matrix $\tilde U$ and the effective mass squared $\tilde m^{2}_{i}$’s $(i=1, 2, 3)$. The matrix $\tilde U$ and the masses $\tilde m_{i}$’s are determined by[@Wolf; @MS; @BPPW] $$\tilde U
\left(
\begin{array}{ccc}
\tilde m^2_1 & & \\
& \tilde m^2_2 & \\
& & \tilde m^2_3
\end{array}
\right)
\tilde U^{\dagger}
=
U
\left(
\begin{array}{ccc}
0 & & \\
& \delta m^2_{21} & \\
& & \delta m^2_{31}
\end{array}
\right)
U^{\dagger}
+
\left(
\begin{array}{ccc}
a & & \\
& 0 & \\
& & 0
\end{array}
\right).
\label{MassMatrixInMatter}$$ Here $\delta m^2_{ij} = m^2_i - m^2_j$ and $$a \equiv 2 \sqrt{2} G_{\rm F} n_{\rm e} E \nonumber \\
= 7.56 \times 10^{-5} {\rm eV^{2}} \cdot
\left( \frac{\rho}{\rm g\,cm^{-3}} \right)
\left( \frac{E}{\rm GeV} \right),
\label{aDef}$$ with the electron density, $n_{\rm e}$ and the averaged matter density[@KS], $\rho$. The solution of eq.(\[MatEqn\]) is then $$\begin{aligned}
\nu (x) &=& S(x) \nu(0)
\label{nu(x)}\\
S &\equiv& {\rm T\, e}^{ -{\rm i} \int_0^x {\rm d} s H (s) }
\label{Sdef}\end{aligned}$$ (T being the symbol for time ordering), giving the oscillation probability for $\nu_{\alpha} \rightarrow \nu_{\beta} (\alpha, \beta =
{\rm e}, \mu, \tau)$ at distance $L$ as $$\begin{aligned}
P(\nu_{\alpha} \rightarrow \nu_{\beta}; E, L)
&=&
\left| S_{\beta \alpha} (L) \right|^2.
\label{alpha2beta}\end{aligned}$$ Note that $P(\bar\nu_{\alpha} \rightarrow \bar\nu_{\beta})$ is related to $P(\nu_{\alpha} \rightarrow \nu_{\beta})$ through $a \rightarrow
-a$ and $U \rightarrow U^{\ast} ({\rm i.e.\,} \delta \rightarrow
-\delta)$. Similarly, we obtain $P(\nu_{\beta} \rightarrow \nu_{\alpha})$ from eq.(\[alpha2beta\]) by replacing $\delta \rightarrow -\delta$, $P(\bar\nu_{\beta} \rightarrow \bar\nu_{\alpha})$ by $a \rightarrow -a$.
Attributing both solar neutrino deficit and atmospheric neutrino anomaly to neutrino oscillation, we can assume $a, \delta m^2_{21} \ll \delta m^2_{31}$. The oscillation probabilities in this case can be considered by perturbation[@AKS]. With the additional conditions $$\frac{aL}{2E}
=
1.93 \times 10^{-4} \cdot
\left( \frac{\rho}{\rm g\,cm^{-3}} \right)
\left( \frac{L}{\rm km} \right)
\ll 1
\label{AKScond1}$$ and $$\frac{\delta m^{2}_{21} L}{2E}
=
2.53
\frac{(\delta m^{2}_{21} / {\rm eV^{2}})(L / {\rm km})}{E / {\rm
GeV}} \ll 1,
\label{AKScond2}$$ the matrix $S$ (\[Sdef\]) is given by $$S(x) \simeq {\rm e}^{ -{\rm i} H_0 x } +
{\rm e}^{ -{\rm i} H_0 x }
( -{\rm i} ) \int_{0}^{x} {\rm d} s H_1 (s),
\label{S0+S1}$$ where $$\begin{aligned}
H_0 &=& \frac{1}{2 E} U \left(
\begin{array}{ccc}
0 & & \\
& 0 & \\
& & \delta m^2_{31}
\end{array}
\right)
U^{\dagger}
\label{H0def} \\
H_1 (x) &=& {\rm e}^{ {\rm i} H_0 x} H_1 {\rm e}^{ -{\rm i} H_0 x},
\label{H1(x)def}\\
H_1
&=&
\frac{1}{2 E}
\left\{
U
\left(
\begin{array}{ccc}
0 & & \\
& \delta m^2_{21} & \\
& & 0
\end{array}
\right)
U^{\dagger} +
\left(
\begin{array}{ccc}
a & & \\
& 0 & \\
& & 0
\end{array}
\right)
\right\}.
\label{H1def}\end{aligned}$$ Then the oscillation probabilities are calculated, e.g., as $$\begin{aligned}
& &
P(\nu_{\mu} \rightarrow \nu_{\rm e}; E, L)
=
4 \sin^2 \frac{\delta m^2_{31} L}{4 E}
c_{\phi}^2 s_{\phi}^2 s_{\psi}^2
\left\{
1 + \frac{a}{\delta m^2_{31}} \cdot 2 (1 - 2 s_{\phi}^2)
\right\}
\nonumber \\
&+&
2 \frac{\delta m^2_{31} L}{2 E} \sin \frac{\delta m^2_{31} L}{2 E}
c_{\phi}^2 s_{\phi} s_{\psi}
\left\{
- \frac{a}{\delta m^2_{31}} s_{\phi} s_{\psi} (1 - 2 s_{\phi}^2)
+
\frac{\delta m^2_{21}}{\delta m^2_{31}} s_{\omega}
(-s_{\phi} s_{\psi} s_{\omega} + c_{\delta} c_{\psi} c_{\omega})
\right\}
\nonumber \\
&-&
4 \frac{\delta m^2_{21} L}{2 E} \sin^2 \frac{\delta m^2_{31} L}{4 E}
s_{\delta} c_{\phi}^2 s_{\phi} c_{\psi} s_{\psi} c_{\omega}
s_{\omega}.
\label{eq:AKSmu2e}\end{aligned}$$
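For orientation, the leading-order probability above is easy to evaluate numerically once the dimensionless combinations $\delta m^2 L/2E$ and the matter parameter $a$ are formed. The sketch below codes Eq. (\[eq:AKSmu2e\]) term by term with $E$ in GeV, $L$ in km and mass squared differences in eV$^2$; the parameter values are chosen only as an example.

```python
import numpy as np

def prob_mu_to_e(E, L, dm31, dm21, a, phi, psi, omega, delta):
    """Leading-order P(nu_mu -> nu_e) of eq. (AKSmu2e).
    E in GeV, L in km, dm31/dm21/a in eV^2; angles in radians."""
    s_phi, c_phi = np.sin(phi), np.cos(phi)
    s_psi, c_psi = np.sin(psi), np.cos(psi)
    s_om, c_om = np.sin(omega), np.cos(omega)
    D31 = 2.53 * dm31 * L / E      # dm31 * L / (2E), dimensionless
    D21 = 2.53 * dm21 * L / E
    term1 = 4 * np.sin(D31 / 2)**2 * c_phi**2 * s_phi**2 * s_psi**2 \
        * (1 + (a / dm31) * 2 * (1 - 2 * s_phi**2))
    term2 = 2 * D31 * np.sin(D31) * c_phi**2 * s_phi * s_psi \
        * (-(a / dm31) * s_phi * s_psi * (1 - 2 * s_phi**2)
           + (dm21 / dm31) * s_om * (-s_phi * s_psi * s_om
                                     + np.cos(delta) * c_psi * c_om))
    term3 = -4 * D21 * np.sin(D31 / 2)**2 * np.sin(delta) * c_phi**2 \
        * s_phi * c_psi * s_psi * c_om * s_om
    return term1 + term2 + term3

# example: E = 0.3 GeV, L = 300 km, matter density rho = 2.5 g/cm^3
E, L, rho = 0.3, 300.0, 2.5
a = 7.56e-5 * rho * E              # matter parameter of Eq. (aDef), in eV^2
print(prob_mu_to_e(E, L, dm31=3e-3, dm21=1e-4, a=a,
                   phi=np.arcsin(np.sqrt(0.1)), psi=np.pi / 4,
                   omega=np.pi / 4, delta=np.pi / 2))
```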
As stated, oscillation probabilities such as $P(\bar\nu_{\mu}
\rightarrow \bar\nu_{\rm e})$, $P(\nu_{\rm e} \rightarrow \nu_{\mu})$ and $P(\bar\nu_{\rm e} \rightarrow \bar\nu_{\mu})$ are given from the above formula by some appropriate changes of the sign of $a$ and/or $\delta$.
The first condition (\[AKScond1\]) of the approximation leads to a constraint for the baseline length of long-baseline experiments as $$L \ll
1.72 \times 10^{3} {\rm km}
\left( \frac{\rho}{3 {\rm g\,cm^{-3}}} \right)
\label{eq:AKScond1'}$$ The second condition (\[AKScond2\]) gives the energy region where we can use the approximation, $$E
\gg
76.0 {\rm MeV}
\left( \frac{\delta m^{2}_{21}}{10^{-4} {\rm eV^{2}}} \right)
\left( \frac{L}{300 {\rm km}} \right).
\label{eq:AKScond2'}$$
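As a quick numerical check of the two validity conditions above, one may simply evaluate the corresponding small parameters for a given set-up; the snippet below does this for example numbers (an illustration only).

```python
def approximation_check(E_GeV, L_km, rho, dm21):
    """Return the two expansion parameters aL/2E and dm21*L/2E; both must
    be << 1 for the perturbative formula (S0+S1) to be reliable."""
    matter_term = 1.93e-4 * rho * L_km           # aL/2E
    solar_term = 2.53 * dm21 * L_km / E_GeV      # dm21*L/2E
    return matter_term, solar_term

# example numbers: E = 0.3 GeV, L = 300 km, rho = 2.5 g/cm^3, dm21 = 1e-4 eV^2
print(approximation_check(E_GeV=0.3, L_km=300.0, rho=2.5, dm21=1e-4))
# -> roughly (0.14, 0.25), so both conditions are reasonably well satisfied
```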
As long as these conditions, (\[eq:AKScond1’\]) and (\[eq:AKScond2’\]), are satisfied, the approximation (\[S0+S1\]) works pretty well[@AKS; @KS2].
Magnitude of CP violation and matter effect
-------------------------------------------
The neutrinos available as an initial beam are $\nu_{\mu}$ and $\bar\nu_{\mu}$ in the current long baseline experiments[@K2K; @Ferm]. The “CP violation” gives a nonzero difference of the oscillation probabilities between, e.g., $P(\nu_{\mu} \rightarrow \nu_{\rm e})$ and $
P(\bar\nu_{\mu} \rightarrow \bar\nu_{\rm e})$[@AKS]. This gives $$\begin{aligned}
P(\nu_{\mu} \rightarrow \nu_{\rm e}; L)
-
P(\bar\nu_{\mu} \rightarrow \bar\nu_{\rm e}; L)
&=&
16 \frac{a}{\delta m^2_{31}} \sin^2 \frac{\delta m^2_{31} L}{4 E}
c_{\phi}^2 s_{\phi}^2 s_{\psi}^2 (1 - 2 s_{\phi}^2)
\nonumber \\
&-&
4 \frac{a L}{2 E} \sin \frac{\delta m^2_{31} L}{2 E}
c_{\phi}^2 s_{\phi}^2 s_{\psi}^2 (1 - 2 s_{\phi}^2)
\nonumber \\
&-&
8 \frac{\delta m^2_{21} L}{2 E}
\sin^2 \frac{\delta m^2_{31} L}{4 E}
s_{\delta} c_{\phi}^2 s_{\phi} c_{\psi} s_{\psi} c_{\omega}
s_{\omega}.
\label{eq:CP}\end{aligned}$$ The difference of these two, however, also includes the matter effect, i.e. a fake CP violation, proportional to $a$. We must somehow distinguish these two contributions to conclude the existence of CP violation, as discussed in ref.[@AKS].
On the other hand, a muon ring enables the extraction of $\nu_{\rm e}$ and $\bar\nu_{\rm e}$ beams. This allows a direct measurement of pure CP violation through “T violation”, e.g., $P(\nu_{\mu} \rightarrow
\nu_{\rm e}) - P(\nu_{\rm e} \rightarrow \nu_{\mu})$ as $$P(\nu_{\mu} \rightarrow \nu_{\rm e}) -
P(\nu_{\rm e} \rightarrow \nu_{\mu})
=
- 8 \frac{\delta m^2_{21} L}{2 E}
\sin^2 \frac{\delta m^2_{31} L}{4 E}
s_{\delta} c_{\phi}^2 s_{\phi} c_{\psi} s_{\psi} c_{\omega}
s_{\omega}.
\label{eq:T}$$ Note that this difference gives pure CP violation.
By measuring “CPT violation”, e.g. the difference between $P(\nu_{\mu} \rightarrow
\nu_{\rm e})$ and $P(\bar\nu_{\rm e} \rightarrow \bar\nu_{\mu})$, we can check the matter effect. $$\begin{aligned}
P(\nu_{\mu} \rightarrow \nu_{\rm e}; L)
-
P(\bar\nu_{\rm e} \rightarrow \bar\nu_{\mu}; L)
&=&
16 \frac{a}{\delta m^2_{31}} \sin^2 \frac{\delta m^2_{31} L}{4 E}
c_{\phi}^2 s_{\phi}^2 s_{\psi}^2 (1 - 2 s_{\phi}^2)
\nonumber \\
&-&
4 \frac{a L}{2 E} \sin \frac{\delta m^2_{31} L}{2 E}
c_{\phi}^2 s_{\phi}^2 s_{\psi}^2 (1 - 2 s_{\phi}^2)
\nonumber \\
\label{eq:CPT}\end{aligned}$$
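The three differences (\[eq:CP\]), (\[eq:T\]) and (\[eq:CPT\]) share the same combinations of mixing angles, so it is convenient to evaluate them together. The sketch below does this at leading order (same unit conventions as before, with illustrative parameter values only).

```python
import numpy as np

def asymmetries(E, L, dm31, dm21, a, phi, psi, omega, delta):
    """Leading-order CP, T and CPT differences of eqs. (CP), (T), (CPT);
    E in GeV, L in km, dm31/dm21/a in eV^2, angles in radians."""
    s_phi, c_phi = np.sin(phi), np.cos(phi)
    s_psi, c_psi = np.sin(psi), np.cos(psi)
    s_om, c_om = np.sin(omega), np.cos(omega)
    D31 = 2.53 * dm31 * L / E        # dm31 * L / (2E), dimensionless
    D21 = 2.53 * dm21 * L / E
    aL = 2.53 * a * L / E            # a * L / (2E)
    matter = c_phi**2 * s_phi**2 * s_psi**2 * (1 - 2 * s_phi**2)
    genuine = c_phi**2 * s_phi * c_psi * s_psi * c_om * s_om * np.sin(delta)
    cpt = 16 * (a / dm31) * np.sin(D31 / 2)**2 * matter \
        - 4 * aL * np.sin(D31) * matter               # fake, matter-induced part
    t_viol = -8 * D21 * np.sin(D31 / 2)**2 * genuine  # pure CP-violating part
    cp = cpt + t_viol                                 # eq. (CP) is their sum
    return cp, t_viol, cpt

E, L, rho = 0.3, 300.0, 2.5
a = 7.56e-5 * rho * E                                 # matter parameter, eV^2
print(asymmetries(E, L, 3e-3, 1e-4, a, np.arcsin(np.sqrt(0.1)),
                  np.pi / 4, np.pi / 4, np.pi / 2))
```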
We present in Fig.\[fig1\] the “T-violation” part (\[eq:T\]) and the “CPT-violation” part (\[eq:CPT\]) for some parameters allowed by the present experiments[@Fogli][^1] with $\sin^{2}
\omega = 1/2$, $\sin^{2} \psi = 1/2$, $\sin \delta = 1$ fixed. The matter density is also fixed to the constant value $\rho = 2.5 {\rm
g/cm^{3}}$[@KS]. Other parameters are taken as $\delta m^{2}_{31}
= 3 \times 10^{-3} {\rm eV}^{2}$ and $1 \times 10^{-3} {\rm eV}^{2}$, $\delta m^{2}_{21} = 1 \times 10^{-4} {\rm eV}^{2}$ and $3 \times
10^{-5} {\rm eV}^{2}$.
The “T-violation” effect is proportional to $\delta m^{2}_{21} / \delta m^{2}_{31}$ and, for $\phi \ll 1$, also to $\sin \phi$, as seen in eq.(\[eq:T\]) and Fig.\[fig1\]. Recalling that the energy of the neutrino beam is several hundred MeV, we see in Fig.\[fig1\] that the “T-violation” effect amounts to at least about 5%, hopefully 10$\sim$20%. This result gives hope of detecting pure leptonic CP violation directly with neutrino oscillation experiments.
CP violation in long long baseline experiments
==============================================
Here we consider neutrino experiments with baseline $L \sim 10000$ km and see that “T violation” will be amplified[@KP; @KS3].
Since the baseline length $L$ does not satisfy the condition (\[eq:AKScond1'\]), we cannot make use of the previous approximation.
However, as $a, \delta m^2_{21} \ll \delta m^2_{31}$ is satisfied, we have approximation formulae of the mixing matrix in matter $\tilde U$ for [*neutrino*]{}, $$\begin{aligned}
\tilde U &=&{\rm e}^{{\rm i} \psi \lambda_{7}} \Gamma {\rm e}^{{\rm i}
\phi \lambda_{5}} {\rm e}^{{\rm i} \tilde\omega \lambda_{2}}
\label{UinMatter}\\
\tan 2\tilde\omega &=& \frac{\delta m^2_{21} s_{2\omega}}
{-a c^2_{\phi}+\delta m^2_{21}c_{2\omega}}
\nonumber\end{aligned}$$ and of “masses” in matter $\tilde m^2_{i}$ for[*neutrino*]{} $$\begin{aligned}
(\tilde m^2_1,\tilde m^2_2,\tilde m^2_3)&=&
(\lambda_-,\lambda_+,\delta m^2_{31}+a c_\phi^2)
\\
\lambda_{\pm}&=&\frac{a c_\phi^2+\delta m^2_{21}}{2}\pm \frac{1}{2}
\sqrt{
(-a c^2_{\phi}+\delta m^2_{21}c_{2\omega})^2+(\delta m^2_{21}s_{2\omega})^2}.
\nonumber\end{aligned}$$ Thus the “T violation” is given by $$\begin{aligned}
&& P(\nu_e \rightarrow \nu_\mu; L)
-
P(\nu_{\mu} \rightarrow \nu_{\rm e}; L)
\nonumber\\
&=&4
s_{\delta} c_{\phi}^2 s_{\phi} c_{\psi} s_{\psi} c_{\tilde\omega}
s_{\tilde\omega}\left(
\sin \frac{\delta \tilde m^2_{21} L}{2 E}+
\sin \frac{\delta \tilde m^2_{32} L}{2 E}+
\sin \frac{\delta \tilde m^2_{13} L}{2 E}
\right)
\label{eq:Tlong}\\
&\sim&
s_{\delta} c_{\phi}^2 s_{\phi} c_{\psi} s_{\psi} c_{\tilde\omega}
s_{\tilde\omega}\left(
\sin \frac{\delta \tilde m^2_{21} L}{2 E}
\right),
\nonumber\end{aligned}$$ Here in the last expression we dropped the terms $\sin \frac{\delta \tilde m^2_{32} L}{2 E}$ and $\sin \frac{\delta \tilde m^2_{13} L}{2 E}$, since they oscillate very rapidly and give no contribution to the actual measurement.
As is seen in eq. (\[UinMatter\]), due to the MSW effect[@Wolf; @MS] the “T violation” may be amplified considerably even if the mixing angle $\omega$ is very small, and hence we can test whether there is a CP phase $\delta$[@KS3].
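To make this amplification explicit, one can evaluate $\tilde\omega$ and the effective mass squared differences in matter and insert them into the expression above. The sketch below does this within the constant-density, leading-order approximation used here; the parameter values are illustrative only.

```python
import numpy as np

def t_violation_matter(E, L, dm31, dm21, rho, phi, psi, omega, delta):
    """Leading-order P(nu_e -> nu_mu) - P(nu_mu -> nu_e) in matter of
    constant density rho (g/cm^3); E in GeV, L in km, mass squared
    differences in eV^2, angles in radians."""
    a = 7.56e-5 * rho * E                             # Eq. (aDef), eV^2
    c_phi, s_phi = np.cos(phi), np.sin(phi)
    c2w, s2w = np.cos(2 * omega), np.sin(2 * omega)
    # effective (1,2)-sector mixing angle and mass eigenvalues in matter
    omega_t = 0.5 * np.arctan2(dm21 * s2w, -a * c_phi**2 + dm21 * c2w)
    root = np.hypot(-a * c_phi**2 + dm21 * c2w, dm21 * s2w)
    lam_m = 0.5 * (a * c_phi**2 + dm21) - 0.5 * root  # m~_1^2
    lam_p = 0.5 * (a * c_phi**2 + dm21) + 0.5 * root  # m~_2^2
    m3 = dm31 + a * c_phi**2                          # m~_3^2
    phase = lambda dm2: 2.53 * dm2 * L / E            # dm2 * L / (2E)
    osc = (np.sin(phase(lam_p - lam_m)) + np.sin(phase(m3 - lam_p))
           + np.sin(phase(lam_m - m3)))
    pref = 4 * np.sin(delta) * c_phi**2 * s_phi * np.cos(psi) * np.sin(psi) \
        * np.cos(omega_t) * np.sin(omega_t)
    return pref * osc

# a very long baseline can give a sizable effect even for a small omega
print(t_violation_matter(E=0.3, L=10000.0, dm31=3e-3, dm21=1e-4, rho=3.0,
                         phi=np.arcsin(np.sqrt(0.1)), psi=np.pi / 4,
                         omega=0.1, delta=np.pi / 2))
```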
Summary and conclusion
======================
We have considered how large CP/T-violation effects can be observed making use of a low-energy neutrino beam, inspired by PRISM[@PRISM].
First we considered a baseline of several hundred km. In this case a pure CP-violation effect of more than 10%, hopefully 20%, may be observed within the region allowed by present experiments. To see the CP-violation effect, such a baseline length and neutrino energy are statistically most preferable[@KS2].
Then we considered a baseline of $\sim$ 10000 km. We have seen that in this case, due to the MSW effect, the “T violation” will be amplified and we can test whether the CP phase $\delta$ is large or not.
[999]{}
GALLEX Collaboration, W. Hampel [*et al.*]{}, Phys. Lett. B [**447**]{}, 127 (1999) .
SAGE Collaboration, J. N. Abdurashitov [*et al.*]{}, astro-ph/9907113.
Kamiokande Collaboration, Y. Suzuki, Nucl. Phys. B (Proc. Suppl.) [**38**]{}, 54 (1995).
Homestake Collaboration, B. T. Cleveland [*et al.*]{}, Astrophys. J. [**496**]{}, 505 (1998) .
Super-Kamiokande Collaboration, Y. Fukuda [*et al.*]{}, Phys. Rev. Lett. [**82**]{},1810 (1999), [*ibid.*]{} [**82**]{}, 2430 (1999).
Kamiokande Collaboration, K. S. Hirata [*et al.*]{}, Phys. Lett. [**B205**]{},416 (1988); [*ibid.*]{} [**B280**]{},146 (1992); Y. Fukuda [*et al.*]{}, Phys. Lett. [**B335**]{}, 237 (1994).
IMB Collaboration, D. Casper [*et al.*]{}, Phys. Rev. Lett. [**66**]{}, 2561 (1991);\
R. Becker-Szendy [*et al.*]{}, Phys. Rev. [**D46**]{}, 3720 (1992).
SOUDAN2 Collaboration, T. Kafka, Nucl. Phys. B (Proc. Suppl.) [**35**]{}, 427 (1994); M. C. Goodman, [*ibid.*]{} [ **38**]{}, 337 (1995); W. W. M. Allison [*et al.*]{}, Phys. Lett. [ **B391**]{}, 491 (1997).
MACRO Collaboration, M. Ambrosio [*et al.*]{}, Phys. Lett. [**B434**]{}, 451 (1998).
Super-Kamiokande Collaboration, Y. Fukuda [*et al.*]{}, Phys. Rev. Lett. [**81**]{}, 1562 (1998), Phys. Lett. [**B433**]{}, 9 (1998), Phys. Lett. [**B436**]{}, 33 (1998), Phys. Rev. Lett. [ **82**]{},2644 (1999).
LSND Collaboration, C.Athanassopoulos [*et al.*]{}, Phys. Rev. Lett [**77**]{}, 3082 (1996); [*ibid*]{} [**81**]{}, 1774 (1998).
For a review, M. Fukugita and T. Yanagida, in [*Physics and Astrophysics of Neutrinos*]{}, edited by M. Fukugita and A. Suzuki (Springer-Verlag, Tokyo, 1994).
M. Tanimoto, Phys. Rev. [**D55**]{}, 322 (1997); Prog. Theor. Phys.[**97**]{}, 901 (1997).
J. Arafune and J. Sato, Phys. Rev. [**D55**]{}, 1653 (1997).
J. Arafune, M. Koike and J. Sato, Phys. Rev. [**D56**]{}, 3093 (1997).
H. Minakata and H. Nunokawa, Phys. Rev. [**D57**]{} 4403 (1998); Phys. Lett. [**B413**]{}, 369 (1997).
M. Bilenky, C. Giunti, W. Grimus, Phys. Rev. [**D58**]{}, 033001 (1998).
V. Barger, S. Geer and K. Whisnant, to appear in Phys. Rev. D (hep-ph/9906478).
M. Tanimoto, hep-ph/9906516; A. Kalliomäki, J. Maalampi, M. Tanimoto, hep-ph/9909301.
A. Romanino, hep-ph/9909425.
A. De Rujula, M. B. Gavela and P. Hernandez, Nucl. Phys. [**B547**]{}, 21 (1999); A. Donini, M. B. Gavela, P. Hernandez and S. Rigolin, hep-ph/9909254.
S. Geer, Phys. Rev. [**D57**]{}, 6989 (1998), erratum [*ibid.*]{} [**D59**]{}, 039903 (1999).
Y. Kuno and Y. Mori, Talk at the ICFA/ECFA Workshop “Neutrino Factories based on Muon Storage Ring”, July 1999; Y. Kuno, in [*Proceedings of the Workshop on High Intensity Secondary Beam with Phase Rotation*]{}.
Z. Maki, M. Nakagawa and S. Sakata, Prog. Theor. Phys. [**28**]{}, 870 (1962).
L. -L. Chau and W. -Y. Keung, Phys. Rev. Lett. [**59**]{}, 671 (1987).
T. K. Kuo and J. Pantaleone, Phys. Lett. [**B198**]{}, 406 (1987).
S. Toshev, Phys. Lett. [**B226**]{}, 335 (1989).
L. Wolfenstein, Phys. Rev. [**D17**]{}, 2369 (1978).
S. P. Mikheev and A. Yu. Smirnov, Sov. J. Nucl. Phys. [**42**]{}, 913 (1985).
V. Barger, S. Pakvasa, R.J.N. Phillips and K. Whisnant, Phys. Rev. [**D22**]{}, 2718 (1980).
M. Koike and J.Sato, Mod. Phys. Lett. [**A14**]{},1297 (1999).
M. Koike and J.Sato, hep-ph/9909469.
K. Nishikawa, INS-Rep-924 (1992).
S. Parke, Fermilab-Conf-93/056-T (1993), hep-ph/9304271.
G. L. Fogli, E. Lisi, D. Montanino and G. Scioscia, Phys. Rev. [**D59**]{}, 033001 (1999).
M. Apollonio [*et al.*]{}, Phys. Lett. [**B420**]{}, 397 (1998); hep-ex/9907037.
P. I. Krastev and S. T. Petcov, Phys. Lett.[**B205**]{}, 84 (1988).
M. Koike and J. Sato, hep-ph/9911258.
[^1]: Although the Chooz reactor experiment has almost excluded $\sin^2\phi=0.1$[@chooz], there still remains a small chance for this value.
---
abstract: 'We propose an alternate construction to compute the minimal entanglement wedge cross section (EWCS) for a single interval in a $(1+1)$ dimensional holographic conformal field theory at a finite temperature, dual to a bulk planar BTZ black hole geometry. Utilizing this construction we compute the holographic entanglement negativity for the above mixed state configuration from a recent conjecture in the literature. Our results exactly reproduce the corresponding replica technique results in the large central charge limit and resolves the issue of the missing thermal term for the holographic entanglement negativity computed earlier in the literature. In this context we compare the results for the holographic entanglement negativity utilizing the minimum EWCS and an alternate earlier proposal involving an algebraic sum of the lengths of the geodesics homologous to specific combinations of appropriate intervals. From our analysis we conclude that the two quantities are proportional in the context of the $AdS_3/CFT_2$ scenario and this possibly extends to the higher dimensional $AdS_{d+1}/CFT_d$ framework.'
author:
- 'Jaydeep Kumar Basak[^1]'
- 'Vinay Malvimat[^2]'
- 'Himanshu Parihar[^3]'
- 'Boudhayan Paul[^4]'
- 'Gautam Sengupta[^5]'
bibliography:
- 'references.bib'
title: On minimal entanglement wedge cross section for holographic entanglement negativity
---
Introduction {#sec_intro}
============
Quantum entanglement has evolved as one of the dominant themes in the development of diverse disciplines covering issues from condensed matter physics to quantum gravity and has garnered intense research attention. Entanglement entropy, defined as the von Neumann entropy of the reduced density matrix for the subsystem being considered, plays a crucial role in the characterization of entanglement for bipartite pure states. However, for bipartite mixed states, entanglement entropy receives contributions from irrelevant correlations (e.g., for finite temperature configurations, it includes thermal correlations), and fails to correctly capture the entanglement of the mixed state under consideration. This crucial issue was taken up in a classic work [@Vidal] by Vidal and Werner, where a computable measure termed entanglement negativity was proposed to characterize mixed state entanglement, which provides an upper bound on the distillable entanglement of the bipartite mixed state in question.[^6] It was defined as the logarithm of the trace norm of the partially transposed reduced density matrix with respect to one of the subsystems of a bipartite system. Subsequently it could be established in [@Plenio:2005cwa] that despite being non convex, the entanglement negativity was an entanglement monotone under local operations and classical communication (LOCC).
In a series of communications [@Calabrese:2004eu; @Calabrese:2009qy; @Calabrese:2009ez; @Calabrese:2010he] the authors formulated a replica technique to compute the entanglement entropy in $(1+1)$ dimensional conformal field theories ($CFT_{1+1}$s). The procedure was later extended to configurations with multiple disjoint intervals in [@Hartman:2013mia; @Headrick:2010zt], where it was shown that the entanglement entropy receives non universal contributions, which depended on the full operator content of the theory, and were sub leading in the large central charge limit. A variant of the replica technique described above was developed in [@Calabrese:2012ew; @Calabrese:2012nk; @Calabrese:2014yza] to compute the entanglement negativity of various bipartite pure and mixed state configurations in a $CFT_{1+1}$. This was subsequently extended to a mixed state configuration of two disjoint intervals in [@Kulaxizi:2014nma] where the entanglement negativity was found to be non universal, and it was possible to elicit a universal contribution in the large central charge limit if the intervals were in proximity. Interestingly the entanglement negativity for this configuration was numerically shown to exhibit a phase transition depending upon the separation of the intervals [@Kulaxizi:2014nma; @Dong:2018esp].
In [@Ryu:2006bv; @Ryu:2006ef] Ryu and Takayanagi (RT) proposed a holographic conjecture where the universal part of the entanglement entropy of a subsystem in a dual $CFT_d$ could be expressed in terms of the area of the co dimension two static minimal surface (RT surface) in the bulk $AdS_{d+1}$ geometry, homologous to the subsystem. This development opened up a significant line of research in the context of the $AdS/CFT$ correspondence (for a detailed review see [@Ryu:2006ef; @Nishioka:2009un; @Rangamani:2016dms; @Nishioka:2018khk]). The RT conjecture was proved initially for the $AdS_3/CFT_2$ scenario, with later generalization to the $AdS_{d+1}/CFT_d$ framework in [@Fursaev:2006ih; @Casini:2011kv; @Faulkner:2013yia; @Lewkowycz:2013nqa]. Hubeny, Rangamani and Takayanagi (HRT) extended the RT conjecture to covariant scenarios in [@Hubeny:2007xt], a proof of which was established in [@Dong:2016hjy].
The above developments motivated a corresponding holographic characterization for the entanglement negativity and could be utilized to compute the entanglement negativity for the vacuum state of a $CFT_d$ dual to a bulk pure $AdS_{d+1}$ geometry in [@Rangamani:2014ywa]. In [@Chaturvedi:2016rcn; @Chaturvedi:2016opa] a holographic entanglement negativity conjecture and its covariant extension were advanced for bipartite mixed state configurations in the $AdS_3/CFT_2$ scenario, with the generalization to higher dimensions reported in[@Chaturvedi:2016rft]. A large central charge analysis of the entanglement negativity through the monodromy technique for holographic $CFT_{1+1}$s was established in [@Malvimat:2017yaj] which provided a strong substantiation for the holographic entanglement negativity construction described above. Subsequently in [@Jain:2017sct; @Jain:2017uhe] the above conjecture along with its covariant version was extended for bipartite mixed state configurations of adjacent intervals in dual $CFT_{1+1}$s, with the higher dimensional generalization described in [@Jain:2017xsu]. These conjectures were applied to explore the holographic entanglement negativity for various mixed state configurations in $CFT_d$s dual to the bulk pure $AdS_{d+1}$ geometry, $AdS_{d+1}$-Schwarzschild black hole and the $AdS_{d+1}$-Reissner-Nordstr[ö]{}m black hole [@Jain:2017xsu; @Jain:2018bai]. Subsequently the entanglement negativity conjecture was extended to the mixed state configurations of two disjoint intervals in proximity in the $AdS_3/CFT_2$ framework in [@Malvimat:2018txq], with its covariant generalization given in [@Malvimat:2018ood]. In a recent communication [@Basak:2020bot], a higher dimensional generalization of the conjecture described above was proposed and utilized to compute the holographic entanglement negativity for such mixed state configurations with long rectangular strip geometries in $CFT_d$s dual to bulk pure $AdS_{d+1}$ geometry and $AdS_{d+1}$-Schwarzschild black hole.
On a different note, motivated by quantum error correcting codes, an alternate approach involving the backreacted minimal entanglement wedge cross section (EWCS) to compute the holographic entanglement negativity for configurations with spherical entangling surfaces was advanced in [@Kudler-Flam:2018qjo]. Furthermore a [*proof*]{} for this proposal, based on the [*reflected entropy*]{} [@Dutta:2019gen], was established in another recent communication [@Kusuki:2019zsp]. The entanglement wedge was earlier shown to be the bulk subregion dual to the reduced density matrix of the dual $CFT$s in [@Czech:2012bh; @Wall:2012uf; @Headrick:2014cta; @Jafferis:2014lza; @Jafferis:2015del]. Recently the minimal entanglement wedge cross section has been proposed to be the bulk dual of the entanglement of purification (EoP) [@Takayanagi:2017knl; @Nguyen:2017yqw] (for recent progress see [@Bhattacharyya:2018sbw; @Bao:2017nhh; @Hirai:2018jwy; @Espindola:2018ozt; @Umemoto:2018jpc; @Bao:2018gck; @Umemoto:2019jlz; @Guo:2019pfl; @Bao:2019wcf; @Harper:2019lff]). Unlike entanglement negativity, the entanglement of purification receives contributions from both quantum and classical correlations (see [@Terhal_2002] for details). The connection of the minimal entanglement wedge cross section to the odd entanglement entropy [@Tamaoka:2018ned] and the reflected entropy [@Dutta:2019gen; @Jeong:2019xdr; @Bao:2019zqc; @Chu:2019etd] has also been explored.
As mentioned earlier, in [@Takayanagi:2017knl] the authors advanced a construction for the computation of the minimal EWCS. In [@Kudler-Flam:2018qjo; @Kusuki:2019zsp], the authors proposed that for configurations involving spherical entangling surfaces, the holographic entanglement negativity may be expressed in terms of the backreacted EWCS. Utilizing this conjecture the authors computed the holographic entanglement negativity for various pure and mixed state configurations in holographic $CFT_{1+1}$s at zero and finite temperatures, dual to bulk pure $AdS_3$ geometry and planar BTZ black hole, through the construction given in [@Takayanagi:2017knl].
The results for the holographic entanglement negativity for the various bipartite states in a $CFT_{1+1}$ following from the above construction in terms of the minimum backreacted EWCS reproduce the universal part of the corresponding replica technique results for most of the mixed state configurations. However their result for the single interval at a finite temperature does not exactly match with the corresponding universal part of the replica technique result described in Calabrese et al. [@Calabrese:2014yza], in the large central charge limit. Specifically their result for this mixed state configuration does not reproduce the subtracted thermal entropy term in the expression for the entanglement negativity of the single interval [@Calabrese:2014yza]. Given that their construction exactly reproduces the replica technique results for all the other mixed state configurations, the mismatch described above requires further investigation and a possible resolution. In this article we address this intriguing issue and propose an alternate construction for the computation of the minimal EWCS for a single interval in a finite temperature holographic $CFT_{1+1}$.
Our construction is inspired by that of Calabrese et al. [@Calabrese:2014yza] and involves two symmetric auxiliary intervals on either side of the single interval under consideration. In this construction we have utilized certain properties of the minimal EWCS along with a specific relation valid for the dual bulk BTZ black hole geometry to determine the minimal EWCS for this configuration. Finally we implement the bipartite limit where the auxiliary intervals are allowed to be infinite and constitute the rest of the system, to arrive at the minimal EWCS for the single interval at a finite temperature in question. The holographic entanglement negativity computed using the conjecture advanced in [@Kudler-Flam:2018qjo; @Kusuki:2019zsp] through our alternate construction for the EWCS correctly reproduces the corresponding replica result in [@Calabrese:2014yza] mentioned earlier. In particular we are able to obtain the missing thermal entropy term in the expression for the holographic entanglement negativity described in [@Calabrese:2014yza]. We further observe that the monodromy analysis employed by the authors in [@Kusuki:2019zsp] and that in [@Malvimat:2017yaj] lead to identical functional forms for the relevant four point correlation function for the twist fields, in the large central charge limit for the mixed state configuration of the single interval at a finite temperature. Our resolution to the issue described above indicates that the minimal EWCS is proportional to the specific algebraic sum of the lengths of the geodesics homologous to the particular combinations of intervals as described in [@Chaturvedi:2016rcn].
This article is organized as follows. In section \[sec\_ew\_review\] we review the entanglement wedge construction given in [@Takayanagi:2017knl] and the subsequent computation of holographic entanglement negativity described in [@Kudler-Flam:2018qjo; @Kusuki:2019zsp]. Following this we describe the mismatch of the results for the single interval at a finite temperature in [@Kudler-Flam:2018qjo; @Kusuki:2019zsp] with existing results in the literature before moving on to digress briefly on the monodromy method utilized to compute the entanglement negativity. In section \[sec\_alt\_construct\] we propose our alternate construction for the minimal entanglement wedge cross section for this configuration and compute the holographic entanglement negativity utilizing this construction. Finally, we summarize our results in section \[sec\_summary\] and present our conclusions.
Review of entanglement wedge for a holographic $CFT_{1+1}$ at a finite temperature {#sec_ew_review}
==================================================================================
In a significant communication [@Takayanagi:2017knl], Takayanagi and Umemoto advanced a construction for the computation of the minimal cross section of the entanglement wedge. In subsection \[subsec\_en\_from\_ew\] we briefly review their construction, and the computation of the holographic entanglement negativity utilizing their construction, as described in [@Kudler-Flam:2018qjo; @Kusuki:2019zsp]. Following this, in subsection \[subsec\_issues\], we outline the issue of a mismatch between the results for the holographic entanglement negativity for a single interval at a finite temperature described in the previous subsection with known results in the literature. Finally, we provide a concise review of the monodromy method utilized by the authors in [@Malvimat:2017yaj; @Kusuki:2019zsp] in subsection \[subsec\_monodromy\], before proceeding in section \[sec\_alt\_construct\] to advance an alternate construction for the computation of the minimal entanglement wedge cross section, which successfully resolves the issues brought up in subsection \[subsec\_issues\].
Computation of holographic entanglement negativity from entanglement wedge {#subsec_en_from_ew}
--------------------------------------------------------------------------
We begin with a brief review of the entanglement wedge construction in the context of a single interval in a $(1+1)$ dimensional holographic CFT at a finite temperature, dual to a bulk planar BTZ black hole as described in [@Takayanagi:2017knl]. For this, the authors considered the bipartition (refer to figure \[Single-interval\]) comprising a single interval (of length $l$), denoted by $A$, with the rest of the system denoted by $B$. As noted in [@Takayanagi:2017knl], the minimal cross section of the entanglement wedge corresponding to the intervals $A$ and $B$, denoted by $\Sigma_{AB}^{\text{min}}$, has two possible candidates. The first one, denoted by $\Sigma_{AB}^{(1)}$, is the union of the two dotted lines depicted in figure \[Single-interval\], while the other one, $\Sigma_{AB}^{(2)}$, is given by the RT surface $\Gamma_A$. Then, the minimal entanglement wedge cross section (EWCS) was shown to be given by [@Takayanagi:2017knl] $$\begin{aligned}
\label{ew_takayanagi_formula}
E_W(A:B)&=\frac{c}{3}\text{min}
\left[ \text{Area}\left(\Sigma_{AB}^{(1)}\right),
\text{Area}\left(\Sigma_{AB}^{(2)}\right) \right] \\
\label{ew_takayanagi_result}
&=\frac{c}{3}\text{min}\left[\ln\left(\frac{\beta}{\pi \epsilon}\right),
~\ln \left(\frac{\beta}{\pi\epsilon} \sinh\frac{\pi l}{\beta}\right)\right] ,\end{aligned}$$ where $\epsilon$ is the UV cut off, and $\beta$ is the inverse temperature. Recently in a series of significant communications [@Kudler-Flam:2018qjo; @Kusuki:2019zsp], the authors advanced an intriguing proposal that for configurations involving spherical entangling surfaces, the holographic entanglement negativity ${\cal E}$ may be expressed in terms of the minimal EWCS, which, in the context of $AdS_3/CFT_2$, is given by $$\label{en_ew_conjecture}
{\cal E} = \frac{3}{2} E_W .$$
![Illustration of minimal entanglement wedge cross section for a single interval in a finite temperature $CFT_{1+1}$ in the boundary, dual to bulk planar BTZ black hole geometry.[]{data-label="Single-interval"}](Single_interval_wedge.eps)
Thus, according to their proposal, the holographic entanglement negativity of the single interval $A$ at a finite temperature may be obtained from eq. as [@Kudler-Flam:2018qjo; @Kusuki:2019zsp] $$\label{naive}
\mathcal{E}=\frac{c}{2}\mathrm{min}\left[\ln \left(\frac{\beta}{\pi\epsilon} \sinh\frac{\pi l}{\beta}\right), ~\ln\left(\frac{\beta}{\pi \epsilon}\right)\right].$$
The authors substantiated their results by utilizing the monodromy technique to extract the dominant contribution of the conformal block expansions for the $s$ and $t$ channels for the relevant four point function. However the above results do not exactly reproduce the corresponding replica technique results for the mixed state configuration in question described in [@Calabrese:2014yza] except in the low and high temperature limits. Specifically the thermal entropy term is missing from the holographic entanglement negativity and this issue requires further analysis. Having described the entanglement wedge construction in brief, in subsection \[subsec\_issues\] we will precisely describe the above issue for the computation of the holographic entanglement negativity for the mixed state configuration of the single interval, utilizing the minimal EWCS as described in [@Kudler-Flam:2018qjo; @Kusuki:2019zsp].
Issue with computation of holographic entanglement negativity {#subsec_issues}
-------------------------------------------------------------
In a crucial communication [@Calabrese:2014yza], Calabrese, Cardy, and Tonni computed the entanglement negativity of a single interval (of length $l$) for a $CFT_{1+1}$ at a finite temperature, which is given by $$\label{en_sg_cardy}
{\cal E}=\frac{c}{2}\ln \left(\frac{\beta}{\pi\epsilon} \sinh\frac{\pi l}{\beta}\right)-\frac{\pi c l}{2 \beta} + f \left( e^{-2 \pi l / \beta} \right) + 2 \ln c_{1/2} ,$$ where $\epsilon$ is the UV cut off. Here $f$ is a non universal function and $c_{1/2}$ is a non universal constant, both of which depend on the full operator content of the theory.
Comparing eq. with eq. , it can be seen that the holographic entanglement negativity, as computed from eq. , does not match exactly[^7] with the corresponding replica technique result reported in [@Calabrese:2014yza], in the large central charge limit. Specifically the subtracted thermal entropy term in the universal part of the replica technique result is missing in the expression for the holographic entanglement negativity of the single interval at a finite temperature described in [@Kudler-Flam:2018qjo; @Kusuki:2019zsp]. One may further observe that the entanglement negativity in eq. reduces to that in eq. only in the specific limits of low temperature ($\beta \to \infty$) and high temperature ($\beta \to 0$).
Having briefly discussed the issue with the entanglement wedge construction as described in subsection \[subsec\_en\_from\_ew\], we now proceed to digress on the monodromy technique in subsection \[subsec\_monodromy\], before proposing the solution to this problem in section \[sec\_alt\_construct\].
Results from monodromy technique {#subsec_monodromy}
--------------------------------
In this subsection we briefly recollect the results obtained through the monodromy technique employed by the authors in [@Kusuki:2019zsp] to compute the entanglement negativity as given in eq. . The entanglement negativity for a single interval in a zero temperature $CFT_{1+1}$ is computed through the following four point correlation function of the twist fields in the complex plane $$\label{en_twist_4pt_fn_cplane}
{\cal E}
= \lim_{ n_e \to 1 } \ln
\left \langle {\cal T}_{n_e} (0)
\overline{{\cal T}}_{n_e}^2 (x)
\overline{{\cal T}}_{n_e}^2 (1)
{\cal T}_{n_e} (\infty) \right \rangle_{ \mathbb{C} } ,$$ where the interval $A$ is described by $ [ 0, x ]$, while the rest of the system $B$ is given by $ [ 1, \infty ] $. This specific choice of the coordinates may be implemented through a suitable conformal transformation (see [@Malvimat:2018txq] for a detailed review).
For a $CFT_{1+1}$ at a finite temperature $ 1/\beta $, the entanglement negativity for a single interval may be computed from eq. through the conformal transformation $ z \to w = \left( \beta / 2 \pi \right) \ln z $ from the complex plane to the cylinder. Note that here we have to compactify the Euclidean time direction to a circle with circumference $ \beta $. The transformation of the four point twist correlator under this map is described by (see [@Calabrese:2014yza] for a review) $$\label{eq_twist_4pt_fn_trfn}
\begin{aligned}
& \left \langle {\cal T}_{n_e} (w_1)
\overline{{\cal T}}_{n_e}^2 (w_2)
\overline{{\cal T}}_{n_e}^2 (w_3)
{\cal T}_{n_e} (w_4) \right \rangle_{\beta} \\ \\
& \qquad\qquad\qquad = \prod_{i=1}^{4}
\left [ \left ( \frac{ d w(z) }{ d z } \right )^{ - \Delta_i }
\right ]_{ z = z_i }
\left \langle {\cal T}_{n_e} (z_1)
\overline{{\cal T}}_{n_e}^2 (z_2)
\overline{{\cal T}}_{n_e}^2 (z_3)
{\cal T}_{n_e} (z_4) \right \rangle_{ \mathbb{C} } .
\end{aligned}$$ Here $\Delta_i$s describe the scaling dimensions of the twist fields at $ z = z_i $. Utilizing this conformal transformation, the entanglement negativity for a single interval in a finite temperature $CFT_{1+1}$ may be obtained through the following equation $$\label{en_twist_4pt_fn_finite_temp}
\begin{aligned}
{\cal E}
&= \lim_{L \to \infty} \lim_{ n_e \to 1 } \ln
\left \langle {\cal T}_{n_e} (-L)
\overline{{\cal T}}_{n_e}^2 (0)
\overline{{\cal T}}_{n_e}^2 (l)
{\cal T}_{n_e} (L) \right \rangle_{ \beta } \\
&= \lim_{L \to \infty} \lim_{ n_e \to 1 } \ln
\left \langle {\cal T}_{n_e} (0)
\overline{{\cal T}}_{n_e}^2 (x)
\overline{{\cal T}}_{n_e}^2 (1)
{\cal T}_{n_e} (\infty) \right \rangle_{ \mathbb{C} }
+ \frac{c}{2} \ln \left( \frac{\beta}{2 \pi \epsilon} e^{\frac{\pi l}{\beta}} \right) ,
\end{aligned}$$ where the cross ratio $x$ is specified by $ \lim\limits_{L \to \infty} x = e^{-2 \pi l / \beta} $.
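As a simple consistency check of this bipartite limit, the cross ratio can be evaluated explicitly. The sympy sketch below assumes the standard cross-ratio convention $x = z_{12}z_{34}/(z_{13}z_{24})$ (with $z_{ij}=z_i-z_j$) for the insertion points $w_i \in \{-L,0,l,L\}$ mapped through $z=e^{2\pi w/\beta}$; it is an illustration only and not part of the analyses reviewed here.

```python
import sympy as sp

beta, l, L = sp.symbols('beta l L', positive=True)
w = [-L, 0, l, L]                                  # twist-field insertion points on the cylinder
z = [sp.exp(2*sp.pi*wi/beta) for wi in w]          # inverse of the map w = (beta / 2 pi) ln z

# standard cross ratio x = z_12 z_34 / (z_13 z_24), an assumption on conventions
x = ((z[0] - z[1])*(z[2] - z[3])) / ((z[0] - z[2])*(z[1] - z[3]))

print(sp.limit(x, L, sp.oo))                       # expected: exp(-2*pi*l/beta)
```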
In the large central charge limit, the above four point twist correlator can be expressed in terms of a single conformal block (the block which is dominant in the conformal block expansion), which can be described by two channels depending on the cross ratio $x$, as described in [@Malvimat:2017yaj].
For the $s$ channel (described by $ x \to 0 $), the authors in [@Kusuki:2019zsp] have computed the four point function on the complex plane as $$\label{4pt_fn_s_channel}
\lim_{L \to \infty} \lim_{ n_e \to 1 } \ln
\left \langle {\cal T}_{n_e} (0)
\overline{{\cal T}}_{n_e}^2 (x)
\overline{{\cal T}}_{n_e}^2 (1)
{\cal T}_{n_e} (\infty) \right \rangle_{ \mathbb{C} }
\sim \frac{c}{4} \ln x .$$ Utilizing the above four point twist correlator, the entanglement negativity may be computed from eq. as $$\label{en_s_channel}
\mathcal{E}=\frac{c}{2}\ln\left(\frac{\beta}{\pi \epsilon}\right).$$ For the $t$ channel (given by $x\to 1$), the authors in [@Kusuki:2019zsp] have obtained the following four point function $$\label{4pt_fn_t_channel}
\lim_{L \to \infty} \lim_{ n_e \to 1 } \ln
\left \langle {\cal T}_{n_e} (0)
\overline{{\cal T}}_{n_e}^2 (x)
\overline{{\cal T}}_{n_e}^2 (1)
{\cal T}_{n_e} (\infty) \right \rangle_{ \mathbb{C} }
\sim \frac{c}{2} \ln \left( 1 - x \right),$$ utilizing which the entanglement negativity may be obtained from eq. as $$\label{en_t_channel}
\mathcal{E}=\frac{c}{2}\ln\left(\frac{\beta}{\pi\epsilon} \sinh\frac{\pi l}{\beta}\right).$$
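The $t$ channel expression can be checked directly: combining the correlator quoted above with the thermal term of the finite temperature expression, and using the bipartite-limit cross ratio $x=e^{-2\pi l/\beta}$, indeed reproduces the closed form just given. A minimal numerical verification of this algebraic identity (purely illustrative, with arbitrary sample values of $\beta$, $l$ and the cut off) is sketched below.

```python
import numpy as np

def E_t_channel(beta, l, eps, c=1.0):
    """log of the t-channel correlator plus the thermal/Jacobian term, with x = exp(-2*pi*l/beta)."""
    x = np.exp(-2*np.pi*l/beta)
    return c/2*np.log(1 - x) + c/2*np.log(beta/(2*np.pi*eps)*np.exp(np.pi*l/beta))

def E_closed_form(beta, l, eps, c=1.0):
    """closed form quoted for the t channel."""
    return c/2*np.log(beta/(np.pi*eps)*np.sinh(np.pi*l/beta))

# the two expressions agree for any sample values
for beta, l in [(0.5, 0.3), (1.0, 0.5), (3.0, 2.0)]:
    assert np.isclose(E_t_channel(beta, l, eps=0.01), E_closed_form(beta, l, eps=0.01))
```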
We note that the four point functions on the complex plane given in eqs. and , utilized to compute the entanglement negativity for both the channels, match with those obtained in [@Malvimat:2017yaj]. We further observe that the monodromy technique employed in the analyses of [@Chaturvedi:2016rcn; @Malvimat:2017yaj], and the monodromy method utilized by the authors in [@Kusuki:2019zsp], produce an identical large central charge limit for the four point function.
However we would like to emphasize here that the authors in [@Kusuki:2019zsp], inspired by the large $c$ computations for entanglement entropy in [@Hartman:2013mia; @Headrick:2010zt], assumed that in the large $c$ limit the $s$ channel and the $t$ channel results are valid beyond their usual regimes $ x \to 0 $ and $ x \to 1 $, that is, for $ 0 \leqslant x \leqslant \frac{1}{2} $ and $ \frac{1}{2} \leqslant x \leqslant 1 $ respectively. In other words their computation involves the assumption that there is a phase transition, in the large central charge limit, for the entanglement negativity of a single interval at $x=\frac{1}{2}$. Although this is true for the entanglement entropy [@Hartman:2013mia; @Headrick:2010zt], for this specific case of the entanglement negativity for a single interval this assumption is possibly invalid, as the required four point twist field correlator is obtained from a specific channel for the corresponding six point twist correlator as explained in detail in [@Malvimat:2017yaj]. The above conclusion is also supported by the alternate minimal EWCS that we propose in this article. We will elaborate on this point further in section \[sec\_alt\_construct\].
Having briefly reviewed the results from the monodromy technique employed to compute the relevant four point function of the twist fields, we are now in a position to address the issue mentioned in subsection \[subsec\_issues\]. To this end in section \[sec\_alt\_construct\] we advance an alternate construction for the computation of the minimal entanglement wedge cross section for the mixed state configuration of the single interval at a finite temperature under consideration.
Alternate minimal EWCS construction {#sec_alt_construct}
===================================
Having reviewed the entanglement wedge construction as proposed in [@Takayanagi:2017knl], the inconsistencies ensuing from it, and the monodromy technique used in [@Kusuki:2019zsp] in section \[sec\_ew\_review\], in this section we advance an alternate computation of the minimal entanglement wedge cross section (EWCS) for a single interval at a finite temperature in the context of $AdS_3/CFT_2$.
To this end we first consider a tripartite system comprising subsystems $A$, $B$ and $C$ in the boundary. We then consider the following properties of the minimal entanglement wedge cross section for tripartite pure state configurations (see [@Takayanagi:2017knl; @Nguyen:2017yqw] for a review) $$\label{tripartite_ub}
E_W(A:BC) \leq E_W(A:B) + E_W(A:C) ,$$ $$\label{tripartite_lb}
\frac{1}{2} I(A:B) + \frac{1}{2} I(A:C) \leq E_W(A:BC),$$ where $I(A:B)$ is the mutual information between $A$ and $B$. For any two adjacent intervals $A$ and $B$ in a $CFT_{1+1}$ dual to a planar BTZ black hole in the bulk, the minimal EWCS may be explicitly computed by taking the adjacent limit in the corresponding disjoint interval result derived in [@Takayanagi:2017knl]. The holographic mutual information for these adjacent intervals may also be explicitly calculated from the corresponding holographic entanglement entropies. Comparing these results, we obtain the following relation for this specific configuration $$\label{tripartite_ewcs_mi}
E_W(A:B) = \frac{1}{2} I(A:B) .$$ Substituting the result described in eq. into eq. , and comparing with eq. , we arrive at the following equality for the bulk BTZ black hole configuration $$\label{tripartite_equality}
E_W(A:BC) = E_W(A:B) + E_W(A:C) ,$$ where $B$ and $C$ are adjacent to $A$.
To compute the minimal EWCS, we now consider a tripartition (see figure \[Single-interval-modified\]) consisting of interval $A$, of length $l$, with two auxiliary intervals $B_1$ and $B_2$, each of length $L$, on either side of $A$. We denote $B_1 \cup B_2 \equiv B$. Next we take the bipartite limit $L \to \infty$ to recover the original configuration with a single interval $A$ and the rest of the system given by $B$. Note that upon implementing the bipartite limit, the configuration $ A \cup B $ describes the full system which is in a pure state that obeys eq. . Thus for this configuration (in the bipartite limit) $$\label{tripartite_equality_specific}
\lim_{L \to \infty} E_W(A:B_1 B_2)
= \lim_{L \to \infty} \left[ E_W(A:B_1) + E_W(A:B_2) \right].$$
![Alternate computation of the minimal EWCS (dotted lines) for a single interval in a finite temperature $CFT_{1+1}$, dual to a planar bulk BTZ black hole geometry.[]{data-label="Single-interval-modified"}](Tri_Single_wedge.eps)
By computing the right hand side of the above equation, we obtain the minimal EWCS for the bipartition involving the interval $A$ and rest of the system $B ~(\equiv B_1 \cup B_2) \equiv A^C$ as follows $$\label{ew_single_correct}
E_W(A:B)=\frac{c}{3}\ln \left(\frac{\beta}{\pi\epsilon} \sinh\frac{\pi l}{\beta}\right)-\frac{\pi c l}{3 \beta}.$$ The holographic entanglement negativity for the mixed state configuration of a single interval $A$ in a holographic $CFT_{1+1}$ at a finite temperature may now be obtained from eq. utilizing the proposal described in eq. as follows $$\label{en_single_correct}
\mathcal{E}=\frac{c}{2}\ln \left(\frac{\beta}{\pi\epsilon} \sinh\frac{\pi l}{\beta}\right)-\frac{\pi c l}{2 \beta}.$$ The above result matches exactly with the corresponding $CFT_{1+1}$ replica result described in eq. , in the large central charge limit.
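The algebra leading to this result can be reproduced in a few lines of sympy, assuming the standard finite temperature entanglement entropy of a single interval, $S(\ell)=\frac{c}{3}\ln\left(\frac{\beta}{\pi\epsilon}\sinh\frac{\pi\ell}{\beta}\right)$, and the adjacent-interval relation $E_W = I/2$ quoted above. The sketch is illustrative and not a substitute for the bulk computation.

```python
import sympy as sp

c, beta, eps, l, L = sp.symbols('c beta epsilon l L', positive=True)

# standard finite-temperature entanglement entropy of an interval of length x
S = lambda x: c/3*sp.log(beta/(sp.pi*eps)*sp.sinh(sp.pi*x/beta))

I_AB1 = S(l) + S(L) - S(l + L)     # mutual information of A with one adjacent auxiliary interval of length L
EW_AB1 = I_AB1/2                   # E_W(A:B_1) = I(A:B_1)/2 for adjacent intervals in the BTZ background

# bipartite limit with two symmetric auxiliary intervals: E_W(A:B) = lim 2*E_W(A:B_1)
EW_AB = sp.limit((2*EW_AB1).rewrite(sp.exp), L, sp.oo)
print(sp.simplify(EW_AB))
# expected (up to equivalent rewritings): c/3*log(beta/(pi*epsilon)*sinh(pi*l/beta)) - pi*c*l/(3*beta)

# numerical cross-check at L >> beta
approx = (2*EW_AB1).subs({c: 1, beta: 1.0, eps: 0.01, l: 0.5, L: 50.0}).evalf()
exact = (c/3*sp.log(beta/(sp.pi*eps)*sp.sinh(sp.pi*l/beta)) - sp.pi*c*l/(3*beta)).subs(
    {c: 1, beta: 1.0, eps: 0.01, l: 0.5}).evalf()
print(approx, exact)               # the two numbers agree to high accuracy
```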
We thus note that the holographic entanglement negativity as described by eq. , computed utilizing the construction advanced in eq. matches exactly with the replica technique result reported in [@Calabrese:2014yza], in the large central charge limit. This is in contrast with the result given in eq. , computed through the construction proposed in eq. , which matches with the result in [@Calabrese:2014yza] only in the specific limits of low and high temperature.[^8]
In figure \[plot\], we have plotted the minimal EWCS as a function of the inverse temperature $\beta$ to compare our wedge construction with that advanced in [@Takayanagi:2017knl]. In figure \[plot1\], the two possible candidates for the minimal EWCS described in eq. have been compared with our expression given in eq. . In figure \[plot2\], the minimal EWCS as prescribed in [@Takayanagi:2017knl], given in eq. , has been plotted along with the minimal EWCS obtained utilizing our proposition as described in eq. . It is interesting to note that our expression for the minimal EWCS always remains strictly less than that proposed in [@Takayanagi:2017knl] for the range of $\beta$ used in the plot. This conclusively singles out our expression, rather than the other one, as the more appropriate minimal EWCS, given that it is the one that reproduces the replica technique result described in [@Calabrese:2014yza] in the large $c$ limit. In the limits of high ($\beta\to 0$) and low ($\beta \to \infty$) temperatures, the two expressions asymptotically approach each other.
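A quick numerical comparison of the two prescriptions, using the same illustrative values $\epsilon=0.01$ and $l=0.5$ as in figure \[plot\] (with the overall factor of $c$ set to one), can be sketched as follows; plotting the two arrays against $\beta$ reproduces the qualitative behaviour of figure \[plot2\].

```python
import numpy as np

c, eps, l = 1.0, 0.01, 0.5                 # overall central charge factor and the values used in figure [plot]
beta = np.linspace(0.1, 5.0, 50)           # range of inverse temperatures

EW_1   = (c/3)*np.log(beta/(np.pi*eps))                              # candidate Sigma_AB^(1)
EW_2   = (c/3)*np.log(beta/(np.pi*eps)*np.sinh(np.pi*l/beta))        # candidate Sigma_AB^(2), the RT surface
EW_TU  = np.minimum(EW_1, EW_2)                                      # Takayanagi-Umemoto prescription
EW_alt = EW_2 - np.pi*c*l/(3*beta)                                   # alternate construction of this article

assert np.all(EW_alt < EW_TU)              # the alternate EWCS stays strictly below the minimum of the two candidates
```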
In subsection \[subsec\_monodromy\], we pointed out that the monodromy techniques described in [@Chaturvedi:2016rcn; @Malvimat:2017yaj; @Kusuki:2019zsp] give an identical form for the four point function, in the large central charge limit. With the wedge construction advanced by us above, we are now in a position to compare the holographic entanglement negativity computed utilizing the entanglement wedge conjecture proposed in [@Kusuki:2019zsp], as described in eq. , and that reported in [@Chaturvedi:2016rcn]. The minimal EWCS may then be shown to be proportional to the particular algebraic sum of the lengths of the geodesics homologous to the specific combinations of intervals as proposed for the holographic construction for the single interval in [@Chaturvedi:2016rcn]. In a similar manner, the minimal EWCS computed in [@Kudler-Flam:2018qjo; @Kusuki:2019zsp] for different mixed state configurations of two adjacent and disjoint intervals in a holographic $CFT_{1+1}$ may be shown to be proportional to similar algebraic sums of the lengths of the geodesics homologous to the specific combinations of intervals prescribed for the corresponding holographic constructions advanced in [@Jain:2017sct; @Malvimat:2018txq]. Taking a cue from these observations, we may take a step further and propose that for any bipartite configuration in the $AdS_3/CFT_2$ scenario, the minimal EWCS is proportional to the relevant specific algebraic sum of the geodesic lengths. Finally, considering that the arguments above are purely geometrical, we further suggest that the same will be valid for the more general $AdS_{d+1}/CFT_d$ framework.
[0.8]{} ![Plots for different choices of $E_W$ against the inverse temperature for a single interval at a finite temperature. Here, $\epsilon=0.01$ and $l=0.5$.[]{data-label="plot"}](Wedge_final_1.eps "fig:"){width="0.95\linewidth"}
\
[.8]{} ![Plots for different choices of $E_W$ against the inverse temperature for a single interval at a finite temperature. Here, $\epsilon=0.01$ and $l=0.5$.[]{data-label="plot"}](Wedge_final_2.eps "fig:"){width="0.95\linewidth"}
Discussion and conclusions {#sec_summary}
==========================
To summarize we have proposed an alternate construction to compute the minimal entanglement wedge cross section (EWCS) for a single interval in a finite temperature holographic $CFT_{1+1}$, dual to a bulk planar BTZ black hole. Our construction involved the introduction of two symmetric auxiliary intervals on either side of the single interval under consideration. Subsequently certain properties of the minimal EWCS along with a result specific to the BTZ configuration were utilized in this context. Finally a bipartite limit was implemented by taking the auxiliary intervals to be the rest of the system to determine the correct minimal EWCS for the original configuration. Interestingly our result matches exactly with the corresponding replica technique result reported in [@Calabrese:2014yza], in the large central charge limit. In particular our analysis resolves the issue of the missing subtracted thermal entropy term in the expression for the holographic entanglement negativity of the single interval at a finite temperature described in [@Kudler-Flam:2018qjo; @Kusuki:2019zsp].
From our analysis and a comparison of the results for the holographic entanglement negativity for other mixed state configurations described in [@Chaturvedi:2016rcn; @Jain:2017sct; @Malvimat:2018txq], based on the relevant algebraic sums of the lengths of the geodesics homologous to appropriate combinations of intervals, and those based on the minimal EWCS reported in [@Kudler-Flam:2018qjo; @Kusuki:2019zsp], we observe an interesting connection between the two approaches. Specifically we deduce that the minimal EWCS is actually proportional to the specific algebraic sum of geodesic lengths mentioned above in the context of the $AdS_3/CFT_2$ scenario, which possibly extends to the higher dimensional $AdS_{d+1}/CFT_d$ framework. Such an extension would involve a similar algebraic sum of bulk codimension-two extremal surfaces anchored on appropriate boundary subsystems being proportional to the corresponding minimal EWCS in the bulk. However this needs further investigation and corroboration through appropriate consistency checks involving applications to specific examples of subsystem geometries. These constitute interesting open issues for future investigations.
Acknowledgments {#sec_ackw}
===============
We acknowledge Jonah Kudler-Flam, Yuya Kusuki and Shinsei Ryu for communication, which led to this work. We would like to thank Saikat Ghosh for useful discussions.
[^1]: E-mail: [jaydeep@iitk.ac.in]{}
[^2]: E-mail: [vinaymm@iiserpune.ac.in]{}
[^3]: E-mail: [himansp@iitk.ac.in]{}
[^4]: E-mail: [paul@iitk.ac.in]{}
[^5]: E-mail: [sengupta@iitk.ac.in]{}
[^6]: Several other measures to characterize mixed state entanglement have also been proposed in quantum information theory. However most of these involve optimization over LOCC protocols and are not directly computable.
[^7]: In [@Takayanagi:2017knl], the authors have indicated that for a single interval at a finite temperature with a length $ l \gg \beta \ln (\sqrt{2} + 1 )/\pi $, the extensive contribution is missing in the expression for the minimal EWCS, as described in eq. .
[^8]: This may be understood from the fact that the monodromy technique applied to the four point twist correlator fixes it only at the two limits of the cross ratio $x$ [@Calabrese:2014yza]. To fix the four point twist correlator over the full range of the cross ratio $x$ it is required to analyze the corresponding six point function as described in [@Malvimat:2017yaj].
|
---
abstract: '[The presence of a turbulent magnetic field in the quiet Sun has been unveiled observationally using different techniques. The magnetic field is quasi-isotropic and has field strengths weaker than 100 G. It is pervasive and may host a local dynamo.]{} [We aim to determine the length scale of the turbulent magnetic field in the quiet Sun.]{} [ The Stokes V area asymmetry is sensitive to minute variations in the magnetic topology along the line of sight. Using data provided by the Hinode-SOT/SP instrument, we performed a statistical study of this quantity. We classified the different magnetic regimes and inferred properties of the turbulent magnetic regime. In particular we measured the correlation length associated with these fields for the first time.]{} [The histograms of Stokes V area asymmetries reveal three different regimes: one organized, quasi-vertical and strong field (flux tubes or other such structures); a strongly asymmetric group of profiles found around field concentrations; and a turbulent isotropic field. For the last of these, we confirm its isotropy and measure correlation lengths from hundreds of kilometers down to 10 km, at which point we lose sensitivity. A crude attempt to measure the power spectra of these turbulent fields is made.]{} [In addition to confirming the existence of a turbulent field in the quiet Sun, we give further proof of its isotropy. We also measure correlation lengths down to 10 km. The combined results show magnetic fields with a large span of length scales, as expected from a turbulent cascade.]{}'
author:
- '[A. L[ó]{}pez Ariste]{}'
- 'A. Sainz Dalda'
date: 'Received ; accepted'
title: 'Scales of the magnetic fields in the quiet Sun.'
---
Introduction
============
This work is dedicated to exploring the properties of the turbulent magnetic field in the quiet Sun through the analysis of the asymmetries in the Stokes V profiles observed by Hinode-SOT/SP [@kosugi_hinode_2007; @tsuneta_solar_2008] in a Zeeman-sensitive line. The existence of a magnetic field turbulent in nature in those places with weak Zeeman signals and an absence of temporal coherence in the plasma flows is taken for granted and, from Section 2 onward, we shall not discuss whether these fields are turbulent or not. Although our analysis provides further evidence of the existence of this turbulent field, we will assume that this existence is proven and make our analysis in that framework, studying the coexistence of those turbulent fields with others more structured in nature, whose existence is also beyond questioning. Despite that, but also because of that, we now dedicate a few lines to justify and put into context the turbulent nature of the magnetic field in most of the quiet Sun.
The existence of a turbulent magnetic field accompanying the turbulent plasma in the quiet Sun is not a theoretical surprise, rather the opposite. Upon the discovery of flux tubes in the photospheric network, [@parker_dynamics_1982] expressed surprise at the existence of these coherent magnetic structures and referred to them as an *extraordinary state of the field*. He argued in that paper that only under conditions of temporal coherence of the plasma flows in the photosphere could these structures be stable. Consequently one could expect to find them in the photospheric network where advection flows converge before dipping into the solar interior. Similar conditions can be found here and there in the internetwork in those intergranular lanes where plumes have grown strong enough to survive granular lifetimes. Everywhere else the high Reynolds number of the photospheric plasma does not allow any structure of any kind and a turbulent field, if anything, is to be expected. Other theoretical analyses [@petrovay_turbulence_2001] confirm and insist on these turbulent fields. The first attempts at numerical simulations of magnetoconvection [@nordlund_dynamo_1992] revealed a magnetic field whose field lines, away from downflows, are twisted and folded even for the low Reynolds numbers (kinetic and magnetic) and for the wrong ratio of these two dimensionless quantities. Independently of the existence of a local dynamo, on which most of these simulations focus, the turbulent field is there.
Observationally, the picture has been different until recently. The discovery of flux tubes in the network [@stenflo_magnetic-field_1973] together with the common observation of G-band bright points in high-resolution images of the photosphere has spread the idea that flux tubes are everywhere. Because inversion techniques for the measurement of the magnetic field mostly used Milne-Eddington atmospheric models that assume a single value of the magnetic field per line of sight, whenever they have been used in the quiet Sun, a single field value was attached to a point in the photosphere, which additionally spread the impression that it was the field of the flux tube. Doubts were cast on this observational picture of the quiet Sun when magnetic measurements using infrared lines produced fields for the same points that differed from those measured with visible lines. The wavelength dependence of the Zeeman effect sufficed to expose the fact that a single field could not be attached to a given point in the quiet Sun. Both measurements were right, at least in the sense that they were measuring different aspects of the complex magnetic topology of the quiet Sun. Turbulent fields had in the meantime been the solution offered by those observing the quiet Sun through the Hanle effect. The absence of Stokes U in those measurements and the high degree of depolarization of the lines pointed to a turbulent field as the only scenario fitting their observations. Unfortunately, the difficulties both in the observations (very low spatial and temporal resolutions) and in the diagnostic (many subtle quantum effects involved) made the comparison with the observations using the Zeeman effect difficult. The advent of the statistical analysis of Zeeman observations has solved the problem. First it was the observation that the quiet Sun, if one excludes the network and strong magnetic patches from the data, looks suspiciously similar independently of the position on the solar disk that one is observing [@martinez_gonzalez_near-ir_2008]. This independence of the measurements from the viewing angle pointed toward isotropy, a characteristic of the quiet Sun fields later confirmed by [@asensio_ramos_evidence_2009]. Then came the realization that the average longitudinal flux density measured in the quiet Sun at different spatial resolutions was roughly the same [@lites_characterization_2002; @martinez_gonzalez_statistical_2010]. This could only be interpreted as meaning that either the field was already resolved, but obviously it was not, or that the observed signals were just the result of a random addition of many magnetic elements. In the limit of large numbers, the amplitude of this fluctuation grows only with the square root of the size, and not linearly with it as would be expected for a non-resolved flux tube. The turbulent field is in this way unveiled by the statistical analysis of the Zeeman effect, and it was shown that Zeeman signatures in the quiet Sun were often merely statistical fluctuations of the turbulent field and not measurements of the field itself [@lopez_ariste_turbulent_2007].
The solar magnetic turbulence was therefore explored in a statistical manner. Furthermore, it was explored assuming that different realizations of the magnetic probability distribution functions sit side by side. In this approximation one can compute the resulting polarization by just adding up the individual contributions of each magnetic field. The Stokes V profile caused by the Zeeman effect of each individual magnetic field will be anti-symmetric with respect to the central wavelength, with one positive and one negative lobe. The areas of the two lobes of every profile are identical and their addition, the area asymmetry, will be zero. Adding many such polarization profiles will alter the resulting profile, but the area asymmetry of the final profile will always be zero. A completely different result is obtained if we consider the different realizations of the magnetic field probability distribution function placed one after the other along the line of sight. Computing the resulting polarization profile now requires integrating the radiative transfer equation for polarized light in a non-constant atmosphere. If the variations in the magnetic field along the line of sight are associated with velocities, the integration results in a profile that lacks any particular symmetry.
Therefore, measuring and analyzing the area asymmetry of the Stokes V profiles in the quiet Sun provides information on the properties of the turbulent magnetic field along the line of sight, in contrast with the previous studies, which only explored this turbulence in terms of accumulation of magnetic elements in a plane perpendicular to the line of sight. At disk center, the line of sight means exploring those fields in depth, while near the limb it means exploring fields sitting side by side. Comparing asymmetries in statistical terms from quiet regions at different heliocentric angles provides us with a probe of the angular dependence of the magnetic fields, and this is one of the purposes of this paper. In Section 2 we describe the asymmetries observed by Hinode-SOT in these terms and recover the three expected magnetic regimes: the structured and mostly vertical strong fields (strong in terms of quiet Sun magnetism); the turbulent, ubiquitous, disorganized and weak fields; and a class of profiles with strong asymmetries that can be observed at those places where the line of sight crosses from one regime to the other, from turbulent to organized.
Focusing on these profiles assigned to turbulent magnetic fields, the value of the area asymmetry can be linked with the dominant scales of variation of the magnetic field. The results on stochastic radiative transfer that allow us to make that link are recalled in section \[sec\_scales\], in particular those of [@carroll_meso-structured_2007]. Thanks to those works we can quantitatively determine the correlation length of the magnetic field for every value of area asymmetry. From this determination we attempt to give an energy spectrum for the magnetic turbulence at scales below the spatial resolution. For this attempt, we will use the longitudinal flux density as a lower bound on the field strength, and hence on the magnetic energy, and plot it versus the correlation length already determined. The approximations and simplifications made to reach this result may appear excessive to the reader. We argue that it is important not as an energy spectrum to be compared to numerical simulations or to theoretical considerations, but as a first attempt that follows what we consider to be a promising tool and method for more elaborate and reliable determinations of the magnetic energy spectrum. We also stress that through the proxy of the asymmetries of the profiles (seen through the models, tools, and results of stochastic radiative transfer), we can access scales of variation of the magnetic field 10 times smaller than the diffraction limit of our best instruments and probably smaller than or comparable to the mean free path of the photons in the photosphere.
Statistics of area asymmetries of the Stokes V profile.
=======================================================
To collect data on asymmetries of the Stokes V profiles in the quiet Sun at different heliocentric angles, we examined data from the SOT/SP instrument on board Hinode. Several large area scans of the quiet Sun at different positions on the solar disk were selected. Table \[latabla\] summarizes the observational features of those data. The spatial sampling (0.15$\times$0.16, in arcsec) and spectral sampling (roughly 21 mÅ) were the same for all observing runs, but the exposure times changed from one run to another. The data were calibrated using the standard procedure, which is available in [*SolarSoft*]{} and has been developed by B. Lites. The area asymmetry is measured as the integral of the profile over the Fe [I]{} 6302.5 Å line, normalized to the total enclosed area, and with corrected polarity. Before this measurement, a denoising of the data based on PCA (Principal Component Analysis) was performed [@martinez_gonzalez_pca_2008]. This allows us to establish true noise and signal levels for the data. In brief, the eigenvectors of the correlation matrix of each data set (each at a well-defined heliocentric angle) are computed. The data are reconstructed with only the first ten eigenprofiles. The remaining components are added together to provide a measurement of the noise. Histograms of this residual show the unmistakable Gaussian distribution shape of noise, with the maximum of the distribution at a typical value of $5\times10^{-4}$ times the continuum intensity.
The first eigenprofile is, as usual in PCA techniques, the average eigenprofile and accordingly presents a nice antisymmetric shape with two well-defined lobes. This eigenprofile is used to automatically detect the polarity of every observed profile: a positive coefficient for this eigenprofile indicates a positive polarity profile. With this simple test we can automatically assign an unambiguous sign for the area asymmetry for all but a few anomalous cases, which can be easily disregarded because of their scarcity. The resulting sign of the area asymmetry is therefore related to the area of the blue lobe of Stokes V compared to that of the red lobe, and has no relationship with the polarity of the field. A positive area asymmetry results when the red lobe encloses a larger area than the blue lobe.
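For concreteness, the measurement described above can be sketched in a few lines of Python. The sketch is schematic: the wavelength grid and Stokes V array below are placeholders, and the polarity is approximated by the sign of the blue-lobe integral instead of the sign of the first PCA coefficient used in our actual pipeline; the sign convention is the one just stated, with a dominant red lobe yielding a positive asymmetry.

```python
import numpy as np

def area_asymmetry(wl, V, wl0):
    """Signed area asymmetry of a Stokes V profile sampled at wavelengths wl (line centre wl0).

    Positive values correspond to a red lobe enclosing a larger area than the blue lobe.
    Polarity is estimated here from the sign of the blue-lobe integral (a simplification
    of the PCA-based test described in the text)."""
    blue = wl < wl0
    polarity = np.sign(np.trapz(V[blue], wl[blue]))      # +1 for a positive-polarity profile
    total = np.trapz(V, wl)                              # signed net area of the profile
    enclosed = np.trapz(np.abs(V), wl)                   # total enclosed (unsigned) area
    return -polarity * total / enclosed

# toy example: a perfectly antisymmetric profile has zero area asymmetry
wl = np.linspace(-0.3, 0.3, 121)                         # wavelength offsets from line centre (placeholder grid)
V = -wl * np.exp(-(wl/0.08)**2)                          # idealized antisymmetric Stokes V
print(area_asymmetry(wl, V, 0.0))                        # ~ 0
```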
Using a scalar parameter like the area asymmetry to describe the many different spectral features that a Stokes V profile can present may be perceived as simplistic, considering the magnetically and thermodynamically complex atmospheres in which it is formed. Studies and classifications of those shapes have been made and interpreted in the past [@sanchez_almeida_observation_1992; @viticchie_interpretation_2010] and we refer to them for details on this aspect of asymmetries. No doubt, a scalar like the area asymmetry hides all that richness and we should worry about the possibility that our conclusions could be polluted or invalidated for that reason. In the face of that criticism we claim that the area asymmetry, contrary to the amplitude asymmetry, allows a relatively easy analytical computation under unspecified variations of the atmospheric parameters [@lopez_ariste_asymmetry_2002]. The shape of the profiles is implicitly taken into account in these computations and, however complex, it does not alter the known dependencies of the area asymmetry on magnetic and velocity fields. Nevertheless, the shape of the profile, or of any particular spectral feature, can be attributed to particular variations in the atmosphere, something that cannot be achieved with only a value for the area asymmetry. As an example, we mention below the interpretation of single-lobe V profiles as produced in atmospheres with jumps along the line of sight. Because this work is limited to the area asymmetry, we avoid any conclusion on particular conditions along the line of sight. We will therefore describe the atmospheres through a simple correlation length with no additional details on the geometry of the fields, details that the analysis of profile shapes may eventually provide.
| Date, Time       | X scale (arcsec) | Y scale (arcsec) | (X, Y) (arcsec)  | $\mu$ | Exp. Time (s) | S/N  |
|------------------|------------------|------------------|------------------|-------|---------------|------|
| 2007-09-01 20:35 | 0.15             | 0.16             | (-153.1, 922.9)  | 0.224 | 8.0           | 549  |
| 2007-09-06 15:55 | 0.15             | 0.16             | (-34.6, 7.0)     | 0.999 | 8.0           | 1150 |
| 2007-09-09 07:05 | 0.15             | 0.16             | (646.8, 7.2)     | 0.739 | 9.6           | 957  |
| 2007-09-27 01:01 | 0.15             | 0.16             | (-1004.0, 7.5)   | 0.000 | 12.8          | 887  |

\[latabla\]
Figure \[view1\] shows the data on area asymmetry for four different heliocentric angles ($\mu=1, 0.88, 0.7,$ and 0.2). The left column shows the typical histogram of frequency as a function of signed area asymmetry. The position of the maxima of these histograms is given as a vertical dashed line. The histograms are quite common bell-shaped distributions with a slight bias toward negative area asymmetries, that is, toward profiles with a blue lobe larger than the red one. That bias is stronger as we approach the solar limb [@martinez_pillet_active_1997]. Larger blue lobes have been observed in the past in active regions and network fields [@solanki_properties_1984]. They have been traditionally interpreted as being the result of the gradients usually present in the solar atmosphere [@solanki_can_1988]. More precise information can be obtained if we make a 2D histogram as a function of the signal amplitude. This signal amplitude of the Stokes V profile in quiet Sun conditions can be safely interpreted as longitudinal flux density. Although given in terms of polarization levels, it can as a rule of thumb be interpreted as $\times1000\ Mx/cm^2$. These histograms are shown in the right column of Fig. \[view1\]. In them we plot in gray levels (color online) the histogram of area asymmetry of profiles with that signal amplitude. Indeed, to better display the strong, organized but relatively rare fields, the histograms are plotted not linearly but logarithmically. What is apparent in these plots is, first, a component of strong but weakly asymmetric fields with amplitudes higher than roughly 3% (or $30\ Mx/cm^2$). This class of fields is an important contribution to the histogram at and near disk center but is only marginally important near the limb. The obvious interpretation is that these fields are the structured non-turbulent and mostly vertical fields that can be found in the photospheric network or in particular intergranular lanes. This hypothesis can be confirmed by inspecting the actual magnetograms compared with the intensity maps (not shown here), rather than the histograms as presented. Because they are mostly vertical, it is clear that their signature in Stokes V will diminish as we approach the limb where these fields are seen transversally. Their asymmetries are small and centered around zero in all data sets. This indicates that these are very coherent structures with few variations (magnetic and velocity fields alike) along the line of sight.
![image](fig1.ps){width="17cm"}
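The histograms of figure \[view1\] can be assembled along the following lines. The arrays of asymmetries and amplitudes below are random placeholders standing in for the measured per-profile quantities, and the logarithmic scaling mimics the display choice of the right column.

```python
import numpy as np
import matplotlib.pyplot as plt

# placeholder arrays with one entry per observed profile
rng = np.random.default_rng(0)
asym = rng.normal(-0.05, 0.2, 100000).clip(-1, 1)        # stand-in for measured signed area asymmetries
amp = np.abs(rng.normal(0.0, 0.02, 100000))              # stand-in for Stokes V amplitudes (fraction of I_c)

# 1D histogram of signed area asymmetry (left column of figure [view1])
h, edges = np.histogram(asym, bins=100, range=(-1, 1))

# 2D histogram of asymmetry versus amplitude, displayed logarithmically (right column)
H, xe, ye = np.histogram2d(asym, amp, bins=[100, 60], range=[[-1, 1], [0, 0.12]])
plt.imshow(np.log10(H.T + 1), origin='lower', aspect='auto',
           extent=[xe[0], xe[-1], ye[0], ye[-1]])
plt.xlabel('signed area asymmetry')
plt.ylabel('Stokes V amplitude')
plt.show()
```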
Except for this class of strong and weakly asymmetric profiles, the histograms are dominated by fields with low amplitude values (though well above the noise limit) that can produce almost any possible asymmetry from 0 to $\pm 1$. The most extreme asymmetries correspond to profiles with just one lobe, and we discuss them below. Now we concentrate on the distinction between this class of profiles with any asymmetry but weak amplitude and the symmetric and strong-amplitude class described in the previous paragraph. The separation between those two classes can better be seen with the help of Fig.\[view3\], particularly the right column of 2D histograms. We computed the position of the maximum of the histogram for each value of the amplitude signal and plotted it versus amplitude in the left column of that figure. The strong fields have maxima at zero asymmetry. Maxima start drifting toward negative values for amplitudes below 5% (or $50\ Mx/cm^2$) and we can place a boundary between the two classes at amplitudes of 3% (or $30\ Mx/cm^2$). We observe that the fields responsible for this change in asymmetry appear to be mostly insensitive to the heliocentric angle. This can be seen in Fig.\[view4\] where the histograms were made exclusively with profiles with amplitudes below the boundary of 3% (or $30\ Mx/cm^2$). We overplotted in the same figure and at the same scale the histograms for all heliocentric angles to facilitate the comparison. The few differences with heliocentric angle that were seen in Fig.\[view1\] have now almost disappeared. Only a small variability in the slope toward negative area asymmetries is noticeable now. This variability is in contrast with the almost perfect superposition of the histograms in their slope toward positive area asymmetries. We are unable to offer a definitive explanation for the variability in one of the slopes, but we nevertheless stress the weak dependence of those histograms on the heliocentric angle. It is not proof, but it is suggestive evidence of the general isotropy of those fields that their signature in terms of area asymmetries is independent of the viewing angle under which they are observed.
![image](fig2.ps){width="17cm"}
It would be premature to identify any and all fields with amplitudes below 3% with the turbulent fields that pervade the quiet Sun. The histograms of Fig.\[view4\] suggest the conclusion that they appear to be isotropic, which is an important characteristic of turbulent magnetic fields in the sense of fields frozen to a turbulent photospheric plasma at high values of the plasma $\beta$ parameter. The reason for this caution is that calculations of the radiative transfer of polarized light through an atmosphere at the microturbulent limit show that asymmetries larger on average than 0.53 are impossible [@carroll_meso-structured_2007]. Therefore, those observed profiles with amplitudes below 3% but with extreme asymmetries cannot be considered as being formed by radiative transfer through turbulent fields. At this point, we recall that those anomalous single-lobed profiles, with extreme asymmetries, are not observed at random over the solar surface but are instead systematically observed over the boundaries of regions with concentrations of strong magnetic fields [@SainzDalda12]. This realization suggests that the origin of those profiles is a line of sight that crosses from a weak disordered magnetic region to a strong and organized magnetic structure, with magnetic and velocity fields completely uncorrelated between the two regions. Simulation of these scenarios has successfully reproduced these single-lobed profiles with strong area asymmetries. We should therefore distinguish between two different magnetic scenarios: the turbulent field we are interested in studying and the single-lobed Stokes V profiles appearing at the boundaries of concentrations with strong magnetic field. It is essential to be able to distinguish between these two classes of profiles by looking at the statistics of the asymmetries alone. Consequently, in Fig.\[view5\], we try to ascertain the contribution of single-lobed profiles to our histograms. Since it is difficult to see the true weight of profiles with a given asymmetry in a histogram with a logarithmic scaling, we computed the 95th percentile of the area asymmetry, that is, the value of the asymmetry such that 95% of the profiles with the same amplitude have an area asymmetry smaller than, or equal to, that value. We see in the left column of Fig.\[view5\] that the 95th percentile area asymmetry is almost constant for all these fields with amplitudes below 3%. That is, once we enter the class of weak fields, profiles with extreme asymmetries beyond 0.7 contribute to the histograms with less than 5% of the cases, and that is independent of amplitude or heliocentric angle. We consider that, given the measurement conditions and the rough proxy that the 95th percentile is, the constant value of 0.7 is to be identified as the microturbulent asymmetry limit. We conclude therefore that 95% of the profiles with weak amplitudes have asymmetries that can be explained as radiative transfer of polarized light through a turbulent magnetic field. The constancy of this value for any amplitude and any heliocentric angle is in our view a strong statement in favor of this identification. What other explanation as simple as the one we put forward can be offered to explain why asymmetry values fall predominantly in the range (0-0.7), independently of both signal amplitude (i.e. longitudinal flux density) and heliocentric angle?
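The two binned statistics used here and in figures \[view3\] and \[view5\] — the most frequent asymmetry and the 95th percentile of the (unsigned) asymmetry at each signal amplitude — can be obtained as in the following sketch, again with placeholder arrays standing in for the measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
asym = rng.normal(-0.05, 0.2, 100000).clip(-1, 1)    # placeholder signed area asymmetries
amp = np.abs(rng.normal(0.0, 0.02, 100000))          # placeholder Stokes V amplitudes (fraction of I_c)

amp_edges = np.linspace(0.0, 0.12, 61)               # amplitude bins
asym_edges = np.linspace(-1.0, 1.0, 101)             # asymmetry bins
centers = 0.5*(asym_edges[:-1] + asym_edges[1:])

mode_asym, p95_asym = [], []
for lo, hi in zip(amp_edges[:-1], amp_edges[1:]):
    sel = asym[(amp >= lo) & (amp < hi)]
    if sel.size == 0:
        mode_asym.append(np.nan); p95_asym.append(np.nan)
        continue
    h, _ = np.histogram(sel, bins=asym_edges)
    mode_asym.append(centers[np.argmax(h)])          # position of the histogram maximum (figure [view3])
    p95_asym.append(np.percentile(np.abs(sel), 95))  # 95th percentile of the unsigned asymmetry (figure [view5])
```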
Summarizing, in this section we have successfully identified three different magnetic regimes in the data of area asymmetries: first, a strong, coherent, organized, mostly vertical and weakly asymmetric field; second, the anomalous single-lobed Stokes V profiles with strong asymmetries, contributing to less than 5% of the cases, which are located at the boundaries of strong magnetic field concentrations; and finally, a turbulent field that is quasi-isotropic and has asymmetries in the range expected from stochastic radiative transfer calculations. With this classification we reach the first and primary result of the present work, which is that also through area asymmetries we identify the presence of a turbulent, quasi-isotropic magnetic field component in the quiet Sun. Asymmetries are sensitive to the variations of the magnetic and velocity fields along the line of sight. This is complementary to previous studies that concentrated on statistics of amplitudes and amplitude ratios that were sensitive to magnetic fields placed side by side over the photosphere. Despite this complementarity, we arrive at the same conclusion: the presence of a turbulent and isotropic field in the quiet Sun.
![image](fig4.ps){width="17cm"}
Scales of the turbulent magnetic field in the quiet Sun. {#sec_scales}
========================================================
We now focus on the regime of turbulent fields identified in the statistical data of area asymmetries. These fields are mostly isotropic, as is to be expected from turbulence associated with the photospheric plasma, and they have low amplitudes. They should not be interpreted as the direct measurement of a single, well-defined field vector: the turbulent field is best described instead by a probability distribution function [@trujillo_bueno_substantial_2004; @dominguez_cerdena_distribution_2006; @lopez_ariste_turbulent_2007; @sanchez_almeida_simple_2007; @sampoorna_zeeman_2008; @stenflo_distribution_2010]. A good choice for that distribution function is a Maxwellian for the modulus of the vector (the field strength) [@dominguez_cerdena_distribution_2006; @lopez_ariste_turbulent_2007; @sanchez_almeida_simple_2007], which is fully isotropic for its inclination and azimuth in whatever reference system of choice. The average value of this probability distribution function for a vector field is the null vector. But it would be a mistake to infer from that zero average that no polarization signals would be observed if this were the topology of the field in the quiet Sun. The average of the distribution is only attained in the limit of infinite realizations. Only if the scale of variation of the magnetic field, along the line of sight or across our resolution element, were zero would the average be realized and observed. But in reality that scale of variation is nonzero, and both along the formation region of the observed spectral line and across the spatial resolution element of our observation, a finite number of magnetic field realizations is found. Their integration will fluctuate around the average zero value, but will not be zero. Those fluctuations are what is expected to appear as the observed longitudinal flux density in a turbulent magnetic field scenario. The variance of those fluctuations will diminish with the square root of the number of realizations. Therefore, in the approximation of magnetic realizations sitting side by side across the spatial resolution element, one should expect that, as the size of the resolution element increases, the average longitudinal flux increases not with the square of that size (as could be expected if unresolved magnetic structures were present), but linearly with it. This is what is indeed observed in the quiet Sun, where observations of the average flux density have shown no particular difference between instruments with 1, 0.6 or 0.3 arcsec resolutions [@martinez_gonzalez_statistical_2010]. This is one of the strongest arguments in favor of a turbulent magnetic field in the quiet Sun.
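The scaling argument can be illustrated with a short Monte Carlo experiment: summing the line-of-sight components of $N$ independent, isotropically oriented vectors (whose Cartesian components are Gaussian, so that the modulus follows a Maxwellian), the net signal fluctuates around zero with a standard deviation that grows as $\sqrt{N}$. All numbers below are arbitrary and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def net_los_flux(n_elements, n_trials=1000, sigma=20.0):
    """Std of the summed line-of-sight component of n_elements isotropic field vectors (arbitrary units)."""
    # for an isotropic Gaussian vector field only the line-of-sight (z) component matters here
    bz = rng.normal(0.0, sigma, size=(n_trials, n_elements))
    return np.sum(bz, axis=1).std()

for n in (10, 100, 1000, 10000):
    print(n, net_los_flux(n) / np.sqrt(n))    # roughly constant (~sigma): the fluctuation grows as sqrt(N)
```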
In terms of area asymmetries we are concerned with the number of realizations of the magnetic probability distribution function along the formation region of the observed line. Contrary to the case of the spatial resolution, it is not straightforward to change the size of the sampling region: the formation region of a spectral line is what it is and cannot be changed, and, in the photosphere, most spectral lines of interest have formation regions that are too similar in size and position to be of any help for the present purpose. However, it is a great advantage that the size of those formation regions is quite small, a few hundred kilometers at most, and that the response functions over those regions are not flat but present a shape that has led many authors to speak of a height of formation rather than a region of formation, despite warnings about this oversimplification [@sanchez_almeida_heights_1996]. Therefore, when observing area asymmetries we are sampling those small regions of formation: we are directly sampling scales of less than 100 km, scales that are difficult to access for any diffraction-limited imaging instrument. Area asymmetries therefore present a clear advantage in terms of sensing the scales of variation of turbulent magnetic fields.
To correctly interpret the actual values of area asymmetries in terms of scales of variation of the magnetic field, we require a theory for radiative transfer of polarized light through stochastic atmospheres. Such models can be found in the works of [@auvergne_spectral_1973; @frisch_non-lte_1976; @frisch_stochastic_2005; @frisch_stochastic_2006] and [@carroll_line_2005], to give a few solar-related examples. Here we used the result of [@carroll_meso-structured_2007] since they explicitly computed area asymmetries as a function of the correlation length of the parameters in their stochastic atmospheres. The correlation length, the key concept for the goal of this paper, describes the distance between two points along the light path for which the probability of the magnetic field being the same is small [^1], assuming a Markovian model in which this probability falls with increasing distance. This correlation length describes *“the mean length scale of the structures”*, to quote [@carroll_meso-structured_2007]. Roughly once every correlation length along the light path there is a change in the value of the atmospheric parameters, and the resulting gradients produce asymmetries in the profiles. If the correlation length is longer than the formation region of the line, there will be no change in parameters, the magnetic and velocity fields will be constant, and no asymmetries are to be expected. With a correlation length shorter than the formation region, more gradients appear and accordingly the asymmetry grows. However, if the correlation length shrinks beyond the mean free path of the photon in the atmosphere, there are no atom-photon interactions to take those small-scale gradients into account and the asymmetries saturate to a *microturbulent* limit. This is the expected dependence of asymmetries on correlation length, and it is what [@carroll_meso-structured_2007] found in their calculations, shown in their Fig. 1 (right side), which we reproduce here in Fig. \[curva\] for the sake of completeness. The details on the computation of the figure are given by the authors of that work. The figure refers to the same line as used in the observations of the present work. Using the computed relation shown in that figure but reversing the argument, the observed asymmetries of those profiles (identified as belonging to the turbulent regime in the previous section) can be translated into correlation lengths. Since all asymmetries from 0 to the microturbulent limit of 0.53 are observed, we can already conclude that in the data considered for this work there are profiles formed in regions with variations of the magnetic field at scales as small as 10 km. Below that length the asymmetries quickly and asymptotically approach the microturbulent limit and we lose our sensitivity. It is important, before proceeding any further, to stress that point: the observation of asymmetries in the Stokes V profiles allows us to identify tiny scales of variation for the magnetic fields that produce those signals. Many of the strong signals identified in the previous section with the structured and mostly vertical fields presented asymmetries around zero. That is, those structures had correlation lengths of, or were coherent over, scales of hundreds of km and up to 1000 km. This is what we should expect from structured fields, and it explains why the best instruments in terms of spatial resolution start resolving them more and more frequently.
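In practice, translating an observed area asymmetry into a correlation length amounts to inverting the monotonic relation of Fig. \[curva\]. The following minimal sketch shows one way to do this by interpolation; the tabulated values are placeholders standing in for a digitization of the published curve, not the actual numbers of [@carroll_meso-structured_2007].

```python
import numpy as np

# Hypothetical digitization of the asymmetry versus correlation-length curve of
# Fig. [curva]; these numbers are placeholders, not the published values.
corr_length_km = np.array([1000.0, 500.0, 200.0, 100.0, 50.0, 20.0, 10.0])
area_asymmetry = np.array([0.00, 0.05, 0.15, 0.30, 0.42, 0.50, 0.53])

def asymmetry_to_correlation_length(delta_a):
    """Invert the monotonic asymmetry--correlation-length relation by interpolation."""
    # np.interp expects increasing x values; the asymmetry grows as the
    # correlation length shrinks, so the arrays above are ordered accordingly.
    return np.interp(delta_a, area_asymmetry, corr_length_km)

print(asymmetry_to_correlation_length(0.25))   # correlation length (km) for |delta A| = 0.25
```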
On the other hand, for the fields that we assigned to the turbulent regime, the observed asymmetries fill the allowed range of variation all the way up to the microturbulent limit. Thus we have identified profiles that arise from regions with magnetic fields varying at scales of less than 100 km and eventually down to 10 km. These are actual observations, not simulations or estimates. To put those scales in another context, they correspond to the diffraction limit of an instrument with a 7 m entrance pupil. More interestingly, they are comparable to, or smaller than, the mean free path of the photon [@mihalas_stellar_1978; @sanchez_almeida_heights_1996]. We emphasize these comparisons because they show the finesse of the diagnostic that is accessible through the asymmetries of the Stokes V profile.
After translating the asymmetry of each profile belonging to the turbulent regime into a correlation length, we note that for each of those profiles we also have a measurement of the longitudinal flux density. It is tempting to convert it into a magnetic energy, which would allow us to produce the all-important magnetic energy spectrum of the turbulent magnetic field [@nakagawa_energy_1973; @knobloch_spectrum_1981].
There is a caveat attached to this, however. It is a main result of this work to have identified the three different regimes as seen from the point of view of area asymmetries in the Stokes V profiles. For the turbulent regime, the actual asymmetry obviously depends on the correlation length of the parameters, including the magnetic and velocity fields, of the stochastic atmosphere used in the model. The details of the stochastic radiative transfer may be a matter of discussion and study, but the general relation of asymmetries with correlation length is not in doubt. Our claim to have measured magnetic fields varying at scales below 100 km is also a robust result of this work. Proceeding to a magnetic energy spectrum, on the other hand, requires a series of assumptions, approximations, and simplifications that may appear too strong. We nevertheless proceed because 1) those simplifications and approximations are, in our opinion, still justified as fairly educated guesses, and 2) the final result, even if crude, still illustrates how powerful a tool the analysis and study of area asymmetries can be.
With those cautions, we assume that the measured longitudinal flux density is a fair proxy of the average field strength of the distribution of fields present along the line of sight. Because of the projection along the line of sight, it is indeed a lower bound to that field strength. We furthermore assume that the probability distribution function underlying the observed distribution is Maxwellian for the field strength. This distribution is fully determined by the position of its unique maximum, and our measurement of the longitudinal flux density is taken as a lower-bound proxy for that maximum. The magnetic energy of the Maxwellian is computed as $$E=\frac{\int B^2 p(B)dB}{\int p(B)dB},$$ where $p(B)$ is the Maxwellian probability distribution function. The integrand looks in shape very much like the original Maxwellian, but its maximum is shifted toward higher values and the high-field wing is emphasized. Because of these similarities, the integrand is fully determined by its new maximum, for which the maximum of the original Maxwellian is a lower bound. Our measurement of the longitudinal flux density therefore provides a lower limit to the magnetic energy in the distribution of fields along the line of sight in our turbulent atmosphere. In Fig. \[view6\] we plot, at four different heliocentric angles, this magnetic energy as a function of the correlation length. These plots are, in a sense, crude approximations to the magnetic energy spectrum, with sensitivities down to scales of 10 km.
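For a Maxwellian $p(B)$ the ratio above has the closed form $E=3a^2$, with the mode of the distribution at $\sqrt{2}a$. The short sketch below (an illustration under these assumptions, not the actual pipeline) evaluates the integrals numerically, taking the measured longitudinal flux density as the lower-bound proxy for the mode discussed above.

```python
import numpy as np

def maxwellian_energy_proxy(b_mode):
    """<B^2> of a Maxwellian field-strength distribution whose mode is b_mode.

    Here b_mode stands for the measured longitudinal flux density, used as a
    lower bound on the true mode.  Analytically <B^2> = 3 a^2 with
    a = b_mode / sqrt(2); the quadrature below simply evaluates
    E = int B^2 p(B) dB / int p(B) dB on a fine grid.
    """
    a = b_mode / np.sqrt(2.0)
    B = np.linspace(0.0, 10.0 * a, 20001)
    p = B ** 2 * np.exp(-B ** 2 / (2.0 * a ** 2))   # unnormalized Maxwellian
    return np.sum(B ** 2 * p) / np.sum(p)           # the grid step cancels in the ratio

print(maxwellian_energy_proxy(30.0))   # a mode of 30 (flux-density units) gives 1350
```

In other words, under these assumptions the energy proxy reduces to $E=1.5\,B_{\rm mode}^2$ in the same squared units as the measured flux density.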
Owing to the approximations and simplifications made, it is difficult to extract much information from those energy spectra. Clearly, where the strong and structured fields are seen, near disk center, they dominate the energy spectrum and create a bump at the larger scales and a slope toward smaller scales that can be compared to previous results [@nakagawa_energy_1973]. But that appears to be a valid description only for the more organized and mostly vertical fields. For the fields that we included in the turbulent regime, the spectrum appears to decrease over all the scales at which we are sensitive, with a tendency to flatten out toward the 10 km limit and certainly below that limit, where we are not sensitive.
Conclusion
==========
We studied the properties of the turbulent magnetic field that we assumed to pervade most of the quiet Sun. It is not the purpose of this work to demonstrate the existence of this turbulent component. Many observational and theoretical arguments are accumulating to prove its existence, one of which is the way in which we are detecting it through Zeeman polarimetry. We added new observational arguments confirming its existence but, beyond that, we studied it through the asymmetries of the Stokes V profiles. We have collected and analyzed the asymmetries of the Stokes V profile of the Fe [i]{} line at 6302.5 Å observed with Hinode-SOT/SP at several heliocentric angles. The histograms of those asymmetries for different signal amplitudes (or longitudinal flux densities) reveal three different magnetic regimes. We notice that although in our results we speak of three separate regimes, this is only a classification to facilitate the understanding of their existence. Needless to say, these regimes take place simultaneously in the quiet Sun, and they are separated not only by their area asymmetry values but also by their flux density or magnetic energy spectrum. The first one is made of strong signals with weak area asymmetries that are mostly distributed around zero. These fields are mostly vertical, as can be concluded from their importance at disk center and their waning as we approach the limb. The weak asymmetries, when interpreted in terms of magnetic and velocity field variations along the line of sight, are a signature of large correlation lengths; that is, these structures present coherent fields throughout, which justifies referring to them as magnetic structures.
The second magnetic regime detected is characterized by its weak longitudinal flux density (below $30\,\mathrm{Mx/cm^2}$ at Hinode-SOT/SP spatial resolutions) and unsigned area asymmetries that span the full range from 0 up to the observed microturbulent limit of 0.7, slightly higher than the theoretically predicted value of 0.53. We identified these signals as the turbulent field that pervades the quiet Sun. But before focusing on this, we first turn to the third magnetic regime detected, also made of weak signals, but with strong asymmetries beyond the microturbulent limit. Distinguishing between these two regimes is a delicate matter, and it is done through two independent observations. The first is that these single-lobed Stokes V profiles with area asymmetries beyond 0.6 or 0.7 are found mainly at the boundaries of strong magnetic field concentrations. This observation has prompted us to interpret them as profiles arising from lines of sight that cross from a strong magnetic structure into the turbulent quiet Sun; such jumps in the magnetic and thermodynamic parameters of the atmosphere along the line of sight can produce anomalous profiles like these. The second observation that tells them apart from the turbulent regime is the 95th percentile of the asymmetry histogram. This statistical measure is, in our data, independent of the signal amplitude and of the heliocentric angle, and is roughly equal to the microturbulent limit. The independence from the signal amplitude is the hardest test, and it makes us confident that the two regimes can also be directly separated in the histograms.
As a first result of this work we identified the turbulent magnetic fields in the data of area asymmetries of the Stokes V profiles. We furthermore realized that the histograms of the asymmetries of these profiles are almost independent of the heliocentric angle, which is another observational result supporting the isotropy of these fields. Next we used the results of radiative transfer of polarized light through stochastic atmospheres, particularly the work of [@carroll_meso-structured_2007], to translate those asymmetries into correlation lengths for the atmospheric parameters, specifically the magnetic and velocity fields. We stressed the sensitivity to small scales that area asymmetries provide: the formation region of the observed spectral lines spans a few hundred kilometers and is mostly concentrated in a few tens of kilometers, much better than the spatial resolutions achievable through direct imaging. We emphasized that we measured scales of variation and not physical structures. Those scales of variation are the best description of a turbulent field, without identifiable structures over the range of scales corresponding to the inertial regime of the turbulence. The smallest of these scales is the dissipation scale, which has been estimated to be as small as 100 m [@pietarila_graham_turbulent_2009]. Our measurements appear to identify magnetic scales that are still short of this dissipation scale by a factor of 100.
With a powerful tool like this we determined the range of scales of variation of the turbulent magnetic field and saw that examples of all scales are found: from the hundreds of kilometers of magnetic concentrations that other instruments start to resolve through direct imaging, down to the tiniest scales of 10 km at which area asymmetries lose their sensitivity. In an effort to exploit this information on scales, we attempted to measure the magnetic energy through the proxy of the longitudinal flux densities. Many approximations and simplifications are involved but, beyond the confidence in this final result, we stress again the potential of the tool if only better determinations of the magnetic energy can be used.
Hinode is a Japanese mission developed by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC(UK) as international partners. It is operated in cooperation with ESA and NSC (Norway). The Hinode project at Stanford and Lockheed is supported by NASA contract NNM07AA01C (MSFC).
[Asensio Ramos]{}, A. 2009, The Astrophysical Journal, 701, 1032
Auvergne, M., Frisch, H., Frisch, U., Froeschle, C., & Pouquet, A. 1973, Astronomy and Astrophysics, 29, 93
Carroll, T. A. & Kopf, M. 2007, Astronomy and Astrophysics, 468, 323
Carroll, T. A. & Staude, J. 2005, Astronomische Nachrichten, 326, 296
[Dom[í]{}nguez Cerde[ñ]{}a]{}, I., [S[á]{}nchez Almeida]{}, J., & Kneer, F. 2006, The Astrophysical Journal, 636, 496
Frisch, H. & Frisch, U. 1976, Monthly Notices of the Royal Astronomical Society, 175, 157
Frisch, H., Sampoorna, M., & Nagendra, K. N. 2005, Astronomy and Astrophysics, 442, 11
Frisch, H., Sampoorna, M., & Nagendra, K. N. 2006, Astronomy and Astrophysics, 453, 1095
Graham, J. P., Danilovic, S., & Sch[ü]{}ssler, M. 2009, The Astrophysical Journal, 693, 1728
Knobloch, E. & Rosner, R. 1981, The Astrophysical Journal, 247, 300
Kosugi, T., Matsuzaki, K., Sakao, T., [et al.]{} 2007, Solar Physics, 243, 3
Lites, B. W. 2002, The Astrophysical Journal, 573, 431
, A. 2002, The Astrophysical Journal, 564, 379
[L[ó]{}pez Ariste]{}, A., Malherbe, J. M., [Manso Sainz]{}, R., [et al.]{} 2007, in SF2A-2007: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics, ed. J. Bouvier, A. Chalabaev, & C. Charbonnel, 592
[Mart[í]{}nez Gonz[á]{}lez]{}, M. J., [Asensio Ramos]{}, A., Carroll, T. A., [et al.]{} 2008, Astronomy and Astrophysics, 486, 637
[Mart[í]{}nez Gonz[á]{}lez]{}, M. J., [Asensio Ramos]{}, A., [L[ó]{}pez Ariste]{}, A., & [Manso Sainz]{}, R. 2008, Astronomy and Astrophysics, 479, 229
[Mart[í]{}nez Gonz[á]{}lez]{}, M. J., [Manso Sainz]{}, R., [Asensio Ramos]{}, A., [L[ó]{}pez Ariste]{}, A., & Bianda, M. 2010, The Astrophysical Journal, 711, L57
[Mart[í]{}nez Pillet]{}, V., Lites, B. W., & Skumanich, A. 1997, The Astrophysical Journal, 474, 810
Mihalas, D. 1978, Stellar atmospheres /2nd edition/, 2nd edn. (San Francisco: W.H. Freeman and Co.)
Nakagawa, Y. & Priest, E. R. 1973, The Astrophysical Journal, 179, 949
Nordlund, A., Brandenburg, A., Jennings, R. L., [et al.]{} 1992, The Astrophysical Journal, 392, 647
Parker, E. N. 1982, The Astrophysical Journal, 256, 292
Petrovay, K. 2001, Space Science Reviews, 95, 9
[Sainz Dalda]{}, A., Mart[í]{}nez-Sykora, J., [Bellot Rubio]{}, L., & Title, A. 2012, The Astrophysical Journal, in Press
Sampoorna, M., Nagendra, K. N., Frisch, H., & Stenflo, J. O. 2008, Astronomy and Astrophysics, 485, 275
[S[á]{}nchez Almeida]{}, J. 2007, The Astrophysical Journal, 657, 1150
[S[á]{}nchez Almeida]{}, J. & Lites, B. W. 1992, The Astrophysical Journal, 398, 359
[S[á]{}nchez Almeida]{}, J., [Ruiz Cobo]{}, B., & del [Toro Iniesta]{}, J. C. 1996, Astronomy and Astrophysics, 314, 295
Solanki, S. K. & Pahlke, K. D. 1988, Astronomy and Astrophysics, 201, 143
Solanki, S. K. & Stenflo, J. O. 1984, Astronomy and Astrophysics, 140, 185
Stenflo, J. O. 1973, Solar Physics, 32, 41
Stenflo, J. O. 2010, Astronomy and Astrophysics, 517, 37
[Trujillo Bueno]{}, J., Shchukina, N., & [Asensio Ramos]{}, A. 2004, Nature, 430, 326
Tsuneta, S., Ichimoto, K., Katsukawa, Y., [et al.]{} 2008, Solar Physics, 249, 167
Viticchi[é]{}, B., [S[á]{}nchez Almeida]{}, J., [Del Moro]{}, D., & Berrilli, F. 2010, Interpretation of [HINODE]{} [SOT/SP]{} asymmetric Stokes profiles observed in quiet Sun network and internetwork, [http://adsabs.harvard.edu/abs/2010arXiv1009.6065V]{}
[^1]: In the particular model adopted by [@carroll_meso-structured_2007], this probability is $1/e$.
---
author:
- Tayyaba Zafar
- Attila Popping
- Céline Péroux
bibliography:
- 'dla.bib'
date: 'Received / Accepted '
title: 'The ESO UVES Advanced Data Products Quasar Sample - I. Dataset and New $N_{{H}\,{\sc I}}$ Measurements of Damped Absorbers'
---
Introduction
============
Among the absorption systems observed in the spectra of quasars, those with the highest neutral hydrogen column density are thought to be connected with the gas reservoir responsible for forming galaxies at high redshift and have received wide attention (see review by @wolfe05). These systems are usually classified according to their neutral hydrogen column density as damped Ly$\alpha$ systems (hereafter DLAs) with $N_{{\rm H}\,{\sc \rm I}}\ge2\times10^{20}$ atoms cm$^{-2}$ [e.g., @storrie00; @wolfe05] and sub-damped Ly$\alpha$ systems (sub-DLAs) with $10^{19}\le N_{{\rm H}\,{\sc \rm I}}\le2\times10^{20}$ atoms cm$^{-2}$ [e.g., @peroux03b].
The study of these systems has made significant progress in recent years, thanks to the availability of large sets of quasar spectra with the two-degree field survey (2dF, @croom01) and the Sloan Digital Sky Survey (SDSS; @prochaska05; @noterdaeme09; @noterdaeme12b). They have been shown to contain most of the neutral gas mass in the Universe [@lanzetta91; @lanzetta95; @wolfe95] and are currently used to measure the redshift evolution of the total amount of neutral gas mass density [@lanzetta91; @wolfe95; @storrie00; @peroux03; @prochaska05; @noterdaeme09; @noterdaeme12b]. In addition, the sub-DLAs may contribute significantly to the cosmic metal budget, which is still highly incomplete. Indeed, only $\sim$20% of the metals are observed when one adds the contribution of the Ly$\alpha$ forest, DLAs, and galaxies such as Lyman break galaxies [e.g, @pettini99; @pagel02; @wolfe03; @pettini04; @pettini06; @bouche05; @bouche06; @bouche07].
Therefore, to obtain a complete picture of the redshift evolution of both the cosmological neutral gas mass density and the metal content of the Universe, the less-studied sub-DLAs should be taken into account [@peroux03b]. However, these systems cannot be readily studied at low resolution, and only limited samples of high-resolution quasar spectra have been available until now [e.g., @peroux03b; @dessauges03; @ledoux03; @kulkarni07; @meiring08; @meiring09]. The excellent resolution and large wavelength coverage of UVES allow this less-studied class of absorbers to be explored.
We have therefore examined the high-resolution quasar spectra taken between February 2000 and March 2007 and available in the UVES [@dekker00] Advanced Data Products archive, ending up with a sample of 250 quasar spectra. In this paper we present both the dataset of quasars observed with UVES and the damped absorbers (DLAs and sub-DLAs) covered by these spectra. In addition, we measured column densities of DLAs/sub-DLAs seen in the spectra of these quasars and not reported in the literature. In a companion paper [@zafar12b], we built a carefully selected subset of this dataset to study the statistical properties of DLAs and sub-DLAs, their column density distribution, and the contribution of sub-DLAs to the gas mass density. Further studies, based on specifically designed subsets of the dataset built in this paper, will follow (e.g., studies of metal abundances, molecules).
This work is organized as follows. In §2, information about the UVES quasar data sample is provided. In §3, the properties of the damped absorbers are described. This section also summarizes the details of the new column density measurements. In §4, some global properties of the full sample are presented and lines-of-sight of interest are reported in §5. All log values and expressions correspond to log base 10.
The Quasar Sample
=================
ESO Advanced Data Products
--------------------------
In 2007, the European Southern Observatory (ESO), which manages the 8.2m Very Large Telescope (VLT) observatory, made available to the international community a set of Advanced Data Products for some of its instruments, including the high-resolution UVES[^1] instrument. The reduced archival UVES echelle dataset is processed by the ESO UVES pipeline (version 3.2) within the `MIDAS` environment with the best available calibration data. This process has been executed by the quality control (QC) group, part of the Data Flow Department. The resulting sample is based on a uniform reprocessing of UVES echelle point-source data from the beginning of operations (dated 18$^{\rm th}$ of February 2000) up to the 31$^{\rm st}$ of March 2007. The standard quality assessment, quality control, and certification have been integral parts of the process. The following types of UVES data are not included in the product data set: $i)$ data using the image slicers and/or the absorption cell; $ii)$ echelle data from extended objects; and $iii)$ data from the Fibre Large Array Multi Element Spectrograph (FLAMES)/UVES instrument mode.
In general, no distinction has been made between visitor mode (VM) and service mode (SM) data, nor between standard and non-standard settings. However, the data reduction was performed only when robust calibration solutions (i.e., “master calibrations”) were available. In the UVES Advanced Data Products archive, these calibrations are available only for the standard settings centered on $\lambda$ 346, 390, 437, 520, 564, 580, 600 or 860 nm. For certain “non-standard” settings, master calibrations were not produced in the first years of UVES operations (until about 2003). These are, e.g., 1x2 or 2x3 binnings, or central wavelengths other than those mentioned above. As a result, the Advanced Data Products database used for the study presented here is not as complete as the ESO UVES raw data archive.
Quasars Selection
-----------------
The UVES archives do not provide information on the nature of the targets. Indeed, the target names are chosen by the users, and only recently has the Phase 2 step proposed that users classify their targets, on a voluntary basis. Therefore, the first step to construct a sample of quasar spectra out of the Advanced Data Products archive is to identify the nature of the objects. For this purpose we retrieved quasar lists issued from quasar surveys: the Sloan digital sky survey data release 7 (DR7) database[^2], HyperLeda[^3], the 2dF quasar redshift survey[^4], Simbad[^5], and the Hamburg ESO catalogue. The resulting right ascensions (RA) and declinations (Dec) of the quasars were cross-matched with the UVES Advanced Data Products archive within a radius of $15.0''$. The large radius was chosen to overcome possible relative astrometric shifts between the various surveys and the UVES database. Because of this large radius, the raw matched list contains not only quasars but also other objects such as stars, galaxies, and Seyferts. The non-quasar objects have been filtered out by visual inspection of the spectra. Data in the ESO OPC categories C (Interstellar Medium, Star Formation and Planetary Systems) and D (Stellar Evolution) usually target Galactic objects, but in some cases observers targeted quasars under such programs; the spectra have been visually inspected for those particular cases.
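One way to perform the positional cross-match described above is sketched below with `astropy`; the use of this library and the function name are assumptions of the illustration, since the paper does not state which tool was used.

```python
from astropy import units as u
from astropy.coordinates import SkyCoord

def crossmatch_quasars(qso_ra_deg, qso_dec_deg, arc_ra_deg, arc_dec_deg,
                       radius_arcsec=15.0):
    """Positional match of survey quasars against UVES archive pointings.

    Returns a boolean mask over the archive entries (True if a catalogued
    quasar lies within radius_arcsec) and, for each entry, the index of the
    closest quasar.  A sketch of the matching step only; the actual selection
    also relied on visual vetting of the spectra.
    """
    qso = SkyCoord(ra=qso_ra_deg * u.deg, dec=qso_dec_deg * u.deg)
    arc = SkyCoord(ra=arc_ra_deg * u.deg, dec=arc_dec_deg * u.deg)
    idx, sep2d, _ = arc.match_to_catalog_sky(qso)   # nearest catalogue quasar
    return sep2d < radius_arcsec * u.arcsec, idx
```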
Further Data Processing
-----------------------
In the UVES spectrograph, the light beam from the telescope is split into two arms (UV to Blue and Visual to Red) within the instrument. The spectral resolution of UVES is about $R=\lambda/\Delta\lambda\sim41,400$ when a $1.0''$ slit is used. By narrowing the slit, the maximum spectral resolution can reach up to $R=80,000$ and $110,000$ for the BLUE and the RED arm, respectively. For each target, individual spectra (most often with overlapping settings) were merged using a dedicated `Python` code which weights each spectrum by its signal-to-noise ratio. All contributing spectra were regridded to a common frame, with the resolution being that of the spectrum with the highest sampling. When present, bad pixels were masked to ensure that they would not contribute to the merged spectrum. In the regions of overlap the spectra were calibrated to the same level before being error-weighted and merged. Particular attention was given to “step” features in the quasar continua: a visual search identified and corrected these features when they corresponded to positions between two orders of the echelle spectrum. In the merging process, a radial velocity correction for barycentric and heliocentric motion (using the heliocentric correction values from the file headers) was applied to each individual spectrum. The wavelengths were also corrected to vacuum values.
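A minimal sketch of such an error-weighted merging scheme is given below (illustrative Python, not the actual EUADP merging code; bad-pixel masking, inter-order step correction, and flux rescaling in the overlaps are omitted).

```python
import numpy as np

def merge_spectra(spectra, wave_grid):
    """Inverse-variance (signal-to-noise) weighted merge on a common grid.

    spectra : iterable of (wavelength, flux, error) arrays, assumed sorted in
              wavelength and already corrected to vacuum, barycentric values.
    """
    num = np.zeros_like(wave_grid, dtype=float)
    den = np.zeros_like(wave_grid, dtype=float)
    for wave, flux, err in spectra:
        f = np.interp(wave_grid, wave, flux, left=np.nan, right=np.nan)
        e = np.interp(wave_grid, wave, err, left=np.nan, right=np.nan)
        ok = np.isfinite(f) & np.isfinite(e) & (e > 0)
        w = 1.0 / e[ok] ** 2            # inverse-variance weights
        num[ok] += w * f[ok]
        den[ok] += w
    merged = np.full_like(wave_grid, np.nan, dtype=float)
    good = den > 0
    merged[good] = num[good] / den[good]
    return merged
```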
The resulting list comprises 250 quasar spectra. The number of individual spectra used to produce the co-added spectrum, Simbad $V$-band magnitudes, together with total exposure time in seconds for each target, are provided in Table \[proptab\]. Throughout the paper, this sample obtained from the ESO UVES Advanced Data Products facility is called EUADP sample. The total VLT exposure time of this dataset is $T_{\rm tot}=1560$ hours.
In the case of close pairs of quasars or gravitationally-lensed quasars, we have separated the objects if different slit-positions were used but only analyzed the brightest object if these objects were aligned along the slit. Our sample contains two lensed quasars: QSO B0908+0603 (double system) and QSO B1104-181 (quadruple system). In the former case two objects, and, in the latter case, three objects were aligned on the slit [@lopez07]. In these cases we only analyzed the brightest objects. Our sample contains four quasar pairs: QSO J0008-2900 & QSO J0008-2901 (separated by $1.3'$), J030640.8-301032 & J030643.7-301107 (separated by $\sim$0.85$'$), QSO B0307-195A & B (separated by $\sim$1$'$), and QSO J1039-2719 & QSO B1038-2712 (separated by $17.9'$). For these eight objects, eight different slit-positions were used.
The DLA/sub-DLA Sample
======================
The quasar spectra were normalized to unity within the `MIDAS` environment. For each quasar the local continuum was determined in the merged spectrum by using a spline function to smoothly connect the regions free of absorption features (see Fig. \[conti\]). The final normalized spectra used for the column density analysis were obtained by dividing the merged spectra by these continua. During normalization, artifacts from residual fringes (especially in the standard setting centered on $\lambda$ 860 nm) and spectral gaps are also taken into account in the spline fitting. In order to recover all DLAs and sub-DLAs in a given spectrum, we proceeded as in @lanzetta91 and used an automated `Python` detection algorithm. This code builds an equivalent width spectrum over 400 pixel wide boxes ($\sim$12 Å for 0.03 $\AA$ pixel$^{-1}$) blueward of the quasar’s Ly$\alpha$ emission and flags regions of the spectra where the observed equivalent width exceeds the sub-DLA definition (i.e. $N_{{\rm H}\,{\sc \rm I}}\gtrsim10^{19}$ cm$^{-2}$). This candidate list of DLAs and sub-DLAs is further supplemented by visual inspection done by TZ and CP. The DLAs and sub-DLAs have been confirmed by looking for associated metal lines and/or higher members of the Lyman series. This resulted in 150 DLAs/sub-DLAs with $1.6<z_{\rm abs}<4.7$ for which Ly$\alpha$ is covered in the spectra of these quasars. This method is complete down to the sub-DLA definition of $N_{{\rm H}\,{\sc \rm I}}=10^{19}$ cm$^{-2}$, equivalent to $\rm EW_{\rm rest}=2.5$ $\AA$ based on a curve-of-growth analysis [@dessauges03].
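The detection step can be sketched as follows; this is an illustration of the equivalent-width box search, with a hypothetical function name, rather than the actual `Python` code used for the EUADP sample.

```python
import numpy as np

LYA = 1215.67  # Ly-alpha rest wavelength in Angstrom

def flag_damped_candidates(wave, norm_flux, z_em, box=400, ew_rest_min=2.5):
    """Flag candidate DLA/sub-DLA regions blueward of the quasar Ly-alpha emission.

    Builds an observed equivalent-width spectrum in consecutive boxes of `box`
    pixels and keeps boxes whose rest-frame EW, at the Ly-alpha redshift of the
    box centre, exceeds ew_rest_min (about 2.5 A for log N_HI = 19).
    """
    blue = wave < LYA * (1.0 + z_em)
    w, f = wave[blue], norm_flux[blue]
    dlam = np.gradient(w)                              # pixel widths in Angstrom
    candidates = []
    for i in range(0, len(w) - box, box):
        sl = slice(i, i + box)
        ew_obs = np.sum((1.0 - f[sl]) * dlam[sl])      # observed-frame EW of the box
        z_abs = np.mean(w[sl]) / LYA - 1.0
        if z_abs > 0 and ew_obs / (1.0 + z_abs) > ew_rest_min:
            candidates.append(round(z_abs, 3))
    return candidates
```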
$N_{{\rm H}\,{\sc \rm I}}$ measurements of DLAs and sub-DLAs
------------------------------------------------------------
An extensive search in the literature was undertaken to identify which of these damped absorbers were already known. Of the 150 DLAs/sub-DLAs that we have identified above, 131 (87 DLAs and 44 sub-DLAs) have their $N_{{\rm H}\,{\sc \rm I}}$ already reported in the literature. Of the remaining 19 (6 DLAs and 13 sub-DLAs), 10 (3 DLAs and 7 sub-DLAs) are new identifications (see Table \[dlamet\] and §\[notes\] for details on each system).
For damped absorbers, the Lorentzian component of the Voigt profile results in pronounced damping wings, allowing a precise determination of $N_{{\rm H}\,{\sc \rm I}}$ down to the sub-DLA definition at high resolution. The neutral hydrogen column densities of these absorbers are determined by fitting a Voigt profile to the Ly$\alpha$ absorption line. The fits were performed using the $\chi^2$ minimization routine of the `FITLYMAN` package in `MIDAS` [@fontana]. Laboratory wavelengths and oscillator strengths from @morton2003 were used. The global fit returns the best-fit parameters for central wavelength, column density, and Doppler turbulent broadening, as well as errors on each quantity. The central redshift was left as a free parameter, except when no satisfactory fit could be found, in which case the strongest component of the metal lines was chosen as the central redshift. The Doppler turbulent parameter $b$ was usually left as a free parameter and was sometimes fixed to 30 km s$^{-1}$ (for DLAs) or $20$ km s$^{-1}$ (for sub-DLAs) because of low signal-to-noise ratio or the presence of multiple DLAs. The $N_{{\rm H}\,{\sc \rm I}}$ fit is performed using the higher members of the Lyman series, in addition to Ly$\alpha$, where these are available. For fitting $N_{{\rm H}\,{\sc \rm I}}$, we usually used the velocity components seen in $\ion{O}{i}$; other low-ionization line components are used for the cases where $\ion{O}{i}$ is not covered by our data. Table \[dlamet\] summarizes the properties of the DLAs/sub-DLAs for which we obtained the ${\rm H}\,{\sc \rm I}$ column density for the first time and provides quasar emission redshifts, absorption redshifts, [H]{}[i]{} column densities, and metal line lists.
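For reference, a damped Ly$\alpha$ Voigt profile for a given column density and $b$ value can be computed as in the following sketch, which uses the standard Ly$\alpha$ atomic parameters ($f=0.4164$, $\Gamma=6.265\times10^{8}$ s$^{-1}$) and the Voigt-Hjerting function; this is a textbook parametrization, not the `FITLYMAN` implementation.

```python
import numpy as np
from scipy.special import wofz

def lya_voigt_flux(wave_aa, z_abs, log_nhi, b_kms):
    """Normalized flux of a damped Ly-alpha absorption line (Voigt profile).

    wave_aa : observed wavelengths [Angstrom]
    log_nhi : log10 of the H I column density [cm^-2]
    b_kms   : Doppler parameter [km/s]
    """
    c = 2.998e10                                   # speed of light [cm/s]
    lam0, f_osc, gamma = 1215.67, 0.4164, 6.265e8  # Ly-alpha atomic data
    nhi = 10.0 ** log_nhi
    b = b_kms * 1.0e5                              # Doppler parameter [cm/s]
    nu0 = c / (lam0 * 1.0e-8)
    nu = c / (np.asarray(wave_aa) / (1.0 + z_abs) * 1.0e-8)   # rest-frame frequency
    dnu_d = nu0 * b / c                            # Doppler width [Hz]
    a = gamma / (4.0 * np.pi * dnu_d)
    u = (nu - nu0) / dnu_d
    voigt_h = wofz(u + 1j * a).real                # Voigt-Hjerting function H(a, u)
    tau = nhi * 0.02654 * f_osc * voigt_h / (np.sqrt(np.pi) * dnu_d)
    return np.exp(-tau)

# e.g. a DLA with log N_HI = 20.5 and b = 30 km/s at z_abs = 2.5
wave = np.linspace(4200.0, 4320.0, 2000)
flux = lya_voigt_flux(wave, 2.5, 20.5, 30.0)
```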
Moreover, the majority of the $N_{{\rm H}\,{\sc \rm I}}$ measurements of the DLAs/sub-DLAs towards the 250 EUADP quasars come from high-resolution data, mostly from UVES or Keck/HIRES. In 7 cases, we cover the DLA/sub-DLA in our data but the $N_{{\rm H}\,{\sc \rm I}}$ in the literature is obtained from low/moderate-resolution spectra. For these 7 cases, we refitted the DLA/sub-DLA using the EUADP data and the new $N_{{\rm H}\,{\sc \rm I}}$ values are reported in Table \[newNH\]. For most of the cases, we find results consistent with the low-resolution studies. In the case of QSO B1114-0822, @storrie00 reported a DLA with log $N_{{\rm H}\,{\sc \rm I}}=20.3$ at $z_{\rm abs}=4.258$, while we find a sub-DLA with log $N_{{\rm H}\,{\sc \rm I}}=20.02\pm0.12$.
------------------ -------------- --------------- -------------------------------- ------------ --------------------------------------------------------------------------------------------------------------------------- ------ --
Quasar $z_{\rm em}$ $z_{\rm abs}$ log $N_{{\rm H}\,{\sc \rm I}}$ Ly series Metals New?
cm$^{-2}$
QSO J0008-2900 2.645 2.254 $20.22\pm0.10$ Ly$\beta$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Mg]{}[ii]{}, [Al]{}[iii]{}, [Si]{}[iv]{} no
QSO J0008-2901 2.607 2.491 $19.94\pm0.11$ Ly$\gamma$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Al]{}[ii]{}, [Si]{}[iv]{} no
QSO J0041-4936 3.240 2.248 $20.46\pm0.13$ Ly$\beta$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Al]{}[ii]{}, [Zn]{}[ii]{}, [Al]{}[iii]{}, [C]{}[iv]{} no
QSO B0128-2150 1.900 1.857 $20.21\pm0.09$ Ly$\alpha$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Al]{}[iii]{} no
J021741.8-370100 2.910 2.429 $20.62\pm0.08$ Ly$\beta$ [O]{}[i]{}, [Si]{}[ii]{} no
$\cdots$ $\cdots$ 2.514 $20.46\pm0.09$ Ly$\beta$ [Fe]{}[ii]{}, [Si]{}[ii]{} no
J060008.1-504036 3.130 2.149 $20.40\pm0.12$ Ly$\alpha$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Al]{}[ii]{}, [Al]{}[iii]{} yes
QSO B0952-0115 4.426 3.476 $20.04\pm0.07$ Ly$\alpha$ [Fe]{}[ii]{}, [Si]{}[ii]{}, [Al]{}[ii]{}, [Al]{}[iii]{}, [C]{}[iv]{} yes
Q1036-272 3.090 2.792 $20.65\pm0.13$ Ly$\beta$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [Al]{}[iii]{} yes
QSO B1036-2257 3.130 2.533 $19.30\pm0.10$ Ly$\alpha$ [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Al]{}[iii]{}, [Si]{}[iv]{}, [C]{}[iv]{} no
QSO B1036-268 2.460 2.235 $19.96\pm0.09$ Ly$\beta$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Al]{}[ii]{}, [Mg]{}[ii]{}, [Al]{}[iii]{}, [Si]{}[iv]{}, [C]{}[iv]{} yes
LBQS 1232+0815 2.570 1.720 $19.48\pm0.13$ Ly$\alpha$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [Al]{}[iii]{}, [Si]{}[iv]{}, [C]{}[iv]{} yes
QSO J1330-2522 3.910 2.654 $19.56\pm0.13$ Ly$\alpha$ [Si]{}[ii]{}, [Al]{}[ii]{}, [Al]{}[iii]{}, [Si]{}[iv]{}, [C]{}[iv]{} yes
QSO J1356-1101 3.006 2.397 $19.88\pm0.09$ Ly$\alpha$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{} yes
QSO J1723+2243 4.520 4.155 $19.23\pm0.12$ Ly$\gamma$ metal lines blended or not covered yes
LBQS 2114-4347 2.040 1.912 $19.50\pm0.10$ Ly$\alpha$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Al]{}[ii]{}, [Mg]{}[ii]{}, [Si]{}[iv]{}, [C]{}[iv]{} yes
J223941.8-294955 2.102 1.825 $19.84\pm0.14$ Ly$\alpha$ [O]{}[i]{}, [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Al]{}[ii]{}, [Mg]{}[ii]{}, [Al]{}[iii]{}, [Si]{}[iv]{}, [C]{}[iv]{} no
QSO B2318-1107 2.960 1.629 $20.52\pm0.14$ Ly$\alpha$ [Fe]{}[ii]{}, [Si]{}[ii]{}, [C]{}[ii]{}, [Al]{}[ii]{}, [Al]{}[iii]{}, [Si]{}[iv]{} yes
QSO B2342+3417 3.010 2.940 $20.18\pm0.10$ Ly$\gamma$ limited red wavelength coverage no
------------------ -------------- --------------- -------------------------------- ------------ --------------------------------------------------------------------------------------------------------------------------- ------ --
Notes on Individual Objects {#notes}
---------------------------
In this section, we provide details on the DLAs and sub-DLAs in the EUADP sample for which ${\rm H}\,{\sc \rm I}$ column density is determined in this work. Best fit parameters of the Voigt profile fits to the [H]{}[i]{} absorption lines are detailed below.
------------------ --------------- -------------------------------- ------------ --------------------------------
Quasar
$z_{\rm abs}$ log $N_{{\rm H}\,{\sc \rm I}}$ Survey/ log $N_{{\rm H}\,{\sc \rm I}}$
cm$^{-2}$ Instrument cm$^{-2}$
J004054.7-091526 4.538 $20.20\pm0.09$ SDSS $20.22\pm0.26$$^{1}$
$\cdots$ 4.740 $20.39\pm0.11$ SDSS $20.55\pm0.15$$^{2}$
QSO B1114-0822 4.258 $20.02\pm0.12$ WHT 20.3$^{3}$
J115538.6+053050 2.608 $20.37\pm0.11$ SDSS $20.47\pm0.23$$^{1}$
$\cdots$ 3.327 $21.00\pm0.10$ SDSS $20.91\pm0.27$$^{1}$
LBQS 2132-4321 1.916 $20.74\pm0.09$ LBQS 20.70$^{3}$
J235702.5-004824 2.479 $20.41\pm0.08$ SDSS 20.45$^{2}$
------------------ --------------- -------------------------------- ------------ --------------------------------
: New high-resolution $N_{{\rm H}\,{\sc \rm I}}$ measurements of DLAs/sub-DLAs previously observed at low/medium resolution.[]{data-label="newNH"}
\(1) @noterdaeme09; (2) @prochaska08; (3) @storrie00
1. QSO J0008-2900 ($z_{\rm em}=2.645$). The quasar was discovered during the course of the 2dF quasar redshift survey [@croom01]. An [H]{}[i]{} absorber at $z=2.253$ is reported by @tytler09. We find that the absorber is a sub-DLA with log $N_{{\rm H}\,{\sc \rm I}}=20.22\pm0.10$ and $b=28.5\pm2.2$ km s$^{-1}$ at $z_{\rm abs}=2.254$. The Lyman series lines down to Ly$\beta$ are fitted together. In the red part of the spectrum metal lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda$ 2344, 2374, 2382, 2586, [Si]{}[ii]{} $\lambda\lambda\lambda\lambda$ 1260, 1304, 1526, 1808, [C]{}[ii]{} $\lambda$ 1334, [Mg]{}[ii]{} $\lambda\lambda$ 2796, 2803, [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862, and [Si]{}[iv]{} $\lambda\lambda$ 1393, 1402 are detected at the redshift of the sub-DLA. Fig. \[J0008-2900\] shows our best fit result of the [H]{}[i]{} lines.
2. QSO J0008-2901 ($z_{\rm em}=2.607$). The quasar was also discovered during the course of the 2dF quasar redshift survey [@croom01]. An [H]{}[i]{} absorber at $z=2.491$ is reported by @tytler09. We find that the absorber at $z_{\rm abs}=2.491$ is a sub-DLA with log $N_{{\rm H}\,{\sc \rm I}}=19.94\pm0.11$ and $b=39\pm3.5$ km s$^{-1}$ detected down to Ly$\gamma$. Metal lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda$ 2344, 2374, 2382, 2586, [Si]{}[ii]{} $\lambda\lambda\lambda$ 1260, 1304, 1808, [C]{}[ii]{} $\lambda$ 1334, [Al]{}[ii]{} $\lambda$ 1670, and [Si]{}[iv]{} $\lambda\lambda$ 1393, 1402 are detected in the red part of the spectrum. Fig. \[J0008-2901\] shows our best fit result of the [H]{}[i]{} lines.
3. QSO J0041-4936 ($z_{\rm em}=3.240$). A damped absorber is known in this quasar from the Calán Tololo survey [@maza95]. From a low-resolution spectrum, @lopez01 measured the equivalent width of the [H]{}[i]{} absorption line to be ${\rm EW_{obs}}=34.60\,\AA$ but state that they cannot measure the [H]{}[i]{} column density due to the limited spectral resolution. Using the high-resolution UVES spectrum, we are able to measure the column density of the DLA to be log $N_{{\rm H}\,{\sc \rm I}}=20.46\pm0.13$ and $b=29\pm3.9$ km s$^{-1}$ at $z_{\rm abs}=2.248$ detected down to Ly$\beta$. Metal lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda$ 1608, 1611, [Si]{}[ii]{} $\lambda\lambda\lambda\lambda$ 1260, 1304, 1526, 1808, [C]{}[ii]{} $\lambda$ 1334, [Al]{}[ii]{} $\lambda$ 1670, [Zn]{}[ii]{} $\lambda$ 2026, [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862, and [C]{}[iv]{} $\lambda\lambda$ 1548, 1550 associated with the DLA are detected in the red part of the spectrum. Fig. \[J0041-4936\] shows the best fit result of the [H]{}[i]{} lines.
4. QSO B0128-2150 ($z_{\rm em}=1.900$). This quasar was discovered during the course of the Montréal-Cambridge-Tololo survey [@lamontagne00]. Two absorbing systems at $z_{\rm abs}=1.64$ and 1.85 are reported in the UVES observing proposal. We find that the system at $z_{\rm abs}=1.857$ is a sub-DLA with log $N_{{\rm H}\,{\sc \rm I}}=20.21\pm0.09$ and $b=25\pm2.6$ km s$^{-1}$. Metal lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda\lambda$ 2344, 2374, 2382, [Si]{}[ii]{} $\lambda\lambda\lambda$ 1260, 1304, 1808, [C]{}[ii]{} $\lambda$ 1334, and [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862 at the redshift of the absorber are observed in the red part of the spectrum. Fig. \[B0128-2150\] shows our best fit result of the [H]{}[i]{} column density. The absorbing system at $z_{abs}=1.64$ is below sub-DLA limit.
5. J021741.8-370100 ($z_{\rm em}=2.910$). Damped absorbers have been known in this quasar from the Calán Tololo survey [@maza96]. The column densities for these DLAs have not been reported before. We measure log $N_{{\rm H}\,{\sc \rm I}}=20.62\pm0.08$ at $z_{\rm abs}=2.429$ and log $N_{{\rm H}\,{\sc \rm I}}=20.46\pm0.09$ at $z_{\rm abs}=2.514$. Both absorbers are seen down to Ly$\beta$. The $b$ parameter is fixed to $b=30$ km s$^{-1}$ for both absorbers. Due to limited wavelength coverage only a few metal lines associated with the absorbers are seen in the spectrum. Metal lines from [O]{}[i]{} $\lambda$ 1302 and [Si]{}[ii]{} $\lambda\lambda\lambda\lambda$ 1190, 1193, 1260, 1304 are covered for the absorber at $z_{\rm abs}=2.429$. The lines from [Fe]{}[ii]{} $\lambda$ 1144 and [Si]{}[ii]{} $\lambda\lambda\lambda$ 1190, 1193, 1260 associated with the absorber at $z_{\rm abs}=2.514$ are covered in the spectrum. Fig. \[J021741.8-370100a\] and Fig. \[J021741.8-370100b\] show our best fit of [H]{}[i]{} lines for absorbers at $z_{\rm abs}=2.429$ and $z_{\rm abs}=2.514$ respectively.
6. J060008.1-504036 ($z_{\rm em}=3.130$). This quasar was discovered during the course of the Calán Tololo survey [@maza96]. No detailed analysis of this quasar has been published. A Lyman limit system at $z_{\rm LLS}=3.080$ is seen in the spectrum. We identified a DLA at $z_{\rm abs}=2.149$. The column density of the DLA is fitted using three strong components (at 24, 0, and -44 km s$^{-1}$) seen in $\ion{O}{i}$, resulting in a total column density of log $N_{{\rm H}\,{\sc \rm I}}=20.40\pm0.12$ with $b=20$ km s$^{-1}$ fixed for each component. Metal absorption lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda$ 1608, 1611, [Si]{}[ii]{} $\lambda\lambda\lambda\lambda$ 1260, 1304, 1526, 1808, [C]{}[ii]{} $\lambda$ 1334, [Al]{}[ii]{} $\lambda$ 1670, and [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862 at the redshift of the DLA are covered in the red part of the spectrum. Fig. \[J060008.1-504036\] shows our best fit of the neutral hydrogen column density of the DLA at $z_{\rm abs}=2.149$.
7. QSO B0952-0115 ($z_{\rm em}=4.426$). One damped absorber was previously reported in this quasar at $z_{\rm abs}=4.024$ [@storrie00; @prochaska07]. A Lyman limit system at $z_{\rm LLS}=4.242$ is detected in the spectrum. We find a sub-DLA at $z_{\rm abs}=3.476$ with log $N_{{\rm H}\,{\sc \rm I}}=20.04\pm0.07$ and $b=32\pm3.9$ km s$^{-1}$. Metal lines from [Fe]{}[ii]{} $\lambda\lambda$ 1608, 1611, [Si]{}[ii]{} $\lambda\lambda\lambda$ 1260, 1304, 1526, [Al]{}[ii]{} $\lambda$ 1670, [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862, and [C]{}[iv]{} $\lambda\lambda$ 1548, 1550 associated with the sub-DLA are covered in the red part of the spectrum. Fig. \[B0952-0115\] shows the best fit result of the [H]{}[i]{} column density.
8. Q1036-272 ($z_{\rm em}=3.090$). A low-resolution spectrum of this quasar has been previously published [@jakobsen92]. We report a DLA with $\ion{H}{i}$ column density of log $N_{{\rm H}\,{\sc \rm I}}=20.65\pm0.13$ and $b=35\pm5$ km s$^{-1}$ down to Ly$\beta$ at $z_{\rm abs}=2.792$. Several metal absorption lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda$ 1144, 2344, 2374, 2382, [Si]{}[ii]{} $\lambda\lambda\lambda\lambda$ 1190, 1193, 1260, 1304, and [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862 are covered in the red part of the spectrum. Fig. \[Q1036-272\] shows best fit result of the [H]{}[i]{} lines at $z_{\rm abs}=2.792$.
9. QSO B1036-2257 ($z_{\rm em}=3.130$). A damped and a sub-damped absorber were previously reported in this quasar at $z_{\rm abs}=2.777$ [@fox09] and $z_{\rm abs}=2.531$ [@lopez01], respectively. A Lyman limit system at $z_{\rm LLS}=2.792$ is also detected. From a low-resolution spectrum, @lopez01 measured the equivalent width of the [H]{}[i]{} absorption line to be ${\rm EW_{obs}}=13.52\,\AA$ but state that they cannot measure the [H]{}[i]{} column density due to the limited spectral resolution. Using the high-resolution UVES spectrum, we are able to identify two main components in the system from the metal lines at 0 and -144 km s$^{-1}$. Fitting these components we measure the total column density of the sub-DLA to be log $N_{{\rm H}\,{\sc \rm I}}=19.30\pm0.10$, with $b=26\pm3.4$ km s$^{-1}$ and a fixed $b=20$ km s$^{-1}$ for the components at 0 and -144 km s$^{-1}$, respectively. The component at 0 km s$^{-1}$ is stronger and heavily dominates over the component at -144 km s$^{-1}$. Metal lines from [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda$ 1144, 2344, 2382, 2586, [Si]{}[ii]{} $\lambda\lambda\lambda$ 1193, 1260, 1526, [C]{}[ii]{} $\lambda$ 1334, [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862, [Si]{}[iv]{} $\lambda\lambda$ 1393, 1402, and [C]{}[iv]{} $\lambda\lambda$ 1548, 1550 associated with the sub-DLA are detected in the red part of the spectrum. Fig. \[Q1036-2257\] shows the best fit result of the [H]{}[i]{} column density.
10. QSO B1036-268 ($z_{\rm em}=2.460$). A low-resolution slit spectrum of the quasar has been previously published [@jakobsen92]. We find four strong velocity components in the system at $z_{\rm abs}=2.235$ from the metal lines. The column density of the sub-DLA is fitted down to Ly$\beta$ with four components at 0, -55, -95, and -144 km s$^{-1}$, resulting in a total of log $N_{{\rm H}\,{\sc \rm I}}=19.96\pm0.09$ with $b=20$, 20, 20 (fixed), and $35\pm3.0$ km s$^{-1}$, respectively. The component at 0 km s$^{-1}$ is the strongest and heavily dominates over the other components. Strong metal absorption lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda\lambda\lambda\lambda$ 1144, 1608, 1611, 2344, 2374, 2382, 2586, [Si]{}[ii]{} $\lambda\lambda\lambda\lambda\lambda$ 1190, 1193, 1260, 1304, 1526, [C]{}[ii]{} $\lambda$ 1334, [Al]{}[ii]{} $\lambda$ 1670, [Mg]{}[ii]{} $\lambda\lambda$ 2796, 2803, [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862, [Si]{}[iv]{} $\lambda\lambda$ 1393, 1402, and [C]{}[iv]{} $\lambda\lambda$ 1548, 1550 at the redshift of the sub-DLA are covered in the red part of the spectrum. Fig. \[B1036-268\] shows our best fit result of the [H]{}[i]{} lines of the sub-DLA.
11. LBQS 1232+0815 ($z_{\rm em}=2.570$). A DLA was known at $z_{\rm abs}=2.338$ along the line-of-sight of this quasar [@prochaska07; @ivanchik10]. We report for the first time a sub-DLA at $z_{\rm abs}=1.720$ with total log $N_{{\rm H}\,{\sc \rm I}}=19.48\pm0.13$. From $\ion{O}{i}$, we identified two main velocity components from the system at 0 and -78 km s$^{-1}$ (see Fig. \[met1232\]) where $b$ is fixed at $b=20$ km s$^{-1}$ for both components. Metal absorption lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda\lambda$ 1608, 1611, 2344, 2374, 2382, [Si]{}[ii]{} $\lambda\lambda$ 1526, 1808, [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862, [Si]{}[iv]{} $\lambda\lambda$ 1393, 1402, and [C]{}[iv]{} $\lambda\lambda$ 1548, 1550 associated with the sub-DLA are covered in the red part of the spectrum. Fig. \[1232+0815\] shows the best fit result of [H]{}[i]{} column density of the sub-DLA. While the Ly$\alpha$ line is noisy, the presence of the sub-DLA is confirmed through the metal lines detected in the spectrum (see Fig. \[met1232\]).
12. QSO J1330-2522 ($z_{\rm em}=3.910$). Two sub-DLAs were previously reported at $z_{\rm abs}=2.910$ and 3.080 [@peroux01] along the line-of-sight of this quasar. A Lyman limit system at $z_{\rm LLS}=3.728$ is also seen. We report a new sub-DLA at $z_{\rm abs}=2.654$ with neutral hydrogen column density of log $N_{{\rm H}\,{\sc \rm I}}=19.56\pm0.13$ and $b=25.5\pm2.4$ km s$^{-1}$. Fig. \[J1330-2522\] shows our best fit result. Metal lines from [Si]{}[ii]{} $\lambda\lambda\lambda$ 1260, 1526, 1808, [Al]{}[ii]{} $\lambda$ 1670, [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862, [Si]{}[iv]{} $\lambda\lambda$ 1393, 1402, and [C]{}[iv]{} $\lambda\lambda$ 1548, 1550 are covered in the red part of the spectrum at the redshift of the sub-DLA. As an example of metal lines, low ionization lines from the sub-DLA confirming its presence are plotted in Fig. \[met1330\]. High ionization lines are blended with the lines from the other two sub-DLAs.
13. QSO J1356-1101 ($z_{\rm em}=3.006$). Two damped absorbers have been previously reported in this quasar at $z_{\rm abs}=2.501$ and 2.967 [@prochaska07; @noterdaeme08; @fox09]. We report for the first time a sub-DLA at $z_{\rm abs}=2.397$ in the quasar. From $\ion{O}{i}$, we find four strong components in the system at 29, 0, -48, and -300 km s$^{-1}$. The total $\ion{H}{i}$ column density of the system is log $N_{{\rm H}\,{\sc \rm I}}=19.88\pm0.09$ with $b=20$, 20, 20 (fixed), and $28\pm2.8$ km s$^{-1}$ for the components at 29, 0, -48, and -300 km s$^{-1}$, respectively. The component at 0 km s$^{-1}$ is the strongest and heavily dominates over the other components. Metal absorption lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda\lambda$ 1144, 2344, 2374, 2382, 2586, and [Si]{}[ii]{} $\lambda\lambda\lambda\lambda$ 1190, 1193, 1260, 1304 associated with the sub-DLA are detected in the red part of the spectrum. Fig. \[J1356-1101\] shows the best fit result of the neutral hydrogen column density of the sub-DLA.
14. QSO J1723+2243 ($z_{\rm em}=4.520$). A damped absorber has been previously reported in this quasar at $z_{\rm abs}=3.697$ [@prochaska07; @guimares09]. We observed a sub-DLA down to Ly$\gamma$ at $z_{\rm abs}=4.155$ with an [H]{}[i]{} column density of log $N_{{\rm H}\,{\sc \rm I}}=19.23\pm0.12$ and a fixed $b=20$ km s$^{-1}$. The metal lines associated with this system are either blended with other features or not covered by our data, so that this detection is based on the Lyman lines only and is a little less secure than the others. The detection of absorption lines from the Lyman series confirms the presence of the sub-DLA. Fig. \[J1723+2243\] shows our best fit result of the [H]{}[i]{} lines.
15. LBQS 2114-4347 ($z_{\rm em}=2.040$). The quasar has been discovered as part of the Large Bright Quasar Survey (LBQS; @morris91). No absorber has been previously reported in this quasar [@peroux03]. We observed, for the first time, a sub-DLA at $z_{\rm abs}=1.912$ with best fit column density of log $N_{{\rm H}\,{\sc \rm I}}=19.50\pm0.10$ and $b=31.7\pm3.4$ km s$^{-1}$. Several metal absorption lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda\lambda\lambda$ 1144, 1608, 2344, 2374, 2382, 2586, [Si]{}[ii]{} $\lambda\lambda\lambda\lambda\lambda$ 1193, 1260, 1304, 1526, 1808, [C]{}[ii]{} $\lambda$ 1334, [Al]{}[ii]{} $\lambda$ 1670, [Mg]{}[ii]{} $\lambda\lambda$ 2796, 2803, [Si]{}[iv]{} $\lambda\lambda$ 1393, 1402, and [C]{}[iv]{} $\lambda\lambda$ 1548, 1550 at the redshift of the sub-DLA are covered in the red part of the spectrum. Fig. \[2114-4347\] shows our best fit of [H]{}[i]{} column density.
16. J223941.8-294955 ($z_{\rm em}=2.102$). This quasar was discovered during the course of the 2dF quasar redshift survey. The absorber at $z_{\rm abs}=1.825$ in this quasar has been reported before by @cappetta10 with a column density of log $N_{{\rm H}\,{\sc \rm I}}=20.60$ (the [H]{}[i]{} fit was not shown there). We refitted the [H]{}[i]{} of this absorber and find that it is well fitted with an [H]{}[i]{} column density of log $N_{{\rm H}\,{\sc \rm I}}=19.84\pm0.14$ and $b=53.0\pm4.7$ km s$^{-1}$. Emission is clearly seen in the trough of this absorber, which likely corresponds to the Ly$\alpha$ emission from the sub-DLA host. Several metal absorption lines from [O]{}[i]{} $\lambda$ 1302, [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda\lambda\lambda$ 1144, 1608, 2344, 2374, 2382, 2586, [Si]{}[ii]{} $\lambda\lambda\lambda\lambda\lambda$ 1193, 1260, 1304, 1526, 1808, [C]{}[ii]{} $\lambda$ 1334, [Al]{}[ii]{} $\lambda$ 1670, [Mg]{}[ii]{} $\lambda\lambda$ 2796, 2803, [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862, [Si]{}[iv]{} $\lambda\lambda$ 1393, 1402, and [C]{}[iv]{} $\lambda\lambda$ 1548, 1550 associated with the sub-DLA are covered in the red part of the spectrum. Fig. \[J223941.8-294955\] shows our best fit result of the neutral hydrogen column density.
17. QSO B2318-1107 ($z_{\rm em}=2.960$). A DLA was previously known at $z_{\rm abs}=1.989$ along the line-of-sight of the quasar [@noterdaeme07; @fox09]. We find a new DLA in the quasar at $z_{\rm abs}=1.629$ with a neutral hydrogen column density of log $N_{{\rm H}\,{\sc \rm I}}=20.52\pm0.14$ and a fixed $b=30.0$ km s$^{-1}$. Several metal absorption lines from [Fe]{}[ii]{} $\lambda\lambda\lambda\lambda\lambda$ 1608, 1611, 2344, 2374, 2382, [Si]{}[ii]{} $\lambda\lambda\lambda\lambda$ 1193, 1260, 1304, 1526, [C]{}[ii]{} $\lambda$ 1334, [Al]{}[ii]{} $\lambda$ 1670, [Al]{}[iii]{} $\lambda\lambda$ 1854, 1862, and [Si]{}[iv]{} $\lambda\lambda$ 1393, 1402 associated with the DLA are covered in the red part of the spectrum. Fig. \[B2318-1107\] shows our best fit of the [H]{}[i]{} column to the DLA.
18. QSO B2342+3417 ($z_{\rm em}=3.010$). A damped absorber with log $N_{{\rm H}\,{\sc \rm I}}=21.10\pm0.10$ was previously reported at $z_{\rm abs}=2.909$ in the quasar [@prochaska03b; @fox09]. A joint fit was implemented by @prochaska03b to fit the DLA together with the neighboring sub-DLA, but the column density of the sub-DLA was not reported. We measure log $N_{{\rm H}\,{\sc \rm I}}=20.18\pm0.10$ down to Ly$\gamma$ at $z_{\rm abs}=2.940$ with a fixed $b=20.0$ km s$^{-1}$. The metal lines associated with this system are not seen because of limited wavelength coverage in the red part of the spectrum. Fig. \[B2342+3417\] shows the best fit result of the [H]{}[i]{} lines.
DLAs/sub-DLAs towards EUADP quasars
-----------------------------------
Besides the 150 DLAs/sub-DLAs, we found another 47 damped absorbers (21 DLAs and 26 sub-DLAs) in the literature along the lines-of-sight of our 250 EUADP quasars, for which the Ly$\alpha$ absorption lines are not covered by our data, either because of the limited wavelength coverage or because of non-overlapping settings. These systems are nevertheless of interest to us because their metal lines might still be included in our data and are helpful in further studies of the EUADP sample. These 150 and 47 damped absorbers (with and without Ly$\alpha$ covered by the EUADP dataset) make up a total of 197 DLAs/sub-DLAs along the lines-of-sight of the 250 EUADP quasars, of which 114 are DLAs and 83 are sub-DLAs. The EUADP sample is by design biased towards DLAs, and we therefore see fewer sub-DLAs than DLAs in the sample. Indeed, in the redshift range 0.2 $<$ $z$ $<$ 4.9, we expect twice as many sub-DLAs as DLAs based on the number density of absorbers at a mean redshift of $z=2.4$ [see @peroux05].
Global Properties of the EUADP sample
=====================================
\
The emission redshifts of the 250 EUADP sample quasars are initially obtained from the Simbad catalogue and later double-checked for the cases where Ly$\alpha$ emission from the quasar is covered by our data. The Ly$\alpha$ emission for 5 quasars (i.e., QSO J0332-4455, QSO B0528-2505, QSO B0841+129, QSO B1114-0822 and QSO J2346+1247) is not seen because of the presence of DLAs belonging to the “proximate DLA” class with $z_{\rm abs}\approx z_{\rm em}$ [e.g., @moller98; @ellison11; @zafar11]. For the emission redshifts of these 5 cases, we rely on the literature. The emission redshifts of all the other objects in the EUADP sample have been compared with measurements from the literature. For a few cases, the emission redshifts provided in the Simbad catalogue are not correct and the correct redshifts are obtained from the literature. In our sample, there are 38 quasars with emission redshifts $z_{\rm em}<1.5$. For these cases we cannot see Ly$\alpha$ emission from the quasar because of the limited spectral coverage; therefore, we relied mostly on the Simbad catalogue. However, other emission lines are covered in the spectra, helping us to confirm the emission redshifts. The emission redshifts of the 250 quasars of the EUADP sample range from $z_{\rm em}=0.191$ to $6.311$. Their distribution is shown in Fig. \[zemhist\] and is found to peak at $z_{\rm em}\simeq2.1$.
\
In Fig. \[zabshist\], the column density distribution of DLAs and sub-DLAs is presented (see @zafar12b for the complete list of DLA and sub-DLA [H]{}[i]{} column densities). It is worth noting that in the EUADP sample, damped absorbers with column densities up to log $N_{{\rm H}\,{\sc \rm I}}=21.85$ are seen, while higher column densities have been recently reported [@guimaraes12; @kulkarni12; @noterdaeme12]. As mentioned above, in the EUADP sample the number of sub-DLAs is lower than the number of DLAs. Indeed, a large fraction ($\sim$45%) of the quasars in the EUADP sample were observed because of a known strong damped absorber along their line-of-sight. A carefully selected subset of the EUADP will have to be built for the purpose of statistical analysis of DLAs and sub-DLAs [@zafar12b].
Lines-of-sight of Interest
==========================
In the EUADP sample, a few quasar lines-of-sight are rich and contain more than one absorber. One interesting example is the line-of-sight of QSO J0133$+$0400, containing six DLAs and sub-DLAs. In this line-of-sight, two sub-DLAs at $z_{\rm abs}=3.995$ and 3.999 are separated by only $\Delta v=240$ km s$^{-1}$. Three more examples of rich lines-of-sight are: $i)$ QSO J0006-6208 with 3 DLAs and one sub-DLA, $ii)$ QSO J0407-4410 with 4 DLAs, three of which have log $N_{{\rm H}\,{\sc \rm I}}\gtrsim21.0$, and $iii)$ QSO B0841$+$129 containing 3 DLAs (with log $N_{{\rm H}\,{\sc \rm I}}\gtrsim21.0$) and one sub-DLA. Such complex groups of systems can be classified as multiple DLAs (MDLAs; @lopez03).
In addition, there are four quasar pairs in the EUADP sample: $i)$ QSO J0008-2900 & QSO J0008-2901, separated by $1.3'$. Two sub-DLAs at $z=2.254$ and $z=2.491$ are seen along the lines-of-sight of QSO J0008-2900 and QSO J0008-2901, respectively. $ii)$ J030640.8-301032 & J030643.7-301107, separated by $\sim$0.85 arcmin; no absorber is seen towards either member of this pair. $iii)$ QSO B0307-195A & B, separated by $\sim$1 arcmin. A sub-DLA (at $z=1.788$ with log $N_{{\rm H}\,{\sc \rm I}}=19.00\pm0.10$) is detected along the line-of-sight of QSO B0307-195B [@odorico02], but is not seen towards its companion. $iv)$ QSO J1039-2719 & QSO B1038-2712, separated by $17.9'$. A sub-DLA (at $z=2.139$ with log $N_{{\rm H}\,{\sc \rm I}}=19.90\pm0.05$) is detected along the line-of-sight of QSO J1039-2719 [@odorico02], but no damped absorber is seen towards its companion.
Conclusion
==========
In this study, high-resolution spectra taken from the UVES Advanced Data Products archive have been processed and combined to make a sample of 250 individual quasar spectra. The high resolution of these spectra allows us to detect absorbers down to log $N_{{\rm H}\,{\sc \rm I}}=19.00$ cm$^{-2}$. Automated and visual searches for quasar absorbers have been undertaken, leading to a sample of 93 DLAs and 57 sub-DLAs. An extensive search in the literature shows that 6 of these DLAs and 13 of these sub-DLAs have their [H]{}[i]{} column densities measured here for the first time, of which 10 are new identifications. These new damped absorbers are confirmed by detecting metal lines associated with the absorber and/or lines from the higher members of the Lyman series. The [H]{}[i]{} column densities of all these new absorbers are determined by fitting a Voigt profile to the Ly$\alpha$ line together with the lines from higher orders of the Lyman series whenever covered. Our data contain five proximate DLA cases and four quasar pairs. We found that a few quasar lines-of-sight are very rich, particularly the line-of-sight of QSO J0133+0400, which contains six DLAs and sub-DLAs.
In an accompanying paper, [@zafar12b], we use a carefully selected subset of this dataset to study the statistical properties of DLAs and sub-DLAs, measure their column density distribution, and quantify the contribution of sub-DLAs to the [H]{}[i]{} gas mass density. Further studies using specifically designed subsets of the EUADP dataset will follow.
Acknowledgments
===============
This work has been funded within the BINGO! (‘history of Baryons: INtergalactic medium/Galaxies cO-evolution’) project by the Agence Nationale de la Recherche (ANR) under the allocation ANR-08-BLAN-0316-01. We would like to thank the ESO staff for making the UVES Advanced Data Products available to the community. We are thankful to Stephan Frank, Jean-Michel Deharveng and Bruno Milliard for helpful comments.
------------------ ------------- -------------- ----------------- -------------- ---------------------------------- ------------------------------- ------ ---------------
Quasar RA Dec Mag. $z_{\rm em}$ Wavelength coverage Prog. ID No. T$_{\rm exp}$
2000 2000 Å spec sec
LBQS 2359-0216B 00 01 50.00 -01 59 40.00 18.00 2.810 3290-5760, 5835-8520, 8660-10420 66.A-0624(A), 073.B-0787(A)Ê 16 21,300
QSO J0003-2323 00 03 44.92 -23 23 54.80 16.70 2.280 3050-5760, 5835-8520, 8660-10420 166.A-0106(A)Ê 36 43,200
QSO B0002-422 00 04 48.28 -41 57 28.10 17.40 2.760 3050-5760, 5835-8520, 8660-10420 166.A-0106(A) 84 97,421
QSO J0006-6208 00 06 51.61 -62 08 03.70 18.29 4.455 4780-5750, 5830-6800 69.A-0613(A)Ê 4 7,200
QSO J0008-0958 00 08 15.33 -09 58 54.45 18.85 1.950 3300-4510,5720-7518,7690-9450 076.A-0376(A)Ê 18 21,600
QSO J0008-2900 00 08 52.72 -29 00 43.75 19.12$^\dagger$ 2.645 3300-4970,5730-10420 70.A-0031(A)Ê, 075.A-0617(A)Ê 17 24,000
QSO J0008-2901 00 08 57.74 -29 01 26.61 19.85$^\dagger$ 2.607 3300-4970,5730-10420 70.A-0031(A)Ê, 075.A-0617(A)Ê 18 17,700
QSO B0008+006 00 11 30.56 +00 55 50.71 18.50 2.309 3760-4980,6700-8510,8660-10420 267.B-5698(A)Ê 6 7,200
LBQS 0009-0138 00 12 10.89 -01 22 07.76 18.10 1.998 3300-4510,4770-5755,5835-6805 078.A-0003(A)Ê 6 6,000
LBQS 0010-0012 00 13 06.15 +00 04 31.90 19.43 2.145 3050-3870,4610-5560,5670-6650 68.A-0600(A) 3 5,400
LBQS 0013-0029 00 16 02.41 -00 12 25.08 17.00 2.087 3054-6800 66.A-0624(A)Ê, 267.A-5714(A)Ê 66 88,200
LBQS 0018+0026 00 21 33.28 +00 43 00.99 18.20 1.244 3320-4510,4620-5600,5675-6650 078.A-0003(A)Ê 3 3,000
QSO B0027-1836 00 30 23.63 -18 19 56.00 17.90 2.550 3050-3870,4780-5755,5835-6800 073.A-0071(A)Ê 15 23,772
J004054.7-091526 00 40 54.66 -09 15 26.92 24.55 4.976 6700-8515,8655-10420 072.B-0123(D)Ê 6 7,285
QSO J0041-4936 00 41 31.44 -49 36 11.80 17.90 3.240 3290-4520,4620-5600,5675-6650 68.A-0362(A) 3 3,600
QSO B0039-407 00 42 01.20 -40 30 39.00 18.50 2.478 3290-4520,4620-5600,5675-6650 075.A-0158(A)Ê 15 16,500
QSO B0039-3354 00 42 16.45 -33 37 54.50 17.74 2.480 3295-4520 072.A-0442(A)Ê 1 1,800
LBQS 0041-2638 00 43 42.80 -26 22 11.00 17.79 3.053 3757-4980,6700-8520,8660-10420 70.A-0031(A)Ê 6 5,400
LBQS 0041-2707 00 43 51.81 -26 51 28.00 17.83 2.786 3757-4980,6700-8520,8660-10420 70.A-0031(A)Ê 6 5,400
QSO B0042-2450 00 44 28.10 -24 34 19.00 17.30 0.807 3300-4520,4620-5600,5675-6650 67.B-0373(A)Ê 6 1,800
QSO B0042-2656 00 44 52.26 -26 40 09.12 19.00 3.358 3750-4980,6700-8520,8660-10420 70.A-0031(A)Ê 6 3,600
LBQS 0042-2930 00 45 08.54 -29 14 32.59 17.92$^\dagger$ 2.388 3290-4510,4780-5750,5830-6800 072.A-0442(A)Ê 3 1,800
LBQS 0042-2657 00 45 19.60 -26 40 51.00 18.72 2.898 3757-4980,6700-8520,8660-10420 70.A-0031(A)Ê 9 5,400
J004612.2-293110 00 46 12.25 -29 31 10.17 19.89 1.675 3050-3870,4620-5600,5675-6650 077.B-0758(A)Ê 26 45,470
LBQS 0045-2606 00 48 12.59 -25 50 04.80 18.08$^\dagger$ 1.242 3300-4520,4615-5595,5670-6650 67.B-0373(A)Ê 6 3,000
QSO B0045-260 00 48 16.84 -25 47 44.20 18.60 0.486 3300-4520,4615-5595,5670-6650 67.B-0373(A) 6 7,200
QSO B0046-2616 00 48 48.41 -26 00 20.30 18.70 1.410 3300-4520,4615-5595,5670-6650 67.B-0373(A)Ê 6 7,200
LBQS 0047-2538 00 50 24.89 -25 22 35.09 18.40 1.969 3290-4520,4615-5595,5670-6650 67.B-0373(A)Ê 9 5,400
LBQS 0048-2545 00 51 02.30 -25 28 48.00 18.20 2.082 3295-4520,4615-5595,5670-6650 67.B-0373(A)Ê 12 7,200
QSO B0018-2608 00 51 09.10 -25 52 15.00 18.20 2.249 3300-4520,4615-5595,5670-6650 67.B-0373(A)Ê 6 3,600
LBQS 0049-2535 00 52 11.10 -25 18 59.00 19.20 1.528 3300-4520,4615-5595,5670-6650 67.B-0373(A)Ê 12 7,200
LBQS 0051-2605 00 54 19.78 -25 49 01.20 18.04$^\dagger$ 0.624 3300-4520,4615-5595,5670-6650 67.B-0373(A)Ê 15 9,000
QSO B0055-26 00 57 58.00 -26 43 14.90 17.47 3.662 3050-5755,5835-6810 65.O-0296(A)Ê 17 37,800
QSO B0058-292 01 01 04.60 -28 58 00.90 18.70 3.093 3300-5750, 5830-8515,8650-10420 67.A-0146(A), 66.A-0624(A)Ê 29 41,222
LBQS 0059-2735 01 02 17.05 -27 19 49.91 18.00$^\dagger$ 1.595 3100-5755,5830-8520,8660-10420 078.B-0433(A)Ê 24 22,800
QSO B0100+1300 01 03 11.27 +13 16 17.74 16.57 2.686 3290-4525,6695-8520,8660-10420 074.A-0201(A)Ê, 67.A-0022(A)Ê 7 12,600
QSO J0105-1846 01 05 16.82 -18 46 41.90 18.30 3.037 3300-5595,5670-6650,6690-8515, 67.A-0146(A) 9 14,400
$\cdots$ 8650-10420
QSO B0102-2931 01 05 17.95 -29 15 11.41 19.90 2.220 3300-4515,5720-7520,7665-9460 075.A-0617(A) 3 3,900
QSO B0103 -260 01 06 04.30 -25 46 52.30 18.31 3.365 4165-5160,5220-6210 66.A-0133(A)Ê 16 24,800
QSO B0109-353 01 11 43.62 -35 03 00.40 16.90 2.406 3055-5755,5835-8515,8650-10420 166.A-0106(A) 39 46,800
QSO B0112-30 01 15 04.70 -30 25 14.00 $\cdots$ 2.985 4778-5760,5835-6810 68.A-0600(A)Ê 6 13,500
QSO B0117+031 01 15 17.10 -01 27 04.51 18.90 1.609 4165-5160,5230-6200 69.A-0613(A) 2 3,600
QSO J0123-0058 01 23 03.23 -00 58 18.90 18.76 1.550 4165-5160,5230-6200 078.A-0646(A)Ê 4 5,400
QSO J0124+0044 01 24 03.78 +00 44 32.67 17.90 3.834 4165-6810 69.A-0613(A) 10 15,600
QSO B0122-379 01 24 17.38 -37 44 22.91 17.10 2.190 3050-5760,5835-8520,8660-10420 166.A-0106(A)Ê 39 46,800
QSO B0122-005 01 25 17.15 -00 18 28.88 18.60 2.278 3300-4520,4620,5600,5675-6650 075.A-0158(A) 12 13,200
QSO B0128-2150 01 31 05.50 -21 34 46.80 15.57 1.900 3045-3868,4785-5755,5830-6810 072.A-0446(B) 6 6,130
QSO B0130-403 01 33 01.93 -40 06 28.00 17.40 3.023 3300-4520,4775-5755,5828-6801 70.B-0522(A)Ê 3 3,065
QSO J0133+0400 01 33 40.40 +04 00 59.00 18.30 4.154 4180-5165,5230-6410, 69.A-0613(A), 073.A-0071(A), 5 9,100
$\cdots$ 6700-8520,8660-10250 074.A-0306(A)Ê
QSO J0134+0051 01 34 05.77 00 51 09.35 18.37 1.522 3050-3870,4620-5600,5675-6650 074.A-0597(A)Ê 8 14,400
QSO B0135-42 01 37 24.41 -42 24 16.80 18.46 3.970 4170-5165,5235-6210 69.A-0613(A)Ê 2 3,600
QSO J0138-0005 01 38 25.54 -00 05 34.52 18.97 1.340 3050-3870,4620-5598,5675-6650 078.A-0646(A)Ê 9 15,600
QSO J0139-0824 01 39 01.40 -08 24 44.08 18.65 3.016 3300-4520 074.A-0201(A)Ê 1 4,800
QSO J0143-3917 01 43 33.64 -39 17 00.00 16.28 1.807 3050-5755,5830-8515,8655-10420 67.A-0280(A)Ê 33 39,600
QSO J0153+0009 01 53 18.19 00 09 11.39 17.80 0.837 3100-3870,4625-5600,5675-6650 078.A-0646(A)Ê 6 10,800
QSO J0153-4311 01 53 27.19 -43 11 38.20 16.80 2.789 3100-5757,5835-8520,8650-10420 166.A-0106(A) 53 63,900
QSO J0157-0048 01 57 33.88 -00 48 24.49 17.88 1.545 3300-4515,4785-5757,5835-6810 078.A-0003(A)Ê 6 6,000
QSO B0201+113 02 03 46.66 +11 34 45.41 19.50 3.610 3760-4980,6700-8515,8660-10420 68.A-0461(A) 3 3,000
QSO J0209+0517 02 09 44.62 +05 17 14.10 17.80 4.174 3300-4520,4775-5754,5830-6810 69.A-0613(A) 3 3,600
J021741.8-370100 02 17 41.78 -37 00 59.60 18.40 2.910 3300-4520 072.A-0442(A)Ê 2 7,200
QSO J0217+0144 02 17 48.95 +01 44 49.70 18.33 1.715 4165-5160,5230-6210 078.A-0646(A)Ê 4 5,400
QSO B0216+0803 02 18 57.36 +08 17 27.43 18.10 2.996 3065-4520,4775-5755,5835-8525, 073.B-0787(A), 072.A-0346(A)Ê 30 37,200
$\cdots$ 8665-10250
QSO B0227-369 02 29 28.47 -36 43 56.78 19.00 2.115 3300-4520,4620-5600,5675-6650 076.A-0389(A)Ê 12 12,800
QSO B0237-2322 02 40 08.18 -23 09 15.78 16.78$^\dagger$ 2.225 3050-5757,5835-8525,8660-10420 166.A-0106(A) 78 91,149
QSO J0242+0049 02 42 21.88 +00 49 12.67 18.44 2.071 3300-4985,5715-7520,7665-9460 075.B-0190(A)Ê 12 20,700
------------------ ------------- -------------- ----------------- -------------- ---------------------------------- ------------------------------- ------ ---------------
------------------ ------------- -------------- ----------------- -------------- -------------------------------- -------------------------------- ------ ---------------
Quasar RA Dec Mag. $z_{\rm em}$ Wavelength coverage Prog. ID No. T$_{\rm exp}$
2000 2000 Å spec sec
QSO J0243-0550 02 43 12.47 -05 50 55.30 19.00 1.805 3300-4518,4620-5600,5675-6650 076.A-0389(A)Ê 9 6,180
QSO B0241-01 02 44 01.84 -01 34 03.70 17.06$^\star$ 4.053 4165-6810 69.A-0613(A), 074.A-0306(A) 4Ê 5400
QSO B0244-1249 02 46 58.47 -12 36 30.80 18.40 2.201 3290-4520,4625-5600,5675-6650 076.A-0389(A)Ê 6 6,000
QSO B0253+0058 02 56 07.26 +01 10 38.62 18.88 1.346 3300-4515,4620-5600,5675-6650 074.A-0597(A)Ê 9 16,200
QSO B0254-404 02 56 34.03 -40 13 00.30 17.40 2.280 3292-4520 072.A-0442(A)Ê 1 1,800
QSO J0300+0048 03 00 00.57 +00 48 28.00 19.48 0.314 3100-5755 267.B-5698(A)Ê 9 16,492
J030640.8-301032 03 06 40.93 -30 10 31.91 19.32$^\dagger$ 2.096 3050-5600,5670-6650,6700-8525, 70.A-0031(A)Ê 12 8,558
$\cdots$ 8665-10420
J030643.7-301107 03 06 43.77 -30 11 07.58 19.85$^\dagger$ 2.130 3060-5595,5675-6650,6700-8520, 70.A-0031(A) 15 12,600
$\cdots$ 8665-10420
QSO B0307-195A 03 10 06.00 -19 21 25.00 18.60 2.144 3050-5758,5835-8525,8660-10420 69.A-0204(A), 68.A-0216(A),Ê 18 28,450
$\cdots$ 65.O-0299(A)
QSO B0307-195B 03 10 09.00 -19 22 08.00 19.10 2.122 3065-5758,5835-8520,8660-10420 69.A-0204(A), 68.A-0216(A), 27 37,700
$\cdots$ 65.O-0299(A)
J031856.6-060038 03 18 56.63 -06 00 37.75 19.31 1.927 3100-5760,5835-8525,8665-10250 078.B-0433(A)Ê 24 22,800
QSO B0329-385 03 31 06.41 -38 24 04.60 17.20 2.423 3060-5758,5835-8520,8655-10420 166.A-0106(A) 36 31,200
QSO B0329-2534 03 31 08.92 -25 24 43.27 17.10 2.736 3100-5763,5830-8525,8665-10420 166.A-0106(A) 66 77,363
QSO J0332-4455 03 32 44.10 -44 55 57.40 17.90 2.679 3300-4515,4780-5760,5840-6810 078.A-0164(A)Ê 12 12,000
QSO B0335-122 03 37 55.43 -12 04 04.67 20.11 3.442 3755-4980,6700-8520,8660-10420 71.A-0067(A)Ê 11 16,200
QSO J0338+0021 03 38 29.31 +00 21 56.30 18.79$^\star$ 5.020 6700-8525,8660-10420 072.B-0123(D)Ê 6 10,450
QSO J0338-0005 03 38 54.78 -00 05 20.99 18.90 3.049 3100-3870 074.A-0201(A)Ê 1 4,600
QSO B0336-017 03 39 00.90 -01 33 18.00 19.10 3.197 3757-4983,6700-8520,8660-10420 68.A-0461(A) 9 15,600
QSO B0347-383 03 49 43.68 -38 10 31.10 17.30 3.222 3300-5757,5835-8520,8660-10420 68.B-0115(A)Ê 18 28,800
QSO B0347-2111 03 49 57.83 -21 02 47.74 21.10 2.944 3300-4520,4775-5755,5832-6810 71.A-0067(A)Ê 9 8,873
J035320.2-231418 03 53 20.10 -23 14 17.80 17.00 1.911 3300-4515,4620-5600,5675-6650 078.A-0068(A) 3 1,120
QSO J0354-2724 03 54 05.56 -27 24 21.00 17.90 2.823 3300-4515,4780-5760,5835-6810 078.A-0003(A)Ê 12 10,240
J040114.0-395132 04 01 14.06 -39 51 32.70 17.00 1.507 3300-4515,4620-5600,5675-6650 078.A-0068(A) 3 2,160
QSO J0403-1703 04 03 56.60 -17 03 23.00 18.70 4.227 4780-5757,5832-8525,8660-10420 074.A-0306(A) 10 12,625
QSO J0407-4410 04 07 18.08 -44 10 14.10 17.60 3.020 3300-8520,8665-10420 68.A-0600(A), 70.A-0017(A),Ê 54 94,000
$\cdots$ 68.A-0361(A)
QSO J0422-3844 04 22 14.79 -38 44 52.90 16.90 3.123 3300-5600,5675-6650,6700-8515, 166.A-0106(A) 49 57,750
$\cdots $ 8660-10420
QSO J0427-1302 04 27 07.30 -13 02 53.64 17.50 2.166 3285-4515,4780-5760,5835-6810 078.A-0003(A)Ê 6 6,000
QSO J0430-4855 04 30 37.31 -48 55 23.70 16.20$^\dagger$ 1.940 3060-5765,5845-8530,8665-10420 66.A-0221(A)Ê 27 29,559
QSO B0432-440 04 34 03.23 -43 55 47.80 19.60 2.649 3300-4520,4780-5755,5830-6810 71.A-0067(A)Ê 12 14,300
QSO B0438-43 04 40 17.17 -43 33 08.62 19.50 2.852 3100-5757,5835-8515,8660,10420 072.A-0346(A)Ê, 69.A-0051(A)Ê 17 22,425
QSO J0441-4313 04 41 17.32 -43 13 45.40 16.40 0.593 3757-5758,5835-8515,8665-10420 70.C-0007(A) 18 20,742
QSO B0449-1645 04 52 13.60 -16 40 12.00 17.00 2.679 3285-4515,4620-5600,5675-6650 078.A-0646(A)Ê 9 15,600
QSO B0450-1310B 04 53 12.80 -13 05 46.00 16.50 2.250 3060-3874,4780-5760,5835-8525, 70.B-0258(A)Ê 15 15,000
$\cdots$ 8670-10420
QSO J0455-4216 04 55 23.05 -42 16 17.40 17.30 2.661 3055-5758,5835-8520,8660-10420 166.A-0106(A) 51 61,994
PKS 0454-220 04 56 08.93 -21 59 09.54 16.10 0.534 3050-3870,4170-5162,5230-6210 076.A-0463(A)Ê 9 8,500
4C-02.19 05 01 12.81 -01 59 14.26 18.40 2.286 3100-8520,8660-10250 074.B-0358(A), 66.A-0624(A), Ê 26 42,422
$\cdots$ 076.A-0463(A)
QSO B0512-3329 05 14 10.91 -33 26 22.40 17.00 1.569 3300-4520,4620-5605,5675-6655 70.A-0446(A), 66.A-0087(A)Ê 30 30,000
QSO B0515-4414 05 17 07.63 -44 10 55.50 14.90 1.713 3060-3875,4785-5760,5840-8525, 66.A-0212(A)Ê 9 13,500
$\cdots$ 8665-10420
J051939.8-364613 05 19 39.82 -36 46 11.60 17.30 1.394 3290-4515,4620-5600,5675-6650 078.A-0068(A)Ê 3 1,520
QSO B0528-2505 05 30 07.96 -25 03 29.84 17.30 2.765 3050-6810 68.A-0106(A), 66.A-0594(A), 43 78,988
$\cdots$ 68.A-0600(A)Ê
QSO J0530+13 05 30 56.42 +13 31 55.15 20.00 2.070 3300-5160,5230-6210,6705-8529, 70.C-0239(A) 18 12,600
$\cdots$ 8665-10420
QSO B0551-36 05 52 46.19 -36 37 27.60 17.00 2.318 3060-5757,5835-6810 66.A-0624(A) 17 26,100
J060008.1-504036 06 00 08.10 -50 40 36.80 18.20$^\dagger$ 3.130 3300-4520,4620-5600,5675-6650 68.A-0639(A)Ê 20 24,000
QSO B0606-2219 06 08 59.69 -22 20 20.96 20.00 1.926 3300-4515,4625-5600,5675-6650 076.A-0389(A)Ê 13 16,000
QSO B0642-5038 06 43 27.00 -50 41 12.80 18.50 3.090 3300-4520,4785-5757,5835-6810 073.A-0071(A)Ê 9 17,500
QSO B0736+01 07 39 18.03 +01 37 04.62 16.47 0.191 3300-5160,5230-6208,6715-8525, 70.C-0239(A) 26 17,600
$\cdots$ 8665-10420
QSO B0810+2554 08 13 31.29 +25 45 03.06 15.40$^\dagger$ 1.510 3050-3868,4625-5598,5675-6650 68.A-0107(A)Ê 12 14,100
QSO B0827+2421 08 30 52.08 +24 10 59.82 17.30 0.939 3060-3872,4620-5600,5675-6650 68.A-0170(A), 69.A-0371(A)Ê 23 34,071
QSO B0841+129 08 44 24.20 +12 45 48.90 18.50 2.505 3290-4520,6710-8525,8665-10420 70.B-0258(A)Ê 9 10,800
QSO B0908+0603 09 11 27.61 +05 50 54.28 18.30$^\dagger$ 2.793 3300-4522,4620-5600,5675-6650 70.A-0439(A)Ê 39 46,800
QSO B0913+0715 09 16 13.94 +07 02 24.30 17.10 2.785 3280-10420 078.A-0185(A)Ê, 68.B-0115(A)Ê 62 77,523
QSO B0919-260 09 21 29.36 -26 18 43.28 18.40 2.299 3300-4518,4618-5600,5675-6650 075.A-0158(A)Ê 12 13,200
QSO B0926-0201 09 29 13.57 -02 14 46.40 16.44 1.661 3050-5757,5835-8525,8660-10420 072.A-0446(A)Ê 15 15,325
QSO B0933-333 09 35 09.23 -33 32 37.69 20.00 2.910 3760-4984,6705-8525,8660-10420 71.A-0067(A), 69.A-0051(A)Ê 12 14,300
QSO B0951-0450 09 53 55.70 -05 04 18.00 19.00 4.369 4785-5750,5832-6807 072.A-0558(A)Ê 8 30,960
QSO B0952+179 09 54 56.82 +17 43 31.22 17.23 1.472 3060-3870,4618-5600,5675-6650 69.A-0371(A)Ê 9 17,100
QSO B0952-0115 09 55 00.10 -01 30 07.00 18.70 4.426 3760-5754,5832-8525,8662-10420 70.A-0160(A), 072.A-0558(A) 55 97,676
QSO B1005-333 10 07 31.39 -33 33 06.72 18.00 1.837 3300-4518,4620-5600,5675-6650 075.A-0158(B)Ê 6 6,000
QSO J1009-0026 10 09 30.47 -00 26 19.13 17.53 1.241 3300-4515,4620-5600,5675-6650 078.A-0003(A)Ê 6 6,000
LBQS 1026-0045B 10 28 37.02 -01 00 27.56 18.40 1.530 3300-4515,4620-5600,5675-6650 078.A-0003(A)Ê 12 10,240
------------------ ------------- -------------- ----------------- -------------- -------------------------------- -------------------------------- ------ ---------------
------------------ ------------- -------------- ----------------- -------------- -------------------------------- ------------------------------ ------ ---------------
Quasar RA Dec Mag. $z_{\rm em}$ Wavelength coverage Prog. ID No. T$_{\rm exp}$
2000 2000 Å spec sec
QSO B1027+0540 10 30 27.10 +05 24 55.00 18.87$^\star$ 6.311 8000-10040 69.A-0529(A)Ê 6 33,400
Q1036-272 10 38 49.00 -27 29 19.10 21.50 3.090 3757-4980,6700-8520,8660-10420 70.A-0017(A)Ê 9 12,133
QSO B1036-2257 10 39 09.52 -23 13 26.20 18.00 3.130 3300-5758,5838-8525,8660-10420 68.B-0115(A)Ê 12 21,600
QSO J1039-2719 10 39 21.86 -27 19 16.45 17.40 2.193 3290-5600,5675-6650,6705-8525, 69.B-0108(A), 70.A-0017(A)Ê 30 36,500
$\cdots$ 8665-10420
QSO B1038-2712 10 40 32.23 -27 27 49.00 17.70 2.331 3045-3865,4780-5757,5835-6810 67.B-0398(A), 65.O-0063(B)Ê 9 16,200
QSO B1036-268 10 40 40.32 -27 24 36.40 20.00 2.460 3285-5600,5675-6650,6700-8520, 67.B-0398(A) 15 27,000
$\cdots$ 8660-10420
QSO J1044-0125 10 44 33.04 -01 25 02.20 18.31$^\star$ 5.740 8660-10420 268.A-5767(A), 69.A-0529(A)Ê 7 30,270
J104540.7-101813 10 45 40.56 -10 18 12.80 17.40 1.261 3300-4515,4620-5600,5675-6650 078.A-0068(A)Ê 3 1,650
J104642.9+053107 10 46 42.84 +05 31 07.02 18.00 2.682 3060-3870,4780-5755,5835-6810 074.A-0780(A) 9 12,528
QSO B1044+059 10 46 56.71 +05 41 50.32 18.30 1.226 3060-3870,4780-5757,5835-6810 074.A-0780(A)Ê 12 13,488
QSO B1044+056 10 47 33.16 +05 24 54.88 17.99 1.306 3060-3870,4780-5757,5835-6810 074.A-0780(A)Ê 10 13,952
QSO B1045+056 10 48 00.40 +05 22 09.76 20.30 1.230 3060-3870,4780-5757,5830-6810 074.A-0780(A)Ê 12 20,932
QSO B1052-0004 10 54 40.98 -00 20 48.47 18.51 1.021 3300-4515,4620-5600,5675-6650 078.A-0003(A)Ê 12 10,240
QSO B1055-301 10 58 00.43 -30 24 55.03 19.50 2.523 3300-4520,4780-5757,5840-6810 71.A-0067(A) 9 10,725
QSO B1101-26 11 03 25.31 -26 45 15.88 16.00 2.145 3780-4985,6710-8525,8665-10250 076.A-0463(A)Ê 27 60,672
QSO B1104-181 11 06 33.39 -18 21 23.80 15.90 2.319 3060-8520,8655-10420 67.A-0278(A)Ê 51 60,948
QSO J1107+0048 11 07 29.03 +00 48 11.27 17.66 1.392 3300-4515,4620-5600,5675-6650 074.A-0597(A)Ê 6 7,200
QSO B1108-07 11 11 13.60 -08 04 02.00 18.10 3.922 3760-8520,8660-10420 67.A-0022(A), 68.B-0115(A), 14 28,498
$\cdots$ 68.A-0492(A)Ê
QSO J1113-1533 11 13 50.61 -15 33 33.90 18.70 3.370 3760-5600,5675-6650 68.A-0492(A)Ê 10 25,200
QSO B1114-220 11 16 54.50 -22 16 52.00 20.20 2.282 3300-4520,4620-5600,5675-6650 71.B-0081(A)Ê 30 14,115
QSO B1114-0822 11 17 27.10 -08 38 58.00 19.40 4.495 4780-5757,5835-8520,8665-10250 074.A-0801(B) 9 28,525
QSO B1122-168 11 24 42.87 -17 05 17.50 16.50 2.400 3290-4520,4620-5600,5675-6650 68.A-0570(A), 073.B-0420(A)Ê 82 104,493
J112910.9-231628 11 29 10.86 -23 16 28.17 17.30 1.019 3300-4520,4620-5600,5675-6650 078.A-0068(A)Ê 3 1,470
QSO J1142+2654 11 42 54.26 +26 54 57.50 17.00 2.630 3757-4980,6705-8520,8660-10420 69.A-0246(A) 42 48,696
QSO B1145-676 11 47 32.40 -67 53 42.70 18.50$^\dagger$ 0.210 3300-5160,5230-6210,6700-8525, 70.C-0239(A) 16 10,800
$\cdots$ 8660-10420
QSO B1151+068 11 54 11.12 +06 34 37.80 18.60 2.762 3060-5757,5835-8520,8660-10420 65.O-0158(A) 18 21,600
J115538.6+053050 11 55 38.60 +05 30 50.67 19.08 3.475 3300-5600,5675-7500,7665-9460 076.A-0376(A)Ê 11 14,400
QSO J1159+1337 11 59 06.52 +13 37 37.70 18.50 3.984 4780-5757,5835-6810 074.A-0306(B)Ê 2 3,000
QSO B1158-1842 12 00 44.94 -18 59 44.50 16.90 2.453 3050-5757,5835-8520,8660-10420 166.A-0106(A)Ê 36 43,200
QSO B1202-074 12 05 23.12 -07 42 32.40 17.50 4.695 3300-4520,4785-5765,5835-8525, 71.B-0106(A), 66.A-0594(A), 43 82,086
$\cdots$ 8665-10420 166.A-0106(A)
J120550.2+020131 12 05 50.19 +02 01 31.55 17.46 2.134 3290-4520,4780-5757,5840-6810 273.A-5020(A)Ê 3 3,000
QSO B1209+0919 12 11 34.95 +09 02 20.94 18.50 3.292 3300-8515,8650,10420 073.B-0787(A), 67.A-0146(A)Ê 15 23,765
LBQS 1209+1046 12 11 40.59 +10 30 02.03 17.80 2.193 3290-4520,4620-5600,5675-6650 68.A-0170(A)Ê 12 14,400
LBQS 1210+1731 12 13 03.03 +17 14 23.20 17.40 2.543 3060-3875,6705-8525,8665-10420 70.B-0258(A) 21 30,500
QSO J1215+3309 12 15 09.22 +33 09 55.23 16.50 0.616 3300-4520,4620-5600,5675-6650 69.A-0371(A)Ê 12 18,468
QSO B1220-1800 12 23 10.62 -18 16 42.40 18.10 2.160 3280-4520,4620-5600,5675-6650 273.A-5020(A)Ê 3 3,000
LBQS 1223+1753 12 26 07.20 +17 36 49.90 18.10 2.940 3300-5600,5675-6650,6700-8520, 69.B-0108(A), 65.O-0158(A)Ê 14 18,000
$\cdots$ 8660-10420
QSO B1228-113 12 30 55.54 -11 39 09.62 22.01$^\dagger$ 3.528 3300-4520,4780-5760,5835-6810 71.A-0067(A)Ê 14 17,875
QSO B1230-101 12 33 13.16 -10 25 18.44 19.80 2.394 3300-4515,4620-5600,5675-6650 075.A-0158(A)Ê 12 12,000
LBQS 1232+0815 12 34 37.55 +07 58 40.50 18.40 2.570 3285-4520,4620-5600,5675-6650 68.A-0106(A), 69.A-0061(A), 36 59,400
LBQS 1242+0006 12 45 24.60 -00 09 38.01 17.70 2.084 3290-4520,4620-5600,5675-6650 273.A-5020(A)Ê 3 3,000
QSO J1246-0730 12 46 04.24 -07 30 46.61 18.00 1.286 3050-3870,4780-5758,5835-6810 69.A-0410(A)Ê 12 12,000
LBQS 1246-0217 12 49 24.87 -02 33 39.76 18.10 2.117 3285-4520,4620-7500,7665-9460 076.A-0376(A), 67.A-0146(A)Ê 21 27,000
J124957.2-015929 12 49 57.24 -01 59 28.76 18.58 3.635 3300-4515,4775-5757,5835-6810 075.A-0464(A) 33 33,650
QSO B1249-02 12 51 51.39 -02 23 33.60 17.10 1.192 3300-4520,4620-5600,5675-6650 078.A-0068(A)Ê 3 1,218
QSO B1256-177 12 58 38.29 -18 00 03.18 20.20 1.956 3290-4520,4620-5600,5675-6650 075.A-0158(A) 12 12,000
QSO J1306+0356 13 06 08.26 +03 56 26.30 18.77$^\star$ 5.999 6700-10420 69.A-0529(A), 077.A-0713(A)Ê 8 38,131
QSO B1317-0507 13 20 29.98 -05 23 35.50 16.54 3.710 3300-4520,4775-5757,5835-6810 075.A-0464(A)Ê 23 24,040
QSO B1318-263 13 21 14.06 -26 36 11.19 20.40$^\dagger$ 2.027 3300-4520,4620-5600,5675-6650 075.A-0158(A)Ê 20 23,100
LBQS 1320-0006 13 23 23.79 -00 21 55.24 18.20 1.388 3300-4520,4620-5600,5675-6650 274.A-5030(A)Ê 15 13,841
QSO B1324-047 13 26 54.61 -05 00 58.98 19.00 1.882 3290-4520,4620-5600,5675-6650 075.A-0158(A)Ê 17 19,800
QSO J1330-2522 13 30 51.98 -25 22 18.80 18.46 3.910 3300-4515,4780-5757,5835-6810 077.A-0166(A)Ê 3 5,400
QSO B1331+170 13 33 35.78 +16 49 03.94 16.71 2.084 3060-4520,4620-5600,5675-5660, 67.A-0022(A), 68.A-0170(A)Ê 15 20,700
$\cdots$ 6692-8515,8650-10420
QSO J1342-1355 13 42 58.86 -13 55 59.78 19.20 3.212 3300-5600,5675-6650 68.A-0492(A)Ê 10 25,100
QSO J1344-1035 13 44 27.07 -10 35 41.90 17.10 2.134 3050-5760,5835-8520,8660-10420 166.A-0106(A)Ê 51 58,933
QSO B1347-2457 13 50 38.88 -25 12 16.80 16.30 2.578 3050-5765,5835-8525,8660-10420 166.A-0106(A)Ê 30 35,444
QSO J1356-1101 13 56 46.83 -11 01 29.22 19.20 3.006 3757-4985,6700-8520,8660-10420 71.A-0067(A), 71.A-0539(A), 18 13,725
$\cdots$ 69.A-0051(A)
QSO B1402-012 14 04 45.89 -01 30 21.84 18.38 2.522 3757-4985,5710-7518,7665-9460 075.A-0158(B)Ê 6 5,300
$\cdots$ 71.B-0136(A)
QSO B1409+0930 14 12 17.30 +09 16 25.00 18.60 2.838 3050-5757,5830-8520,8665-10420 65.O-0158(A), 69.A-0051(A)Ê 44 51,950
QSO B1412-096 14 15 20.83 -09 55 58.33 17.50 2.001 3300-4515,4620-5600,5675-6650 075.A-0158(B)Ê 3 3,300
QSO J1421-0643 14 21 07.76 -06 43 56.30 18.50 3.689 3757-4980,6705-8520,8665-10420 71.A-0067(A), 71.A-0539(A) 20 18,120
QSO B1424-41 14 27 56.30 -42 06 19.55 17.70 1.522 3300-5160,5235-6210,6700-8520, 67.C-0157(A) 10 8,000
$\cdots$ 8660-10420
QSO B1429-008B 14 32 28.95 -01 06 13.55 20.00$^\star$ 2.082 3300-5600,5675-6650-6700-8525, 69.A-0555(A)Ê 49 58,048
------------------ ------------- -------------- ----------------- -------------- -------------------------------- ------------------------------ ------ ---------------
------------------- ------------- -------------- ----------------- -------------- -------------------------------- ------------------------------- ------ ---------------
Quasar RA Dec Mag. $z_{\rm em}$ Wavelength coverage Prog. ID No. T$_{\rm exp}$
2000 2000 Å spec sec
$\cdots$ 8660-10420
QSO J1439+1117 14 39 12.04 +11 17 40.49 16.56$^\star$ 2.583 3300-4515,4780-7105 278.A-5062(A)Ê 23 34,273
QSO J1443+2724 14 43 31.17 +27 24 36.77 19.30 4.443 4780-5757,5835-7915,9900 077.A-0148(A), 072.A-0346(B)Ê 28 71,402
LBQS 1444+0126 14 46 53.04 +01 13 56.00 18.50 2.210 3290-4530,4620-5600-5675-6650 67.A-0078(A), 69.B-0108(A), Ê 36 54,000
$\cdots$ 65.O-0158(A)Ê, 71.B-0136(A)
QSO B1448-232 14 51 02.51 -23 29 31.08 16.96 2.215 3050-8515,8660-10420 077.A-0646(A), 166.A-0106(A)Ê 60 66,532
J145147.1-151220 14 51 47.05 -15 12 20.10 19.14 4.763 3755-5757,5835-8515,8655,10420 166.A-0106(A)Ê 31 48,400
QSO J1453+0029 14 53 33.01 +00 29 43.56 21.46 1.297 4780-5755,5830-8550,8650-10420 267.B-5698(A)Ê 8 18,000
J151352.52+085555 15 13 52.53 +08 55 55.74 15.57$^\star$ 2.904 3300-4520,4620-5600,5675-6650 69.B-0108(A), 71.B-0136(A)Ê 15 15,514
QSO J1621-0042 16 21 16.92 -00 42 50.90 17.23 3.700 3300-4515,4780-5757,5835-6810 075.A-0464(A)Ê 23 19,096
4C 12.59 16 31 45.16 +11 56 03.01 18.50 1.792 3060-3870,4780-5757,5835-6810 69.A-0410(A)Ê 12 12,000
QSO J1723+2243 17 23 23.10 +22 43 56.90 18.17 4.520 4780-5757,5835-6810 073.A-0071(A)Ê 2 4,500
QSO B1730-130 17 33 02.72 -13 04 49.49 18.50 0.902 3300-5160,5230-6210,6695-8515, 67.C-0157(A) 10 9,600
$\cdots$ 8655-10420
QSO B1741-038 17 43 58.86 -03 50 04.62 18.50 1.054 3300-5160,5230-6210,6690-8515, 67.C-0157(A) 13 12,000
$\cdots$ 8650-10420
QSO B1937-1009 19 39 57.25 -10 02 41.54 19.00 3.787 4780-5760,5835-6810 077.A-0166(A)Ê 10 27,000
QSO B1935-692 19 40 25.51 -69 07 56.93 18.80 3.152 4780-5757,5835-6810 65.O-0693(A)Ê 10 23,512
QSO B2000-330 20 03 24.12 -32 51 45.03 18.40 3.783 3760-8535,8650-10420 166.A-0106(A), 65.O-0299(A)Ê 43 75,600
QSO J2107-0620 21 07 57.67 -06 20 10.66 17.49 0.642 3757-4985 075.B-0190(A)Ê 3 5,400
LBQS 2113-4345 21 16 54.25 -43 32 34.00 18.50 2.053 3050-7500,7660-9460 69.A-0586(A), 077.A-0714(A) 17 10,536
LBQS 2114-4347 21 17 19.34 -43 34 24.40 18.30 2.040 3050-10420 69.A-0586(A), 077.A-0714(A) 29 22,325
J211739.5-433538 21 17 39.46 -43 35 38.50 18.30$^\dagger$ 2.050 3050-5757,5835-8520,8660-10420 69.A-0586(A) 14 10,300
QSO J2119-3536 21 19 27.60 -35 37 40.60 17.00 2.341 3290-4530,4620-5600,5675-6650 65.O-0158(A)Ê 6 7,200
QSO B2126-15 21 29 12.17 -15 38 41.17 17.30 3.268 3300-5600,5675-6650,6695-8520, 166.A-0106(A)Ê 87 102,302
$\cdots$ 8650-10420
QSO B2129-4653 21 33 02.10 -46 40 28.80 18.90 2.230 3290-4515,4620-5600,5675-6650 074.A-0201(A)Ê 6 7,200
J213314.2-464031 21 33 14.17 -46 40 31.80 19.96$^\dagger$ 2.208 3290-4520,4620-5600,5675-6650 073.A-0071(A)Ê 9 18,000
LBQS 2132-4321 21 36 06.04 -43 08 18.10 17.68 2.420 3290-4530,4620-5600,5675-6650 65.O-0158(A) 3 3,600
LBQS 2138-4427 21 41 59.79 -44 13 25.90 18.20 3.170 3757-4983,6692-8515,8650-10420 67.A-0146(A)Ê 9 15,120
QSO B2139-4433 21 42 22.23 -44 19 29.70 20.18 3.220 4778-5757,5835-6810 69.A-0204(A)Ê 8 21,600
QSO B2149-306 21 51 55.52 -30 27 53.63 18.00 2.345 3300-4518,4620-5600,5675-6650 075.A-0158(B)Ê 6 5,200
QSO B2204-408 22 07 34.41 -40 36 56.00 17.50 3.155 4778-5755,5835-6810 71.B-0106(B)Ê 6 9,900
LBQS 2206-1958A 22 08 52.07 -19 44 00.00 17.33 2.560 3060-8520,8665-10420 65.O-0158(A), 072.A-0346(A)Ê 27 33,900
QSO J2215-0045 22 15 11.94 -00 45 50.01 17.40 1.476 3060-5757,5830-8515,8650-10420 267.B-5698(A)Ê 18 21,600
QSO J2220-2803 22 20 06.76 -28 03 23.34 16.00 2.406 3060-8520,8655-10420 166.A-0106(A), 65.P-0183(A)Ê 60 72,000
QSO B2222-396 22 25 40.44 -39 24 36.66 17.90 2.198 3290-4520 072.A-0442(A)Ê 1 1,800
QSO J2227-2243 22 27 56.94 -22 43 02.60 17.60 1.891 3060-5757,5835-8515,8660-10420 67.A-0280(A)Ê 36 41,813
QSO B2225-4025 22 28 26.95 -40 09 58.90 18.10 2.030 3300-4520 072.A-0442(A)Ê 1 1,800
LBQS 2230+0232 22 32 35.23 +02 47 55.80 18.04 2.147 3060-3872,6705-8525,8665-10420 70.B-0258(A)Ê 9 13,500
J223851.0-295301 22 38 50.97 -29 53 00.59 19.53$^\dagger$ 2.387 3080-5750,5830-8520,8660-10420 69.A-0586(A) 14 12,180
J223922.9-294947 22 39 22.86 -29 49 47.73 19.59$^\dagger$ 1.849 3060-5757,5835-6810 69.A-0586(A)Ê 11 10,708
J223938.9-295451 22 39 38.91 -29 54 50.62 19.66$^\dagger$ 1.907 3080-7505,7670-9460 69.A-0586(A), 077.A-0714(A) 28 29,190
J223941.8-294955 22 39 41.76 -29 49 54.54 19.31$^\dagger$ 2.102 3060-7500,7670-9460 69.A-0586(A), 077.A-0714(A) 22 21,626
J223948.7-294748 22 39 48.67 -29 47 48.18 20.19$^\dagger$ 2.068 3060-10420 69.A-0586(A), 077.A-0714(A) 33 31,590
J223951.9-294837 22 39 51.84 -29 48 36.52 18.85$^\dagger$ 2.121 3070-5757,5835-6810 077.A-0714(A) 11 8,450
QSO B2237-0607 22 39 53.66 -05 52 19.90 18.30 4.558 4775-5753,5830-6810 69.A-0613(A)Ê 2 3,600
QSO J2247-1237 22 47 52.64 -12 37 19.72 18.50 1.892 3300-4518,4620-5600,5675-6650 075.A-0158(B)Ê 9 7,200
QSO B2311-373 23 13 59.71 -37 04 45.20 18.50 2.476 3290-4520,4780-5757,5835-6810 69.A-0051(A)Ê 9 10,725
J232046.7-294406 23 20 46.72 -29 44 06.06 19.83$^\dagger$ 2.401 3050-5757,5835-8520,8660-10420 69.A-0586(A)Ê 12 6,050
J232059.4-295520 23 20 59.41 -29 55 21.49 18.99$^\dagger$ 2.317 3060-5757,5835-8520,8660-10420 69.A-0586(A)Ê 12 6,050
J232114.3-294725 23 21 14.25 -29 47 24.29 19.99$^\dagger$ 2.677 3758-4982,6700-8525,8665-10420 70.A-0031(A)Ê 6 4,800
J232121.2-294350 23 21 21.31 -29 43 51.16 19.70$^\dagger$ 2.184 3060-5757,5835-8520,8660-10420 69.A-0586(A)Ê 12 9,155
QSO B2318-1107 23 21 28.80 -10 51 21.20 $\cdots$ 2.960 3050-4515,4780-5760,5840-6810 072.A-0442(A), 073.A-0071(A) 13 23,400
QSO J2328+0022 23 28 20.38 +00 22 38.26 17.95 1.302 3300-4520,4620-5600,5675-6650 074.A-0597(A)Ê 9 15,900
QSO B2332-094 23 34 46.40 -09 08 12.33 18.66 3.330 3757-5757,5835-8520,8660-10250 68.A-0600(A), 073.B-0787(A)Ê 25 31,920
J233544.2+150118 23 35 44.19 +15 01 18.33 18.20 0.791 3060-3870,4620-5600,5675-6650 078.A-0646(A)Ê 12 17,920
QSO B2342+3417 23 44 51.26 +34 33 48.74 19.10 3.010 3757-4985,6705-8525,8665-10420 71.A-0539(A)Ê 24 12,000
QSO J2346+1247 23 46 25.40 +12 47 44.00 $\cdots$ 2.578 3300-4518,4620-5600,5675-6650 075.A-0018(A)Ê 42 70,240
QSO B2343+125 23 46 28.22 +12 48 59.93 17.50 2.763 3060-3872,4780-5757,5835-8515, 67.A-0022(A) 19 25,200
$\cdots$ 8650-10420
QSO B2345+000 23 48 25.37 +00 20 40.11 19.3 2.654 3300-4518,4620-5600,5675-6650 076.A-0320(A) 24 25,600
QSO B2347-4342 23 50 34.27 -43 25 59.70 16.30 2.885 3070-5757,5835-8520,8655-10420 71.A-0066(A), 166.A-0106(A),Ê 71 125,443
$\cdots$ 68.A-0230(A)
QSO B2348-0180 23 50 57.89 -00 52 10.01 19.50 3.023 3300-4520,4780-5755,5835-6810 072.A-0346(A)Ê 9 16,200
QSO B2348-147 23 51 29.81 -14 27 56.90 16.90 2.933 3290-4520,6100-7928,8070-9760 70.B-0258(A) 9 10,350
J235534.6-395355 23 55 34.61 -39 53 55.30 16.30 1.579 3300-4515,4620-5600,5675-6650 078.A-0068(A) 3 612
J235702.5-004824 23 57 02.55 -00 48 24.00 19.53 3.013 3300-4986 075.B-0190(A) 7 25,200
QSO J2359-1241 23 59 53.64 -12 41 48.20 16.7 0.868 3050-5757,5835-8525,8665-10250 078.B-0433(A)Ê 24 22,800
\[3pt\] Total $\cdots$ $\cdots$ $\cdots$ $\cdots$ $\cdots$ $\cdots$ 4309 5,616,229
------------------- ------------- -------------- ----------------- -------------- -------------------------------- ------------------------------- ------ ---------------
$\dagger$ $B$-band magnitude\
$\star$ $J$-band magnitude
[^1]: <http://archive.eso.org/eso/eso_archive_adp.html>
[^2]: <http://www.sdss.org/dr7/>
[^3]: <http://leda.univ-lyon1.fr/>
[^4]: <http://www.2dfquasar.org/Spec_Cat/2qzsearch2.html>
[^5]: <http://simbad.u-strasbg.fr/>
---
abstract: 'Recent ALMA observations may indicate a surprising abundance of sub-Jovian planets on very wide orbits in protoplanetary discs that are only a few million years old. These planets are too young and distant to have been formed via the Core Accretion (CA) scenario, and are much less massive than the gas clumps born in the classical Gravitational Instability (GI) theory. It was recently suggested that such planets may form by the partial destruction of GI protoplanets: energy output due to the growth of a massive core may unbind all or most of the surrounding pre-collapse protoplanet. Here we present the first 3D global disc simulations that simultaneously resolve grain dynamics in the disc and within the protoplanet. We confirm that massive GI protoplanets may self-destruct at arbitrarily large separations from the host star provided that solid cores of mass $\sim $10-20 $ M_{\oplus}$ are able to grow inside them during their pre-collapse phase. In addition, we find that the heating force recently analysed by [@MassetVelascoRomero17] perturbs these cores away from the centre of their gaseous protoplanets. This leads to very complicated dust dynamics in the protoplanet centre, potentially resulting in the formation of multiple cores, planetary satellites, and other debris such as planetesimals within the same protoplanet. A unique prediction of this planet formation scenario is the presence of sub-Jovian planets at wide orbits in Class 0/I protoplanetary discs.'
author:
- |
J. Humphries[^1] & S. Nayakshin\
\
Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH, UK.
bibliography:
- 'humphries.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: 'On the origin of wide-orbit ALMA planets: giant protoplanets disrupted by their cores'
---
\[firstpage\]
accretion discs – planet-disc interactions – protoplanetary discs – brown dwarfs – planets and satellites: formation – planets and satellites: composition
Introduction
============
It is now widely believed that the gaps in the $\sim $ 1 mm dust discs observed by the Atacama Large Millimeter Array (ALMA) are the signatures of young planets [@Alma2015; @IsellaEtal16; @LongEtal18; @DSHARP1; @DSHARP7]. Modelling suggests that these planets are often wide-orbit Saturn analogues [@DipierroEtal16a; @ClarkeEtal18; @LodatoEtal19]. Such planets present a challenge for both of the primary planet formation theories, albeit for different reasons.
In the classical planetesimal-based Core Accretion (CA) theory [@PollackEtal96; @IdaLin04a] forming massive solid cores at wide separations in a $\sim 1$ Myr old disc is challenging as the process is expected to take more than an order of magnitude longer than this [e.g., @KL99]. Core growth via pebble accretion is much faster; however the process is not very efficient [@OrmelLiu18; @LinEtal18] and may require more pebbles than the observations indicate. Additionally, in any flavor of CA scenario the ALMA planets should be in the runaway gas accretion phase. This process is expected to produce planets much more massive than Jupiter [*very rapidly*]{} at these wide orbits, which does not seem to be the case [@NayakshinEtal19]. [@NduguEtal19] has very recently detailed these constraints. As much as $2000 {{\,{\rm M}_{\oplus}}}$ of pebbles in the disc are required to match the ALMA gap structures, and the resulting planet mass function indeed shows too few sub-Jovian planets and too many $M_{\rm p} > 1 {{\,{\rm M}_{\rm J}}}$ planets.
In the Gravitational Instability (GI) scenario [@Boss98; @Rice05; @Rafikov05] planets form very rapidly, e.g., in the first $\sim 0.1$ Myr [@Boley09]. The age of ALMA planets is thus not a challenge for this scenario; ALMA planets, if anything, are ‘old’ for GI. However, the minimum initial mass of protoplanetary fragments formed by GI in the disc is thought to be at least $\sim 1 {{\,{\rm M}_{\rm J}}}$ [@BoleyEtal10], and perhaps even $\sim 3-10 {{\,{\rm M}_{\rm J}}}$ [@KratterEtal10; @ForganRice13b; @KratterL16]. This is an order of magnitude larger than the typical masses inferred for the ALMA gap-opening planets [@NayakshinEtal19].
Formation of a protoplanetary clump in a massive gas disc is only the first step in the life of a GI-made planet, and its eventual fate depends on many physical processes [e.g., see the review by @Nayakshin_Review]. In this paper we extend our earlier work [@HumphriesNayakshin18] on the evolution of GI protoplanets in their pebble-rich parent discs. Newly born GI protoplanets are initially very extended with radii of $\sim$ 1 AU. After a characteristic cooling time, Hydrogen molecules in the protoplanet dissociate and it collapses to a tightly bound Jupiter analogue [@Bodenheimer74; @HelledEtalPP62014]. During this cooling phase protoplanets are vulnerable to tidal disruption via interactions with the central star or with other protoplanets, which may destroy many of these nascent planets if they migrate to separations closer than 20 AU [@HumphriesEtal19].
Additionally, pebble accretion plays a key role in the evolution of GI protoplanets. Observations [@TychoniecEtal18] suggest that young Class 0 protoplanetary discs contain as much as hundreds of Earth masses in pebble-sized grains. These will be focused inside protoplanets as they are born [@BoleyDurisen10] and accreted during any subsequent migration [@BaruteauEtal11; @JohansenLacerda10; @OrmelKlahr10]. Both analytical work [@Nayakshin15a] and 3D simulations [@HumphriesNayakshin18] demonstrate that protoplanets accrete the majority of mm and above sized dust grains that enter their Hill spheres, considerably enhancing their total metal content. Once accreted, these pebbles are expected to grow and settle rapidly inside the protoplanet, likely becoming locked into a massive core [@Kuiper51b; @BossEtal02; @HS08; @BoleyEtal10; @Nayakshin10a; @Nayakshin10b].
[@Nayakshin16a] showed via 1D population synthesis models that core formation inside young GI protoplanets may in fact release enough heat to remove some or all of their gaseous envelopes. This process could therefore downsize GI protoplanets, making GI a more physically plausible scenario for hatching ALMA planets. This idea is also attractive because of parallels with other astrophysical systems, e.g. galaxies losing much of their initial gaseous mass due to energetic feedback from supermassive black holes [@DiMatteo05].
In [@HumphriesNayakshin18] we used a sink particle approximation to study the accretion of pebbles onto GI protoplanets. The sink particle approach provides a reliable estimate for the total mass of pebbles captured by a protoplanet, but it does not allow us to study their subsequent evolution. In this paper we improve on our previous work by modelling the protoplanet hydrodynamically, thus resolving both gas and dust dynamics within it. Nevertheless, it remains beyond current computational means to resolve the growing solid core in such simulations, and so we introduce a [*dust-sink*]{} in the protoplanet centre to model the core. As pebbles accrete onto the core, we pass the liberated gravitational potential energy to the surrounding gas in the protoplanet. Our simulations therefore also aim to explore the effects of core feedback as introduced in [@Nayakshin16a].
The paper is structured as follows. In Section \[sec:analytic\_fb\] we briefly calculate the expected core mass necessary to disrupt a young GI protoplanet. Following this, in Section \[sec:methods\] we describe the new physics added to our previous simulations [@HumphriesNayakshin18] in order to numerically model core growth and feedback. We also study the settling timescale for a variety of grain sizes inside protoplanets. In Section \[sec:results\] we present the main results of the paper, examining how feedback driven disruption over a range of feedback timescales and pebble to gas ratios can unbind protoplanets and leave behind rocky cores at tens of AU. In Section \[sec:core\] we extend this analysis and take a closer look at the core during the feedback process. In Section \[sec:discussion\] we outline the observational implications of protoplanet disruption process and also discuss some of the limitations of our model. Finally, we summarise the conclusions of our paper in Section \[sec:conclusions\]: if rocky cores rapidly form inside GI protoplanets, the resultant release of energy may disrupt these objects and leave super-Earth and potentially Saturn mass cores stranded at tens to hundreds of AU.
Analytical Estimates {#sec:analytic_fb}
====================
![Liberated gravitational potential energy from core formation as a fraction of protoplanet binding energy from Equation \[eq:feedback\]. The protoplanet has a mass of 5$M_J$ and a solid core central density of 5 gcm$^{-3}$. It takes a core of 17$M_{\oplus}$ to unbind this protoplanet if it has a radius of 1 AU.[]{data-label="fig:an_feedback"}](F/0503_fb_analytic.pdf){width="0.99\columnwidth"}
Here we estimate the core mass needed to unbind a newly born GI protoplanet with a radius of a few AU, which is typical for pre-collapse planets. We assume that the protoplanet can be modelled as a polytropic sphere with mass $M_{\rm p}$, radius $r_{\rm p}$, and adiabatic index $\gamma=7/5$. The Virial theorem tells us that the total energy of a polytrope ($E_{\rm tot}$) is half of its gravitational binding energy and is therefore given by
$$E_{\rm tot} = -\dfrac{3GM_{\rm p}^2}{5r_{\rm p}}.
\label{eq:binding}$$
After its birth, this protoplanet is able to accrete gas and dust from the disc. If the accreted dust is able to grow large enough, it will rapidly settle to the centre and form a core [@HelledEtal08; @Nayakshin10a; @Nayakshin10b]. This process releases gravitational potential energy and heats the central regions of the protoplanet. Assuming that the accretion energy of the core is released into the surrounding gas instantaneously, as is done in the Core Accretion scenario [e.g., @Stevenson82; @PollackEtal96; @MordasiniEtal12], the total corresponding feedback energy ($E_{\rm fb}$) released by time $t$ is
$$E_{\rm fb} = \int_0^t \dfrac{G M_{\rm c} \dot M_{\rm c}}{r_{\rm c}} dt',
\label{eq:E_fb}$$
where $M_{\rm c}$ and $r_{\rm c}$ are the mass and radius of the luminous core. If we assume that the rate of dust accretion onto the core is constant and we take a constant core density $\rho_{\rm c}$, we can rewrite $r_{\rm c}$ in terms of $M_{\rm c}$ and integrate over $t$ to find $E_{\rm fb}$ as
$$E_{\rm fb} = \dfrac{3}{5} \left( \dfrac{4 \pi \rho_{\rm c} }{3} \right)^{1/3} G (M_{\rm c})^{5/3},$$
where we have assumed that all of the gravitational potential energy is liberated instantaneously. We can then study this feedback energy as a fraction of the total energy of the protoplanet from Equation \[eq:binding\],
$$\dfrac{E_{\rm fb}}{E_{\rm tot}}
= 1.3 \left(\dfrac{\rho_{\rm c}}{5 \rm gcm^{-3}}\right)^{1/3} \left(\dfrac{M_{\rm c}}{20 M_{\oplus}}\right)^{5/3} \left( \dfrac{r_{\rm p}}{1 \rm AU}\right) \left( \dfrac{5 M_{J}}{M_{p}}\right)^2
\label{eq:feedback}$$
or alternatively,
$$\dfrac{E_{\rm fb}}{E_{\rm tot}} = \left( \dfrac{M_{\rm c}}{M_{\rm p}} \right)^2 \dfrac{ r_{\rm p}}{r_{\rm c}}.
\label{eq:fb_simple}$$
Let us consider a protoplanet with $M_{\rm p} = 5M_J$, $r_{\rm p}$ = 1 AU, and a rocky core with mass $M_{\rm c}=M_\oplus$ and $\rho_{\rm c}$ = 5 gcm$^{-3}$. Equation \[eq:feedback\] gives us a value of $\sim$ 0.008, far too small to disrupt the protoplanet. However, if we allow the core to grow to 20 $M_\oplus$ [Jupiter’s core mass is now believed to be $\sim 10 M_{\oplus}$; @WahlEtal17; @Nettelmann17; @GuillotEtal18], we get a value of 1.3. This result is exciting since it shows that reasonably modest cores have the potential to destroy young GI protoplanets. Figure \[fig:an\_feedback\] illustrates Equation \[eq:feedback\] for a 5 $M_J$ protoplanet with two different assumptions for the initial radius. In the remainder of this paper we demonstrate the significance of this result using 3D simulations.
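To make the arithmetic above easy to check, the short Python sketch below evaluates $E_{\rm fb}/E_{\rm tot}$ directly from Equations \[eq:binding\] and \[eq:feedback\] for a constant-density core. The cgs constants and the function name are our own choices; small rounding differences aside, it reproduces the two estimates quoted above.

```python
import numpy as np

# cgs constants
G = 6.674e-8
M_jup, M_earth, AU = 1.898e30, 5.972e27, 1.496e13

def feedback_ratio(M_core_earth, M_p_jup=5.0, r_p_au=1.0, rho_core=5.0):
    """E_fb / |E_tot| for a constant-density core:
    E_fb = (3/5) (4 pi rho_c / 3)^(1/3) G M_c^(5/3),  |E_tot| = 3 G M_p^2 / (5 r_p)."""
    M_c = M_core_earth * M_earth
    M_p = M_p_jup * M_jup
    r_p = r_p_au * AU
    E_fb = 0.6 * (4.0 * np.pi * rho_core / 3.0)**(1.0/3.0) * G * M_c**(5.0/3.0)
    E_tot_mag = 0.6 * G * M_p**2 / r_p
    return E_fb / E_tot_mag

print(feedback_ratio(1.0))    # ~0.009: an Earth-mass core is far too small to disrupt the clump
print(feedback_ratio(20.0))   # ~1.3:  a 20 M_Earth core releases more than the clump's total energy
```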
One of the simplifications of this argument is that we have neglected cooling inside the protoplanet. However, [@HelledBodenheimer11] found that the cooling timescale for protoplanets is typically in the range $10^4-10^5$ years; this tells us that if solid cores can grow on timescales shorter than the cooling timescale of GI protoplanets, then they have the potential to completely unbind them.
Numerical methods {#sec:methods}
=================
This work extends previous simulations by [@NayakshinCha13], [@Nayakshin17a] and [@HumphriesNayakshin18] and complements 3D simulations of core formation inside GI planets by [@Nayakshin18]. We model gas and dust in protoplanetary discs with the coupled smoothed particle hydrodynamics (SPH) and N-body code <span style="font-variant:small-caps;">gadget-3</span> [see @Springel05]. Gas is modelled with an ideal equation of state with adiabatic index $\gamma = 7/5$, appropriate for a pre-collapse GI protoplanet dominated by molecular Hydrogen at a few hundred Kelvin [@BoleyEtal07]. An N-body tree algorithm is used to calculate the gravitational forces for all components in the system. We use a slight modification of the simple $\beta$ cooling model [@Gammie01] in which gas specific internal energy $u$ evolves according to
$$\dfrac{\mathrm{d}u}{\mathrm{d}t} = - \dfrac{u - u_{\rm eq}}{ t_{\rm cool}}\;,
\label{eq:beta0}$$
where $u_{\rm eq} = k_B T_{\rm eq}/(\mu (\gamma-1))$, $k_B$ is the Boltzmann constant and $\mu = 2.45 m_p$ is the mean molecular weight for gas of Solar composition. The equilibrium temperature mimics irradiation from the star and is given by $T_{\rm eq}$ = 20K (100AU/R)$^{1/2}$. Similar to [@Nayakshin17a], we set the cooling time $t_{\rm cool}$ to $$t_{\rm cool}(R) = \beta \Omega_K^{-1} \; \left(1 + \dfrac{\rho}{\rho_{\rm crit}}\right)^{5} \;,
\label{eq:beta_def}$$ where $\beta = 5$ and $\Omega_K = (GM_*/R^3)^{1/2}$ is the local Keplerian angular frequency at $R$. The term in the brackets in Equation \[eq:beta\_def\] quenches radiative cooling at gas densities higher than $\rho_{\rm crit} = 2 \times10^{-11}$ gcm$^{-3}$ in order to capture the long cooling timescales inside GI protoplanets [@HelledBodenheimer11]. Dust grains in our model are treated as a set of Lagrangian particles embedded in the gas that experience aerodynamical drag [@LAandBate15b]. We consider a variety of grain sizes ($a$) but fix the grain material density ($\rho_a$) at 3 gcm$^{-3}$ to simulate silicate grains. We use a semi-implicit integration scheme that captures both the short and long stopping time regimes and includes the back-reaction of the drag on the gas [@HumphriesNayakshin18]. Self-gravity of both dust and gas particles is computed via N-body methods.
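For concreteness, the following minimal Python sketch evaluates the cooling prescription of Equations \[eq:beta0\] and \[eq:beta\_def\] and relaxes $u$ towards $u_{\rm eq}$ over a timestep using the exact solution for a fixed $t_{\rm cool}$. It is an illustration rather than the actual SPH implementation; in particular, the stellar mass of $1\,{\rm M_\odot}$ and the example densities in the final line are assumptions made only for this demonstration.

```python
import numpy as np

# cgs constants
G, k_B, m_p = 6.674e-8, 1.381e-16, 1.673e-24
M_sun, AU, yr = 1.989e33, 1.496e13, 3.156e7
gamma, mu = 7.0/5.0, 2.45 * m_p       # adiabatic index and mean molecular weight
beta, rho_crit = 5.0, 2e-11           # cooling parameter and quenching density [g/cm^3]

def u_eq(R_au):
    """Specific internal energy at the irradiation temperature T_eq = 20 K (100 AU / R)^1/2."""
    T_eq = 20.0 * (100.0 / R_au)**0.5
    return k_B * T_eq / (mu * (gamma - 1.0))

def t_cool(R_au, rho, M_star=M_sun):
    """Cooling time: beta / Omega_K, quenched by the (1 + rho/rho_crit)^5 factor."""
    Omega_K = np.sqrt(G * M_star / (R_au * AU)**3)
    return beta / Omega_K * (1.0 + rho / rho_crit)**5

def cool_step(u, R_au, rho, dt):
    """Relax u towards u_eq over dt (exact solution of du/dt = -(u - u_eq)/t_cool
    for a cooling time held fixed over the step)."""
    ueq = u_eq(R_au)
    return ueq + (u - ueq) * np.exp(-dt / t_cool(R_au, rho))

# e.g. at R = 50 AU: cooling takes a few hundred years in the ambient disc,
# but is quenched by many orders of magnitude at densities typical of the clump interior.
print(t_cool(50.0, 1e-14) / yr, t_cool(50.0, 2e-10) / yr)
```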
Modelling the solid core {#sec:core_modelling}
------------------------
Sink particles are often introduced in hydrodynamical simulations to deal with gravitational collapse of a part of the system that can no longer be resolved [e.g., @Bate95]. For the problem at hand, we introduce a dust-only sink in the protoplanet centre. We set the sink radius $r_{\rm sink}$ to 0.03 AU, comparable to the minimum SPH smoothing length in the centre of the protoplanet. Dust particles that are within $r_{\rm sink}$ and gravitationally bound to the sink are accreted. The sink particle is introduced at $t=0$ and its initial mass is set to $10^{-3}{{\,{\rm M}_{\oplus}}}$, which is small enough to not influence the initial dynamics of either gas or dust. Note that collapse of the gas component onto the sink does not occur in our simulations even when the sink grows very massive. This is because the radiative cooling timescale for gas very near the solid core is much longer than the duration of these simulations.
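The accretion criterion can be summarised by the minimal Python sketch below. It only illustrates the two conditions stated above (a grain must lie within $r_{\rm sink}$ and be gravitationally bound to the sink); the particle removal, momentum bookkeeping and neighbour search of the actual SPH implementation are omitted, and all names are ours.

```python
import numpy as np

G = 6.674e-8                 # cgs
AU = 1.496e13
r_sink = 0.03 * AU           # dust-sink radius used in the simulations [cm]

def accrete_dust(dust_pos, dust_vel, dust_mass, sink_pos, sink_vel, sink_mass):
    """Return the updated sink mass and a boolean mask of accreted grains.
    A grain is accreted if it lies within r_sink of the sink AND its specific
    energy relative to the sink is negative (i.e. it is gravitationally bound)."""
    dr = dust_pos - sink_pos                         # shape (N, 3)
    dv = dust_vel - sink_vel
    r = np.linalg.norm(dr, axis=1)
    e_spec = 0.5 * np.sum(dv**2, axis=1) - G * sink_mass / np.maximum(r, 1e-6 * r_sink)
    accreted = (r < r_sink) & (e_spec < 0.0)
    return sink_mass + dust_mass[accreted].sum(), accreted
```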
We assign the core a constant material density $\rho_{\rm c}=5$ g cm$^{-3}$. This represents a constant-density mix of silicates and iron with a composition similar to that of the Earth ($\rho_{\oplus}$ = 5.51 gcm$^{-3}$). As this core grows, its gravitational potential energy is released into the gas at the centre of the protoplanet. If the energy release were instantaneous, the accretion luminosity, $L_{\rm c} = G \dot{M}_{\rm c} M_{\rm c} /r_{\rm c}$, would be very high, since the rates of core growth within our protoplanets are as large as $\dot M_{\rm c} = 10^{-2} {{\,{\rm M}_{\oplus}}}$ yr$^{-1}$.
However, the rate at which this potential energy is released depends strongly on the internal physics of the core and of the core-gas boundary, neither of which we are able to resolve in this paper. Using 1D simulations in the context of the Core Accretion scenario, [@BrouwersEtal18] show that solids settling onto a massive core undergo vaporisation in the hot gas near the core; this limits the rate at which feedback energy can be released during core growth. To account for this, we introduce a parameter $t_{\rm acc}$ that smooths out the energy release, although we are limited to relatively short timescales of $10^3$ and $10^4$ years by the numerical cost of our simulations. We address this point further in Section \[sec:discussion\].
In practice, the rate of the energy injection is therefore given by a luminosity $L_{\rm c}$ and follows the prescriptions from [@NayakshinPower10; @Nayakshin15b]. At any given time the luminosity of the core is given by
$$L_{\rm c} = \dfrac{G M_{\rm c}}{r_{\rm c}} \left( \dfrac{M_{\rm res}}{t_{\rm acc}} \right)
\label{eq:core_L}$$
where $M_{\rm res}$ represents a ‘reservoir’ of accreted pebbles that evolves such that
$$\dfrac{\mathrm{d} M_{\rm res}}{\mathrm{d} t} = \dot{M}_{\rm peb} - \dfrac{M_{\rm res}}{t_{\rm acc}}.$$
Note that this prescription releases the correct amount of energy when integrated over time and yields the instantaneous accretion luminosity in the limit $t_{\rm acc} \rightarrow 0$.
Since we do not model radiative transfer in this paper, we adopt a simplified energy transfer prescription and inject the heat into the SPH neighbour particles of the sink. The corresponding thermal energy is passed to the nearest 160 SPH neighbours of the sink, weighted by the SPH kernel[^2].
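A minimal Python sketch of this reservoir smoothing is given below. It only illustrates Equation \[eq:core\_L\] and the evolution equation for $M_{\rm res}$; the explicit first-order update, the choice to advance the core mass at $\dot M_{\rm peb}$, and the example numbers in the loop are our assumptions rather than details of the actual implementation.

```python
import numpy as np

G, M_earth, yr = 6.674e-8, 5.972e27, 3.156e7   # cgs
rho_core = 5.0                                  # core material density [g/cm^3]

def core_radius(M_core):
    # Constant-density core: r_c = (3 M_c / 4 pi rho_c)^(1/3)
    return (3.0 * M_core / (4.0 * np.pi * rho_core))**(1.0/3.0)

def feedback_step(M_core, M_res, mdot_peb, dt, t_acc):
    """Advance the core mass, the pebble reservoir and the feedback luminosity by dt.
    L_c = (G M_c / r_c) (M_res / t_acc);  dM_res/dt = mdot_peb - M_res / t_acc."""
    drain = M_res / t_acc
    L_c = G * M_core / core_radius(M_core) * drain   # energy injection rate into the gas
    M_res += (mdot_peb - drain) * dt
    M_core += mdot_peb * dt                          # sketch: count captured pebbles as core mass
    return M_core, M_res, L_c

# Illustrative numbers: a 1 M_Earth core accreting 1e-2 M_Earth/yr with t_acc = 1e3 yr.
Mc, Mres, dt = 1.0 * M_earth, 0.0, 1.0 * yr
for _ in range(5000):
    Mc, Mres, L = feedback_step(Mc, Mres, 1e-2 * M_earth / yr, dt, 1e3 * yr)
```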
Simulation setup {#sec:setup}
----------------
Our protoplanetary disc has a surface density profile of $\Sigma \propto R^{-1}$ (consistent with simulations of protoplanetary disc formation by [@Bate18]), an initial mass of $100 {{\,{\rm M}_{\rm J}}}$ and an outer radius of 100 AU[^3]. The disc is relaxed for $\sim$ ten orbits at the outer edge. We then add dust particles to the disc at a suppressed vertical height of 10% relative to the gas scale height in order to represent dust settling. After this, a $5 {{\,{\rm M}_{\rm J}}}$ metal-free protoplanet (modelled as a polytropic sphere with an initial radius of 3 AU) is injected into the gas disc on a circular orbit at 50 AU. The mass of the protoplanet ($M_{\rm P}$) is taken to be the mass within its half Hill-sphere.
For these simulations we assume that large grains dominate the mass budget of metals in the disc and therefore we set the initial disc metallicity to $Z_0=0.01$, such that the initial mass in pebbles is 1% of the initial gaseous disc mass. This corresponds to 300$M_{\oplus}$ of metals, in line with the [@TychoniecEtal18] result for Class 0 discs. In [@HumphriesNayakshin18] we found that gas giants accreted nearly 100 percent of pebbles that entered their Hill spheres for a broad range of grain sizes from 0.03-30 cm. The rapid migration of the gas giants allowed accretion to proceed far beyond the ‘isolation mass’ [e.g., @LambrechtsJ12].
Dust sedimentation inside protoplanets {#sec:dust_sedimentation}
--------------------------------------
![Global gas surface density for a protoplanet with no feedback in a simulation with 1 cm pebbles. The pebbles are over-plotted as greyscale points. The location of the protoplanet is marked with a black dot. Notice that the protoplanet has already carved a deep gap in the dust distribution in fewer than four orbits.[]{data-label="fig:no_fb_global"}](F/0503_a1cm_sig.pdf){width="1.0\columnwidth"}
![A zoom view of the protoplanet from the same simulation as Figure \[fig:no\_fb\_global\]; the core sink particle is marked with a black dot. The sedimentation time for 1 cm grains is thousands of years, and so the majority of accreted grains remain suspended in the upper atmosphere of the protoplanet, between 0.4-0.8 AU from the centre.[]{data-label="fig:no_fb_zoom"}](F/0503_a1cm_zoom_rho.pdf){width="0.99\columnwidth"}
![Dynamics inside the protoplanet for different grain sizes. Top: Pebble accretion rates onto the protoplanet are comparable for mm, cm & 10 cm grains [@HumphriesNayakshin18]. Cores only grow for decoupled 10 cm grains while 1 mm and 1 cm grains remain in the atmosphere. Middle: Mean radial distance of dust from dust sink. Note the rapid settling of $a=10$cm grains. The dotted line shows the contraction of radius that contains the initial mass of the protoplanet ($r_P(M_0)$). Bottom: total protoplanet and atmosphere only metallic compositions. Small grains remain suspended in the atmosphere, causing it to become metal enriched.[]{data-label="fig:grain_stats"}](F/0812_grain_size_5MJ.pdf){width="1.0\columnwidth"}
Before including feedback in our simulations, we explore the behaviour of differently sized dust species inside our protoplanets using the simulation setup described in Section \[sec:setup\]. These simulations are identical to those in Section \[sec:fb\], save that core feedback is disabled.
Figure \[fig:no\_fb\_global\] shows the surface density of the gas disc for a simulation with 1 cm grains after 1273 years; the protoplanet core is marked with a black dot and orbits at 50 AU in an anti-clockwise direction. The location of pebbles in the disc is marked by the over-laid grey colour map. The protoplanet has carved a deep gap in the grains in fewer than four orbits. In agreement with our previous work [@HumphriesNayakshin18], this shows that pebble accretion is very efficient for GI protoplanets. Figure \[fig:no\_fb\_zoom\] shows a zoom view of the gas density inside the protoplanet in the plane of the disc from the simulation in Figure \[fig:no\_fb\_global\]. Recall that pre-collapse protoplanets are initially very extended, with radii of a few AU. The core is marked with a black dot in the centre of the image, while dust particles are marked in white. For the parameters of this protoplanet, 1 cm grains are expected to sediment on timescales of a few thousand years, and so the majority remain suspended 0.4-0.8 AU away from the centre of the protoplanet.
We now examine these simulations in more detail for a range of grain sizes, $a=0.1$ cm, $a=1$ cm and $a=10$ cm, which roughly correspond to Stokes numbers of 0.04, 0.4 and 4 in the disc. The top panel of Figure \[fig:grain\_stats\] shows that the total mass of pebbles accreted from the disc inside the half Hill sphere of the protoplanet is very similar for all of the three grain sizes. This result is consistent with [@HumphriesNayakshin18] who found that pebble capture from the disc into the Hill sphere is almost 100% efficient down to Stokes numbers of 0.1. Despite similarities in the accretion rates however, the dashed line shows that a core is only able to grow for the 10 cm grains.
The middle panel of Figure \[fig:grain\_stats\] shows the mean radial distance of the grains inside the protoplanet with respect to its centre (defined as the location of maximum gas density). 10 cm grains sediment rapidly to the core of the protoplanet, whereas 1 mm and 1 cm grains remain suspended in its upper atmosphere. This occurs because the sedimentation time of particles inside protoplanets scales inversely with the grain size ($a$):
$$t_{\rm sed} = \dfrac{v_{\rm th}(r) \rho_{\rm g}(r) r^3}{M_{\rm enc}(r) G a \rho_a}
\label{eq:vsed}$$
where $r$ is the distance from the protoplanet centre, $M_{\rm enc}$ is the enclosed mass, $\rho_a$ is the pebble density, $v_{\rm th}$ is the thermal gas speed and $\rho_{\rm g}$ is the gas density. For a typical protoplanet this gives a sedimentation timescale of $\sim 10^2$ years for 10 cm grains but $10^4$ years for mm grains.
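As a rough cross-check of these numbers, the short script below evaluates Equation \[eq:vsed\] for the three grain sizes used in this work. All inputs (thermal speed, gas density, radius, enclosed mass and grain material density) are nominal values assumed for illustration rather than the exact simulation parameters.

```python
# Rough evaluation of Equation [eq:vsed]; all numbers are assumed, nominal values.
G = 6.674e-8         # gravitational constant [cgs]
AU = 1.496e13        # cm
YR = 3.156e7         # s
M_JUP = 1.898e30     # g

v_th = 1.0e5         # thermal gas speed [cm/s] (assumed)
rho_g = 1.0e-9       # local gas density [g/cm^3] (assumed)
r = 0.5 * AU         # distance from the protoplanet centre (assumed)
M_enc = 2.5 * M_JUP  # gas mass enclosed within r (assumed)
rho_a = 3.0          # grain material density [g/cm^3] (assumed)

for a in (0.1, 1.0, 10.0):  # grain radius in cm (1 mm, 1 cm, 10 cm)
    t_sed = v_th * rho_g * r**3 / (M_enc * G * a * rho_a)
    print(f"a = {a:4.1f} cm  ->  t_sed ~ {t_sed / YR:.0f} yr")
```

With these inputs the script returns roughly $10^4$, $10^3$ and $10^2$ years for the 1 mm, 1 cm and 10 cm grains respectively, in line with the estimates quoted above.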
Since the protoplanet is embedded in the disc, it also accretes gas onto its atmosphere during the course of the simulation, causing the original protoplanet to contract. We now examine whether grains remain coupled to this new atmosphere or sediment inside the radius that contains the initial mass of the protoplanet ($r_P(M_0)$), which we plot as a dotted line in the middle panel of Figure \[fig:grain\_stats\]. The orange line shows that only 10 cm grains are able to sediment inside this boundary; smaller grains remain suspended in the upper atmosphere of the protoplanet. Essentially, small grains remain coupled to the accreted disc material, whilst 10 cm grains penetrate into the protoplanet.
The bottom panel shows the fractional metallic composition of the total protoplanet and of only the atmosphere material. 1 mm grains cause a significant enhancement to the metallicity of the atmosphere, though in our simulations this enhancement is concentrated around the protoplanet midplane due to vertical dust settling in the global disc.
Based on Figure \[fig:grain\_stats\], we choose to continue this study using only 10 cm grains. This choice is numerically convenient, as simulating the slowly sedimenting 1 mm grains requires very long integration times, which makes it impractical for 3D simulations. From the top panel we see that the total accretion rate onto the protoplanet is comparable for 1 mm and 10 cm grains, so our choice of grain size does not affect the mass budget of grains accreted onto the protoplanet. [@Nayakshin18] showed that grain growth timescales inside the protoplanet can be very short (as little as $10^2$ years); we therefore assume that accreted grains rapidly grow to large sizes inside the protoplanet. Whilst a more thorough model of grain growth would provide a more reliable result, we are satisfied that this choice strikes a balance between numerical convenience and reasonable physical assumptions. One-dimensional analysis of core growth for smaller grains can be found in [@Nayakshin16a].
Key result: protoplanet disruption {#sec:results}
==================================
| Name                      | $t_{\rm acc}$ \[years\] | $Z_{\rm 0}$ | $N_{\rm disc}$ \[$10^6$\] | $N_{\rm PP}$ \[$10^6$\] |
|---------------------------|-------------------------|-------------|---------------------------|-------------------------|
| No Feedback               | -                       | 0.01        | 2                         | 0.1                     |
| $t_{\rm acc} = 10^3$ Yrs  | 1000                    | 0.01        | 2                         | 0.1                     |
| $t_{\rm acc} = 10^4$ Yrs  | 10,000                  | 0.01        | 2                         | 0.1                     |
| $Z_0$ = 0.3%              | 1000                    | 0.003       | 2                         | 0.1                     |
| $Z_0$ = 3%                | 1000                    | 0.03        | 2                         | 0.1                     |
| High res.                 | 1000                    | 0.01        | 16                        | 0.8                     |
: Summary of run parameters for the models presented in Figure \[fig:feedback\]. $N_{\rm disc}$ and $N_{\rm PP}$ represent the total number of SPH particles in the initial disc and protoplanet, respectively; there is typically one dust particle for every three SPH particles.[]{data-label="table:fb_params"}
We now extend the simulations from the previous section in order to explore the effects of core feedback from the release of gravitational potential energy, using the prescriptions detailed in Section \[sec:core\_modelling\]. We find that in several cases this feedback is able to completely unbind young GI protoplanets. See Table \[table:fb\_params\] for a summary of the various models.
Feedback from dust protocores {#sec:fb}
-----------------------------
![The effect of including feedback from solid cores built through dust accretion. Top panel: Orbital separation of each core; cores are released on eccentric orbits after protoplanet disruption. Upper middle panel: Gas mass inside the half Hill sphere of each protoplanet. Middle panel: sum of thermal and gravitational potential energy of gas within the half Hill sphere. Lower middle panel: core luminosity; for this protoplanet the core must reach almost $10^{-3} L_{\odot}$ to cause a disruption. Bottom panel: the solid lines plot the total metal mass inside the half Hill sphere. Dashed lines plot only the core mass.[]{data-label="fig:feedback"}](F/0812_fb_comp.pdf){width="1.0\columnwidth"}
The top panel of Figure \[fig:feedback\] charts the orbital distance between the star and the dust-only sink particle for several simulations with a variety of initial dust-to-gas ratios and feedback timescales. In all cases, the planet migrates inwards rapidly in the type I regime as expected from previous simulations [@BaruteauEtal11; @Nayakshin17a] and begins to open a gap since it is relatively massive [@MalikEtal15; @FletcherEtal19]. The upper middle panel shows the mass inside the half Hill sphere of the protoplanet. The planet mass increases from 5 to 10 ${{\,{\rm M}_{\rm J}}}$ due to gas accretion, unless a disruption event triggered by core feedback takes place. Notice that the final stages of disruption are always rapid, as expected from analytical work on the Roche lobe overflow of polytropic spheres with adiabatic index $\gamma = 7/5$ [@NayakshinLodato12].
The middle panel shows the sum of the thermal and gravitational potential energy for gas in the protoplanet. In the no-feedback case (cyan line), the protoplanet contracts due to external gas pressure from the disc and becomes increasingly bound. When feedback is included, the internal energy of the protoplanet is modified considerably. The lower middle panel in Figure \[fig:feedback\] plots the core luminosity as described in Section \[sec:core\_modelling\]. If the core becomes sufficiently luminous then it will begin to increase the internal energy of its parent protoplanet; this leads to a disruption event once the total energy becomes positive.
The solid lines in the bottom panel of Figure \[fig:feedback\] show the total metal mass inside the protoplanet whilst the dashed lines show only the mass of the dust sink. With no feedback, the sink accretes all of the available 10 cm pebbles within 1500 years. The resulting core is 50 $M_{\oplus}$, around 16% of the initial metal mass in our simulation. The interplay between the accretion timescale and the available metal mass sets the final core mass at disruption. With a short feedback timescale of $10^3$ years we find a final core mass of 15 $M_{\oplus}$. However, if the feedback time is long, the core is able to grow to larger sizes before the feedback energy disrupts the protoplanet. For the fiducial parameters chosen in this paper ($5 M_J$ protoplanet at $50$ AU, 10 cm grains and a pebble-to-gas ratio of 1%) we see disruption of our protoplanets when we set the accretion feedback timescale to $10^3$ and $10^4$ years. We also see a very rapid disruption when we set the initial pebble-to-gas ratio to $Z=$ 3%. Setting $Z=$ 0.3% suppresses the total available pebble mass, preventing a disruption event. This shows crudely that disruption events are less likely in low metallicity environments. After disruption, accreted grains not incorporated into the core are redistributed to the disc. This restocks the reservoir of large grains for pebble accretion onto subsequent protoplanets. In these simulations, disruption events release between ten and forty Earth masses of metals back into the disc.
Our setup is conservative in that we assume a massive and initially metal-free protoplanet in order to reduce uncertainty in the initial conditions; in reality we expect a factor of two initial metal enhancement of the protoplanet [@BoleyDurisen10]. The fact that disruption occurs in this conservative case demonstrates that core growth feedback provides a powerful mechanism for destroying GI protoplanets[^4]. On the other hand, our large grains and short feedback timescales (choices made for numerical convenience) help to speed up the disruption. We address these assumptions further in Section \[sec:discussion\].
Inside the protoplanets
-----------------------
![Gas density inside the $t_{\rm{acc}}$=1000 years protoplanet in the x-y plane; the sink particle is marked with a black dot. Note the weak rotational profile.[]{data-label="fig:zoom_rho_xy"}](F/0213_rho_xy.pdf){width="0.99\columnwidth"}
![Gas density inside the $t_{\rm{acc}}$=1000 years protoplanet in the x-z plane; the sink particle is marked with a black dot. The velocity field shows the outflow of hot gas through the midplane of the protoplanet.[]{data-label="fig:zoom_rho_xz"}](F/0213_rho_xz.pdf){width="0.99\columnwidth"}
![Gas temperature inside the $t_{\rm{acc}}$=1000 years protoplanet. The sink particle is marked with a black dot and is orbiting in a clockwise direction with a period $\sim$20 years. Note the high temperature trail behind it. Escaping hot gas can be seen in an arc in the top half of the plot.[]{data-label="fig:zoom_T"}](F/0131_temp_zoom.pdf){width="0.99\columnwidth"}
Figures \[fig:zoom\_rho\_xy\]-\[fig:zoom\_T\] show different views of the protoplanet from the $t_{acc}=1000$ years simulation in Figure \[fig:feedback\] at 681 years. Figures \[fig:zoom\_rho\_xy\] and \[fig:zoom\_rho\_xz\] show density slices from this protoplanet in the x-y and x-z planes. We can see that the density rises from $10^{-11}$ to $10^{-9}$ g cm$^{-3}$ between the outer and inner regions. The overplotted black arrows in each figure show the velocity field. Figure \[fig:zoom\_rho\_xz\] shows that low density gas from the core heated via feedback is escaping through the midplane. Figure \[fig:zoom\_T\] shows a zoom view of internal temperature, centred on the maximum central density of the protoplanet. Including core feedback heats gas in the centre of the protoplanet: once this heating becomes slightly anisotropic, hot gas begins to escape preferentially through low density channels and bubbles up to the surface. This process forms a low density gas trail behind the core as seen in Figure \[fig:zoom\_T\]. In this figure the core is orbiting in a clockwise direction with a period of $\sim$ 20 years.
It is immediately obvious from these figures that the core is moving inside the protoplanet, but what is causing this? In all of our feedback simulations we see that the core gradually leaves the minimum of the gravitational potential well of the protoplanet and begins to orbit this central region.
Dynamics of the luminous core {#sec:core}
=============================
![Tracks of the core motion for the $t_{acc}$ = 1000 years simulation in the x-y and x-z planes, centred on the maximum gas density inside the protoplanet. The core starts at the centre then spirals outwards due to the heating force, but mostly remains in the x-y plane. After disruption, the core is released with a slight eccentricity.[]{data-label="fig:core_tracks"}](F/0613_lowres_core_tracks.pdf){width="0.99\columnwidth"}
In Figure \[fig:core\_tracks\] we plot tracks to show the motion of the core in the x-y and x-z planes for the $t_{acc}$ = 1000 years simulation. We see that once the core leaves the central $\sim 0.2$ AU region its orbit appears to be rather circular. We also note that the orbit of the core is not confined to the x-y plane; since our feedback scheme is homogeneous and the centre of the protoplanet is essentially spherically symmetric it is not obvious that there should be a preferential orientation. In this simulation the core reaches an offset radius of $\sim$ 1 AU with respect to the centre of the protoplanet before the disruption event happens at $\sim$ 1700 years.
Similar, seemingly bizarre, ‘wandering’ behaviour for massive luminous objects has recently been found in different branches of astrophysics. In their simulations of merging super-massive black holes (SMBHs) displaced from the centre of a young gas-rich galaxy by gravitational kicks, [@SijackiEtal10] found that luminous black holes were ejected further than expected given their kick and did not come back to the galaxy centre afterwards. Black holes of zero luminosity, on the other hand, returned to the galaxy centre in agreement with classical dynamical friction theory. [@SijackiEtal10] identified the asymmetry in the gas distribution around the SMBH as the driver of this unexpected behaviour. In Chandrasekhar dynamical friction, there is a higher density trail [*behind*]{} the massive perturber. In the case of a very luminous object, a high temperature, low density halo inflated by the perturber is blown off by the higher density headwind, and thus the trail behind the perturber is a low density one. Therefore, the direction of the gravitational torque on the perturber changes sign and acts to accelerate it. [@ParkBogdanovic17] found a similar effect in simulations of a SMBH producing ionizing feedback on the background neutral medium.
[@MassetVelascoRomero17] performed a detailed analytic study of this [*heating force*]{} exerted on a massive object due to a hot tail of gas in an otherwise homogeneous gaseous medium. In particular, they equate the dynamical friction force ($\vec{F}_{df}$) with the heating force ($\vec{F}_{heat}$) to find an equilibrium speed $V_0$ at which the object moves through the medium. In the sub-sonic limit, $$V_0 = \dfrac{3 \gamma (\gamma-1) L_{\rm c} c_s}{8 \pi \rho_0 G M_{\rm c} \chi},
\label{eq:v_MVR17}$$ where $\gamma$ is the adiabatic index of the gas, $L_{\rm c}$ and $M_{\rm c}$ are the luminosity and mass of the perturbing core and $\chi$ is the thermal diffusivity of the gas, which they assumed to be constant. They noted that the effects of the heating force are expected to be significant for planet formation applications, via, e.g., the Earth developing a non-negligible eccentricity and inclination with respect to the protoplanetary disc for realistic disc parameters. These conclusions and the analytic result (Equation \[eq:v\_MVR17\]) were confirmed with numerical simulations by [@VelascoRomeroMasset19], [@ChrenkoLambrechts19] and [@GuileraEtal19].
In application to our particular problem, we note that the density in the central part of the protoplanet is initially homogeneous with a nearly constant $\rho_0 \approx 10^{-9}$ g cm$^{-3}$. Hence, the constant background density results of [@MassetVelascoRomero17] are applicable. However, our protoplanet is finite in extent. When the core is displaced by a distance $R$ from the centre, there is a returning force of protoplanet gravity, given by $$F_{\rm g} = - \frac{G M(R) M_{\rm c}}{R^2}\;,
\label{eq:Fg0}$$ where $M(R) \approx (4\pi/3) \rho_0 R^3$ is the enclosed mass within radius $R$.
The motion and radius of the orbiting core are then set by a balance of forces. In the azimuthal direction, the heating and dynamical friction forces specify the core velocity through Equation \[eq:v\_MVR17\]. Given this velocity ($V_0$), the radial position of the core ($R$) is simply set by the gravity of the enclosed protoplanet gas, which provides the centripetal force. We see that $V_0$ is independent of the orbit radius $R$ whereas the gravitational force is proportional to $R$, as long as the enclosed gas density is roughly constant. The balance of these therefore establishes an equilibrium radius of the orbit for a core of mass $M_{\rm c}$ and luminosity $L_{\rm c}$:
$$R_0 = \left(\frac{3}{4\pi G\rho_0}\right)^{3/2} \frac{\gamma (\gamma-1) c_s}{2\chi} \frac{L_{\rm c}}{M_{\rm c}}\;.
\label{eq:R0}$$
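One way to see this explicitly: for a circular orbit, the gravity of the enclosed gas supplies the centripetal acceleration, $$\frac{V_0^2}{R_0} = \frac{G M(R_0)}{R_0^2} = \frac{4\pi}{3} G \rho_0 R_0
\quad\Longrightarrow\quad
R_0 = \left(\frac{3}{4\pi G \rho_0}\right)^{1/2} V_0,$$ and substituting Equation \[eq:v\_MVR17\] for $V_0$ recovers Equation \[eq:R0\].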
![Core diagnostics for the high res. $t_{\rm{acc}}$=1000 years protoplanet. Top: core offset from protoplanet centre (solid) and central SPH smoothing length (dashed). Top middle: core mass and mass enclosed between the offset core and the protoplanet centre. Bottom middle: core luminosity due to pebble accretion driven growth. Bottom: velocity of the core (solid), Keplerian velocity based on enclosed mass (dashed), predicted core velocity based on thermal acceleration [dotted, @MassetVelascoRomero17] and central protoplanet sound speed (yellow).[]{data-label="fig:fb_vel"}](F/0507_core_vs.pdf){width="1.0\columnwidth"}
The [@MassetVelascoRomero17] problem setup assumes that there is a heat flow through the gas via thermal diffusion with a diffusivity coefficient $\chi$. However, our current models do not include thermal diffusion. In our simulations the heat is instead transferred from the core to the surrounding gas using the prescription described in Section \[sec:methods\] above. The diffusivity of the heat flow from the core into the surrounding gas can then be estimated through a simple dimensional analysis as $$\chi = h c_{\rm s}\;,
\label{chi0}$$ where $h$ is the SPH smoothing length inside the protoplanet. We emphasize that this estimate reflects the heat flow in our numerical feedback implementation. In Appendix \[App:fb\_NN\] we demonstrate the dependence of our results on numerical resolution, which affects $h$, and we also discuss a more physical estimate for $\chi$.
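As an order-of-magnitude illustration of Equations \[eq:v\_MVR17\], \[eq:R0\] and \[chi0\], the snippet below evaluates the predicted equilibrium speed and orbital radius of the core. The inputs (central density, sound speed, smoothing length, core mass and luminosity) are nominal numbers of the order quoted in the text and figures, not the exact simulation values.

```python
import numpy as np

# Nominal inputs (assumed; of the order quoted in the text and figures)
G = 6.674e-8          # [cgs]
AU = 1.496e13         # cm
L_SUN = 3.828e33      # erg/s
M_EARTH = 5.972e27    # g

gamma = 7.0 / 5.0     # adiabatic index
rho0 = 1.0e-9         # central gas density [g/cm^3]
c_s = 1.0e5           # central sound speed [cm/s]
h = 0.05 * AU         # SPH smoothing length inside the protoplanet
L_c = 1.0e-3 * L_SUN  # core luminosity
M_c = 10.0 * M_EARTH  # core mass

chi = h * c_s                                                                  # Eq. [chi0]
V0 = 3 * gamma * (gamma - 1) * L_c * c_s / (8 * np.pi * rho0 * G * M_c * chi)  # Eq. [eq:v_MVR17]
R0 = (3 / (4 * np.pi * G * rho0))**1.5 * gamma * (gamma - 1) * c_s * L_c / (2 * chi * M_c)  # Eq. [eq:R0]

print(f"chi ~ {chi:.1e} cm^2/s, V0 ~ {V0 / 1e5:.2f} km/s, R0 ~ {R0 / AU:.2f} AU")
# Consistency check: R0 should equal V0 / sqrt(4 pi G rho0 / 3)
assert np.isclose(R0, V0 / np.sqrt(4 * np.pi * G * rho0 / 3))
```

For these assumed inputs the core is predicted to settle on an orbit of a few tenths of an AU at a speed approaching the sound speed, of the same order as the offsets and velocities seen in the simulations.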
With this assumption about $\chi$, Figure \[fig:fb\_vel\] compares the motion of the core with the [@VelascoRomeroMasset19] theory. The top panel shows the offset radius of the core relative to the maximum gas density inside the protoplanet. The dashed line plots the smoothing length, which marks the SPH resolution limit inside the protoplanet. We see that over time feedback causes the core to drift further and further from the central density; the core has clearly left the central 0.1 AU of the protoplanet only a few hundred years after the onset of feedback. The second panel shows the mass of the core (solid) and the mass of enclosed gas (dashed) within the offset radius. We see that the enclosed mass dominates the core mass once the core offset grows above 0.1 AU. The third panel shows the luminosity of the core calculated from Equation \[eq:E\_fb\] (also seen in Figure \[fig:feedback\]); we see that this is $\sim 10^{-3} L_{\odot}$ once the core reaches $\sim 10 M_{\oplus}$. The yellow dashed line also plots the luminosity of the protoplanet, which is computed by summing the rate of energy loss for each SPH particle inside the protoplanet using Equation \[eq:beta0\]. Despite our simplified cooling scheme, this luminosity agrees very well with the isolated protoplanet calculations of [@VazanHelled12], who calculated that a 7 $M_J$ protoplanet has a luminosity of $\sim 2 \times 10^{-4} L_{\odot}$ for its first ten thousand years of life.
The bottom panel of Figure \[fig:fb\_vel\] is the most useful for understanding this process. We plot the velocity of the core (solid), the predicted Keplerian velocity for the core based on an orbit around the enclosed mass (dashed), the predicted [@MassetVelascoRomero17] velocity from Equation \[eq:v\_MVR17\] (dotted) and the sound speed in the gas (yellow). We see that the velocity of the core is very close to the Keplerian velocity, demonstrating that once it has moved beyond 0.1 AU it is orbiting the central gas mass of the protoplanet. We also see that the velocity of the core seems to be limited by the sound speed inside the protoplanet. We do not expect the analytic theory to apply if (a) the core offset $R_0$ is smaller than $h$, or (b) the enclosed mass within $R_0$ is smaller than $M_{\rm c}$. Finally, the agreement between the theory and the simulations breaks down as $V_0$ approaches the sound speed. This could be because the core orbit starts to close in on itself: the hot tail is not completely dispersed by the time the core makes one full orbit. The assumption of constant background density also breaks down away from the centre of the protoplanet.
A second core and smaller debris formation {#sec:2nd_core}
------------------------------------------
![Dust density slice from the 2x res run, centred on the maximum gas density. The core is marked by the pink dot and is orbiting at 0.4 AU. Dust continues to collect at the protoplanet centre, eventually causing the simulation to stall. There is approximately an Earth mass of solids in the central 0.04 AU, potentially sufficient to start a second core. This feature can also be seen in the lower resolution runs.[]{data-label="fig:zoom_dust"}](F/0131_dust_hires.pdf){width="0.99\columnwidth"}
![Interior profiles for the high resolution protoplanet at 1232 years. The orange line indicates the radial position of the perturbed core at this time. The top panel shows the internal temperature; since the gas is adiabatic, the heating from the orbiting core does not reach the central regions. Panel two shows the enclosed dust and gas masses; 1.8 $M_{\oplus}$ of dust particles have collected at the centre of the protoplanet. Panel three shows the enhancement of the central dust density due to sedimentation. Panel four shows that the innermost dust-to-gas ratio reaches a value of over 200, which has implications for secondary core formation. The bottom panel shows the SPH smoothing length. Note that the <span style="font-variant:small-caps;">gadget-3</span> smoothing length is defined to be twice as large as that commonly used in other SPH codes.[]{data-label="fig:zoom_profile"}](F/0617_hires_interior.pdf){width="0.99\columnwidth"}
Once the core is driven away from the centre of the protoplanet, the rate at which it accretes pebbles drops significantly. Meanwhile, sedimenting pebbles continue to be focused to the centre of the protoplanet and collect there. Figure \[fig:zoom\_dust\] shows a plot of the dust surface density in the centre of the protoplanet from Figure \[fig:fb\_vel\]; the pink dot shows the location of the orbiting core. The core is orbiting at 0.3 AU from the centre of the protoplanet, which has allowed a large mass of dust to collect in the central 0.1 AU.
Figure \[fig:zoom\_profile\] displays information about the central regions of our high resolution protoplanet at the same time as Figure \[fig:zoom\_dust\], centred on the maximum dust density inside the protoplanet. The top panel shows the shell-averaged gas temperature. Since the cooling time in the centre of the clump is very long, the central gas is adiabatic and thermal energy escapes in bubbles (as seen in Figure \[fig:zoom\_T\]). This leaves a slightly cooler zone of gas inside the orbit of the core. In addition, the heating due to the core energy release is now also offset, meaning that the peak temperature in the clump is no longer at its centre. Of course, gas in the vicinity of the core may be much hotter than the shell-averaged value plotted in Figure \[fig:zoom\_profile\]. In particular, Figure \[fig:zoom\_T\] shows that gas within 0.05 AU of the core reaches temperatures of 1200 K.
Panels two, three and four of Figure \[fig:zoom\_profile\] plot the enclosed gas and dust masses, the shell-averaged gas and dust densities, and the dust-to-gas ratio. They show that dust collects on scales below the SPH smoothing length in the centre of the protoplanet. A total of 1.8 $M_{\oplus}$ of dust has collected at the centre of the protoplanet below the smoothing-length resolution, which corresponds to a dust-to-gas ratio of over 200. In broader terms, the dust-to-gas ratio rises above one inside the inner 0.1 AU of the protoplanet. Panel five shows the shell-averaged SPH smoothing length[^5]. It reaches a value just over 0.05 AU in the central regions, which is still almost three orders of magnitude greater than the radius of the Earth. It remains very challenging to resolve core formation directly in 3D simulations of GI planet formation.
In Figure \[fig:zoom\_dust\] we also see numerous additional dust clumps of much smaller mass formed in this simulation. Two of the largest of these are also seen as small spikes in the dust density profile in Figure \[fig:zoom\_profile\] at 0.4 and 0.5 AU away from the central dust concentration. It is tempting to interpret this as evidence for formation of planetesimal-like bodies within gas clumps in the framework of the GI theory, as proposed by [@NayakshinCha12], but we caution that the presence of these clumps and their properties depend on our numerical resolution parameters. Future high resolution 3D modelling will be necessary to correctly capture core and potential planetesimal formation inside GI protoplanets.
Discussion {#sec:discussion}
==========
Overview of core driven protoplanet disruption
----------------------------------------------
Our main results can be summarised as follows. Pebble accretion transports grains from the disc into the outer regions of a protoplanet. If the grains grow large enough (in our simulations $a=$10 cm) they sediment rapidly into the protoplanet and start to form a solid core. Now the outcomes diverge. If pebble accretion rates are high, the core is able to grow quickly and may be able to disrupt the protoplanet by over-heating and unbinding its gaseous envelope. Alternatively, when pebble supply is subdued, or grain growth is too slow, core growth is not vigorous enough to disrupt the protoplanet from within. What happens to the planet then does not depend on the core properties but depends on the competing effects of planetary migration versus planet contraction due to radiative cooling. The planet could be disrupted with only a small core surviving [@BoleyEtal10] or it may collapse and survive as a proper GI gas giant with a small core [@HelledEtal08]. Similar conclusions were already reached in [@Nayakshin16a]; however in this study we have modelled the disruption of GI protoplanets by the core feedback for the first time in 3D SPH simulations.
Protoplanet disruption provides a promising mechanism for forming $\sim 10 {{\,{\rm M}_{\oplus}}}$ cores and perhaps Saturn-mass planets (if gas envelope removal is only partial) at wide-orbits within the first $10^5$ years of disc evolution. Such objects are now invoked for explaining dozens of gaps and rings observed in dust emission of young discs [@Alma2015; @AndrewsEtal16; @LongEtal18; @DSHARP1; @DSHARP7]. We note that for our scenario to work, we require a reservoir of pebbles with a total mass of $\sim$ a hundred $M_{\oplus}$. This condition is satisfied by many of the observed ringed disc systems, although ALMA surveys are most sensitive to emission from $\sim 1$ mm sized dust, which is smaller than the $a=10$ cm used here. However, the discs with annular structures are older than our $t\approx 0$ disc, i.e., typically in the range of $1-10$ Myr. It is possible that these discs contained even more dust when they were younger, and that their first million years of life is sufficiently long to allow mm-sized grains to grow and sediment to form cores inside protoplanets on longer timescales than studied in this work [@HS08; @Nayakshin10a; @Nayakshin10b].
In contrast, forming wide-orbit super-Earth mass cores at such young ages is very challenging for the planetesimal-based Core Accretion theory [e.g., @KL99]. While pebble accretion may work much faster [@OrmelKlahr10; @LambrechtsEtal14], more recent work emphasised the inefficiency of locking pebbles into planets [@OrmelLiu18; @LinEtal18].
Our results are particularly relevant given recent developments in the field. Meteorite population arguments suggest that Jupiter had already reached a mass of $20 M_{\oplus}$ within the first million years of the lifetime of our solar system [@KruijerEtal17]. Additionally, disc masses observed at one million years seem too low to explain observed planet masses, providing further evidence for rapid core formation that ‘hides’ additional metal mass within the first million years of the disc lifetime [@ManaraEtal18].
Connection to Hall et al. (2017)
--------------------------------
![Comparison to data from [@HallEtal17]. Top: internal energy of the [@HallEtal17] protoplanets against time. Middle: purple, orange and cyan lines mark core luminosity against time for different rates of pebble accretion driven core growth. The intersection of these lines with the yellow lines marks the point at which core luminosity dominates over the rate of internal energy decrease for 20, 40, 60 and 80 percent of the [@HallEtal17] protoplanets. Bottom: histogram of final core masses in cases where the energy release during core formation was greater than the total energy of the protoplanet after only 1500 years.[]{data-label="fig:Cass"}](F/0508_cass_dUdt.pdf "fig:"){width="0.99\columnwidth"} ![Comparison to data from [@HallEtal17]. Top: internal energy of the [@HallEtal17] protoplanets against time. Middle: purple, orange and cyan lines mark core luminosity against time for different rates of pebble accretion driven core growth. The intersection of these lines with the yellow lines marks the point at which core luminosity dominates over the rate of internal energy decrease for 20, 40, 60 and 80 percent of the [@HallEtal17] protoplanets. Bottom: histogram of final core masses in cases where the energy release during core formation was greater than the total energy of the protoplanet after only 1500 years.[]{data-label="fig:Cass"}](F/0508_cass_Mcore.pdf "fig:"){width="0.99\columnwidth"}
It is important to examine how our conclusions relate to the wider field of gravitational instability protoplanet formation. In order to do this we examine the internal energies of GI protoplanets from [@HallEtal17 hereafter H17] and compare them to our models. In contrast to our idealised radiative cooling prescription, the simulations of H17 use a more sophisticated approach to allow self-gravitating SPH discs to self-consistently fragment and form protoplanets. These protoplanets then evolve and contract until the simulation timestep in their cores becomes prohibitively small.
Starting from the output of their simulations, we calculate the internal energies of these protoplanets and plot them in the top panel of Figure \[fig:Cass\]. These show a broad spread in values but are generally comparable to the typical protoplanet binding energy of $\sim$ 10$^{41}$ erg found in our simulations. In order to estimate whether core disruption may be a significant process for the H17 protoplanets, we calculate the average rate of change of their internal energies over time. We then take the analytic luminosity due to core formation used in Section \[sec:analytic\_fb\] and compare these two quantities. When they are equal, the collapse of the protoplanet will be first stalled and then reversed.
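A minimal sketch of this bookkeeping is given below. It assumes the H17 internal energies are available as time series; the array names and numbers used in the demo are made-up placeholders, and in the actual comparison the core luminosity comes from the analytic expression of Section \[sec:analytic\_fb\].

```python
import numpy as np

def first_crossing_time(t, U, L_core):
    """First time at which the core luminosity exceeds the rate of decrease
    of the protoplanet internal energy (None if it never does).
    t, U and L_core must be sampled at the same times with consistent units."""
    dUdt = np.gradient(U, t)                  # average rate of change of U(t)
    exceeded = L_core > np.abs(dUdt)          # feedback beats contraction
    return t[np.argmax(exceeded)] if exceeded.any() else None

# Purely synthetic demo; in practice U(t) comes from the H17 protoplanet
# outputs and L_core(t) from the analytic core-formation luminosity.
t = np.linspace(0.0, 3000.0, 301)             # time [yr]
U = -1.0e41 * (1.0 + t / 3000.0)              # a contracting clump [erg] (made up)
L_core = 4.0e34 * t                           # ramping core luminosity [erg/yr] (made up)
print(first_crossing_time(t, U, L_core))      # ~8e2 yr for these made-up inputs
```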
The middle panel of Figure \[fig:Cass\] plots the core luminosity against time for three rates of core growth comparable to those we found in our 3D simulations (see Figure \[fig:feedback\]). The intersection of the diagonal and horizontal yellow lines marks the time at which the core luminosity balances the rate of decrease in internal energy for 20, 40, 60 and 80 percent of the H17 protoplanets. Even under the smallest core growth rate considered, e.g., 4 $M_{\oplus}$ per 1000 years, core luminosity becomes dominant for 40 percent of the H17 protoplanets after 1000 years[^6]. This percentage rises to 80 if core growth is very rapid. These results indicate that if pebble accretion driven core growth is vigorous, it may be able to destroy many of these otherwise bound protoplanets within their first few thousand years of life.
In the bottom panel of Figure \[fig:Cass\] we plot a histogram of core masses that are able to unbind H17 protoplanets in less than 1500 years. Using Equation \[eq:feedback\] we compared the total energy of the protoplanet with the total energy released due to core formation for the three different rates of core growth. The core masses are distributed between 1 and 25 $M_{\oplus}$, as expected from the analysis in Section \[sec:analytic\_fb\]; the sub-$10 M_{\oplus}$ cores form because many of the H17 protoplanets initially have lower binding energies than our 5 $M_J$ protoplanet. It is likely that in a self-consistent calculation more of the H17 protoplanets would be at risk of disruption since core feedback slows the rate of protoplanet collapse and therefore allows more time for the growth of larger cores. Further analysis of this process will require self-consistent simulations of protoplanet evolution coupled with core feedback models; we direct the reader to [@Nayakshin16a] and (Nayakshin submitted) for 1D examples of this.
These results have major implications for the final predicted population of GI protoplanets, provided massive rocky cores form before the protoplanets undergo H$_2$ dissociative collapse. The release of energy has the potential to destroy many GI protoplanets, leaving behind massive solid cores in their place. Further studies are needed to quantify the subsequent fate of these planets.
Super-Earth cores and other by-products of protoplanet disruption
-----------------------------------------------------------------
For the range of parameters considered in this paper, it seems that the critical core mass needed to trigger protoplanet disruption is roughly 5-30 $M_{\oplus}$. This is supported by the analytic estimates of core feedback in Section \[sec:analytic\_fb\]. There are two mechanisms that limit core growth. First, if the feedback luminosity is high, protoplanet disruption happens early and pebble accretion onto the core ends. In this case the subsequent core growth will be limited by the pebble isolation mass of $\sim 10-20{{\,{\rm M}_{\oplus}}}$, studied extensively in the CA field[^7] [@MorbidelliNesvorny12; @LambrechtsEtal14; @BitschEtal18]. Second, the total available mass of pebbles in the disc can limit core growth. This can be seen in Figure \[fig:feedback\] for the $Z_0=0.3$% and the no feedback runs. If there is no disruption, the protoplanet should carve a deep and wide gap in the global pebble distribution, which should be observable within the first $10^5$ years. We have sampled only the tip of the parameter space of this process; relative disruption rates will vary depending on the initial size and spatial distribution of pebbles, the mass and orbital separation of the protoplanets, and their interior density and temperature profiles.
In our simulations protoplanets are disrupted very quickly, e.g., at $t\ll 10^4$ yrs. It must be stressed that this result is strongly dependent on the size of pebbles and the internal structure of the protoplanet. In our simulations we used very large pebbles, $a=10$ cm, motivating this by the fact that grain growth inside the clump can be very rapid [e.g., @HelledEtal08; @Nayakshin16a]. However, detailed models [e.g., Figures 5-8 in @HB10] of planet evolution show that grain growth and the sedimentation process couple with convective cooling of the planet in very non-linear ways.
Furthermore, [@BrouwersEtal18] found for CA that once cores grow above 0.5 $M_{\oplus}$, pebbles sublimate before reaching the core surface and instead form a metal rich atmosphere. Based on this work, the rapid formation of a high density core may be an extreme assumption. However, our rates of pebble accretion are orders of magnitude higher than those considered in [@BrouwersEtal18] and it is unclear how this might affect their conclusions. Pebble sublimation would reduce the magnitude of gravitational potential energy released in our simulations, though calculation of the exact reduction will require an update to the [@BrouwersEtal18] models for GI cores. Additionally, the equation of state in the central regions of the protoplanet needs to be modified to include dust latent heat and other chemical processes ignored in our paper [@PodolakEtal88; @BrouwersEtal18]. It is possible that the end product of a disruption event could be a partially disrupted object that still holds on to some of its primordial gas envelope. Better models resolving the region immediately surrounding the core are needed in order to properly constrain the end products of the core disruption process. In light of this, the final masses of our cores remain model dependent.
As stated in the opening of this paper, we have assumed core formation in order to explore an extreme scenario for this problem. A less concentrated core would reduce the total available feedback energy, but would not necessarily prevent core feedback from disrupting less massive, initially metal enriched or more extended GI protoplanets. We have chosen to examine the extreme case of a rapid disruption with large pebbles and a short accretion time, in part due to the prohibitive computational cost of running our simulations for more than a few $10^3$ years. However, since most ALMA observations of protoplanetary discs are made at $\sim 10^6$ years, the disruption process could be slower and still explain the presence of ‘young’ cores. Such a delay could be caused by less efficient cooling and contraction due to the high dust opacity in metal-enriched protoplanets, coupled with a less extreme rate of core growth. In fact, [@Nayakshin16a] studied similar scenarios for smaller grain sizes using 1D radiative transfer simulations and did indeed find that disruption events took place at later times. Those 1D simulations will be explored further in Nayakshin (submitted).
We also found that feedback causes these proto-cores to wander away from the protoplanet centre, allowing many Earth masses of dust to collect there in their absence. The formation of these ‘secondary cores’ suggests that the by-products of core driven disruption events may be a rich collection of debris, in addition to the primary rocky core. We leave the study of this debris to future work due to the resolution limitations of our simulations.
Finally, the orbits of the super-Earth cores released back into the disc in our simulations are initially mildly eccentric and may be inclined by a few degrees to the disc midplane. This is caused by the chaotic nature of the feedback inside their spherically symmetric parent protoplanets; the orbit of the core within the protoplanet need not be confined to the plane of the global disc. It is questionable whether these initial deviations from a circular orbit would survive the dissipation of the disc since [@TanakaWard04] showed that the eccentricity and inclination damping timescales are much shorter than the migration timescales for low mass planets.
Conclusions {#sec:conclusions}
===========
Our 3D simulations support suggestions made on the basis of an earlier 1D study [@Nayakshin16a] that GI protoplanets can be disrupted by core driven feedback, thus reducing their mass at tens of AU. The mechanism is general enough to work at yet wider separations, and may be promising for forming, broadly speaking, Saturn mass and super-Earth ALMA planetary candidates.
Since feedback causes our rocky cores to drift from the protoplanet centre, we see tantalising hints that these disruption events could also lead to a rich array of secondary cores and debris. This may have important implications for the formation of giant planets with their own satellite systems. Furthermore, the disruption of protoplanets with a rich retinue of smaller solid bodies may be an alternative way of forming debris discs and possibly the Kuiper belt in the Solar System [@NayakshinCha12], but confirmation of this will require further and higher resolution modelling of dust and gas dynamics inside young protoplanets.
Although we did not focus on this topic specifically, we note that core-initiated disruptions of GI protoplanets may also help explain why they are so rarely observed at wide separations. Many argue that GI rarely occurs since the fraction of wide-orbit companions above 2 $M_J$ is now thought to be only $\sim$ 1% [@ViganEtal17]. However, GI protoplanet formation can be much more prevalent if a large fraction of GI protoplanets are disrupted from within. Coupled with work demonstrating rapid protoplanet migration [@BaruteauEtal11; @MullerEtal18], it seems that the outcome of GI is not limited to massive wide-orbit planets. In fact, these objects may well be in the minority.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank the anonymous referee for their helpful comments and to thank Cass Hall for the output of simulations from [@HallEtal17] which we used in Section \[sec:discussion\]. JH and SN acknowledge support from STFC grants ST/N504117/1 and ST/N000757/1, as well as the STFC DiRAC HPC Facility (grant ST/H00856X/1 and ST/K000373/1). DiRAC is part of the National E-Infrastructure.
Varying the feedback injection lengthscale {#App:fb_NN}
==========================================
![Same as Figure \[fig:fb\_vel\], but now for different feedback injection length scales. Whilst the feedback scheme changes the amount by which the core is offset from the centre of the protoplanet, the final disruption time remains comparable in each case. Final disruption is set by the total energy injected by the core regardless of its offset.[]{data-label="fig:fb_NN"}](F/0319_core_NN.pdf){width="1.0\columnwidth"}
![A visualisation of the gas smoothing length inside the protoplanet from Section \[sec:results\]. There is about a 10% difference in the smoothing length inside and outside the hot trail. The trail width is approximately set by the smoothing length, indicating that the core motion is dependent on our numerical resolution.[]{data-label="fig:zoom_h"}](F/0220_N2e6_h.pdf){width="0.99\columnwidth"}
In Section \[sec:core\_modelling\] we showed that our cores were accelerated away from the protoplanet centre by a low density, hot gas tail. We also showed that the magnitude of this acceleration was limited by the feedback injection lengthscale in our simulations. In order to demonstrate this conclusively, in this Appendix section we vary the injection lengthscale and study the effect on the core velocity.
Figure \[fig:fb\_NN\] is similar to Figure \[fig:fb\_vel\], but now we vary the number of nearest neighbours ($NN_{FB}$) that the feedback is passed to. The fiducial value in the paper was 160. Due to the three dimensional nature of the problem, the feedback injection lengthscale ($l_{FB}$) varies as $l_{FB} \propto NN_{FB}^{1/3}$. We see from the blue and yellow lines that when $l_{FB}$ is shorter, the core velocity is greater and the core reaches a larger offset inside the protoplanet. This demonstrates qualitatively that a larger injection lengthscale leads to a weaker thermal acceleration, as expected from [@MassetVelascoRomero17].
Figure \[fig:zoom\_h\] shows the spatial SPH smoothing length inside our low resolution $t_{acc}$ = $10^3$ years protoplanet. We see that the width of the hot gas trail is comparable to the smoothing length of the gas inside it. This indicates that further study of the internal temperature structures of these protoplanets is necessary. Results from the AGN feedback community suggest that this will be a highly costly numerical investigation.
Regardless of the feedback lengthscale, we find that the ultimate protoplanet disruption time remains very similar, only deviating by $\sim$ 5 % between runs. Note that the $NN_{FB}$=40 run did not disrupt: the large offset allows dust to collect at the centre of the protoplanet, which makes the simulation timestep very short. This is the same behaviour observed for our high resolution run in Figure \[fig:zoom\_dust\].
The disruption of the protoplanet is governed by the total energy release. The similarity in final disruption time indicates that the core offset does not have a significant impact on the magnitude of energy release. Since hot gas escapes convectively from close to the core, a small thermal conductivity does not prevent the core from heating the entire protoplanet.
We stress that future work should not neglect the thermal acceleration of these luminous cores. The departure of the core away from the protoplanet centre allows for subsequent accreted dust to collect at the centre of the protoplanet and potentially form secondary cores and debris.
Turning off feedback
====================
![Experiments in which we switch off the luminosity to demonstrate that the core offset is caused by the heating force. Blue: test case with no feedback; yellow: $t_{\rm acc}=10^3$ years feedback case; orange: feedback ‘turned off’ after 1600 years. After the turn-off, the core returns to the centre of the protoplanet and can once again accrete metals.[]{data-label="fig:fb_shutdown"}](F/0812_shutdown.pdf){width="0.99\columnwidth"}
The wandering motion of our cores is very unusual. In order to reassure ourselves that this was linked to the luminous feedback, we ran a simulation in which we ‘turn off’ the feedback part way into the run. The results are shown in Figure \[fig:fb\_shutdown\]. The blue line shows a control case for accretion with no feedback; the core is able to grow rapidly since it remains near the centre of the protoplanet where dust is collecting. For reference, the resolution limit in these simulations is a few $10^{-2}$ AU. The yellow line plots the standard $t_{\rm acc}=10^3$ years feedback case; as seen already, the core rapidly leaves the centre of the protoplanet, limiting its subsequent accretion of metals after just 500 years. Finally, the orange line shows a case in which we switch off the feedback after 1600 years. Once feedback is turned off, dynamical friction on the core causes it to return to its initial position in the centre of the protoplanet, where it can once again start to accrete metals. These results show that the motion of the core is inextricably linked to our prescription for luminous feedback. Note that this data comes from an earlier simulation setup and so the values do not precisely correspond to those in the main paper.
Radiative diffusion approximation
=================================
We can, however, make a physical prediction about the importance of this effect on the core by examining how thermal conduction actually operates inside an optically thick protoplanet.
We can obtain the thermal diffusivity $\chi$ from the thermal conductivity ($k$) using the relation $\chi = k /(\rho c_p)$, where $c_p = k_B /(\mu m_P (\gamma-1))$ is the specific heat capacity of the gas. Since we expect the optical depth to be very high in the central regions of protoplanets, we can model the thermal conductivity with the radiative diffusion approximation
$$k = \dfrac{16}{3}\dfrac{ \sigma_{B}T^3}{\rho \kappa}$$
where $\sigma_B$ is the Stefan-Boltzmann constant and $\kappa=\kappa(T,\rho)$ is the opacity[^8]. In this way we obtain an estimate for the thermal diffusivity in the radiative diffusion limit in terms of $\rho$, $T$ and $\mu$. Plugging this into Equation \[eq:v\_MVR17\], we find a very high value for the resulting velocity perturbation, at least two orders of magnitude greater than the sound speed.
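The snippet below sketches this estimate. The temperature, density, mean molecular weight and opacity are nominal assumed values (a constant placeholder opacity is used instead of the [@ZhuEtal09] tables), and the core parameters are the same nominal numbers assumed earlier, so the printed ratio should be read only as an order-of-magnitude indication that is sensitive to the assumed $T$ and $\kappa$.

```python
import numpy as np

# Physical constants [cgs]
k_B, m_p, sigma_B, G = 1.381e-16, 1.673e-24, 5.670e-5, 6.674e-8
L_SUN, M_EARTH = 3.828e33, 5.972e27

# Nominal assumed values (illustrative only)
gamma, mu = 7.0 / 5.0, 2.3            # adiabatic index, mean molecular weight
T, rho, kappa = 500.0, 1.0e-9, 3.0    # K, g/cm^3, cm^2/g (placeholder opacity)
c_s, L_c, M_c = 1.0e5, 1.0e-3 * L_SUN, 10.0 * M_EARTH

c_p = k_B / (mu * m_p * (gamma - 1))                   # specific heat as defined in the text
k_rad = 16.0 / 3.0 * sigma_B * T**3 / (rho * kappa)    # radiative-diffusion conductivity
chi_rad = k_rad / (rho * c_p)                          # thermal diffusivity, chi = k / (rho c_p)

V0 = 3 * gamma * (gamma - 1) * L_c * c_s / (8 * np.pi * rho * G * M_c * chi_rad)  # Eq. [eq:v_MVR17]
print(f"chi_rad ~ {chi_rad:.1e} cm^2/s, V0/c_s ~ {V0 / c_s:.0f}")
```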
However, at this point the analysis becomes inconsistent since it does not account for the increased local gas temperature due to such a localised injection of energy. If the local gas becomes hotter than $\sim$ 1500 K then it will become optically thin and the velocity perturbation will decrease. Although this is just an estimate, it demonstrates the importance of considering the dynamic effects of core feedback in future research. This is a new paradigm in GI core formation; in previous research, GI cores have always been assumed to be fixed at the centre of the gravitational potential well of their parent protoplanet.
Protoplanet internal velocity profiles
======================================
![Azimuthal velocity field inside the $t_{\rm{acc}}$=1000 years protoplanet; the black dot marks the location of the core. The black arrows plot the radial gas motion; note the outflow of hot gas through the midplane of the protoplanet.[]{data-label="fig:zoom_vphi"}](F/0213_vphi.pdf){width="0.99\columnwidth"}
![A plot of the radial velocity profile at 0.5 AU inside the $t_{\rm{acc}}$=1000 years protoplanet in $\phi$-cos($\theta$) space. Dust is over-plotted as white points and the location of the planet is marked with a blue dot. Note the large contrast between mid-plane outflows driven by core feedback and inflowing gas in the polar regions.[]{data-label="fig:zoom_vr"}](F/0131_vz_hammer.pdf){width="0.99\columnwidth"}
In this Appendix we present some additional figures from the $t_{acc}$ = 1000 years simulation to show the velocity profile of the gas inside our protoplanets. They show the system at 681 years, the same time as Figures \[fig:zoom\_rho\_xy\]-\[fig:zoom\_T\].
Figure \[fig:zoom\_vphi\] shows the azimuthal velocity profile inside this protoplanet; there is some differential rotation inside the structure, but the rotational velocities are much lower than the sound speed, which is $\sim 10^5$ cm s$^{-1}$.
Figure \[fig:zoom\_vr\] shows the radial velocity profile in $\phi$-cos($\theta$) space at 0.5 AU from the protoplanet centre at the same time as Figures \[fig:zoom\_rho\_xy\]-\[fig:zoom\_T\]. The location of the core is marked by a black dot in the centre of the plot. It is offset from the centre by $R=0.18$ AU, and its nearly circular orbit corresponds to motion from right to left on this figure. We see that the hot gas trail in the x-y plane causes an outflow of gas along the protoplanet midplane, which is matched by an inflow of colder gas through the poles. This convective process allows the core to heat the entire protoplanet. Dust in the protoplanet is over-plotted as white dots; it is broadly concentrated in the x-y plane, but the core feedback has caused it to be stirred up in the vertical cos($\theta$) direction.
\[lastpage\]
[^1]: rjh73@le.ac.uk
[^2]: We vary this neighbour number in Appendix \[App:fb\_NN\]
[^3]: GI discs may be much larger, but we take 100 AU in order to reduce the numerical cost of the simulations
[^4]: Additional simulations not presented in this paper found that increasing the initial protoplanet metallicity makes disruption more efficient.
[^5]: Note that the <span style="font-variant:small-caps;">gadget-3</span> smoothing length is defined to be twice as large as that commonly used in other SPH codes [@Springel05].
[^6]: These rates may seem high for core accretion, but they are motivated by high surface densities in young GI discs and the large capture radii of migrating Jupiter mass protoplanets.
[^7]: Note that [@HumphriesNayakshin18] show that this pebble isolation mass does not apply to the pre-disruption massive GI planets since they are rapidly migrating through the disc, and they sweep pebbles as they go.
[^8]: We use the opacity laws provided in [@ZhuEtal09] for this calculation.
---
abstract: 'Materials exhibiting a substitutional disorder such as multicomponent alloys and mixed metal oxides/oxyfluorides are of great importance in many scientific and technological sectors. Disordered materials constitute an overwhelmingly large configurational space, which makes it practically impossible to explore exhaustively using first-principles calculations such as density functional theory (DFT) due to the high computational costs. Consequently, the use of methods such as cluster expansion (CE) is vital in enhancing our understanding of disordered materials. CE dramatically reduces the computational cost by mapping the first-principles calculation results onto a Hamiltonian which is much faster to evaluate. In this work, we present our implementation of the CE method, which is integrated as a part of the Atomic Simulation Environment (ASE) open-source package. The versatile and user-friendly code automates the complex setup and construction procedure of CE while giving the users the flexibility to tweak the settings and to import their own structures and previous calculation results. Recent advancements such as regularization techniques from machine learning are implemented in the developed code. The code allows the users to construct CE on any bulk lattice structure, which makes it useful for a wide range of applications involving complex materials. We demonstrate the capabilities of our implementation by analyzing two example materials with different levels of complexity: a binary metal alloy and a disordered lithium chromium oxyfluoride.'
author:
- Jin Hyun Chang
- David Kleiven
- Marko Melander
- Jaakko Akola
- Juan Maria Garcia Lastra
- Tejs Vegge
title: 'CLEASE: A versatile and user-friendly implementation of Cluster Expansion method'
---
Introduction
============
Computational modeling of materials with a substitutional disorder such as multicomponent alloys and mixed metal oxides is said to have a *configurational* problem. The vast configurational space of these materials makes it practically impossible to explore directly using first-principles calculations such as density functional theory (DFT). A quantitative method capable of establishing the relationship between the structure and property of materials is therefore essential. Cluster Expansion (CE) [@Sanchez1984; @DeFontaine1994; @Zunger1993; @Asta2001; @VandeWalle2008; @Zhang2016] is a method that has been used successfully in the past few decades to parameterize and express the configurational dependence of physical properties. The most widely parameterized physical property is energy computed using first-principles methods, but CE can also be used to parameterize other quantities such as band gap [@Magri1991; @Franceschetti1999] and density of states [@Geng2005].
Despite its success and usefulness in predicting physical properties of crystalline materials, CE remains a niche tool, used primarily by specialists in a small subfield of computational materials science. On the other hand, the number of research fields in which CE is becoming relevant is on the rise; one such example is the use of disordered materials for battery applications [@Wang2015a; @Abdellahi2016; @Abdellahi2016a; @Urban2016; @Kitchaev2018]. The objective of our work is to make cluster expansion more accessible to a broad range of computational scientists who do not necessarily possess expertise in cluster expansion. Our approach to achieving this goal is to implement CE as a part of the widely used, open-source Atomic Simulation Environment (ASE) package [@ASE]. Henceforth, we refer to our implementation as CLEASE, which stands for CLuster Expansion in Atomic Simulation Environment.
Having CE as a part of a widely used package with interfaces to a multitude of open-source and commercial atomic-scale simulation codes brings several practical benefits: (1) a large existing user base does not need to install or learn a new program, as the CE module is a part of ASE and inherits its syntax and code style, and (2) all of the atomic-scale simulation codes supported by ASE are also automatically supported by the implemented module. In addition, CLEASE utilizes the database management feature implemented in ASE, which provides an efficient way to store, maintain and share both DFT and CE results. Therefore, the implementation presented in this article appeals to a significant portion of the computational materials science community as a versatile and easy-to-learn package, thereby lowering the barrier to incorporating cluster expansion into their research methods to accelerate computational materials prediction and design.
The rest of the paper is organized as follows. A brief overview of the cluster expansion formalism and other important concepts is provided in section \[sec:theory\] in order to aid readers who are not familiar with the cluster expansion method. The implementation of CLEASE is described in section \[sec:implementation\]. Section \[sec:example\] contains two application examples with different levels of complexity, namely a binary metal alloy and a lithium metal oxyfluoride. The computational settings and technical details for the examples are provided in section \[sec:methods\].
Theory {#sec:theory}
======
Cluster Expansion Formalism
---------------------------
The core concept of the cluster expansion is to express a scalar physical quantity of a material, $q(\bm{\sigma})$, as a function of its configuration, $\bm{\sigma}$, where a crystalline system is represented with a fixed underlying grid of atomic sites. In such a representation, any configuration with the same underlying topology can be completely specified by the atomic occupation of each atomic site. For the case of a crystalline material with $N$ atomic sites, any configuration can be specified by an $N$-dimensional vector $\bm{\sigma} = \{s_1, s_2, \ldots, s_N\}$, where $s_i$ is a site variable that specifies which type of atom occupies the atomic site $i$ (also referred to as an occupation variable [@Zarkevich2004; @Meng2009; @VandeWalle2009] or pseudospin [@DeFontaine1994; @Magri1991; @Nelson2013; @Nelson2013a; @Seko2014]). It is noted that the terms configuration and structure are often used interchangeably.
For the case of multinary systems consisting of $M$ different atomic species, $s_i$ takes one of $M$ distinct values. The original formulation of Sanchez et al. [@Sanchez1984] specifies $s_i$ to take values from $\pm m$, $\pm (m-1)$, $\ldots$, $\pm 1$ for $M = 2m$ (for the case where there is an odd number of element types, an additional value of 0 is included in the possible values of $s_i$, and the relation between $M$ and $m$ becomes $M = 2m+1$). Other choices of $s_i$ are also commonly used, such as values ranging from $0$ to $M-1$ by van de Walle [@VandeWalle2009] and from $1$ to $M$ by Mueller and Ceder [@Mueller2010]. Based on the original formalism by Sanchez et al., single-site basis functions are determined through the orthogonality condition $$\frac{1}{M} \sum_{s_i=-m}^{m}\Theta_n(s_i)\Theta_{n'}(s_i) = \delta_{nn'},
\label{eq:orthogonality_condition}$$ where $\Theta_{n}(s_i)$ is the $n$th single-site basis function (e.g., a Chebyshev polynomial) for the $i$th site and $\delta_{nn'}$ is the Kronecker delta.
The configuration is decomposed into a sum of clusters as shown in figure \[fig:cluster\_decomposition\]. Each cluster has a set of associated cluster functions, which are defined as $$\Phi_{\bm{n}}(\bm{s}) = \prod_{i} \Theta_{n_i}(s_i),
\label{eq:cluster_function}$$ where $\bm{n}$ and $\bm{s}$ are vectors specifying the orders of the single-site basis functions and the site variables in the cluster, respectively. $n_i$ and $s_i$ specify the $i$th element of the respective vectors. The use of orthogonal basis functions guarantees that the cluster functions defined in (\[eq:cluster\_function\]) are also orthogonal. Symmetrically equivalent clusters are classified as the same cluster, and the collection of all symmetrically equivalent clusters is denoted by $\alpha$.
![image](fig1.pdf){width="150mm"}
The average value of the cluster functions in cluster $\alpha$ is referred to as a correlation function, $\phi_{\alpha}$. The physical quantity, $q(\bm{\sigma})$, normalized by the number of atomic sites $N$, is then expressed as $$q(\bm{\sigma}) = \sum_{\alpha} m_{\alpha} J_{\alpha} \phi_{\alpha},
\label{eq:cluster_expansion1}$$ where $m_{\alpha}$ is the multiplicity factor indicating the number of clusters $\alpha$ per atom and $J_{\alpha}$ is the effective cluster interaction (ECI) per occurrence, which needs to be determined. It is noted that the set of clusters $\alpha$ includes the cluster of size zero, which has $m_{\alpha} \phi_{\alpha} = 1$. Alternatively, (\[eq:cluster\_expansion1\]) can be written in a more explicit form, $$q(\bm{\sigma}) = J_{0} + \sum_{\alpha} m_{\alpha} J_{\alpha} \phi_{\alpha},
\label{eq:cluster_expansion2}$$ where $J_0$ is the ECI of an empty cluster while $\alpha$ in this case corresponds to the clusters of size one and higher. It is often more practical and convenient to express the ECI per atom rather than per occurrence [@Zhang2016], in which case $m_{\alpha}$ and $J_{\alpha}$ are combined into one term, $\widetilde{J}_{\alpha}= m_{\alpha}J_{\alpha}$ and (\[eq:cluster\_expansion1\]) becomes $$q(\bm{\sigma}) = \sum_{\alpha} \widetilde{J}_{\alpha} \phi_{\alpha}.
\label{eq:cluster_expansion3}$$ CLEASE uses the ECI per atom ($\widetilde{J}_{\alpha}$), but interested users can determine the value of $J_{\alpha}$ based on the values of $m_{\alpha}$ and $\widetilde{J}_{\alpha}$.
Theoretically, there is an infinite number of terms in (\[eq:cluster\_expansion3\]) for an infinite crystal, and the resulting expression can represent any scalar function $q(\bm{\sigma})$ provided that appropriate ECI values are found. In practice, sufficient accuracy is often reached with clusters containing a small number of atoms (e.g., one-, two- and three-body clusters) that are relatively compact in size (e.g., 5 to 7 Å in diameter).
Cluster Selection & Determination of ECI Values
-----------------------------------------------
A crucial element of the CE approach is to select relevant clusters from a theoretically infinite number of possible clusters. Many multicomponent systems yield thousands of clusters even when the expansion is limited to relatively compact sizes and small numbers of atoms, and the set must be heavily truncated since only a small fraction of the clusters is needed to achieve the required accuracy. Determining the optimal set of clusters that minimizes the number of clusters without losing predictive power has been a topic of keen interest in the past decade [@Zarkevich2004; @Blum2005; @Hart2005; @Diaz-Ortiz2007; @Lerch2009], and cluster selection based on genetic algorithms [@Blum2005; @Hart2005; @Lerch2009] was considered to be the most robust method.
More recently, the use of compressive sensing [@Nelson2013a] was proposed to efficiently select the clusters and determine their ECIs in one shot. Compressive sensing is based on the $\ell_1$ norm (a special case of the $\ell_p$ norm with $p = 1$), which is defined as $$||\mathbf{x}||_{p} = \left( \sum_{i} \left| x_i \right|^p \right)^{1/p},$$ where $\mathbf{x}$ is a vector quantity. It is noted that the cluster expansion defined in (\[eq:cluster\_expansion3\]) has the same form as a linear regression model in statistics and machine learning. Therefore, one can treat CE as a linear regression problem and apply regularization techniques based not only on the $\ell_1$ norm but also on any other $p$ value, although the $\ell_1$ and $\ell_2$ norms are most commonly used.
The use of regularization techniques for CE can be illustrated by expressing (\[eq:cluster\_expansion3\]) in matrix form, $$\mathbf{q} = \mathbf{X} \bm{\omega}.$$ $\mathbf{X}$ is a matrix containing the correlation functions of the training data where each element in row $i$ and column $\alpha$ is defined as $$\mathbf{X}_{i\alpha} = \phi_{\alpha}(\bm{\sigma}_i).$$ $\mathbf{q}$ is a column vector in which the $i$th element is the physical quantity $q$ of configuration $\bm{\sigma}_{i}$, and $\bm{\omega}$ is a column vector in which the $\alpha$th element is $\widetilde{J}_{\alpha}$.
The simplest way of determining $\bm{\omega}$ is by using the ordinary least squares (OLS) method, which minimizes the residual sum of squared errors (RSS). The RSS is defined as $$\mathrm{RSS} = ||\mathbf{X} \bm{\omega} - \mathbf{q}||_{2}^{2},$$ and the minimization of the RSS has a unique solution $\hat{\bm{\omega}}$ where $$\begin{aligned}
\begin{split}
\hat{\bm{\omega}} &= \operatorname*{arg\, min}_{\bm{\omega}} ||\mathbf{X} \bm{\omega} - \mathbf{q}||_{2}^{2}\\
&=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T} \mathbf{q}.
\end{split}
\label{eq:OLS}\end{aligned}$$ The OLS method has two major drawbacks [@Nelson2013a]. The first is the requirement that the number of configurations in the training set be larger than the number of clusters being considered; otherwise, the matrix $\mathbf{X}^{T}\mathbf{X}$ becomes singular. This limitation becomes more severe for systems consisting of many element types since even strict expansion conditions (i.e., a small number of atoms per cluster and a compact size) can lead to a large number of clusters. The second drawback is the susceptibility to overfitting, which refers to the condition in which the ECI values are over-tuned to accurately reproduce $q(\bm{\sigma})$ of the training set at the cost of losing predictive power for new configurations that are not included in the training set. Overfitting also makes the model sensitive to noise present in the training data because the fit attempts to reproduce the training data, including the noise therein.
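As a minimal, self-contained illustration of the OLS fit in (\[eq:OLS\]) (not taken from CLEASE itself; random numbers stand in for the correlation functions), the solution can be obtained in a few lines of NumPy:

```python
import numpy as np

def fit_ols(X, q):
    """Ordinary least squares: solves min_w ||X w - q||_2^2.

    np.linalg.lstsq is used instead of explicitly forming (X^T X)^{-1},
    so a (minimum-norm) solution is still returned when X^T X is singular,
    e.g. when there are fewer training structures than clusters.
    """
    w, *_ = np.linalg.lstsq(X, q, rcond=None)
    return w

# Toy data: 50 "structures", 10 "clusters", small noise on q.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = rng.normal(size=10)
q = X @ w_true + rng.normal(scale=1e-3, size=50)
print(np.allclose(fit_ols(X, q), w_true, atol=1e-2))   # True
```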
Regularization is an efficient technique to address the aforementioned drawbacks of OLS by adding a regularization term to the objective function in (\[eq:OLS\]). The most common regularization schemes are $\ell_1$ and $\ell_2$ regularization, which respectively use the $\ell_1$ and $\ell_2$ norm as the regularization term. For $\ell_1$ regularization, the solution becomes $$\hat{\bm{\omega}} = \operatorname*{arg\, min}_{\bm{\omega}} ||\mathbf{X} \bm{\omega} - \mathbf{q}||_{2}^{2} + \lambda ||\bm{\omega}||_1,
\label{eq:l1_regularization}$$ where $\lambda$ is a regularization parameter that controls the weight given to the regularization term. The main benefit of $\ell_1$ regularization is its promotion of sparsity. In the context of CE, sparsity means the selection of a smaller number of clusters, i.e., many clusters have their ECI values set to zero. It is noted that there is no unique analytical solution for (\[eq:l1\_regularization\]), and it needs to be solved iteratively. Unlike $\ell_1$ regularization, $\ell_2$ regularization has a unique analytical solution, which is expressed as $$\begin{aligned}
\begin{split}
\hat{\bm{\omega}} &= \operatorname*{arg\, min}_{\bm{\omega}} ||\mathbf{X} \bm{\omega} - \mathbf{q}||_{2}^{2} + \lambda || \bm{\omega} ||_{2}^{2}\\
&=(\mathbf{X}^T\mathbf{X} + \lambda \mathbf{I})^{-1}\mathbf{X}^{T} \mathbf{q}.
\end{split}
\label{eq:l2_regularization}\end{aligned}$$ However, $\ell_2$ regularization does not promote sparsity, and the resulting solution is likely to contain more clusters than necessary. It is noted that a Bayesian compressive sensing scheme [@Nelson2013] has also been introduced for cluster expansion, which effectively eliminates the parameter $\lambda$ of the $\ell_1$ and $\ell_2$ regularization schemes while still promoting sparsity.
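For readers who want to experiment outside of CLEASE, a hedged sketch of $\ell_1$- and $\ell_2$-regularized fits using generic `scikit-learn` estimators is given below; the parameter `alpha` plays the role of $\lambda$ in (\[eq:l1\_regularization\]) and (\[eq:l2\_regularization\]) up to the normalization convention used internally by scikit-learn.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def fit_regularized(X, q, lam, norm="l1"):
    """Return ECI-like coefficients from an l1- or l2-regularized fit."""
    if norm == "l1":
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=100000)
    else:
        model = Ridge(alpha=lam, fit_intercept=False)
    model.fit(X, q)
    return model.coef_

# Toy example with more clusters (60) than structures (40): OLS would be
# ill-posed, but the l1 fit recovers a sparse set of coefficients.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 60))
w_sparse = np.zeros(60)
w_sparse[:5] = rng.normal(size=5)
q = X @ w_sparse + rng.normal(scale=1e-3, size=40)

w_l1 = fit_regularized(X, q, lam=1e-3, norm="l1")
print("non-zero coefficients:", np.count_nonzero(np.abs(w_l1) > 1e-8))
```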
Regardless of the fitting technique used, the predictive power of the expansion needs to be assessed to determine its accuracy and reliability. Cross-validation (CV) is a technique used for assessing the prediction accuracy of the model. A leave-one-out (LOO) scheme is most commonly used in the CE community, and the LOOCV score is defined as $$\mathrm{LOOCV} = \left(\frac{1}{N_\textrm{config}} \sum_{i=1}^{N_\textrm{config}} \left(\hat{q}_{i} - q_i \right)^2\right)^{1/2},
\label{eq:loocv}$$ where $N_\textrm{config}$ is the number of configurations in the training set, $\hat{q}_{i}$ is the physical quantity of structure $i$ predicted by a CE trained on the $N_\textrm{config} - 1$ structures excluding structure $i$, and $q_i$ is the calculated physical quantity of structure $i$. While OLS has only one (likely overfitted) solution, the $\ell_1$ and $\ell_2$ regularization schemes have a solution for each $\lambda$ value. The solution — a selection of clusters and their ECI values — that yields the lowest LOOCV score is chosen. Although LOO is the most common cross-validation scheme in the cluster expansion community, $k$-fold CV is one of the most common schemes used in the machine learning community. In a $k$-fold CV scheme, the pool of configurations is randomly partitioned into $k$ parts of equal size. The structures in $k-1$ parts are used as training data while the remaining part is used as a validation set, and the cross-validation is repeated $k$ times.
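The two CV schemes can be written down compactly; the sketch below, which is illustrative rather than the CLEASE implementation, computes the LOOCV score of (\[eq:loocv\]) by refitting an $\ell_2$-regularized model $N_\textrm{config}$ times, and the analogous $k$-fold score with scikit-learn's `KFold` splitter.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def loocv_score(X, q, lam):
    """LOOCV score of (eq:loocv): refit with one structure left out at a time."""
    n = len(q)
    sq_err = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        model = Ridge(alpha=lam, fit_intercept=False).fit(X[mask], q[mask])
        sq_err += (model.predict(X[i:i + 1])[0] - q[i]) ** 2
    return np.sqrt(sq_err / n)

def kfold_score(X, q, lam, k=10):
    """k-fold analogue: RMS prediction error averaged over k random folds."""
    errs = []
    for train, test in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=lam, fit_intercept=False).fit(X[train], q[train])
        errs.append(np.mean((model.predict(X[test]) - q[test]) ** 2))
    return np.sqrt(np.mean(errs))
```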
Thermodynamics in Lattice Models
--------------------------------
The true benefit of CE lies in its ability to predict the expanded scalar quantity $q(\bm{\sigma})$ based on the training data. An accurate prediction can be made if the CV score of the expanded $q(\bm{\sigma})$ is sufficiently low, and the prediction is very fast on modern computer architectures since it involves only a small number of simple numerical operations specified in (\[eq:cluster\_expansion3\]). Such a speed boost allows one to conduct types of analyses that require substantial statistical sampling.
In contrast to zero-temperature studies, where the system occupies the state with the lowest energy, at finite temperature the system occupies an ensemble of configurations that minimizes the free energy. The free energy $G$ is given by [@andersen2012introduction] $$G = -\frac{\ln Z}{\beta},
\label{eq:free_energy}$$ where $\beta = 1/k_B T$ and $Z$ is the partition function. $k_B$ is the Boltzmann constant and $T$ is temperature in Kelvin. It is noted that the DFT energies are obtained for fully relaxed structures without any external forces or pressure. Thus, the resulting thermodynamic quantities are effectively obtained in the NPT ensemble (fixed number of particles, fixed pressure and fixed temperature). However, the energy predicted by CE is only valid for the volume leading to the minimum energy of a particular atomic arrangement, and the volume fluctuations are neglected. The free energy can be calculated by utilizing the exact differential $$\begin{aligned}
\begin{split}
\mathrm{d}(\beta G) &= -\frac{\partial \ln Z}{\partial \beta} \mathrm{d}\beta \\
&= U \mathrm{d}\beta
\end{split}
\label{eq:dF_canonical}\end{aligned}$$ where $U$ is the average internal energy. The free energy can be obtained by thermodynamic integration from a reference inverse temperature $\beta_\mathrm{ref}$ at which $G$ is known, which can be written as [@Tuckerman2010] $$\beta G = (\beta G)_\mathrm{ref} + \int_{\beta_\mathrm{ref}}^{\beta} \mathrm{d}\beta'\, U(\beta').
\label{eq:G_int_C}$$ Important information about the materials under study, such as the stability of ordered/disordered phases, can be determined by comparing the free energy of the material at a given composition with the free energies of the pure phases of its constituents.
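A minimal numerical sketch of the thermodynamic integration in (\[eq:G\_int\_C\]) is shown below, assuming that the average internal energy $U$ has already been sampled on a grid of $\beta$ values starting at $\beta_\mathrm{ref}$; the cumulative trapezoidal rule stands in for whatever quadrature an actual implementation would use.

```python
import numpy as np

def beta_G(beta, U, beta_G_ref):
    """Integrate (eq:G_int_C): beta*G = (beta*G)_ref + integral of U dbeta'.

    beta       : increasing array of inverse temperatures, beta[0] = beta_ref
    U          : average internal energies sampled at each beta
    beta_G_ref : value of beta*G at the reference inverse temperature
    Returns the array of beta*G along the grid (cumulative trapezoidal rule).
    """
    increments = np.diff(beta) * 0.5 * (U[1:] + U[:-1])
    return beta_G_ref + np.concatenate(([0.0], np.cumsum(increments)))

# Synthetic U(beta) just to show the call pattern; G = (beta*G) / beta.
betas = np.linspace(1.0, 40.0, 200)               # 1/eV
U_vals = -1.0 + 0.5 * np.exp(-betas / 5.0)        # eV/atom, made up
G = beta_G(betas, U_vals, beta_G_ref=betas[0] * U_vals[0]) / betas
```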
Implementation {#sec:implementation}
==============
CLEASE utilizes the existing classes and methods of ASE to perform the necessary manipulations and analyses for carrying out CE. Among the many adopted features, the most noteworthy (illustrated in the short sketch after this list) are the use of
- an `Atoms` object to represent an atomic configuration ($\bm{\sigma}$),
- a built-in database to efficiently store, maintain and share settings, atomic configurations of the training set, values of the correlation functions ($\phi_{\alpha}(\bm{\sigma})$) and DFT energies,
- Python programming language and modular design to remove the strict input file/format requirements and to enable easy implementation of new features, and
- a `Calculator` class to determine the physical quantity $q(\bm{\sigma})$ of a new configuration based on its correlation functions and their ECI values.
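As an illustration of the database reuse mentioned above (the file name and key-value pair below are placeholders chosen for the example, not names mandated by CLEASE), results stored by CLEASE can be browsed with the standard `ase.db` interface:

```python
from ase.db import connect

db = connect("clease.db")                 # placeholder database file name
for row in db.select("converged=True"):   # 'converged' is an example key
    atoms = row.toatoms()                 # rebuild the Atoms object
    print(row.formula, getattr(row, "energy", None))
```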
It is noted that the evaluation of the correlation functions of a new configuration and the determination of the physical quantity, $q(\bm{\sigma})$, from the ECI values can be slow in pure Python. This is especially true when carrying out Monte Carlo simulations after the CE model training is complete. CLEASE therefore includes an optional external module written in C++ that can be installed to accelerate these critical and repetitive calculations; the usage of the code remains unchanged when the external module is installed (i.e., CLEASE automatically detects whether the C++ add-on is present and uses it if it is).
The inheritance of the existing features of ASE allows CLEASE to be fully integrated into ASE, so that users can incorporate CE as a part of their research without losing continuity with the rest of their workflow. Existing users of ASE do not have to install or learn a new CE program nor select a particular DFT package that a CE code supports. In addition to the benefits of integrating CE as a part of ASE, highlights of the features that make CLEASE versatile and user-friendly include:
- a multicomponent cluster expansion that goes beyond binary systems,
- a support for several types of basis functions (e.g., basis functions by Sanchez et al. [@Sanchez1984], Van de Walle [@VandeWalle2009] and Zhang and Sluiter [@Zhang2016]) for a comparison and compatibility with other CE codes,
- many methods for selecting clusters and determining ECI values such as OLS, $\ell_1$ and $\ell_2$ regularization schemes, Bayesian compressive sensing and genetic algorithm, and
- both leave-one-out and $k$-fold cross validation schemes.
A simple flowchart illustrating the procedure for constructing a CE using CLEASE is shown in figure \[fig:ce\_flowchart\]. The CLEASE workflow can be divided into three main components: definition of CE settings, generation of training structures and evaluation of CE convergence. CLEASE takes an object-oriented approach where each component has its own class. The modular design not only enables easy implementation of new features but also makes the code flexible to use and the CE construction and evaluation procedure shown in figure \[fig:ce\_flowchart\] intuitive to follow. A more detailed description of the main components of the procedure is provided below.
![A flowchart of constructing and evaluating CE using CLEASE.[]{data-label="fig:ce_flowchart"}](fig2.png){width="86mm"}
Definition of Cluster Expansion Settings
----------------------------------------
The most fundamental step is to define which underlying crystal structure to use. ASE offers two functions to generate a crystal structure: `bulk` and `crystal`. The `bulk` function provides a simple way of generating common types of crystal structures by specifying the name of the crystal structure and its lattice constant value(s). The crystal structures supported by the `bulk` function are simple cubic, face-centered cubic, body-centered cubic, hexagonal close packed, diamond, zinc blende, rocksalt, cesium chloride, fluorite and wurtzite. For more complicated crystal structures, the `crystal` function is used to generate a crystal structure by providing its space group, lattice parameters and the scaled coordinates of the unique atomic sites. The cluster expansion settings are specified using the `CEBulk` and `CECrystal` classes, which respectively call the `bulk` and `crystal` functions to generate an `Atoms` object with the user-specified crystal structure.
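The two underlying ASE functions can also be called directly, which is useful for checking the primitive cell before handing it to CLEASE; the lattice constants below are example values only.

```python
from ase.build import bulk
from ase.spacegroup import crystal

# Common structures via ase.build.bulk:
fcc_au = bulk("Au", "fcc", a=4.08)   # primitive fcc cell of Au

# More complex structures via ase.spacegroup.crystal, e.g. rocksalt LiCl
# given its space group, lattice parameters and unique atomic sites:
rocksalt = crystal(["Li", "Cl"],
                   basis=[(0, 0, 0), (0.5, 0.5, 0.5)],
                   spacegroup=225,
                   cellpar=[5.14, 5.14, 5.14, 90, 90, 90])
```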
The maximum size of the supercell (of the primitive cell) on which the DFT calculations are performed is also defined along with the definition of the underlying crystal structure. The maximum supercell size is specified using a `supercell_factor` parameter, which is an integer that bounds the product of the absolute values of the expansion coefficients (the integer weights of a general linear combination of the unit cell vectors). In other words, if a unit cell has three vectors $\vec{a}$, $\vec{b}$ and $\vec{c}$, the configurations in the training set on which the DFT calculations are performed have cell vectors $\vec{a'}$, $\vec{b'}$ and $\vec{c'}$, which are defined as $$\begin{aligned}
\begin{split}
\vec{a'} = i_1 \vec{a} + j_1 \vec{b} + k_1 \vec{c}\\
\vec{b'} = i_2 \vec{a} + j_2 \vec{b} + k_2 \vec{c}\\
\vec{c'} = i_3 \vec{a} + j_3 \vec{b} + k_3 \vec{c}
\end{split}
\label{eq:size_coefficients}\end{aligned}$$ with integer coefficients $i_x$, $j_x$ and $k_x$, where $x\in\{1,2,3\}$. The `supercell_factor` is then defined as $$\texttt{supercell\_factor} \geq \prod_{x=1}^{3} \prod_{y=1}^{3}\prod_{z=1}^{3}\left|i_x\right| \left|j_y\right| \left|k_z\right|,
\label{eq:super_cell_condition}$$ and all of the cells used in the training set should have coefficients satisfying the condition in (\[eq:super\_cell\_condition\]). Only unique cell shapes are included in the pool; cells that can be mapped onto existing cells in the pool by rotation and reflection are omitted. The use of supercells with varying sizes and shapes enables the exploration of a larger configurational space without adding extra computational burden compared to using one fixed supercell size and shape. A set of training structures for CE is later generated iteratively from the pool of possible structures that are realizable using these supercells. To reduce the required computational resources, the structures using smaller supercells are generated (and calculated) first, followed by the larger supercells. The users also have the flexibility to select the supercell size using an optional `size` parameter, which is a $3 \times 3$ matrix (or a nested list in Python) specifying the values of the integer coefficients in (\[eq:size\_coefficients\]).
Theoretically, an infinite number of clusters can be generated for a given system. In practice, the number of clusters is limited to a finite size, and CLEASE generates all possible clusters that are within the truncation thresholds (i.e., a maximum number of atoms per cluster and a maximum diameter) specified by the user. The whole set or a subset of the generated clusters is selected during the convergence evaluation process. By default, up to four-body clusters (i.e., empty, one-, two-, three- and four-body clusters) with diameters up to 5 Å are generated. The users have an option to define their own threshold settings both at the beginning of the CE procedure and at a later stage of the CE iteration cycles. CLEASE also offers a feature to visualize the generated clusters in order to help the user develop an intuition for them.
Within the CE formalism, there does not exist a unique set of definitions for basis functions; the basis functions are considered valid if they form a complete set. Consequently, several definitions are used in practice. The two most widely used definitions are the original definitions by Sanchez et al. [@Sanchez1984] and the one later developed by van de Walle [@VandeWalle2009], which is used in the Alloy Theoretic Automated Toolkit (ATAT) [@VandeWalle2009; @VandeWalle2002a]. The two definitions are equally valid, and both are implemented in CLEASE.
CLEASE offers an option to ignore a set of symmetrically inequivalent atomic sites if they are always occupied by one element type for all possible configurations. The contributions of these atoms are not explicitly included in the cluster expansion and are automatically absorbed into the constant term ($J_0$) in (\[eq:cluster\_expansion2\]). For example, lithium metal oxides (LiMO$_2$) with first-row transition metals (M = {Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu}) have a rocksalt lattice structure [@Urban2016; @Urban2014], with one orthorhombic exception [@Hewston1987]. The rocksalt lattice structure consists of two face-centered cubic sublattices. For the case of a cation-disordered rocksalt LiMO$_2$, one sublattice is occupied by the lithium and transition metal atoms while the other is occupied by oxygen atoms. The complexity of the CE model of such systems can be reduced to a cation sublattice consisting of only two element types (the oxygen sublattice is ignored). As such, an optional Boolean argument is present in CLEASE to enable/disable the reduction of the complexity of the model by ignoring such atoms if they exist in the system.
A range of compositions (or concentrations) of the system to be studied is specified using a `Concentration` class. First, the constituting elements of the system are categorized according to the basis to which they belong. For example, LiVO$_2$ in a rocksalt lattice structure is expressed using two lists: \[‘Li’, ‘V’\] and \[‘O’\]. It is noted that CE needs to keep track of the location of vacancies when they are present in the system. The locations of vacancies are tracked by treating a vacancy as a regular atom with its atomic symbol set to ‘X’ or its atomic number set to zero. LiVO$_2$ with vacancies is then expressed using \[‘Li’, ‘V’, ‘X’\] and \[‘O’\].
The range of each element (including vacancies) can be specified using one of the two convenient methods built into the `Concentration` class. The simplest method is to specify the concentration range of each constituting element by calling the `set_conc_ranges` method of the `Concentration` class. For cases where the concentrations of two or more elements depend on one another, one can specify the concentration range using the `set_conc_formula_unit` method, where the relationships between the concentrations of two or more elements are expressed in a list of strings. For the example of LiVO$_2$ with vacancies, a list of strings that specifies the relationship between the number of Li atoms and the number of vacancies, \[“Li$<$x$>$V$<$1$>$X$<$1-x$>$”, “O$<$2$>$”\], is passed as an argument to the `set_conc_formula_unit` method. Another argument specifying the range of the concentration variable, e.g., {‘x’: (0, 1)}, is also passed to the `set_conc_formula_unit` method in order to specify the concentration range of Li and Li vacancies. The concentration ranges specified by either the `set_conc_ranges` or the `set_conc_formula_unit` method are internally interpreted in the `Concentration` class as a list of linear equations that specify (1) the relationships between the concentrations of the constituting elements and (2) their upper/lower bounds. Advanced users can alternatively specify the coefficients of the linear equations used in the `Concentration` class directly if greater flexibility is needed in specifying the concentration ranges.
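A hedged sketch of how the pieces described in this subsection fit together is given below for Li$_x$VO$_2$ with Li vacancies; the import path and keyword names reflect the description above but may differ between CLEASE versions and should be checked against the documentation of the installed release, and the lattice constant is an example value.

```python
# Illustrative sketch only; verify class/keyword names against the
# documentation of the installed CLEASE version.
from clease.settings import CEBulk, Concentration

# Cation basis [Li, V, vacancy 'X'] and anion basis [O].
conc = Concentration(basis_elements=[["Li", "V", "X"], ["O"]])
conc.set_conc_formula_unit(formulas=["Li<x>V<1>X<1-x>", "O<2>"],
                           variable_range={"x": (0, 1)})

settings = CEBulk(
    crystalstructure="rocksalt",
    a=4.1,                            # example lattice constant (angstrom)
    supercell_factor=27,
    concentration=conc,
    db_name="livo2_ce.db",            # ASE database for settings/structures
    max_cluster_dia=[7.0, 5.0, 5.0],  # 2-, 3-, 4-body diameter cutoffs (angstrom)
)
```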
Generation of Training Structures
---------------------------------
CLEASE uses the `NewStructures` class to generate training structures; it provides three different methods to perform the task. The first and most trivial method is to generate a set of random structures. This method serves to generate an initial pool consisting of a small number of structures. The random generation method is used in the first iteration cycle of the CE construction, as shown in figure \[fig:ce\_flowchart\]. An initial cluster expansion is capable of making a first set of predictions, albeit with a low accuracy. It is noted that all of the generated training structures, along with their correlation function values, are stored in a database file.
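A minimal sketch of generating the initial random pool might look as follows, continuing from a `settings` object like the one defined earlier; the method and argument names follow the description in the text and should be verified against the installed CLEASE version.

```python
# Illustrative usage; names may differ slightly between CLEASE versions.
from clease import NewStructures

ns = NewStructures(settings, generation_number=0, struct_per_gen=10)
ns.generate_initial_pool()   # random structures for the first CE iteration
# Later iterations can switch to, e.g., probe-structure or ground-state
# generation based on the current cluster expansion.
```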
Once the initial CE is constructed, the user is given three different choices for introducing an additional set of training structures. The first and most straightforward option is to keep generating random structures. Although trivial, generating random structures is claimed to be the best strategy when compressive sensing is used to select clusters [@Nelson2013a]. The second method is to generate ground-state and other low-energy structures based on the current cluster expansion (i.e., based on the pool of structures already calculated) [@van2002automating], which have enthalpies of formation either on or near the convex hull [@Urban2016a]. The inclusion of ground-state and near-ground-state structures serves the important purpose of increasing the accuracy in predicting the correct ground states. A global minimization technique can be used to generate (near-)ground-state configurations, and CLEASE uses a simulated annealing technique.
The last method of generating the training set is referred to as a “probe structure” method [@Seko2009a; @Seko2014]. The probe structure method introduces a new structure that minimizes the mean variance of the predicted physical quantity $q(\bm{\sigma})$. The mean variance of the predicted quantity $q$ is written as [@Seko2009a] $$\begin{aligned}
\begin{split}
\mathrm{Var}[\hat{q}_i] &= \frac{1}{N_\mathrm{config}} \sum_{i=1}^{N_\mathrm{config}} [\mathbf{X}_i (\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}_i^T]e^2\\
&= \{\mathrm{tr}[(\mathbf{X}^T\mathbf{X})^{-1} \Sigma ] + \mu (\mathbf{X}^T\mathbf{X})^{-1} \mu^T \} e^2 \\
&= \Lambda \cdot e^2,
\end{split}\end{aligned}$$ where $e^2$ is the variance of the error in the training set, $\Sigma$ is the covariance matrix of the correlation functions of the training set and $\mu$ is a vector of the mean correlation functions of the structures in the training set. The probe structure is the one that reduces the value of $\Lambda$ the most when introduced to the training set, and it is found using a simulated annealing procedure.
The newly generated structures are compared with the existing structures in the training set in order to avoid introducing duplicate structures. We adopted the structure comparison algorithm developed by Lonie and Zurek [@Lonie2012] to identify equivalent structures. It is desirable to compare the new structure against the existing structures in the training set as efficiently as possible. As a first step, the structures that have a different chemical composition than the newly generated structure are filtered out, and the new structure is compared only with the remaining structures. Once the candidate transformations for mapping the new structure onto one structure in the database are identified using the algorithm suggested by Lonie and Zurek, exactly the same transformations can be used for the remaining structures in the database. Therefore, the structure comparison algorithm implemented in ASE is optimized for the case where one structure is to be compared against many.
In addition to the aforementioned methods of generating the training structures, CLEASE also offers a built-in function to import structures to the database. The import function also has an option to specify the calculated $q$ value, which allows users to easily import the previously calculated results.
Evaluation of Cluster Expansion Convergence
-------------------------------------------
An evaluation process to determine the convergence of CE includes a selection of clusters, a determination of their ECI values and an assessment of the LOO or $k$-fold CV score using the selected clusters and their ECI values. An entire evaluation process is performed using an `Evaluate` class.
The simplest way to determine the ECI values of the generated clusters is by using OLS to minimize the residual sum of squared errors (RSS). It is highly likely that the ECI values found using OLS are overfitted. Therefore, Bayesian compressive sensing and $\ell_1$ and $\ell_2$ regularization methods are implemented, and it is highly recommended to use a regularization method to select clusters and evaluate their ECI values.
A default option in the `Evaluate` class is to include all of the clusters generated using the cluster truncation conditions specified in the `CEBulk` or `CECrystal` class, and either all or a subset of these clusters is selected for fitting depending on the method used. The `Evaluate` class provides additional options in which the users can select a subset of the generated clusters to perform any of OLS, Bayesian compressive sensing and $\ell_1$ and $\ell_2$ regularization. The first option is to manually specify which clusters to include, while the second option is to provide stricter truncation conditions than the ones set in the `CEBulk` or `CECrystal` class. The first option allows the `Evaluate` class to be used in conjunction with other cluster selection methods such as a genetic algorithm. For example, a user can optionally use the genetic algorithm (included in CLEASE as a separate `GAFit` class) to pre-screen a large cluster pool and subsequently pass a subset of clusters to the `Evaluate` class. The freedom to select a subset of a large pool of clusters, together with the OLS, Bayesian compressive sensing and $\ell_1$ and $\ell_2$ regularization methods, allows the users to easily experiment with various settings to understand how the system behaves and to optimize the ECI values for achieving the lowest LOOCV score.
To further assist the evaluation process, the `Evaluate` class contains two built-in methods that automatically determine the CV score when a regularization method is used. The first method, `plot_fit`, determines and stores the selected clusters and their ECI values for a value of the regularization parameter ($\lambda$) specified by the user. It also plots the predicted values of all data points in the training set against their calculated values and reports the LOO/$k$-fold CV score for the specified $\lambda$ value. Since the most cumbersome task in determining the convergence of a CE is finding the optimal $\lambda$ value that yields the lowest CV score, another method, `plot_CV`, is also implemented. It takes a range and number of $\lambda$ values to evaluate as inputs and returns the best $\lambda$ value in the specified range along with its LOO/$k$-fold CV score. The `plot_CV` method also plots the LOO/$k$-fold CV score as a function of $\lambda$ and provides an option to store the results in a log file such that the users can add more $\lambda$ values to the list at a later stage without having to re-evaluate the same $\lambda$ values.
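A hedged usage sketch of the `Evaluate` class is shown below; the keyword names for the regularization scheme and the $\lambda$ range are illustrative and may differ in the installed CLEASE version.

```python
# Illustrative only; check argument names against the CLEASE documentation.
from clease import Evaluate

eva = Evaluate(settings, fitting_scheme="l1", scoring_scheme="loocv")

# Scan a range of regularization parameters and plot the CV score vs lambda.
eva.plot_CV(alpha_min=1e-7, alpha_max=1e2, num_alpha=50)

# Inspect predicted-vs-calculated values for the chosen lambda.
eva.plot_fit(interactive=False)
```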
Metropolis Monte Carlo and Simulated Annealing
----------------------------------------------
The user can perform statistical sampling of the system on a larger simulation cell once the cluster expansion is constructed. The final selection of clusters and their ECI values can be stored and passed to other classes to conduct statistical analyses. A separate `Calculator` class for cluster expansion is implemented in ASE. The `Clease` calculator class takes a list of clusters and their ECI values as inputs, and the users can select which types of trial moves are allowed. Sampling in the canonical ensemble allows swapping of two atoms with different constraint conditions (i.e., swap any two atoms, swap any two atoms in the same basis, swap two nearest neighbors, swap two nearest neighbors in the same basis), while the semi-grand canonical ensemble allows changing the type of the occupying element at a random site.
The evaluation of the physical quantity $q(\bm{\sigma})$ is performed using (\[eq:cluster\_expansion3\]), which is fast because the `Clease` calculator keeps track of the changes in the `Atoms` object to update the correlation functions. When the physical quantity being modeled is energy, a trial move of the standard Metropolis algorithm has an acceptance probability [@thijssen2007computational] $$P_\mathrm{acc} = \mathrm{min}\left\{1, \exp\left(\frac{-(E_\mathrm{new} - E_\mathrm{old})}{k_BT}\right)\right\},
\label{eq:accept}$$ where $E_\mathrm{new}$ and $E_\mathrm{old}$ are the energies of the new and old configurations, respectively. As the `Clease` calculator keeps track of the change in the `Atoms` object after each move, updating the correlation functions is restricted to the contributions of one and two atoms for the semi-grand canonical ensemble and canonical ensemble, respectively.
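The acceptance rule of (\[eq:accept\]) is simple enough to state explicitly; the sketch below is a generic implementation (energies in eV, temperature in K) rather than the CLEASE-internal routine.

```python
import math
import random

def metropolis_accept(E_new, E_old, T, kB=8.617333262e-5):
    """Accept/reject a trial move according to (eq:accept); kB in eV/K."""
    dE = E_new - E_old
    if dE <= 0.0:
        return True                       # always accept downhill moves
    return random.random() < math.exp(-dE / (kB * T))
```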
Examples {#sec:example}
========
Here, we present two example systems to illustrate the capabilities of the CLEASE code. The first example illustrates the investigation of a Au–Cu binary alloy. The second example shows a cluster expansion of a more complex system consisting of four types of elements and vacancies. All of the cluster expansions are trained on energies computed from DFT calculations, and the computational settings used for generating the results shown in this section are specified in section \[sec:methods\].
Au–Cu Alloy
-----------
The binary Au–Cu alloy system is studied at temperatures ranging from 100 K to 800 K over the entire composition range. The resulting ECI values obtained for both $\ell_1$ and $\ell_2$ regularization are shown in figure \[fig:eci\_aucu\]. The ECI value of the empty cluster is found to be $-3.49$ eV/atom for both cases, and the ECI value of the one-body cluster is 0.27 eV/atom and 0.13 eV/atom for $\ell_1$ and $\ell_2$ regularization, respectively. The ECI values of the empty and one-body clusters are not included in figure \[fig:eci\_aucu\] for better visibility. The LOOCV scores for the $\ell_1$- and $\ell_2$-regularized fits were 4.49 meV/atom and 4.67 meV/atom, respectively. The $\ell_1$ regularization scheme yields a slightly lower CV score despite having a smaller number of clusters (the $\ell_1$-regularized fit has 20 clusters while the $\ell_2$-regularized fit has 34).
![ECIs obtained via a) $\ell_1$ regularization and b) $\ell_2$ regularization.[]{data-label="fig:eci_aucu"}](fig3.pdf){width="86mm"}
Qualitative information on the thermodynamic behavior of the system can be extracted by inspecting the ECI values of a simple binary system. Based on the fact that energetically favorable configurations have DFT energies that are more negative than less favorable ones and that the two site variables are $+1$ and $-1$, one can infer that a positive ECI value of a pair interaction term means that a pair consisting of two different elements is energetically preferred at low temperature. It can be seen in figure \[fig:eci\_aucu\] that the ECIs of the nearest-neighbor and second nearest-neighbor pairs are positive and negative, respectively. The signs indicate that the Au–Cu system energetically favors strong mixing of the constituting elements such that the alternating patterns found in L1$_0$- and L1$_2$-type ordered structures are likely to emerge, which is in good agreement with experimental and computational observations [@van2002automating; @Wei1987; @Ozolins1998_1; @Ozolins1998_2; @Wolverton1998; @Lysgaard2015; @massalski1986binary; @Hultgren1973; @Fedorov2016].
It is experimentally determined that Au–Cu alloys have three ordered phases at low temperatures [@massalski1986binary; @Hultgren1973; @Fedorov2016]: AuCu$_3$, AuCu and Au$_3$Cu. Furthermore, the transition temperatures for AuCu$_3$, AuCu and Au$_3$Cu are reported to be 663 K, 683 K and $\sim$490 K, respectively, and they are often used as reference values for assessing computational models [@van2002automating; @Wei1987; @Ozolins1998_1]. The formation energy, free energy of formation and configurational entropy are obtained through Metropolis Monte Carlo simulations and are shown in figure \[fig:aucu\_thermo\]. As the CE is trained with fully relaxed structures (zero pressure), the formation energy is determined using $$\Delta U = U - xU_{\mathrm{Au}} - (1-x)U_{\mathrm{Cu}},$$ where $U$ is the internal energy of the configuration, $x$ is the gold concentration, $U_{\mathrm{Au}}$ is the internal energy of pure gold and $U_{\mathrm{Cu}}$ is the internal energy of pure copper. Similarly, the free energy of formation is obtained by subtracting the weighted average of the free energies of the pure phases. The configurational entropy is given by the difference between the internal energy and the free energy, divided by the temperature at which the Monte Carlo simulation is sampled. The three ordered phases (AuCu$_3$, AuCu and Au$_3$Cu) are found on the convex hull of the free energy of formation in figure \[fig:aucu\_thermo\]b. Furthermore, the entropies of the ordered phases form local minima as shown in figure \[fig:aucu\_thermo\]c. As the temperature increases, the free energy becomes a smooth convex curve with a minimum at around $50\%$ composition, and the system is in a random phase with no short-range order.
![Thermodynamic quantities for the Au–Cu computed for temperatures ranging from 100 K to 800 K over the entire composition range. a) Formation energy. b) Free energy of formation. c) Entropy.[]{data-label="fig:aucu_thermo"}](fig4.pdf){width="86mm"}
An accurate estimate of the order/disorder transition temperature can be found by tracking the evolution of an order parameter. The average fraction of sites occupied by a different element than the corresponding site in the ground state, $f_\mathrm{diff}$, is tracked as the system evolves. $f_\mathrm{diff}$ is normalized by the expected fraction of differing sites in a random phase, $f_\mathrm{diff, rnd}$, and an order parameter, $\eta$, is defined as $$\eta = 1 - f_\mathrm{diff}/f_\mathrm{diff, rnd}.$$ The order parameter is used for detecting the phase transition as shown in figure \[fig:order\_param\]. The computationally predicted order/disorder transition temperatures of AuCu$_3$, AuCu and Au$_3$Cu are around 600 K, 665 K and 385 K, respectively, which are in good agreement with the experimental reference values [@massalski1986binary; @Hultgren1973; @Fedorov2016].
![Order parameter as a function of temperature (1 in an ordered phase and 0 in a random phase).[]{data-label="fig:order_param"}](fig5.pdf){width="86mm"}
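A small helper along these lines could evaluate $\eta$ from an MC snapshot; it is a generic sketch, with $f_\mathrm{diff, rnd}$ supplied by the caller (e.g., 0.5 for an equiatomic binary random phase).

```python
import numpy as np

def order_parameter(symbols, gs_symbols, f_diff_rnd):
    """eta = 1 - f_diff / f_diff_rnd for a snapshot vs the ground state.

    symbols    : chemical symbols of the current MC snapshot, site by site
    gs_symbols : symbols of the ground-state reference on the same sites
    f_diff_rnd : expected fraction of differing sites in the random phase
    """
    f_diff = np.mean(np.asarray(symbols) != np.asarray(gs_symbols))
    return 1.0 - f_diff / f_diff_rnd
```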
One of the most common ways to describe the characteristics of a binary alloy is by constructing a phase diagram. A phase diagram can be generated computationally using semi-grand canonical MC, where the grand potential is obtained via the thermodynamic integration in (\[eq:G\_int\_C\]) at fixed chemical potentials. The integration starts from the low-temperature limit for the ordered phases and from the high-temperature limit for the disordered phases, where the free energy per atom is given by $k_BT \ln 2$. The phase boundary between two phases is identified by locating the intersection point of the grand potentials of the two co-existing phases. The phase diagram generated via semi-grand canonical MC is shown in figure \[fig:phase\_diagram\]. The phase diagram closely resembles the phase diagrams constructed from experimental measurements [@massalski1986binary; @Hultgren1973; @Fedorov2016] and is also in qualitative agreement with the phase diagrams constructed from computational results [@van2002automating; @Wei1987; @Walle2002a].
![Phase diagram of Au$_x$Cu$_{1-x}$ where $0 \leq x \leq 0.5$. Circles are computed phase boundary points and lines are spline fits of the computed boundary points.[]{data-label="fig:phase_diagram"}](fig6.pdf){width="86mm"}
Lithium Chromium Oxyfluoride
----------------------------
One of the recent focus areas of lithium-ion battery research is the development of high-capacity cathode materials. Lithium metal oxyfluorides (Li$_2$MO$_2$F, $\mathrm{M} = ${V, Cr, Mn, Ti, Ni, $\ldots$}) are a family of materials at the forefront of the current research. The challenge in studying these materials lies in the vast size of their configurational space, which exhibits not only the cation disorder commonly found in lithium metal oxides [@Abdellahi2016; @Urban2016] but also anion disorder due to the mixed O/F composition [@Chen2015; @Chen2016]. The fact that the underlying crystal structure can vary at different lithiation levels [@Cambaz2016] adds to the complexity of investigating their properties. It is, however, known that the predominant crystal structure is of the disordered rocksalt type [@Ren2015], particularly at high lithiation levels. We therefore show an example CE study of Li$_x$CrO$_2$F in a rocksalt lattice configuration.
The Monte Carlo annealing study reveals that Li$_2$CrO$_2$F (i.e., the fully lithiated compound) takes a layered structure at room temperature (293 K), as shown in figure \[fig:licro2f\_structures\]a. The layered structure shows a $\ldots$–Li–F–Li–O–Cr–O–$\ldots$ pattern, which is similar to the $\ldots$–Li–O–M–O–$\ldots$ layered pattern observed in lithium metal oxides [@Urban2016; @Mizushima1981; @VanDerVen2004]. The layered structure is lost upon delithiation, which leads to disordered structures as shown in figure \[fig:licro2f\_structures\]b. The emergence of disordered structures agrees well with previous experimental observations [@Chen2016; @Ren2015], and it is important to model the disordered atomic arrangement as it has a direct link to the Li transport mechanism (e.g., the presence of zero-transition-metal pathways [@Urban2016; @Urban2014; @Lee2014]).
![A snapshot of Li$_{x}$CrO$_2$F during the Monte Carlo run at 293 K where $x$ is a) 2.0, b) 1.5. The Li atoms are shown in green, the Cr atoms are shown in blue, the oxygen atoms are shown in red and the F atoms are shown in white.[]{data-label="fig:licro2f_structures"}](fig7.pdf){width="73mm"}
Thermodynamic quantities of Li$_x$CrO$_2$F can be extracted with the same procedure described for the Au–Cu system. One of the most crucial thermodynamic parameters for characterizing cathode materials for Li-ion batteries is the free energy, as it is directly linked to the operating voltage of the cell. The operating voltage of Li$_x$CrO$_2$F is defined as $$\begin{aligned}
\begin{split}
\mathrm{voltage} &= - \frac{ \mu_\mathrm{Li}^\mathrm{cathode} - \mu_\mathrm{Li}^\mathrm{anode}}{e}\\
&= - \frac{\frac{\mathrm{d}G_{\mathrm{Li}_x\mathrm{CrO}_2\mathrm{F} }}{\mathrm{d}x} - \mu_\mathrm{Li}^\mathrm{anode}}{e},
\end{split}
\label{eq:voltage}\end{aligned}$$ where $\mu_\mathrm{Li}$ is the chemical potential in eV per Li atom, $e$ is the elementary charge and $G_{\mathrm{Li}_x\mathrm{CrO}_2\mathrm{F}}$ is the free energy of Li$_x$CrO$_2$F in eV per formula unit. Li metal is used as the anode and thus $\mu_\mathrm{Li}^\mathrm{anode}$ is constant.
The free energy of Li$_x$CrO$_2$F and its voltage profile at 293 K are shown in figure \[fig:free\_energy\_voltage\]. The free energy in figure \[fig:free\_energy\_voltage\]a has three parts: the free energy values computed from MC simulations, a smooth curve fitted to the computed values (using Redlich-Kister polynomials [@RK_fit]) and the convex hull of the fitted curve. The curve fit is used for generating the voltage plot because the derivative of the free energy used for calculating the voltage values is susceptible to the small noise present in the MC simulation results. Furthermore, a range in which the free energy curve lies above the convex hull represents a region where a phase transition occurs: the cathode forms a mixture of the two phases corresponding to the points at which the fitted curve and the convex hull intersect. The voltage profile in figure \[fig:free\_energy\_voltage\]b is generated using (\[eq:voltage\]), where the values on the convex hull are used for $G_{\mathrm{Li}_x\mathrm{CrO}_2\mathrm{F}}$. The voltage profile in figure \[fig:free\_energy\_voltage\]b is in good agreement with those observed experimentally [@Chen2016; @Ren2015].
![Free energy of Li$_{x}$CrO$_2$F and its voltage with respect to Li metal at 293 K.[]{data-label="fig:free_energy_voltage"}](fig8.pdf){width="86mm"}
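A simplified sketch of the voltage construction is given below; for brevity a low-order polynomial fit replaces the Redlich-Kister fit used in the text, and the convex-hull step is omitted, so the routine only illustrates how (\[eq:voltage\]) turns a smooth $G(x)$ into a voltage profile.

```python
import numpy as np

def voltage_profile(x, G, mu_li_anode):
    """Voltage vs Li content from (eq:voltage), with e = 1 in eV units.

    x           : Li content per formula unit
    G           : free energy per formula unit (eV) at each x
    mu_li_anode : Li chemical potential of the metal anode (eV/atom)
    """
    coeffs = np.polyfit(x, G, deg=4)          # smooth stand-in for the RK fit
    dGdx = np.polyval(np.polyder(coeffs), x)  # dG/dx, insensitive to MC noise
    return -(dGdx - mu_li_anode)              # volts
```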
Methods {#sec:methods}
=======
Density Functional Theory Calculations
--------------------------------------
All of the calculations are performed with the Vienna Ab initio Simulation Package (VASP) [@VASP1; @VASP2; @VASP3; @VASP4] using the projector augmented-wave (PAW) method [@Blochl1994a]. The generalized gradient approximation as parametrized by Perdew, Burke and Ernzerhof [@Perdew1996] is used as the exchange-correlation functional. It is important to have a consistent and accurate dataset (i.e., DFT calculations with a high energy cutoff and $k$-point mesh density) in order to minimize the numerical noise introduced into the CE training. A plane-wave cutoff of 500 eV is used, and both the cell and the atomic positions are fully relaxed such that all forces are smaller than 0.02 eV/Å. A rotationally invariant Hubbard *U* correction [@Anisimov1991; @Cococcioni2005a] is applied to the $d$ orbitals of Cr with a $U$ value of 3.7 eV. The calculations are performed with supercells containing up to 18 and 54 atoms for the Au–Cu alloy and Li$_x$CrO$_2$F systems, respectively. Integrations over the Brillouin zone are carried out using the Monkhorst-Pack scheme [@Monkhorst1976] with a maximal $k$-point interval of 0.04 Å$^{-1}$.
Cluster Expansion Model
-----------------------
The CE models for the Au–Cu alloy and Li$_x$CrO$_2$F are trained using 34 and 390 DFT calculations, respectively. The CE model is trained for the entire composition range of the Au–Cu alloy (from pure Au to pure Cu) and for Li$_x$CrO$_2$F on a rocksalt lattice with $x$ ranging from 0 to 2. Up to four-body clusters with a maximum diameter of 6.0 Å are generated for the Au–Cu alloy. Up to four-body clusters are generated for Li$_x$CrO$_2$F with a maximum diameter of 7.0 Å for two- and three-body clusters and 4.5 Å for four-body clusters. $\ell_1$ and $\ell_2$ regularization schemes with the regularization parameter ranging from $10^{-7}$ to $10^2$ are assessed at various maximum diameters to find the optimal setting that leads to the lowest LOOCV score. For the Au–Cu alloy, $\ell_1$ regularization with maximum diameters of 6.0 Å, 5.0 Å and 5.0 Å for 2-, 3- and 4-body clusters, respectively, yields the lowest LOOCV score of 4.49 meV/atom. The minimum LOOCV score achieved using the $\ell_2$ regularization scheme is 4.67 meV/atom when the maximum diameters are set to 6.0 Å, 6.0 Å and 5.0 Å for 2-, 3- and 4-body clusters, respectively. Similarly, $\ell_1$ regularization performed better than $\ell_2$ regularization on Li$_x$CrO$_2$F, with the lowest LOOCV score of 21.38 meV/atom (maximum diameters set to 7.0 Å, 7.0 Å and 4.5 Å for 2-, 3- and 4-body clusters, respectively). It is noted that although the LOOCV score of Li$_x$CrO$_2$F appears large compared to that of Au–Cu, it should be taken into account that the cohesive energies of metallic alloys are in general much smaller than those of oxyfluorides.
Metropolis Monte Carlo Simulations
----------------------------------
For the Au–Cu alloy, Metropolis Monte Carlo simulations are carried out using a $10 \times 10 \times 10$ supercell consisting of 1,000 atoms for determining the thermodynamic quantities. The system is equilibrated with 100 sweeps, and an average energy is collected through an additional 2,000 sweeps at each temperature. A $30 \times 30 \times 30$ supercell consisting of 27,000 atoms is used to determine the transition temperatures and to construct the phase diagram. The transition temperatures are determined by equilibrating the systems with 100 sweeps, followed by sampling the order parameter over an additional 1,000 sweeps. The phase diagram is generated by performing semi-grand canonical MC, where the system is equilibrated using 100 sweeps, followed by an additional 1,000 sweeps to obtain an average semi-grand canonical energy at each temperature at a fixed chemical potential. A $9 \times 9 \times 9$ cell consisting of 1,458 atoms is used for Li$_x$CrO$_2$F. The temperature is gradually lowered from 10,000 K, and the structures are equilibrated at each temperature via 100 sweeps to ensure that the system is equilibrated before sampling. The average energy is then sampled over 1,000 sweeps at each temperature.
Conclusions
===========
We present the implementation of CLEASE, which fully integrates the cluster expansion method into the ASE package. The aim of the developed code is to make cluster expansion more accessible to non-specialists and to incorporate modern machine learning techniques into the cluster expansion method in one comprehensive and versatile package. The use of the popular Python programming language and the implementation of the code as a part of the widely used ASE package lower the barrier for newcomers to the field to learn and use CE as a part of their research methods. By automatically generating clusters and calculating the correlation functions of both semi-automatically generated and user-supplied structures, CLEASE minimizes both the possible introduction of user errors and the complexity of constructing and evaluating the cluster expansion. The capability of CLEASE is presented with two example use cases with different levels of system complexity. The examples demonstrate that CE can correctly predict material behavior that requires statistical sampling on a large simulation cell.
Acknowledgements
================
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 711792 (FET‐OPEN project LiRichFCC). The authors thank Dr. Juhani Teeriniemi and Prof. Kari Laasonen for valuable discussions. JMGL acknowledges support from the Villum Foundation’s Young Investigator Programme (4$^{th}$ round, project: *In silico design of efficient materials for next generation batteries.* Grant number: 10096).
---
abstract: 'The time-dependent Dirac equation was solved for zero-impact-parameter bare U-U collisions in the monopole approximation using a mapped Fourier grid matrix representation. A total of 2048 states, including bound as well as positive- and negative-energy states for an $N=1024$ spatial grid, were propagated to generate occupation amplitudes as a function of internuclear separation. From these amplitudes, spectra were calculated for total inclusive positron and electron production, as well as the correlated spectra for ($e^+,e^-$) pair production. These were analyzed as a function of nuclear sticking time in order to establish signatures of spontaneous pair creation, i.e., QED vacuum decay. Subcritical Fr-Fr and highly supercritical Db-Db collisions both at the Coulomb barrier were also studied and contrasted with the U-U results.'
author:
- Edward Ackad and Marko Horbatsch
bibliography:
- 'collisionUU.bib'
title: 'Calculation of electron-positron production in supercritical uranium-uranium collisions near the Coulomb barrier'
---
Introduction
============
In heavy-ion collisions, dynamical electron-positron pairs can be created due to the time-varying potentials [@belkacemheavyion]. For a sufficiently strong static potential, pairs can also be created spontaneously. Such *super-critical* potentials can be achieved in quasi-static adiabatic collision systems. The process of static pair creation is predicted by QED [@pra10324], but has not yet been demonstrated unambiguously by experiment. Therefore, it continues to be of interest as a test of non-perturbative QED with strong fields.
The Dirac equation is a good starting point to describe relativistic electron motion. As the nuclear charge, $Z$, of a hydrogen-like system increases, the ground state energy decreases. Thus, for a sufficiently high $Z$ value, the ground state becomes embedded in the lower continuum, the so-called Dirac sea. The ground state changes character from a bound state to a resonant state with a finite lifetime, and is called a supercritical resonance state [@ackad:022503]. An initially *vacant* supercritical state decays gradually into a pair consisting of a free positron and a bound electron. The bound electron will Pauli-block any other (spin-degenerate) pairs from being subsequently created.
Currently, the only known realizable set-up capable of producing a supercritical ground state occurs in certain heavy-ion collision systems. When two bare nuclei collide near the Coulomb barrier, the vacant quasi-molecular ground state, i.e., the 1S$\sigma$ state, can become supercritical when the nuclei are sufficiently close. For the uranium-uranium system, the 1S$\sigma$ becomes supercritical at an internuclear separation of $R \lesssim 36$ fm. As the nuclei continue their approach, the supercritical resonance experiences a shorter decay time. Thus, it is most probable that the supercritical resonance will decay at the closest approach of the nuclei.
Rutherford trajectories result in collisions where the nuclei decelerate as they approach and come to a stop at closest approach before accelerating away. Since the nuclei are moving very slowly at closest approach, pairs created at this time are due to the intense static potential rather than due to the nuclear dynamics. Nuclear theory groups have predicted that if the nuclei are within touching range the combined Coulomb potential may remain static (“stick") for up to $T=10$ zeptoseconds ($10^{-21}$s) [@EurPJA.14.191; @zagrebaev:031602; @Sticking2]. Such a phenomenon would enhance the static pair creation signal without changing the dynamical pair creation background.
In the present work, the time-dependent radial Dirac equation was solved in the monopole approximation for all initial states of a mapped Fourier grid matrix representation of the hamiltonian [@me1]. Single-particle amplitudes were obtained for an initially bare uranium-uranium head-on (zero-impact-parameter) collision at $E_{\rm CM}=740$ MeV, i.e., at the Coulomb barrier. The amplitudes were used to calculate the total positron and electron spectra for different nuclear sticking times, $T$.
Previous work [@PhysRevA.37.1449; @eackadconf2007] obtained the electron bound-state contribution to the total positron spectrum by following the time evolution of a finite number of bound-state vacancies. In the present work the complete correlated spectrum was calculated, by which we mean that the final ($e^+,e^-$) phase space contributions include both (nS $e^-+$ free $e^+$) and (free $e^-+$ free $e^+$) pairs.
While the method is capable of handling any head-on collision system (symmetric or non-symmetric), the uranium system was chosen on the basis of planned experiments. Previous searches for supercritical resonances used partially ionized projectiles and solid targets [@PhysRevLett.51.2261; @PhysRevLett.56.444; @PhysLettB.245.153]. The ground state had a high probability of being occupied, thus damping the supercritical resonance decay signal significantly due to Pauli-blocking. Over the next decade the GSI-FAIR collaboration is planning to perform bare uranium-uranium merged-beam collisions. A search will be conducted for supercritical resonance decay and for the nuclear sticking effect. Therefore, the present work will aid these investigations by providing more complete spectrum calculations.
The limitation to zero-impact-parameter collisions is caused by the computational complexity. While this implies that direct comparison with experiment will not be possible, we note that small-impact-parameter collisions will yield similar results [@PhysRevA.37.1449].
Theory
======
The information about the state of a collision system is contained in the single-particle Dirac amplitudes. They are obtained by expanding the time-evolved state into a basis with direct interpretation, namely the target-centered single-ion basis. These amplitudes, $a_{\nu,k}$, can be used to obtain the particle creation spectra [@PhysRevA.45.6296; @pra10324].
The total electron production spectrum, $n_k$, and positron production spectrum, $\bar{n}_q$, where $k$ and $q$ label positive and negative (discretized) energy levels respectively, are given by $$\begin{aligned}
\label{creat2}
\langle n_k \rangle &=& \sum_{\nu<F}{|a_{\nu,k}|^2} \\ \label{creat1}
\langle \bar{n}_q\rangle &=& \sum_{\nu>F}{|a_{\nu,q}|^2}.\end{aligned}$$ Here the coefficients are labeled such that $\nu$ represents the initial state and $F$ is the Fermi level [@PhysRevA.37.1449; @PhysRevA.45.6296]. Equations \[creat2\] and \[creat1\] contain sums over all the propagated initial states above (positrons) or below (electrons) the Fermi level, which is placed below the ground state separating the negative-energy states from the bound and positive-energy continuum states.
Therefore, all initial positive-energy states (both bound and continuum) must be propagated through the collision to calculate the total positron spectrum, which is obtained from vacancy production in the initially fully occupied Dirac sea. The dominant contribution (and a first approximation) to this spectrum is obtained by the propagation of the initial 1S state only [@eddiethesis]. Müller *et al.* [@PhysRevA.37.1449] reported partial positron spectra for bare U-U collisions by propagating a number of initial bound-state vacancies.
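To make the bookkeeping of Eqs. \[creat2\] and \[creat1\] concrete, the following Python sketch evaluates both sums from a matrix of single-particle amplitudes. The matrix size, the placement of the Fermi level, and the random amplitudes are hypothetical placeholders standing in for the propagated amplitudes of the actual calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layout: 2048 discretized states, the first 1024 of which lie below
# the Fermi level F (negative-energy sea); the rest lie above F (bound states and
# positive continuum).  The amplitude matrix a[nu, k] (initial state nu -> final
# state k) is filled with random numbers standing in for the propagated amplitudes.
n_states, n_below = 2048, 1024
a = rng.normal(size=(n_states, n_states)) + 1j * rng.normal(size=(n_states, n_states))
a /= np.linalg.norm(a, axis=1, keepdims=True)   # each evolved state has unit norm

below_F = np.arange(n_below)             # initially occupied Dirac-sea states
above_F = np.arange(n_below, n_states)   # initially vacant states above F

# Eq. (creat2): electron spectrum at final states k > F, summed over nu < F
n_electron = np.sum(np.abs(a[np.ix_(below_F, above_F)]) ** 2, axis=0)

# Eq. (creat1): positron spectrum at final states q < F, summed over nu > F
n_positron = np.sum(np.abs(a[np.ix_(above_F, below_F)]) ** 2, axis=0)
```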
Propagating all the (discretized) states is accomplished in the present work by solving the time-independent Dirac equation (for many intermediate separations) using a matrix representation. The wavefunction is first expanded into spinor spherical harmonics, $\chi_{\kappa,\mu}$, $$\Psi_{\mu}(r,\theta,\phi)=\sum_{\kappa}{ \left(
\begin{array}{c}
G_{\kappa}(r)\chi_{\kappa,\mu}(\theta,\phi) \\
iF_{\kappa}(r)\chi_{-\kappa,\mu}(\theta,\phi)
\end{array}
\right)} \quad ,$$ which are labeled by the relativistic angular quantum number $\kappa$ and the magnetic quantum number $\mu$ [@greiner]. The Dirac equation for the scaled radial functions, $f(r)=rF(r)$ and $g(r)=rG(r)$, then becomes ($\hbar=$ c $=$ m$_{\rm e}=1$), $$\begin{aligned}
\label{syseqnG}
\frac{df_{\kappa}}{dr} - \frac{\kappa }{r}f_{\kappa} & = & - \left(E -1 \right)g_{\kappa} + \sum_{\bar{\kappa}=\pm1}^{\pm\infty}{\langle \chi_{\kappa,\mu} \left| V(r,R) \right| \chi_{\bar{\kappa},\mu} \rangle }g_{\bar{\kappa}} \quad , \\
\label{syseqnG2}
\frac{dg_{\kappa}}{dr} + \frac{\kappa}{r}g_{\kappa} & = & \left(E + 1 \right) f_{\kappa} - \sum_{\bar{\kappa}=\pm1}^{\pm\infty}{\langle \chi_{-\kappa,\mu} \left| V(r,R) \right| \chi_{-\bar{\kappa},\mu} \rangle }f_{\bar{\kappa}} \quad ,\end{aligned}$$ where $V(r,R)$ is the potential for two uniformly charged spheres displaced along the $z$-axis. This potential is expanded into Legendre polynomials according to $V(r,R)=\sum^{\infty}_{l=0}{V_l(r,R) P_l(\cos{\theta})}$ [@greiner] separating the equations into multipolar coupled equations. The mapped Fourier grid method is used to build a matrix representation of Eqs. \[syseqnG\] and \[syseqnG2\]. Upon diagonalization of the matrix representation a complete basis is obtained spanning both the positive and negative continua [@me1].
Propagating the initial states is accomplished by the application of the propagator to each initial state. The full propagator is given by, $$\label{expH}
| \Psi(\mathbf{r},t)\rangle=\hat{{\rm T}}\left\{ \exp\left(-i \int_{t_0}^t dt' H(t')\right) \right\}| \Psi(\mathbf{r},t_0) \rangle,$$ where $| \Psi(\mathbf{r},t_0) \rangle$ is the initial state, $\hat{{\rm T}}$ is the time-ordering symbol, and $H(t')$ is the time-dependent Dirac hamiltonian for the collision. Since the direct application of this propagator is not efficient for our purposes we use a short-time-step approximation: $H(t)$ is approximated to change linearly to $H(t+\Delta t)$.
In order to propagate the initial state forward a time-step $\Delta t$, it is expanded into an eigenbasis of $H(t+\Delta t)$, with eigenvalues $E_m(t+\Delta t)$. The propagator applied to the $m$th state thus yields, $$\label{proptrap}
\hat{{\rm T}}\left\{ \exp\left(-i \int_{t}^{t+\Delta t} dt' H(t')\right) \right\} \approx
\exp\left(-i \frac{E_m(t+\Delta t)+E_m(t)}{2}\Delta t \right).$$ This approximation to the integral is known as the trapezoid rule and has an error of $O(\Delta t^3)$. By solving for the basis at $t+\Delta t$, then projecting the state onto this basis and applying the approximate propagator in Eq. \[proptrap\], a state can be propagated by $\Delta t$. Re-applying the method many times allows one to propagate the approximate eigenstates throughout the collision. It should be noted that this method allows one to propagate a large number of states (the complete discretized spectrum) efficiently.
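A schematic implementation of a single such step is sketched below; `eigbasis` is a hypothetical helper returning the eigenvalues and eigenvectors of the matrix hamiltonian at a given time, the eigenvalues at $t$ and $t+\Delta t$ are assumed to be ordered consistently (adiabatic state tracking), and the non-orthogonality correction discussed in the following subsection is omitted.

```python
import numpy as np

def short_time_step(psi, t, dt, eigbasis):
    """Advance the state vector psi from t to t + dt (sketch of Eq. (proptrap)).

    eigbasis(t) is a hypothetical helper returning (E, V): eigenvalues E[m] and
    the matrix V whose columns are the corresponding eigenvectors of H(t) in the
    grid representation.
    """
    E0, _ = eigbasis(t)
    E1, V1 = eigbasis(t + dt)
    c = V1.conj().T @ psi                        # expand psi in the eigenbasis of H(t + dt)
    c = c * np.exp(-1j * 0.5 * (E0 + E1) * dt)   # trapezoid-rule phase, error O(dt^3)
    return V1 @ c                                # back to the grid representation
```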
Correcting for the non-orthogonality of the basis
-------------------------------------------------
The matrix representation of the hamiltonian generated by the mapped Fourier grid method is non-symmetric. Therefore, the eigenvectors do not have to be orthogonal. The mapping implies a non-uniform coordinate representation mesh [@me1], being dense at the origin but increasingly sparse as $r\rightarrow \infty$. Thus, for large $r$ the representation of Rydberg states and continuum states suffers from aliasing. We find that the lower-lying well-localized bound states satisfy orthogonality rather well (e.g., $\langle \phi_i|\phi_j\rangle \lsim 10^{-10}$ when $i\neq j$ for a calculation with $N=1024$), but that most continuum states are not strictly orthogonal. The deviation from zero in the orthogonality condition for continuum states can reach a value of 0.16 for $N=1024$ between continuum states near the ends of the continuum, where the energies reach $10^6$ m$_{\rm e}$c$^2$. For the relevant continuum states in this work, $E \lsim 20$ m$_{\rm e}$c$^2$, the deviation from zero is on the order of $10^{-4}$.
To correct for this deficiency the inverse of the inner-product matrix, $S_{i,j}=\langle \phi_i|\phi_j\rangle$, is needed. For a state $$\label{psia}
\left| \Psi \rangle\right. = \sum_n{b_n \left| \phi_n \rangle\right.}$$ the inner-product coefficient is given by $$\label{cm}
c_m = \langle \phi_m \left|\right. \Psi\rangle.$$ Inserting \[psia\] into \[cm\] the expansion coefficient, $b_n$, is related to the inner-product coefficient according to $$\begin{aligned}
\label{expcoefdef}
c_m & = & \sum_n{b_n \left( \langle \phi_m \left|\right. \phi_n\rangle \right)} \\
\Rightarrow b_n & = & \sum_m{ \left( S^{-1} \right)_{n,m} c_m}.\end{aligned}$$
The non-orthogonality of the mapped Fourier grid basis has no effect on the orthogonality properties of the Fock space basis. Thus, we can use the expressions in Eqs. \[creat2\] and \[creat1\]. The new (orthogonality-corrected) expansion coefficients, $b_n$, and not the inner-product coefficients, are the single-particle amplitudes needed for Eqs. \[creat2\] and \[creat1\].
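In matrix form the correction simply applies the inverse of the overlap matrix; a minimal numpy sketch is given below, with the grid inner product represented by a plain dot product (any quadrature weights of the non-uniform mesh are assumed to be absorbed into the vectors).

```python
import numpy as np

def corrected_expansion_coefficients(phi, psi):
    """Expansion coefficients b_n of psi in a possibly non-orthogonal basis.

    phi : array of shape (n_basis, n_grid); row n holds the basis vector |phi_n>.
    psi : array of shape (n_grid,); the state to be expanded.
    """
    S = phi.conj() @ phi.T        # overlap matrix S_{i,j} = <phi_i | phi_j>
    c = phi.conj() @ psi          # inner-product coefficients c_m = <phi_m | psi>
    return np.linalg.solve(S, c)  # b = S^{-1} c, cf. Eq. (expcoefdef)
```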
Supercritical resonances
------------------------
Resonances are characterized by two parameters: the energy, $E_{\rm res}$, and the width, $\Gamma$, which is related to the lifetime, $\tau$ by $\Gamma = \hbar/\tau$. The interpretation for supercritical resonances is somewhat different from scattering resonances, e.g., in the case of a single supercritical nucleus ($Z\gsim 169$), a ground-state vacancy can decay into a ground-state electron and a free positron [@ackadcs]. The situation may be viewed as the scattering of the negative-energy electrons from the supercritical vacancy state. The behavior is then similar to electrons scattering off an atom in an external electric field, i.e., a standard atomic resonance. The decay of the resonance is characterized, in this case, by the ejection of a positron and the creation of a ground-state electron from the static QED vacuum. Thus, the interpretation of supercritical resonances differs from atomic resonances: the outgoing state represents a bound-free pair and not the release of a previously bound electron. Supercritical resonance decay is sometimes referred to as charged vacuum decay, since pairs of charged particles are created spontaneously [@rafelski].
The basis obtained from the diagonalization of the hamiltonian generated by the mapped Fourier grid method contains the supercritical resonance [@ackadcs]. Thus, the effects of charged vacuum decay are included when the states are propagated through the collision with the current method: the positron spectrum includes the effects of supercritical resonance decay, as well as dynamical pair production.
Since nuclei with charges $Z\gsim 169$ do not exist, one can resort to adiabatically changing Dirac levels (such as the 1S$\sigma$ quasi-molecular ground state) in heavy-ion collisions near the Coulomb barrier [@pra10324]. As the nuclei approach, the quasi-molecular ground-state energy decreases continuously from its initial (subcritical) value to that of the supercritical resonance at closest approach. For a given combined charge $Z_1+Z_2$ a continuous range of supercritical resonance states will occur from the edge of the negative-energy continuum to the deepest-possible value at closest approach. As previously shown in Refs. [@ackadcs; @pra10324], deeper-lying resonance states have shorter lifetimes. The resonance decay signal will be dominated by the deepest resonance, i.e., the resonance formed at closest approach, since the collision system spends most time at the Coulomb barrier, and also because $\tau$ decreases with $E_{\rm res}$ [@ackadcs; @pra10324].
Previous calculations for spontaneous electron-positron production from bare heavy-ion collisions [@PhysRevA.37.1449] were performed within a computational basis with limited continuum energy resolution. While the present work is restricted to the zero-impact-parameter geometry, it is novel from the point of view of also propagating the discretized continua. This permits us to calculate not only all contributions to the total inclusive electron and positron spectra, but also the complete $(k,q)$ correlated spectra. Here $k$ labels any bound or unbound electron states, while $q$ represents the positron energy (or Dirac negative-energy) states.
Recently we developed methods for calculating resonance parameters, including a complex absorbing potential (CAP) [@ackadcs], complex scaling (CS) [@ackadcs], and smooth exterior scaling (SES) [@ackad:022503]. Using a CAP or SES, augmented with a Padé approximant, highly accurate results for the resonance parameters were obtained with little numerical effort [@ackad:022503]. These have been used to parameterize the resonance ($E_{\rm res}$ and $\Gamma$) of the U-U collision, as well as of a hypothetical Db-Db collision. The accurate knowledge of the decay times allows for a detailed comparison of these different supercritical systems, which is presented in section \[t0sec\].
Correlated spectra {#seccorr}
------------------
The positron and electron spectra obtained from Eqs. \[creat2\] and \[creat1\] are the result of both dynamical and static pair creation processes. They contain no information about the state of the partner particle, i.e., they are inclusive. Dynamical positron production can lead to pairs in very different energy states. Static pair production can only produce pairs with a ground-state electron (assuming the field is not supercritical for excited bound states). The positrons emanating from the resonance decay are likely to have an energy close to the resonance energy. Positrons from dynamical (1S $e^-+$ free $e^+$) pairs will have a much broader spectrum determined by the collision’s Fourier profile. Thus, the positron spectrum correlated to ground-state electrons should provide a cleaner signal of supercritical resonance decay than the inclusive $e^+$ spectrum.
The correlated spectrum is the sum of random (chance) correlations and true correlations. It is given by, $$\label{corr}
\langle n_k \bar{n}_q \rangle= \langle n_k\rangle \langle \bar{n}_q\rangle + \left| \sum_{j>F} a^*_{j,k}a_{j,q} \right|^2,$$ where $k>F$ and $q<F$ [@PhysRevLett.43.1307]. Here the sum is taken over positive $e^+$ energy states, while in [@PhysRevLett.43.1307] it is expressed in terms of negative-energy initial states. The first term is interpreted as the random coincidence of a pair being in the $(k,q)$-state, while the second term is interpreted as the purely correlated spectrum, called $C_{k,q}$. Thus, the signal for the decay of the supercritical resonance will be primarily contained in the spectrum $C_{{\rm 1S}\sigma,q}$. For most situations the random contribution is small compared to the pure correlation term, since it involves the product of two small probabilities.
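Both terms of Eq. \[corr\] can be evaluated directly from the amplitude matrix; the following sketch assumes the same hypothetical amplitude layout as in the theory section and returns the chance-coincidence and purely correlated parts separately.

```python
import numpy as np

def correlated_spectrum(a, above_F, below_F):
    """Evaluate Eq. (corr) for all pairs (k, q), with k > F and q < F.

    a[nu, k] : single-particle amplitudes (initial state nu, final state k).
    Returns (random_part, C): the chance-coincidence term <n_k><n_q> and the
    purely correlated term C_{k,q}, both of shape (len(above_F), len(below_F)).
    """
    n_k = np.sum(np.abs(a[np.ix_(below_F, above_F)]) ** 2, axis=0)  # Eq. (creat2)
    n_q = np.sum(np.abs(a[np.ix_(above_F, below_F)]) ** 2, axis=0)  # Eq. (creat1)
    random_part = np.outer(n_k, n_q)
    # C_{k,q} = | sum_{j > F} a*_{j,k} a_{j,q} |^2
    overlap = a[np.ix_(above_F, above_F)].conj().T @ a[np.ix_(above_F, below_F)]
    return random_part, np.abs(overlap) ** 2
```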
Results
=======
The supercritical collision of two fully-ionized uranium nuclei was computed for a head-on Rutherford trajectory at a center-of-mass energy of $E_{\rm CM}=740$ MeV. All results are given in natural units ($\hbar=$ m$_{\rm e}=$ c $=1$): thus an energy of $E=\pm 1$ corresponds to $E=\pm$m$_{\rm e}$c$^2$, and an internuclear distance $R=1$ corresponds to $R=\hbar/($m$_{\rm e}$c$)\approx386.16$ fm.
The calculations presented in this work were performed with a mesh size of $N=1024$ resulting in 2048 eigenstates [@me1]. The mapped Fourier grid scaling parameter value of $s=700$ was found to give stable results (cf. Ref. [@ackadcs]). The time mesh (in atomic time units where 1 a.t.u.$\approx 2.42\times10^{-17}$s) was broken up into three different parts with $\Delta t=4\times10^{-4}$ for $R>8$, $\Delta t=1\times10^{-4}$ for $8\geq R \geq 2$ and $\Delta t=1\times10^{-6}$ for $R < 2$.
For the U-U system a center-of-mass energy of 740 MeV corresponds to a closest-approach distance of 16.5 fm. This results in a deep resonance with $E_{\rm res}=1.56$ m$_{\rm e}$c$^2$ and $\Gamma=1.68$ keV calculated using the technique of Ref. [@ackad:022503]. The nuclei are assumed to be spherical with a radius of $R_{\rm n}=7.44$ fm (using $R_{\rm n}=1.2\,A^{1/3}$ fm with $A=238$ for uranium). The nuclei are thus 1.63 fm away from touching. This choice was made to facilitate comparison with previous calculations by Müller *et al.* [@PhysRevA.37.1449].
Additional collision systems were also considered to help understand the U-U results. The subcritical collision system of francium-francium ($A=222$) at $E_{\rm CM}=674.5$ MeV was chosen to analyze the purely dynamical processes. A dubnium-dubnium ($A=268$) collision at $E_{\rm CM}=928.4$ MeV was also chosen to better understand the supercritical resonance decay. The dubnium isotope $A=268$ has a half-life of 1.2 days [@CRC], perhaps making such experiments possible. Practical limitations, of course, exist, as one would need to produce large numbers of ions in order to perform collision experiments.
At closest approach the Db-Db system has resonance parameters $E_{\rm res}=3.57$ m$_{\rm e}$c$^2$ and $\Gamma=47.8$ keV, the latter corresponding to a resonance lifetime of $\tau=13.8$ zeptoseconds ($10^{-21}$ s). A strong signal should be detectable in this case, since the decay time is only about an order of magnitude larger than the collision time. The collision energies were chosen to result in the nuclei being 1.63 fm from touching for each system. Thus the dynamics of all three systems is expected to be approximately the same.
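The quoted kinematic and resonance numbers follow from elementary estimates; the short check below approximately reproduces the closest-approach distance of the head-on U-U trajectory (point-charge estimate), the near-touching gap, and the $\Gamma\rightarrow\tau$ conversion for the U-U and Db-Db resonances. Small differences with respect to the quoted values arise from rounding of the constants.

```python
E2 = 1.44               # e^2/(4 pi eps_0) in MeV fm
HBAR_KEV_S = 6.582e-19  # hbar in keV s

# Head-on Rutherford trajectory: at closest approach all CM kinetic energy is
# Coulomb potential energy (point-charge estimate), E_CM = Z1 Z2 e^2 / d_min.
Z, A, E_cm = 92, 238, 740.0
d_min = Z * Z * E2 / E_cm          # ~16.5 fm
R_n = 1.2 * A ** (1.0 / 3.0)       # ~7.44 fm
print(f"d_min = {d_min:.1f} fm, gap to touching = {d_min - 2 * R_n:.2f} fm")

# Resonance lifetime tau = hbar / Gamma at closest approach (cf. Table 1)
for system, gamma_kev in (("U-U", 1.68), ("Db-Db", 47.8)):
    print(f"{system}: tau = {HBAR_KEV_S / gamma_kev:.2e} s")
```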
The data for the three systems are summarized in Table \[tb1\]. The resonance decay time $\tau$ should be compared to the dynamical time, $T_0$, which is the time when the 1S$\sigma$ state is supercritical (without sticking).
  Collision system   $Z_{\rm united}$   $E_{CM}$ \[MeV\]   $E_{{\rm 1S}\sigma}$ \[m$_{\rm e}$c$^2$\]   $\Gamma$ \[keV\]   $\tau$ \[$10^{-21}$s\]   $T_0$ \[$10^{-21}$s\]
  ----------------   ----------------   ----------------   -----------------------------------------   ----------------   ----------------------   ---------------------
  Fr-Fr              174                674                -0.99                                        -                  -                        -
  U-U                184                740                -1.56                                        1.68               392                      2.3
  Db-Db              210                928                -3.57                                        47.8               13.8                     4.1
  : \[tb1\] Comparison of the parameters for the Fr-Fr, U-U and Db-Db systems at a closest-approach distance of near-touching (cf. text). For the supercritical U-U and Db-Db cases the positron resonance energy is obtained as $E_{\rm res}=-E_{{\rm 1S}\sigma}$. For Fr-Fr the Dirac eigenvalue at closest approach is just above $E=-$m$_{\rm e}$c$^2$. The final column gives the time the 1S$\sigma$ state is supercritical along the Rutherford trajectory.
It can be seen that for the Db-Db system the resonance decay time $\tau$ approaches the total supercritical time $T_0$.
Collisions without sticking {#t0sec}
---------------------------
Propagating all initial (discretized) positive-energy states allows for the calculation of the total positron spectrum using Eq. \[creat1\]. The eigenstates of a single, target-centered nucleus were used for the initial conditions. The change of basis from a single nucleus to the quasi-molecular system in the target frame does not induce transitions provided the initial distance is far enough ($R \geq 25$).
Figure \[fig1\] shows the total energy-differential positron probabilities for the three collision systems.
![(Color Online) The total energy-differential positron probability spectrum for three collision systems at center-of-mass energies in which the nuclei are 1.63 fm from touching at closest approach. The dotted (blue) curve is for the Db-Db system which has a deep supercritical resonance (cf. Table \[tb1\]). The dashed (green) curve is for the U-U system under study. The solid (red) curve is for the Fr-Fr system which remains subcritical, providing a background spectrum resulting solely from the dynamics of the collision.[]{data-label="fig1"}](dpdeZs.ps)
The results for all three cases agree for $E<1.25$, which indicates that this part of the spectrum is purely dynamical. The Db-Db result shows a broad peak centered at $E \approx 2.6$. The U-U data show a deviation from the Fr-Fr results in the range $E=1.25-4.5$. The Fr-Fr spectrum can be understood to represent the differential positron probability solely due to the dynamics of the system, without any effects from supercritical field decay. For positron energies where the Fr-Fr spectrum matches the other two, only positrons created by the changing potential in the collision contribute.
The data suggest that the spontaneous and dynamical contributions to the inclusive positron spectra are additive, although when comparing the Fr-Fr and Db-Db data one should not rule out the possibility of an increase in dynamical positron production due to the dramatic change in $Z$, particularly at high positron energies.
The results for all systems are not entirely smooth. While the structures are stable with respect to change in calculation parameters such as the basis size, they do vary slightly with final separation.
The Db-Db peak is not centered near $E_{\rm res}=3.57$, but at $E\approx 2.6$. In part this is due to the continuously varying intermediate supercritical resonance states before and after closest approach contributing in addition to the dominating closest-approach resonance [@PhysRevA.37.1449]. One can find how $\Gamma$ changes with $E_{\rm res}$ using data from different systems [@ackadcs], and deduce that an energy peak of $E\approx 2.9$ is expected using $$\label{predictE}
E_{\rm peak}\approx\frac{\int_{1}^{E_{\rm res}} \Gamma(E) E dE}{\int_{1}^{E_{\rm res}} \Gamma(E) dE}.$$ Thus, the interplay of the intermediate resonance state decays and the dynamical positrons (from the rapidly varying bound states) makes a single sharp resonance peak at $E_{\rm res}$ unattainable without nuclear sticking. Note that a significant energy broadening is to be expected due to the collision time, $T_0$, being significantly shorter than the decay time of the resonance. Here $T_0$ is chosen as the time of supercriticality, and is given in Table \[tb1\].
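Eq. \[predictE\] is a $\Gamma$-weighted average over the resonance energies traversed during the collision; a generic numerical evaluation is sketched below, where `gamma_of_E` stands for whatever parameterization of $\Gamma(E)$ is extracted from the resonance calculations. The linear placeholder used in the example is purely illustrative.

```python
import numpy as np

def peak_estimate(gamma_of_E, E_res, n=2000):
    """Eq. (predictE) evaluated on a uniform energy grid from E = 1 to E = E_res.

    On a uniform grid the ratio of the two integrals reduces to a Gamma-weighted
    mean of E, so simple sums suffice.
    """
    E = np.linspace(1.0, E_res, n)
    w = gamma_of_E(E)
    return np.sum(w * E) / np.sum(w)

# Purely illustrative placeholder for Gamma(E); the actual energy dependence has
# to be taken from the parameterized resonance data.
gamma_placeholder = lambda E: np.clip(E - 1.0, 0.0, None)

print(peak_estimate(gamma_placeholder, 3.57))   # rough number for the Db-Db case
```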
The dynamical background is almost exclusively due to the inclusion of positive-continuum initial states in Eq. \[creat1\]. Propagating only bound states yields a peak where the tails contain negligible differential probability, as was found in Ref. [@PhysRevA.37.1449] based upon the propagation of bound vacancy states and their coupling to the negative-energy continuum. The low-lying positive-continuum states in Fig. \[fig1\] show significant positron production probabilities. These states are excited due to their necessity in representing the quickly changing (super- or sub-critical) ground state in the asymptotic basis.
The subcritical dynamical background may be subtracted from the U-U spectrum, highlighting the effects of the supercritical resonance decay on $dP/dE$. This is done by subtracting the Fr-Fr spectrum from the U-U spectrum. The result is the dashed (blue) curve shown in Fig. \[fig2\].
![(Color Online) The difference between the U-U and Fr-Fr total differential positron spectra shown in Fig. \[fig1\] is represented by the dashed (blue) curve. The solid (red) curve shows the correlated differential spectrum, $C_{{\rm 1S},q}$. []{data-label="fig2"}](UUmFr_fix.ps)
The decay signal is heavily smeared by the dynamical process leading into, and out of, supercriticality. Thus, a sharp resonance peak with a width $\Gamma$ is not observed. The spectrum still includes dynamical effects of the system being supercritical with the ground state inside the negative-energy continuum. Nonetheless, the resultant spectrum shows a single peak centered at $E\approx2.1$, close to $E=1.8$ predicted by Eq. \[predictE\].
This difference spectrum can be compared with the correlated (1S $e^-$, free $e^+$) spectrum, $C_{{\rm 1S},q}$, which is shown as the (red) solid curve in Fig. \[fig2\]. This correlated spectrum was calculated using the second term of Eq. \[corr\] and provides a more distinct signal for supercritical resonance decay, as can be seen by comparing Figs. \[fig1\] and \[fig2\]. All the states of the finite representation spanning the two continua and the bound states were used.
The correlated spectrum is also peaked at $E\approx 2.1$ and matches the inclusive spectrum at low and high energies. The difference peak is slightly below the correlated peak. This is unexpected, since the correlated spectrum contains fewer processes: it excludes the other (nS $e^-$, free $e^+$) pair production channels that contribute while the system is supercritical, predominantly dynamical (2S $e^-$, free $e^+$) production. If there were no change in the dynamical contributions to the inclusive positron spectra when going from $Z_{\rm united}=174$ to $Z_{\rm united}=184$, and if the additivity of the dynamical and supercritical positron productions were perfect, then one would expect the inclusive difference spectrum to exceed somewhat the $C_{{\rm 1S},q}$ correlated U-U spectrum. Thus, the comparison shows that a proper isolation of the supercritical contribution in the inclusive spectra requires a determination of how the dynamical contributions scale with $Z$.
The coincidence ($n$S $e^-$, free $e^+$) spectra can be compared to previous calculations where only a few ($n$S) vacancies were propagated to obtain a partial inclusive spectrum [@PhysRevA.37.1449]. We find very good agreement in the case of (1S $e^-$, free $e^+$) pair production, and an overestimation of the correlated spectra by the simpler calculation in the case of the (2S $e^-$, free $e^+$) channel.
Effects of nuclear sticking on uranium-uranium collisions
---------------------------------------------------------
The phenomenon of nuclear sticking enhances the supercritical resonance decay signal. Trajectories with nuclear sticking are obtained by propagating the states at closest approach while keeping the basis stationary for a time, $T$. No basis projections are needed for this part, since the basis remains static and the time evolution of each eigenstate is obtainable from a phase factor.
Figure \[fig3\] shows $dP/dE$ for $T=0,1,2,5$ and 10 zeptoseconds calculated from Eq. \[creat1\]. A sticking time of 10 zeptoseconds is the longest time predicted as realistically attainable by the nuclear theory groups [@EurPJA.14.191; @zagrebaev:031602; @Sticking2]. All initial positive-energy states were propagated in order to compute the inclusive spectrum.
![(Color Online) The total differential positron spectrum, $dP/dE$ for the U-U system at $E_{CM}=740$ MeV, for different nuclear sticking times calculated using Eq. \[creat1\]. The solid (red) curve is for $T=0$, the long dash (green) for $T=1$, the short dashed (blue) for $T=2$, the dotted (magenta) for $T=5$ and the dashed-dot (cyan) for $T=10$. []{data-label="fig3"}](UUspecTsr.ps)
As $T$ increases the peaks become narrower and move closer to the closest-approach resonance position of $E_{\rm res}=1.56$. All curves converge for $E<1.25$ and are close for $E>4.2$. The secondary peaks exhibited by the $T\geq 5$ curves, in the range displayed, are at $E=2.5$ for $T=10$ and $E=3.25$ for $T=5$. The decay of the supercritical resonance also results in an increase in the total number of positrons created in conjunction with electrons in the ground state. Most of these electrons remain in the ground state, while in a few cases they are subsequently excited into higher states. Table \[table\] gives the probabilities for the three lowest bound states calculated using Eq. \[creat2\] including all positron continuum states.
  $T$    1S       2S        3S
  -----  -------  --------  ---------------------
  0      0.633    0.0225    7.38$\times10^{-3}$
  1      0.916    0.0354    1.10$\times10^{-2}$
  2      1.16     0.0475    1.30$\times10^{-2}$
  5      1.95     0.0321    9.39$\times10^{-3}$
  10     3.08     0.0958    3.58$\times10^{-2}$
: \[table\] Final electron creation probabilities of the low-lying bound states for U-U at $E_{CM}=740$ MeV for different nuclear sticking times, $T$. These were calculated by evolving all positron continuum states of a matrix representation of the hamiltonian using Eq. \[creat2\].
The increase in the bound-state probabilities from $T=0$ to just $T=1$ is over 45%. The probabilities increase further as $T$ increases in all cases except for the excited states at $T=5$, where a decrease in the 2S and 3S populations is observed; this does not happen for all of the excited states. We suspect that for a sticking time of about $T=5$ the dynamical interference which causes the many secondary peaks in the positron spectrum is responsible for this behavior. Thus, the 2S and 3S populations are lower due to destructive interference with dynamical processes.
There is a particularly strong increase in the ground-state population as $T$ increases. An almost five-fold increase is observed when increasing the sticking time from $T=0$ to $T=10$. This is primarily due to the supercritical resonance decay. The average charge state of the uranium ions after the collision can thus be used as an aid in the detection of the supercritical resonance decay by using a coincidence counting technique.
The continuum electron spectrum was also calculated using Eq. \[creat2\]. The results are the same for all $T$, and follow the same curve as the $T=0$ positron spectrum in Fig. \[fig3\]. Thus, the free electrons are not influenced by the sticking time and the statically created ground-state electrons are very unlikely to be excited to the continuum during the outgoing part of the collision.
The increase in the 1S population, as well as the increase of the main peak in the positron spectrum, are indirect signs of supercritical resonance decay. As shown above, a more direct signal is given by the correlation spectrum. Figure \[corpos\] shows the total differential positron spectra along with the differential correlation spectrum, $C_{k,q}$, using Eq. \[corr\] with $k=$1S. The $T=0$ system is shown as a solid (red) curve, the $T=5$ as a dashed (green) curve and the $T=10$ as a dotted (blue) curve. The correlated spectra are drawn in the same style (color) as the total differential spectra for the same sticking time, with the lower curve of each pair representing the correlated spectrum. The correlated curves lie lower because the correlated spectrum is selective about which positrons are included, i.e., only positrons created together with an electron in the ground state (either the 1S or, for large $R$, the 1S$\sigma$) are counted.
![(Color Online) The total positron spectrum for the U-U system at $E_{CM}=740$ MeV for different nuclear sticking times (upper curves) compared to the correlated spectrum of positrons with partner electrons in the 1S state (lower curves with the same color). The red curves are for $T=0$, the (green) dashed curves are for $T=5$ and the (blue) dotted curves are for $T=10$. The correlated positron spectra contain only a small amount of dynamical positron background. The points with horizontal error bars of the same color represent the width estimate ($\hbar/(T+T_0)$ with $T_0=2.3\times10^{-21}$s, cf. Table \[tb1\]). []{data-label="corpos"}](poscorfinal.ps)
The correlated peaks also become narrower and are centered closer to $E_{\rm res}$ as $T$ increases in the same manner as $dP/dE$. The correlated spectra resemble closely the results from propagating the bound states only, as done in Ref. [@PhysRevA.37.1449]. They lack the significant dynamical background, which can be estimated from the Fr-Fr collision system (cf. Fig. \[fig1\]). Thus, the correlated spectra are dominated by the supercritical resonance decay, i.e. the desired signal. In experiments one can aim, therefore, for a comparison of inclusive positron spectra with those obtained from a coincidence detection where one of the ions has changed its charge state.
For $T\geq5$ additional peaks are observed in the correlated spectra. The $T=10$ spectrum exhibits a secondary peak at $E \approx 2.6$, and a third smaller peak, in the range shown, at $E\approx3.25$. The $T=5$ result has a single secondary peak in the range shown, coincidentally also at $E\approx 3.25$. The second peak for $T=5$ is much broader than both the second and third peaks in the $T=10$ correlation spectrum.
The secondary peaks are found for the trajectories including sticking. The sticking changes the dynamics of the collision, and therefore causes the peaks. These peaks were also observed in the Fr-Fr system when sticking was included, eliminating the possibility that they are due to the supercriticality of the system. The energy separation between the peaks was found to depend only on $T$. The secondary peaks have also been previously seen in less energetic U-U collisions where $E_{CM}=610$ MeV [@eackadconf2007]. The series of peaks begins with the second peak and decreases in amplitude, with peaks extending to energies above 50 m$_{\rm e}$c$^2$. In all systems studied, it was found that this series of secondary peaks had separations that were largely independent of the system. Thus, the decay of the supercritical resonance adds to the amplitude of the secondary peaks, but does not cause them.
At half of the respective peak’s height in Fig. \[corpos\], a width estimate of the resonance decay spectrum $\hbar/(T+T_0)$ is shown in the same color as the curve. Here $T_0=2.3$ zeptoseconds, i.e., the time the system is supercritical without sticking, was chosen to characterize the supercritical decay. The comparison shows that the widths of the dominant peak structures are not explained perfectly. The estimate ignores the interplay between dynamical and spontaneous positron production. Better agreement would be obtained for smaller $T_0$, which can be justified by the observation that supercritical resonance decay becomes appreciable only when the 1S$\sigma$ state is embedded more deeply in the $E<-$m$_{\rm e}$c$^2$ continuum.
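For reference, the width estimate indicated by the horizontal bars amounts to a one-line evaluation of $\hbar/(T+T_0)$; the sketch below converts it to units of m$_{\rm e}$c$^2$ for the sticking times shown in the figure.

```python
HBAR_EV_S = 6.582e-16  # hbar in eV s
MEC2_EV = 0.511e6      # electron rest energy in eV
T0 = 2.3e-21           # time of supercriticality without sticking, in s (Table 1)

for T in (0.0, 5.0e-21, 10.0e-21):           # sticking times shown in Fig. [corpos]
    width = HBAR_EV_S / (T + T0) / MEC2_EV   # width estimate in units of m_e c^2
    print(f"T = {T * 1e21:4.1f} zs :  hbar/(T+T0) = {width:.2f} m_e c^2")
```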
The correlation spectrum, $C_{k,q}$, defined by Eq. \[corr\] can also be calculated for (free $e^-$, free $e^+$) correlated pairs. This type of pair production can be detected more easily. However, none of the systems described in this paper showed a discernible signal. This is not surprising given the intermediate collision energies and the fact that static pair creation does not lead to free electrons.
Using Eqs. \[creat2\], \[creat1\] and \[corr\] it is possible to examine the spectra immediately following supercriticality. The resultant spectra, in the target-frame two-center basis, would be indistinguishable from those shown in Figs. \[fig3\] and \[corpos\]. There is almost no change in the total positron spectrum. Thus, almost all of the positrons have already been created by the time the system emerges from supercriticality. Similarly, the correlated spectrum does not change. After supercriticality no static pairs can be created, and we find that dynamic pair creation ceases to be effective.
Conclusions
===========
Total inclusive positron and electron spectra were calculated for U-U zero-impact-parameter collisions at $E_{CM}=740$ MeV with nuclear sticking times $T=0,1,2,5$ and 10 zeptoseconds. The present calculations, while limited in geometry, represent a first attempt at propagating a dense representation of both positive- and negative-energy eigenstates. Correlated spectra were also calculated and results were presented for (1S $e^-+$ free $e^+$) pair creation, the only channel available for supercritical decay in this system. While we found that the (1S $e^-+$ free $e^+$) correlated spectra agreed rather well with naively calculated spectra from propagating a 1S vacancy, it is important to note that substantial differences were observed for the case of (2S $e^-+$ free $e^+$) pairs.
The results were compared to subcritical Fr-Fr and highly supercritical Db-Db collisions at the Coulomb barrier. It was found that the correlation spectra provide the clearest signal for the supercritical resonance decay, although one can subtract the inclusive positron spectra to isolate the supercritical positron production from the purely dynamical contributions. This was demonstrated by the fact that at low and at high energies the inclusive spectra merge for the three systems: U-U, Fr-Fr and Db-Db. This differs from previous findings of Ref. [@pra10324], where a constant $E_{CM}$ was used, whereas the present work employed a constant internuclear touching distance by changing $E_{CM}$.
The secondary peaks found in previous work [@eackadconf2007; @PhysRevA.37.1449] were explained as resulting from the change in the trajectory due to nuclear sticking. They were found in subcritical collisions, eliminating the possibility that they were related to supercritical resonance decay.
Some bound-state population results were also presented. The results show large increases in the bound-state populations as the nuclear sticking time $T$ increases. Given that near-zero-impact-parameter collisions can be selected by a nuclear coincidence count (large scattering angles), this channel will provide a strong indication of the nuclear sticking effect. Thus, an increase in the average charge state of the nuclei, together with sharper positron peaks, could be used to demonstrate the existence of long sticking times and the existence of charged vacuum decay.
The results presented in this paper are based on single-electron Dirac calculations with spin degeneracy. Thus, for small pair production probabilities it is correct to multiply them by a factor of two. For the (1S $e^-$, free $e^+$) correlated spectra one would have to take into account Pauli blocking effects and make use of many-electron atomic structure once the probabilities become appreciable.
The current work was performed in the monopole approximation to the two-center Coulomb potential. In previous work where static resonance parameters were calculated it was shown that the next-order effect is caused by the quadrupole coupling of the S$-$D states [@eackadconf2007]. While this coupling will certainly modify the resonance, e.g., increasing its energy, $E_{\rm res}$, it is unlikely that it will change the spectra significantly. This conclusion is based on the observation that substantial changes in $\Gamma$ due to higher-order contributions occur at larger internuclear separations where $\Gamma$ is small. It is expected that the main peak will be somewhat higher and broader, as indicated by the Db-Db results, due to the increased width $\Gamma$.
Acknowledgments
===============
The authors would like to thank Igor Khavkine for useful discussions. This work was supported by NSERC Canada, and was carried out using the Shared Hierarchical Academic Research Computing Network (SHARCNET:www.sharcnet.ca). E. Ackad was supported by the Ontario Graduate Scholarship program.
---
abstract: 'In this paper, we propose a new weight initialization method called [*even initialization*]{} for wide and deep nonlinear neural networks with the ReLU activation function. We prove that no poor local minimum exists in the initial loss landscape in the wide and deep nonlinear neural network initialized by the even initialization method that we propose. Specifically, in the initial loss landscape of such a wide and deep ReLU neural network model, the following four statements hold true: 1) the loss function is non-convex and non-concave; 2) every local minimum is a global minimum; 3) every critical point that is not a global minimum is a saddle point; and 4) bad saddle points exist. We also show that the weight values initialized by the even initialization method are contained in those initialized by both of the (often used) standard initialization and He initialization methods.'
author:
- |
Tohru Nitta\
National Institute of Advanced Industrial Science and Technology (AIST), Japan\
`tohru-nitta@aist.go.jp`\
title: Weight Initialization without Local Minima in Deep Nonlinear Neural Networks
---
Introduction {#intro}
============
Hinton et al. (2006) proposed Deep Belief Networks with a learning algorithm that trains one layer at a time. Since that report, deep neural networks have attracted extensive attention because of their human-like intelligence achieved through learning and generalization. To date, deep neural networks have produced outstanding results in the fields of image processing and speech recognition (Mohamed et al., 2009; Seide et al., 2011; Taigman et al., 2014). Moreover, their scope of application has expanded, for example, to the field of machine translation (Sutskever et al., 2014).
In using deep neural networks, finding a good initialization becomes extremely important to achieve good results. Heuristics have long been used for the weight initialization of neural networks. For example, a uniform distribution $
U [ -1/\sqrt{n}, 1/\sqrt{n} ]
$ has often been used, where $n$ is the number of neurons in the preceding layer. Pre-training might be regarded as a kind of weight initialization method, which could avoid local minima and plateaus (Bengio et al., 2007). However, theoretical research on weight initialization methods has been progressing in recent years. Glorot and Bengio (2010) derived a theoretically sound uniform distribution $
U [ -\sqrt{6} / \sqrt{n_i + n_{i+1} },\ \sqrt{6} /\sqrt{n_i + n_{i+1} } ]
$ for the weight initialization of deep neural networks with an activation function which is symmetric and linear at the origin. He et al. (2015) proposed a weight initialization method (called [*He initialization*]{} here) with a normal distribution (either $N ( 0, 2 / n_i )$ or $N ( 0, 2 / n_{i+1} )$) for neural networks with the ReLU (Rectified Linear Unit) activation function. These two initialization methods are supported by experiments monitoring activations and back-propagated gradients during learning.
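For concreteness, the three initialization schemes mentioned above can be written in a few lines of numpy. The sketch below adopts the fan-in convention (scaling by the number of incoming units) wherever a choice between the two variants has to be made.

```python
import numpy as np

rng = np.random.default_rng(0)

def standard_init(n_in, n_out):
    """Heuristic uniform initialization U[-1/sqrt(n), 1/sqrt(n)], n = fan-in."""
    bound = 1.0 / np.sqrt(n_in)
    return rng.uniform(-bound, bound, size=(n_out, n_in))

def glorot_init(n_in, n_out):
    """Glorot & Bengio (2010): U[-sqrt(6)/sqrt(n_i + n_{i+1}), sqrt(6)/sqrt(n_i + n_{i+1})]."""
    bound = np.sqrt(6.0) / np.sqrt(n_in + n_out)
    return rng.uniform(-bound, bound, size=(n_out, n_in))

def he_init(n_in, n_out):
    """He et al. (2015): N(0, 2/n_i), i.e. the fan-in variant, for ReLU networks."""
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))
```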
On the other hand, the local minima of deep neural networks have been investigated theoretically in recent years. Local minima cause plateaus, which have a strong negative influence on learning in deep neural networks. Dauphin et al. (2014) experimentally investigated the distribution of the critical points of a single-layer MLP and demonstrated that the probability of the existence of local minima with large error (i.e., bad or poor local minima) is very small. Choromanska et al. provided a theoretical justification for the work of Dauphin et al. (2014) on a deep neural network with ReLU units using the spherical spin-glass model under seven assumptions (Choromanska, Henaff, Mathieu, Arous & LeCun, 2015). Choromanska et al. also suggested that discarding the seven unrealistic assumptions remains an important open problem (Choromanska, LeCun & Arous, 2015). Kawaguchi (2016) discarded most of these assumptions and proved the following four statements for a deep nonlinear neural network under only two of the seven assumptions: 1) the loss function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) bad saddle points exist.
In this paper, we propose a new weight initialization method (called [*even initialization*]{}) for wide and deep nonlinear neural networks with the ReLU activation function: weights are initialized independently and identically according to a probability distribution whose probability density function is even on $[ -1/n, 1/n]$ where $n$ is the number of neurons in the layer. Using the research results presented by Kawaguchi (2016), we prove that no poor local minimum exists in the initial loss landscape in the wide and deep nonlinear neural network initialized by the even initialization method that we propose. We also show that the weight values initialized by the even initialization method are contained in those initialized by both of the (often used) standard initialization and He initialization methods.
Weight Initialization without Local Minima {#initialize_weight}
==========================================
In this section, we propose a weight initialization method for a wide and deep neural network model. No local minimum exists in the initial weight space of a wide and deep neural network initialized by this method. Moreover, the weight values produced by the proposed method are contained in those produced by both the standard initialization and the He initialization methods. In other words, for both the standard initialization and the He initialization there exists an interval of initial weights in which no local minimum exists.
Kawaguchi Model {#kawa-model}
---------------
This subsection presents a description of the deep nonlinear neural network model analyzed by Kawaguchi (2016) (we call it [*Kawaguchi model*]{} here). We will propose a new weight initialization method for the Kawaguchi model in the subsection \[even\_weight\].
First, we consider the following neuron. The net input $U_n$ to a neuron $n$ is defined as $U_n = \sum_m W_{nm}I_m$, where $W_{nm}$ represents the weight connecting the neurons $n$ and $m$, and $I_m$ represents the input signal from the neuron $m$. It is noteworthy that biases are omitted for the sake of simplicity. The output signal is defined as $\varphi(U_n)$, where $\varphi(u) {\stackrel{\rm def}{=}}\max(0, u)$ for any $u \in {\mbox{\boldmath $R$}}$ is called the [*Rectified Linear Unit*]{} ([*ReLU*]{}) and ${\mbox{\boldmath $R$}}$ denotes the set of real numbers.
The deep nonlinear neural network described in (Kawaguchi, 2016) consists of such neurons described above (Fig. \[fig1\]). The network has $H+2$ layers ($H$ is the number of hidden layers). The activation function $\psi$ of the neuron in the output layer is linear, i.e., $\psi(u)=u$ for any $u\in{\mbox{\boldmath $R$}}$. For any $0 \leq k \leq H+1$, let $d_k$ denote the number of neurons of the $k$-th layer, that is, the width of the $k$-th layer where the 0-th layer is the input layer and the $(H+1)$-th layer is the output layer. Let $d_x = d_0$ and $d_y = d_{H+1}$ for simplicity.
Let $({\mbox{\boldmath $X$}}, {\mbox{\boldmath $Y$}})$ be the training data where ${\mbox{\boldmath $X$}}\in {\mbox{\boldmath $R$}}^{d_x \times m}$ and ${\mbox{\boldmath $Y$}}\in {\mbox{\boldmath $R$}}^{d_y \times m}$ and where $m$ denotes the number of training patterns. We can rewrite the $m$ training data as $\{ ({\mbox{\boldmath $X$}}_i, {\mbox{\boldmath $Y$}}_i) \}_{i=1}^m$ where ${\mbox{\boldmath $X$}}_i \in {\mbox{\boldmath $R$}}^{d_x}$ is the $i$-th input training pattern and ${\mbox{\boldmath $Y$}}_i \in {\mbox{\boldmath $R$}}^{d_y}$ is the $i$-th output training pattern. Let ${\mbox{\boldmath $W$}}_k$ denote the weight matrix between the $(k-1)$-th layer and the $k$-th layer for any $1 \leq k \leq H+1$. Let ${\mbox{\boldmath $\Theta$}}$ denote the one-dimensional vector which consists of all the weight parameters of the deep nonlinear neural network.
Kawaguchi specifically examined a path from an input neuron to an output neuron of the deep nonlinear neural network (Fig. \[fig2\]), and expressed the actual output of output neuron $j$ of the output layer of the deep nonlinear neural network for the $i$-th input training pattern ${\mbox{\boldmath $X$}}_i \in {\mbox{\boldmath $R$}}^{d_x}$ as $$\begin{aligned}
\label{eqn2-1}
\hat{{\mbox{\boldmath $Y$}}}_i({\mbox{\boldmath $\Theta$}}, {\mbox{\boldmath $X$}}_i)_j = q\sum_{p=1}^\Psi [{\mbox{\boldmath $X$}}_i]_{(j,p)} [{\mbox{\boldmath $Z$}}_i]_{(j,p)}
\prod_{k=1}^{H+1} w_{(j,p)}^{(k)} \in {\mbox{\boldmath $R$}}\end{aligned}$$ where $\Psi$ represents the total number of paths from the input layer to output neuron $j$, $[{\mbox{\boldmath $X$}}_i]_{(j,p)} \in {\mbox{\boldmath $R$}}$ denotes the component of the $i$-th input training pattern ${\mbox{\boldmath $X$}}_i \in {\mbox{\boldmath $R$}}^{d_x}$ that is used in the $p$-th path to the $j$-th output neuron. Also, $[{\mbox{\boldmath $Z$}}_i]_{(j,p)} \in \{0, 1\}$ represents whether the $p$-th path to the output neuron $j$ is active or not for each training pattern $i$ as a result of ReLU activation. $[{\mbox{\boldmath $Z$}}_i]_{(j,p)} = 1$ means that the path is active, and $[{\mbox{\boldmath $Z$}}_i]_{(j,p)} = 0$ means that the path is inactive. $w_{(j,p)}^{(k)} \in {\mbox{\boldmath $R$}}$ is the component of the weight matrix ${\mbox{\boldmath $W$}}_k \in {\mbox{\boldmath $R$}}^{d_k \times d_{k-1} }$ that is used in the $p$-th path to the output neuron $j$.
The objective of the training is to find the parameters which minimize the error function defined as $$\label{eqn2-2}
L({\mbox{\boldmath $\Theta$}}) = \frac{1}{2} \sum_{i=1}^m
E_{{\mbox{\boldmath $Z$}}} \Vert \hat{{\mbox{\boldmath $Y$}}}_i({\mbox{\boldmath $\Theta$}}, {\mbox{\boldmath $X$}}_i) - {\mbox{\boldmath $Y$}}_i \Vert^2$$ where $\Vert \cdot \Vert$ is the Euclidean norm, that is, $\Vert {\mbox{\boldmath $u$}}\Vert = \sqrt{u_1^2 + \cdots + u_N^2}$ for a vector ${\mbox{\boldmath $u$}}= (u_1 \cdots u_N)^T \in {\mbox{\boldmath $R$}}^N$, and $\hat{{\mbox{\boldmath $Y$}}}_i({\mbox{\boldmath $\Theta$}}, {\mbox{\boldmath $X$}}_i) \in {\mbox{\boldmath $R$}}^{d_y}$ is the actual output of the output layer of the deep nonlinear neural network for the $i$-th training pattern ${\mbox{\boldmath $X$}}_i$. The expectation in Eq. (\[eqn2-2\]) is made with respect to random vector ${\mbox{\boldmath $Z$}}= \{ [{\mbox{\boldmath $Z$}}_i]_{(j,p)} \}$.
The Kawaguchi model has been analyzed based on the following two assumptions.
[**A1p-m**]{} [ *$P([{\mbox{\boldmath $Z$}}_i]_{(j,p)} = 1) = \rho$ for all $i$ and $(j,p)$ where $\rho \in {\mbox{\boldmath $R$}}$ is a constant. That is, $[{\mbox{\boldmath $Z$}}_i]_{(j,p)}$ is a Bernoulli random variable.*]{}
[**A5u-m**]{} [ *${\mbox{\boldmath $Z$}}$ is independent of the input ${\mbox{\boldmath $X$}}$ and the parameter ${\mbox{\boldmath $\Theta$}}$.*]{}
A1p-m and A5u-m are weaker versions of the two assumptions A1p and A5u in (Choromanska, Henaff, Mathieu, Arous & LeCun, 2015), respectively. Assumption A5u-m is used for the proof of Corollary 3.2 presented by Kawaguchi (2016). Strictly speaking, the following assumption A5u-m-1 suffices for the proof instead of the assumption A5u-m described above.
[**A5u-m-1**]{} [ *For any $i$ and any $(j,p)$, $[{\mbox{\boldmath $Z$}}_i]_{(j,p)} \in \{0, 1\}$ is independent of the $i$-th input training pattern ${\mbox{\boldmath $X$}}_i \in {\mbox{\boldmath $R$}}^{d_x}$ and the sequence of the weights on the $p$-th path $\{ w_{(j,p)}^{(k)} \in {\mbox{\boldmath $R$}}\}_{k=1}^{H+1}$ where $w_{(j,p)}^{(k)}$ is the weight between the layer $k-1$ and the layer $k$ on the $p$-th path ($k=1, \cdots, H+1$).* ]{}
Actually, according to assumption A5u-m-1, $$\begin{aligned}
\label{eqn2-3}
& & E_{{\mbox{\boldmath $Z$}}} \left[\hat{{\mbox{\boldmath $Y$}}}_i({\mbox{\boldmath $\Theta$}}, {\mbox{\boldmath $X$}}_i)_j \right] \nonumber\\
&=& E_{{\mbox{\boldmath $Z$}}} \left[q\sum_{p=1}^\Psi [{\mbox{\boldmath $X$}}_i]_{(j,p)} [{\mbox{\boldmath $Z$}}_i]_{(j,p)}
\prod_{k=1}^{H+1} w_{(j,p)}^{(k)} \right] \hspace*{1cm} \mbox{(from Eq.} (\ref{eqn2-1})) \nonumber\\
&=& q\sum_{p=1}^\Psi [{\mbox{\boldmath $X$}}_i]_{(j,p)} E_{{\mbox{\boldmath $Z$}}} \left[ [{\mbox{\boldmath $Z$}}_i]_{(j,p)} \right]
\prod_{k=1}^{H+1} w_{(j,p)}^{(k)}. \hspace*{1cm} \mbox{(from the assumption A5u-m-1) }
$$
Even Initialization {#even_weight}
-------------------
This subsection proposes a new weight initialization method for the Kawaguchi model described in the subsection \[kawa-model\]. We assume that the width of the deep nonlinear neural network is sufficiently large, that is, $d_0 (=d_x), d_1, \cdots, d_{H-1}$ are sufficiently large (Fig. \[fig1\]), and that each element of the $i$-th input training pattern ${\mbox{\boldmath $X$}}_i$ takes a value between $-\alpha$ and $\alpha$, that is, ${\mbox{\boldmath $X$}}_i \in {\mbox{\boldmath $I$}}^{d_x}$ for any $i$ where ${\mbox{\boldmath $I$}}=[-\alpha, \alpha]$ and $\alpha$ is a positive real number. Then we propose the following weight initialization scheme.
[**Even Initialization**]{} [ *Elements $\{w_n\}_{n=1}^{d_{k-1}}$ of the weight vector ${\mbox{\boldmath $w$}}= (w_1 \cdots w_{d_{k-1}})^T$ of any hidden neuron in the $k$-th hidden layer are initialized independently and identically according to a probability distribution whose probability density function $f_k: {\mbox{\boldmath $J$}}_k \rightarrow [0, +\infty)$ is an even function where ${\mbox{\boldmath $J$}}_k = [-1/d_{k-1}, 1/d_{k-1}]$, that is, $f_k(-x) = f_k(x)$ for any $x \in {\mbox{\boldmath $J$}}_k$ $( k = 1, \cdots, H$).* ]{} [$\Box$]{}
We designate it as [*even initialization*]{}.
[**Remarks**]{} (a) The expectation of each weight is zero because the probability density function is an even function. (b) The probability distribution in the even initialization can be a normal distribution with zero mean or a uniform distribution. (c) For the weight vectors ${\mbox{\boldmath $w$}}_1=(w_{11} \cdots w_{1M})^T, {\mbox{\boldmath $w$}}_2=(w_{21} \cdots w_{2N})^T$ of any two hidden neurons, the probability distribution which $w_{1k}$ obeys can be different from that which $w_{2l}$ obeys ($k=1, \cdots, M, l=1, \cdots, N$) (Fig. \[fig3\]).
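A minimal sketch of the proposed even initialization, taking the uniform distribution of Remark (b) as the even density and using illustrative layer widths, is given below.

```python
import numpy as np

rng = np.random.default_rng(0)

def even_init(d_prev, d_k):
    """Even initialization of the weights feeding layer k (of width d_k).

    The weights are i.i.d. with an even density supported on [-1/d_prev, 1/d_prev],
    where d_prev is the width of layer k-1; the uniform distribution is used here
    as one admissible choice (cf. Remark (b)).
    """
    bound = 1.0 / d_prev
    return rng.uniform(-bound, bound, size=(d_k, d_prev))

# Example: initialize the hidden-layer weight matrices W_1, ..., W_H of a network
# with (illustrative) layer widths d_0, d_1, ..., d_{H+1}.
d = [784, 512, 512, 512, 10]
H = len(d) - 2
W = [even_init(d[k - 1], d[k]) for k in range(1, H + 1)]
```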
Analysis
--------
This subsection presents an analysis of the initial loss landscape of the Kawaguchi model initialized by the even initialization method described in Section \[even\_weight\].
We prove in the following that the two assumptions A1p-m and A5u-m (A5u-m-1) are satisfied in the Kawaguchi model initialized by the even initialization.
\[thm1\] For any training pattern $i$, any output neuron $j$, and any path $p$ from an input neuron to the output neuron $j$ in the Kawaguchi model initialized by the even initialization method, $$\label{eqn4-1}
P\left([Z_i]_{(j,p)} = 1 \right) = \frac{1}{2^H}$$ where $H$ is the number of hidden layers ($H \geq 1$). [$\Box$]{}
[*Proof*]{}. Denote by $j_1, \cdots, j_H$ the hidden neurons on path $p$ where $j_k$ is the hidden neuron in the $k$-th hidden layer $(k=1, \cdots, H$) (Fig. \[fig4\]). Then, $$\begin{aligned}
\label{eqn4-2}
P\Big( [Z_i]_{(j,p)} = 1 \Big)
&=& P\Big( \mbox{Net input to the hidden neuron}\ j_k > 0 \ \
(k=1, \cdots, H) \Big) \nonumber\\
&=& P \left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0, \left[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) \right]
{\mbox{\boldmath $w$}}_{j_2} > 0, \right. \nonumber\\
& & \left.
\cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \right)\end{aligned}$$ where ${\mbox{\boldmath $X$}}_i \in {\mbox{\boldmath $I$}}^{d_x}$ is the $i$-th input training pattern, ${\mbox{\boldmath $w$}}_{j_k} \in {\mbox{\boldmath $J$}}_k^{d_{k-1}}$ is the weight vector of the hidden neuron $j_k$ in the hidden layer $k$, and $U_l^{(k)}$ is the net input to the hidden neuron $l$ in the hidden layer $k$ (Fig. \[fig5\]).
We prove by mathematical induction that Eq. (\[eqn4-1\]) holds true.
\[For $H=1$\] This case corresponds to a three-layered neural network. It follows that $$\begin{aligned}
\label{eqn4-3}
P\Big( [Z_i]_{(j,p)} = 1 \Big) &=& P ( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 ) \ \ \
(\mbox{from Eq.} (\ref{eqn4-2})) \nonumber\\
&=& \frac{1}{2} \end{aligned}$$ where $j_1$ is a hidden neuron on path $p$. We can see below that the last equality of Eq. (\[eqn4-3\]) holds true. For a given input training pattern ${\mbox{\boldmath $X$}}_i \in {\mbox{\boldmath $I$}}^{d_x}$, $\{ {\mbox{\boldmath $w$}}_{j_1} \in {\mbox{\boldmath $J$}}_1^{d_x} \vert \ {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \}$ is a half-open hyperspace with a normal vector ${\mbox{\boldmath $X$}}_i$ through the origin in a $d_x$-dimensional Euclidean space (Fig. \[fig6\]). According to the even initialization scheme, the random variables $w_{j_11}, \cdots, w_{j_1d_x}$ which are the components of the weight vector ${\mbox{\boldmath $w$}}_{j1} = (w_{j_11} \cdots w_{j_1d_x})^T$ of the hidden neuron $j_1$ are [*i.i.d.*]{}, and the probability density function $f_1: {\mbox{\boldmath $J$}}_1 \rightarrow [ 0, +\infty )$ of the probability distribution which each weight $w_{j_1k}$ obeys is an even function ($k = 1, \cdots, d_x$). Consequently, $P\left({\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \right) = P\left({\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} \leq 0 \right)$, which means $P\left({\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \right) = 1/2$.
\[For $H$\] This case corresponds to a deep nonlinear neural network with $H$ hidden layers; we show that if the case of $H-1$ holds, then the case of $H$ also holds. Assuming that the case of $H-1$ holds, then $$\begin{aligned}
\label{eqn4-4}
& &P\Big( [Z_i]_{(j,p)} = 1 \Big) \nonumber \\
&=& P \left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0, \left[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) \right]
{\mbox{\boldmath $w$}}_{j_2} > 0, \right. \nonumber\\
& & \left. \cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \right) \ \
(\mbox{from Eq.} (\ref{eqn4-2})) \nonumber\\
&=& P \left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0, \left[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) \right]
{\mbox{\boldmath $w$}}_{j_2} > 0, \right. \nonumber\\
& & \left. \cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \ \ \Big\vert \ {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \right)
\cdot P\left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \right) \nonumber\\
& & + P \left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0, \left[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) \right]
{\mbox{\boldmath $w$}}_{j_2} > 0, \right. \nonumber\\
& & \left. \cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \ \ \Big\vert \ {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} \leq 0 \right)
\cdot P\left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} \leq 0 \right) \nonumber\\
&=& P \left( \left[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) \right]
{\mbox{\boldmath $w$}}_{j_2} > 0, \cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \right. \nonumber\\
& & \hspace*{2cm} \left. \Big\vert \ {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \right)
\cdot P\left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \right). \end{aligned}$$ Here, the first factor on the right-hand side of Eq. (\[eqn4-4\]) is the probability that the path passing through the $H-1$ hidden neurons $j_2, \cdots, j_H$ is active for the input pattern $[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) ]^T \in {\mbox{\boldmath $R$}}^{d_1}$ such that $U_{j_1}^{(1)} = {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0$. Moreover, for any $1 \leq s \leq d_1$, denoting ${\mbox{\boldmath $X$}}_i = (x_1 \cdots x_{d_0})^T$ and ${\mbox{\boldmath $w$}}_s = (w_1 \cdots w_{d_0})$ for the sake of simplicity, $\vert \varphi(U_s^{(1)}) \vert \leq \vert U_s^{(1)} \vert = \vert {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_s \vert \leq \sum_{t=1}^{d_0} \vert x_t \vert \vert w_t \vert \leq \alpha \sum_{t=1}^{d_0} (1/d_0) = \alpha$, so that this input pattern again satisfies the boundedness condition assumed for the input patterns. Hence, by the induction hypothesis, the first factor on the right-hand side of Eq. (\[eqn4-4\]) is equal to $(1/2)^{H-1}$. In addition, the second factor on the right-hand side of Eq. (\[eqn4-4\]) is equal to $1/2$ from Eq. (\ref{eqn4-3}). Therefore, $$\begin{aligned}
\label{eqn4-5}
P\Big( [Z_i]_{(j,p)} = 1 \Big) &=& \left(\frac{1}{2}\right)^{H-1} \cdot \frac{1}{2} \nonumber\\
&=& \frac{1}{2^H}, \end{aligned}$$ which means that the case of $H$ indeed holds. Therefore, by mathematical induction, Eq. (\[eqn4-1\]) holds for any $H \geq 1$. Theorem \[thm1\] implies that assumption A1p-m holds. Moreover, because $$\begin{aligned}
\lim_{H \rightarrow +\infty} P \left( [Z_i]_{(j,p)} = 1 \right)
&=& \lim_{H \rightarrow +\infty} \frac{1}{2^H} \nonumber\\
&=& 0, \end{aligned}$$ the probability that path $p$ is active decreases exponentially and converges to zero as the number of hidden layers $H$ increases; in particular, it is halved each time a hidden layer is added.
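This exponential suppression can also be checked numerically. The following is a minimal Monte Carlo sketch, assuming for illustration that every hidden layer has the same width $d$, that the fixed input pattern has entries bounded by $\alpha$, and that the weights are drawn i.i.d. from a uniform density on $[-1/d, 1/d]$ (one admissible even density); these specific choices, and the function name, are illustrative and not taken from the text.

```python
import numpy as np

def path_active_probability(d=64, H=3, alpha=1.0, trials=4000, seed=0):
    # Monte Carlo estimate of P([Z_i]_(j,p) = 1) for one fixed input pattern.
    # All hidden layers have width d and the monitored path goes through
    # neuron 0 of every hidden layer (an arbitrary but fixed choice).
    rng = np.random.default_rng(seed)
    x = rng.uniform(-alpha, alpha, size=d)          # fixed input pattern X_i
    hits = 0
    for _ in range(trials):
        a, active = x, True
        for _ in range(H):                          # H hidden layers
            W = rng.uniform(-1.0 / d, 1.0 / d, size=(d, d))  # even density
            u = a @ W                               # net inputs of this layer
            if u[0] <= 0:                           # path neuron inactive?
                active = False
                break
            a = np.maximum(u, 0.0)                  # ReLU outputs feed the next layer
        hits += active
    return hits / trials

for H in (1, 2, 3):
    print(H, path_active_probability(H=H), 0.5 ** H)   # estimates ~ 1/2, 1/4, 1/8
```

For $H = 1, 2, 3$ the estimates should cluster around $1/2$, $1/4$ and $1/8$, in line with Eq. (\[eqn4-1\]).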
\[thm2\] For any training pattern $i$, any output neuron $j$, and any path $p$ from an input neuron to the output neuron $j$ in the Kawaguchi model initialized by the even initialization method, random variable $[Z_i]_{(j,p)}$ is independent of the sequence of weights on path $p$. [$\Box$]{}
[*Proof*]{}. Take training pattern $i$, output neuron $j$, and path $p$ from an input neuron to output neuron $j$ of the Kawaguchi model arbitrarily and fix them. Denote by $j_0, \cdots, j_H$ the neurons on path $p$ where $j_k$ is the neuron in the $k$-th layer $(k=0, \cdots, H$). Let also $w_{j_1 j_0}, \cdots, w_{j_{H+1} j_H }$ denote $H+1$ weights on path $p$ where $w_{j_{k+1} j_k }$ is the weight between the layer $k$ and the layer $k+1$ on path $p$ ($k=0, \cdots, H$) (Fig. \[fig7\]). Then, for any $\lambda_1, \cdots, \lambda_{H+1} \in {\mbox{\boldmath $R$}}$, $$\begin{aligned}
\label{eqn4-6}
& & \hspace*{-0.3cm} P \left( [Z_i]_{(j,p)} = 1 \ \Big\vert \ w_{j_1 j_0} = \lambda_1,
\cdots, w_{j_{H+1} j_H } = \lambda_{H+1} \right) \nonumber\\
&=& \hspace*{-0.3cm} P \left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0, \left[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) \right] {\mbox{\boldmath $w$}}_{j_2} > 0, \cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \right. \nonumber \\
& & \hspace*{5.5cm} \left. \Big\vert \ w_{ j_1 j_0 } = \lambda_1, \cdots,
w_{ j_{H} j_{H-1} } = \lambda_{H} \right) \ \ (\mbox{from Eq.} (\ref{eqn4-2})) \nonumber\\
& & (\mbox{$w_{j_{H+1} j_H} = \lambda_{H+1}$ is removed because it is independent of
$[Z_i]_{(j,p)}$}) \nonumber\\
&\fallingdotseq& \hspace*{-0.3cm} \frac{1}{2^H}.
\end{aligned}$$ We prove below by mathematical induction that the approximate equality $(\fallingdotseq)$ in Eq. (\[eqn4-6\]) holds true.
\[For $H=1$\] This case corresponds to a three-layered neural network. For the sake of simplicity, we let ${\mbox{\boldmath $X$}}_i = (x_1 \cdots x_{j_0} \cdots x_{d_0})^T$ and ${\mbox{\boldmath $w$}}_{j_1} = (w_1 \cdots w_{j_0} \cdots w_{d_0})^T$ where $w_{j_0} = w_{j_1 j_0}$. Then, $$\begin{aligned}
\label{eqn4-7}
& & P \left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \ \Big\vert \ w_{j_1 j_0} = \lambda_1\right) \nonumber\\
&=& P \left(x_1 w_1 + \cdots + x_{j_0} w_{j_1 j_0} + \cdots + x_{d_0} w_{d_0} > 0 \
\Big \vert \ w_{j_1 j_0} = \lambda_1\right) \nonumber\\
&=& P \left(x_1 w_1 + \cdots + x_{j_0} \lambda_1 + \cdots + x_{d_0} w_{d_0} > 0 \
\Big \vert \ w_{j_1 j_0} = \lambda_1\right) \nonumber\\
&=& P \Big(x_1 w_1 + \cdots + x_{j_0} \lambda_1 + \cdots + x_{d_0} w_{d_0} > 0 \ \Big)
\nonumber\\
& & (\mbox{$w_{j_1 j_0} = \lambda_{1}$ is removed because it is independent of
the other weights}) \nonumber\\
&=& P \Big(x_1 w_1 + \cdots + x_{j_0 - 1} w_{j_0 -1} + x_{j_0 + 1} w_{j_0 + 1}
+ \cdots + x_{d_0} w_{d_0} > - x_{j_0} \lambda_1 \ \Big) \nonumber\\
&\fallingdotseq&
P \Big(x_1 w_1 + \cdots + x_{j_0 - 1} w_{j_0 -1} + x_{j_0 + 1} w_{j_0 + 1}
+ \cdots + x_{d_0} w_{d_0} > 0 \ \Big) \nonumber\\
& & (\mbox{because $\vert- x_{j_0} \lambda_1 \vert = \vert x_{j_0} \vert
\vert \lambda_1 \vert \leq \alpha \cdot (1/d_0)$ and
the number of input neurons $d_0$ is} \nonumber\\
& & \mbox{sufficiently large}) \nonumber\\
&=& \frac{1}{2}. \ \ (\mbox{for the same reason that the last equality
of Eq. (\ref{eqn4-3}) holds true.})
\end{aligned}$$
\[For $H$\] This case corresponds to a deep nonlinear neural network with $H$ hidden layers. We show that if the case of $H-1$ holds, then the case of $H$ also holds. Assuming that the case of $H-1$ holds, we have $$\begin{aligned}
\label{eqn4-8}
& & P \left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0, \left[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) \right] {\mbox{\boldmath $w$}}_{j_2} > 0,
\cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \right. \nonumber\\
& & \hspace*{8.0cm} \left. \Big\vert \ w_{j_1 j_0} = \lambda_1,
\cdots, w_{j_H j_{H-1} } = \lambda_H \right) \nonumber\\
&=& P \left( \left[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) \right]
{\mbox{\boldmath $w$}}_{j_2} > 0, \cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \ \ \right. \nonumber\\
& & \hspace*{2cm} \left.
\Big\vert \ {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0, w_{j_1 j_0} = \lambda_1,
\cdots, w_{j_H j_{H-1} } = \lambda_H \right) \cdot \nonumber\\
& & \hspace*{2cm} P\left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \ \Big\vert \ w_{j_1 j_0} = \lambda_1,
\cdots, w_{j_H j_{H-1} } = \lambda_H \right) \nonumber\\
&=& P \left( \left[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) \right]
{\mbox{\boldmath $w$}}_{j_2} > 0, \cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \right. \nonumber\\
& & \hspace*{0.5cm} \left. \Big\vert \ {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0,
\ w_{j_1 j_0} = \lambda_1, \cdots, w_{j_H j_{H-1} } = \lambda_H \right) \cdot P\left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0 \ \Big\vert \ w_{j_1 j_0} = \lambda_1 \right). \\
& & (\mbox{$w_{j_2 j_1} = \lambda_2, \cdots, w_{j_H j_{H-1}} = \lambda_H$ are removed
because they are independent of ${\mbox{\boldmath $w$}}_{j_1}$ } ) \nonumber\end{aligned}$$ Here, the first factor on the right-hand side of Eq. (\[eqn4-8\]) is the probability that the path passing through the $H-1$ hidden neurons $j_2, \cdots, j_H$ is active for the input pattern $[ \varphi( U_1^{(1)} ) \cdots \varphi(U_{d_1}^{(1)} ) ]^T \in {\mbox{\boldmath $R$}}^{d_1}$ such that $U_{j_1}^{(1)} = {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0$ and $w_{j_1 j_0} = \lambda_1$. Moreover, for any $1 \leq s \leq d_1$, denoting ${\mbox{\boldmath $X$}}_i = (x_1 \cdots x_{d_0})^T$ and ${\mbox{\boldmath $w$}}_s = (w_1 \cdots w_{d_0})$ for the sake of simplicity, $\vert \varphi(U_s^{(1)}) \vert \leq \vert U_s^{(1)} \vert
=\vert {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_s \vert
\leq \sum_{t=1}^{d_0} \vert x_t \vert \vert w_t \vert
\leq \alpha \sum_{t=1}^{d_0} (1/d_0) = \alpha$, so that this input pattern again satisfies the boundedness condition assumed for the input patterns. Hence, by the induction hypothesis, the first factor on the right-hand side of Eq. (\[eqn4-8\]) is nearly equal to $(1/2)^{H-1}$. In addition, the second factor on the right-hand side of Eq. (\[eqn4-8\]) is nearly equal to $1/2$ from Eq. (\ref{eqn4-7}). So, $$\begin{aligned}
\label{eqn4-9}
&P& \hspace*{-0.3cm} \left( {\mbox{\boldmath $X$}}_i^T {\mbox{\boldmath $w$}}_{j_1} > 0, \left[ \varphi( U_1^{(1)} )
\cdots \varphi(U_{d_1}^{(1)} ) \right] {\mbox{\boldmath $w$}}_{j_2} > 0,
\cdots, \left[ \varphi( U_1^{(H-1)} ) \cdots \varphi(U_{d_{H-1}}^{(H-1)}) \right]
{\mbox{\boldmath $w$}}_{j_H} > 0 \right. \nonumber\\
& & \hspace*{7.0cm} \left. \Big\vert \ w_{j_1 j_0} = \lambda_1,
\cdots, w_{j_H j_{H-1} } = \lambda_H \right) \nonumber\\
&\fallingdotseq& \left(\frac{1}{2}\right)^{H-1} \cdot \frac{1}{2} \nonumber\\
&=& \frac{1}{2^H}, \end{aligned}$$ which means that the case of $H$ indeed holds. Thus, by mathematical induction, Eq. (\[eqn4-6\]) holds for any $H \geq 1$. Therefore, $$\begin{aligned}
\label{eqn4-10}
& & P \left( [Z_i]_{(j,p)} = 1 \ \Big\vert \ w_{j_1 j_0} = \lambda_1,
\cdots, w_{j_{H+1} j_H } = \lambda_{H+1} \right) \nonumber\\
&\fallingdotseq& \frac{1}{2^H} \hspace*{1cm} \mbox{(from Eq. (\ref{eqn4-6}))}\nonumber\\
&=& P \left( [Z_i]_{(j,p)} = 1 \right). \hspace*{1cm} \mbox{(from Theorem \ref{thm1})}\end{aligned}$$ This completes the proof.
\[thm3\] For any training pattern $i$, any output neuron $j$, and any path $p$ from an input neuron to the output neuron $j$ in the Kawaguchi model initialized by the even initialization method, the random variable $[Z_i]_{(j,p)}$ is independent of the input training signal ${\mbox{\boldmath $X$}}_i$. [$\Box$]{}
[*Proof*]{}. Take training pattern $i$, output neuron $j$, path $p$ from an input neuron to output neuron $j$ of the deep nonlinear neural network in the Kawaguchi model initialized by the even initialization method, and ${\mbox{\boldmath $\mu$}}\in {\mbox{\boldmath $R$}}^{d_x}$ arbitrarily and fix them. Then, in the same manner as in the proof of Theorem \[thm1\], it follows that $$\label{eqn4-12}
P\left([Z_i]_{(j,p)} = 1 \ \big\vert \ {\mbox{\boldmath $X$}}_i = {\mbox{\boldmath $\mu$}}\right) = \frac{1}{2^H}.$$ Therefore, it follows that $$\begin{aligned}
\label{eqn4-13}
P\left([Z_i]_{(j,p)} = 1 \right) &=& \frac{1}{2^H} \hspace*{1.0cm} (\mbox{from Theorem} \ref{thm1} ) \nonumber\\
&=& P\left([Z_i]_{(j,p)} = 1 \ \ \big\vert \ \ {\mbox{\boldmath $X$}}_i = {\mbox{\boldmath $\mu$}}\right) \hspace*{0.5cm} (\mbox{from Eq.} (\ref{eqn4-12}) ) \end{aligned}$$ holds true. Eq. (\[eqn4-13\]) completes the proof. Theorem \[thm2\] and Theorem \[thm3\] state that assumption A5u-m-1 holds in the initial loss landscape of the Kawaguchi model initialized by the even initialization method.
Thus, it follows from Theorems \[thm1\], \[thm2\], and \[thm3\] that assumptions A1p-m and A5u-m (A5u-m-1), both of which were introduced by Kawaguchi (2016), hold in the initial loss landscape of the Kawaguchi model initialized by the even initialization method. Therefore, according to Corollary 3.2 presented by Kawaguchi (2016), no poor local minimum exists in the initial loss landscape of the Kawaguchi model initialized by the even initialization method. Specifically, the following four statements hold true: 1) the loss function is non-convex and non-concave; 2) every local minimum is a global minimum; 3) every critical point that is not a global minimum is a saddle point; and 4) bad saddle points exist.
Discussion and Related Work {#discuss}
===========================
In this section, we compare the even initialization proposed in subsection \[even\_weight\] with existing methods. First, let us briefly review several existing weight initialization methods for neural networks.
The following uniform distribution has been often used for setting the initial value for a weight of a neuron: $$\label{standard-initialization}
U \left[ -\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}} \right]$$ where $n$ is the number of neurons in the preceding layer (we call it the [*standard initialization*]{} here). To the best of the authors’ knowledge, it is a heuristic.
In contrast, the following two weight initialization methods have been derived theoretically. Glorot and Bengio (2010) proposed the [*normalized initialization*]{}: the initial value of each weight between the layer $i$ and the layer $i+1$ is set according to the uniform distribution $$\label{glorot-initialization}
U \left[ -\frac{ \sqrt{6} }{ \sqrt{n_i + n_{i+1} }},
\frac{ \sqrt{6} }{ \sqrt{n_i + n_{i+1} }} \right]$$ independently, where $n_i$ is the number of neurons in layer $i$. Eq. (\[glorot-initialization\]) was derived by requiring that the variance of the layer outputs and the variance of the back-propagated gradients each stay equal across layers, so as to avoid their saturation, under the assumption that the variances of the input signals are all the same. The normalized initialization targets neural networks with a symmetric activation function $f$ such that $\partial f(x)/\partial x \vert_{x=0} = 1$ (for example, the sigmoid function and the $\tanh$ function). Thus, the ReLU activation function is outside its scope.
He et al. (2015) proposed a weight initialization method for neural networks with the ReLU activation function: the initial value of each weight between the layer $i$ and the layer $i+1$ is set according to either of the normal distributions $$\label{He-initialization}
N \left( 0, \frac{2}{ n_i } \right) \ \ \mbox{or} \ \ N \left( 0, \frac{2}{ n_{i+1} } \right)$$ independently, where $n_i$ is the number of neurons in layer $i$. Eq. (\[He-initialization\]) was derived by requiring that the variance of the net input vector in each layer and the variance of the back-propagated gradients each stay equal across layers, so as to avoid their saturation.
The normalized initialization is inappropriate as a comparison object for the even initialization because, as described above, it is not valid for neural networks with the ReLU activation function. We therefore compare the even initialization method with the standard initialization and He initialization below. A schematic relationship among the standard initialization, He initialization and the even initialization is shown in Fig. \[fig8\]. The interval of initial weight values used by the even initialization is contained in those of both the standard initialization and He initialization, because $1/n \leq 1/\sqrt{n}$ holds for any $n=1,2, \cdots$, where $n$ is the number of neurons in the preceding layer, and because the probability distribution which the weights obey in the even initialization can be a normal distribution with zero mean or a uniform distribution. Conversely, both the interval of initial weight values of the standard initialization and that of He initialization contain a subinterval, namely the interval of the even initialization, in which no poor local minimum exists in the initial loss landscape. As a numerical example, when $n=100$ (that is, when the number of neurons in the preceding layer is 100), the intervals are $[-3\sqrt{2}/\sqrt{n}, 3\sqrt{2}/\sqrt{n}] \fallingdotseq [ -0.424, 0.424 ]$ (He initialization, taking three standard deviations of $N(0, 2/n)$), $[-1/\sqrt{n}, 1/\sqrt{n}] = [ -0.1, 0.1 ]$ (standard initialization) and $[-1/n, 1/n] = [ -0.01, 0.01 ]$ (even initialization).
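For concreteness, the short sketch below reproduces the interval half-widths quoted above and draws weights from one admissible even initialization; the uniform density on $[-1/n, 1/n]$ and the use of three standard deviations of $N(0, 2/n)$ to represent the He initialization as an interval are choices made for this illustration only.

```python
import numpy as np

def init_half_widths(n):
    # Half-widths of the symmetric initialization intervals for a layer whose
    # preceding layer has n neurons.  "3 sigma" of N(0, 2/n) is used only to
    # visualize the He initialization as an interval.
    return {
        "standard": 1.0 / np.sqrt(n),
        "He (3 sigma)": 3.0 * np.sqrt(2.0 / n),
        "even (example)": 1.0 / n,
    }

def sample_even_init(shape, n, rng=None):
    # One admissible even initialization: i.i.d. uniform on [-1/n, 1/n].
    # Any even density supported on this interval would also qualify.
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(-1.0 / n, 1.0 / n, size=shape)

print(init_half_widths(100))   # {'standard': 0.1, 'He (3 sigma)': ~0.424, 'even (example)': 0.01}
```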
Conclusions {#concl}
===========
We introduced a weight initialization scheme called [*even initialization*]{} for the wide and deep ReLU neural network model, and proved, using the results presented by Kawaguchi (2016), that no poor local minimum exists in the initial loss landscape of the wide and deep nonlinear neural network initialized by the even initialization method. We also showed that the range of weight values produced by the even initialization method is contained in the ranges produced by both the standard initialization and He initialization methods. We thus clarified the essential property of the even initialization theoretically. Applying the even initialization to large-scale real-world problems is a topic for future work.
[**References**]{}
Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. Greedy layer-wise training of deep networks. In [*Advances in Neural Information Processing Systems 19 (NIPS’06)*]{}, (B. Schölkopf, J. Platt, and T. Hoffman, eds.), 153-160, 2007.
Choromanska, A., Henaff, M., Mathieu, M., Arous, G. B., and LeCun, Y. The loss surfaces of multilayer networks. In [*Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics*]{}, 192-204, 2015.
Choromanska, A., LeCun, Y., and Arous, G. B. Open problem: the landscape of the loss surfaces of multilayer networks. In [*Proceedings of The 28th Conference on Learning Theory*]{}, 1756-1760, 2015.
Dauphin, Y. N., Pascanu, R., Gulcehre, C., Cho, K., Ganguli, S., and Bengio, Y. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In [*Advances in Neural Information Processing Systems*]{}, 2933-2941, 2014.
Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In [*International Conference on Artificial Intelligence and Statistics*]{}, 249-256, 2010.
He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, 1026-1034, 2015.
Hinton, G. E., Osindero, S., and Teh, Y. A fast learning algorithm for deep belief nets. [*Neural Computation*]{}, 18: 1527-1554, 2006.
Kawaguchi, K. Deep learning without poor local minima. In [*Advances in Neural Information Processing Systems 29*]{}, 2016.
Mohamed, A-R, Dahl, G. E., and Hinton, G. E. Deep belief network for phone recognition. In [*NIPS Workshop on Deep Learning for Speech Recognition and Related Applications*]{}, 2009.
Seide, F., Li, G., and Yu, D. Conversational speech transcription using context-dependent deep neural networks. In [*Proc. Interspeech*]{}, 437-440, 2011.
Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In [*Advances in Neural Information Processing Systems*]{}, 3104-3112, 2014.
Taigman, Y., Yang, M., Ranzato, M., and Wolf, L. Deepface: Closing the gap to human-level performance in face verification. In [*Proc. Conference on Computer Vision and Pattern Recognition*]{}, 1701-1708, 2014.
|
---
author:
- 'P. A. Price, D. W. Fox, S. R. Kulkarni, B. A. Peterson, B. P. Schmidt, A. M. Soderberg, S. A. Yost, E. Berger, S. G. Djorgovski, D. A. Frail, F. A. Harrison, R. Sari, A. W. Blain, and S. C. Chapman.'
title: '**Discovery of the Bright Afterglow of the Nearby Gamma-Ray Burst of 29 March 2003**'
---
On 2003 March 29, 11$^{\rm h}$37$^{\rm m}$14$^{\rm s}$.67 UT, GRB 030329 triggered all three instruments on board the High Energy Transient Explorer II ([*HETE-II*]{}). About 1.4 hours later, the [*HETE-II*]{} team disseminated via the GRB Coordinates Network (GCN) the 4-arcminute diameter localisation[@vcd+03] by the Soft X-ray Camera (SXC). We immediately observed the error-circle with the Wide Field Imager (WFI) on the 40-inch telescope at Siding Spring Observatory under inclement conditions (nearby thunderstorms). Nevertheless, we were able to identify a bright source not present on the Digitised Sky Survey (Figure \[fig:discovery\]) and rapidly communicated the discovery to the community [@pp03]. The same source was independently detected by the RIKEN automated telescope[@t03].
With a magnitude of 12.6 in the $R$ band at 1.5 hours, the optical afterglow of GRB 030329 is unusually bright. At the same epoch, the well-studied GRB 021004 was $R \sim 16\,$mag[@fyk+03], and the famous GRB 990123 was $R\sim 17\,$mag[@abb+99]. The brightness of this afterglow triggered observations by over 65 optical telescopes around the world, ranging from sub-metre apertures to the Keck I 10-metre telescope. Unprecedentedly bright emission at radio[@bsf03], millimetre[@ksn03], sub-millimetre[@hmt+03], and X-ray[@ms03] wavelengths was also reported (Figure \[fig:broadband\]).
Greiner et al.[@gpe+03] made spectroscopic observations with the Very Large Telescope (VLT) in Chile approximately 16 hours after the GRB and, based on absorption as well as emission lines, announced a redshift of $z=0.1685$. From Keck spectroscopic observations obtained 8 hours later (Figure \[fig:spectrum\]) we confirm the VLT redshift, finding $z=0.169\pm 0.001$. We note that the optical afterglow of GRB 030329 was, at 1.5 hours, approximately the same brightness as the nearest quasar, 3C 273 ($z=0.158$); it is remarkable that such a large difference in the mass of the engine can produce an optical source with the same luminosity.
With a duration of about 25 s and a multi-pulse profile[@vcd+03], GRB 030329 is typical of the long-duration class of GRBs. The fluence of GRB 030329, as detected by the Konus experiment [@gmp+03], of $1.6\times10^{-4}$ erg cm$^{-2}$ (in the energy range 15-5000 keV) places this burst in the top 1% of GRBs.
At a redshift of 0.169, GRB 030329 is the nearest of the cosmological GRBs studied in the 6-year history of afterglow research. Assuming a Lambda cosmology with $H_0 = 71$ km/s/Mpc, $\Omega_M = 0.27$ and $\Omega_\Lambda=0.73$, the angular-diameter distance is $d_A=589\,$Mpc and the luminosity distance is $d_L=805\,$Mpc. The isotropic $\gamma$-ray energy release, $E_{\gamma,\rm iso} \sim 1.3\times 10^{52}\,$erg, is typical of cosmological GRBs[@fks+01]. Likewise, the optical and radio luminosities of the afterglow of GRB 030329 are not markedly different from those of cosmological GRBs. In particular, the extrapolated isotropic X-ray luminosity at $t=10$ hr is $L_{X,\rm iso} \sim 6.4\times 10^{45}\,$erg s$^{-1}$, not distinctly different from that of other X-ray afterglows (e.g. ref.).
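For reference, the distances and the isotropic-equivalent energy quoted above can be reproduced with the short numerical sketch below; the trapezoidal integration of the Friedmann equation and the neglect of the $(1+z)$ and k-corrections in the energy estimate are simplifications of this illustration, not part of the original analysis.

```python
import numpy as np

def lcdm_distances(z, H0=71.0, Om=0.27, Ol=0.73):
    # Angular-diameter and luminosity distances (in Mpc) for a Lambda cosmology,
    # obtained by direct numerical integration of 1/E(z').
    c = 299792.458                                   # speed of light, km/s
    zs = np.linspace(0.0, z, 10001)
    Ez = np.sqrt(Om * (1.0 + zs) ** 3 + Ol)
    Dc = (c / H0) * np.trapz(1.0 / Ez, zs)           # comoving distance, Mpc
    return Dc / (1.0 + z), Dc * (1.0 + z)            # (d_A, d_L)

z, fluence = 0.169, 1.6e-4                            # fluence in erg cm^-2
d_A, d_L = lcdm_distances(z)
d_L_cm = d_L * 3.0857e24                              # Mpc -> cm
E_iso = 4.0 * np.pi * d_L_cm ** 2 * fluence           # (1+z) and k-corrections neglected
print(d_A, d_L, E_iso)                                # ~589 Mpc, ~805 Mpc, ~1.2e52 erg
```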
Nonetheless, two peculiarities about the afterglow of GRB 030329 are worth noting. First, the optical emission steepens from $f(t) \propto t^{-\alpha}$ with $\alpha_1 = 0.873 \pm 0.025$ to $\alpha_2 = 1.97 \pm 0.12$ at epoch $t_* = 0.481 \pm 0.033\,$d (Figure \[fig:lightcurve\]; see also ref. ). This change in $\alpha$ is too large to be due to the passage of a cooling break ($\Delta\alpha=1/4$; ref. ) through the optical bands. On the other hand, $\alpha\sim 2$ is typically seen in afterglows following the so-called “jet-break” epoch ($t_j$). Before this epoch, the explosion can be regarded as isotropic, and following this epoch the true collimated geometry is manifested. Such an early jet break would imply a substantially lower energy release than $E_{\gamma,\rm iso}$.
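A convenient way to describe such a steepening is a smoothly broken power law; the sketch below evaluates one using the slopes and break time quoted above, where the normalisation and the sharpness parameter `s` are free choices of this illustration rather than fitted quantities.

```python
import numpy as np

def broken_power_law(t, f_star=1.0, t_star=0.481, alpha1=0.873, alpha2=1.97, s=10.0):
    # Smoothly broken power law: f ~ t^-alpha1 well before t_star (in days)
    # and f ~ t^-alpha2 well after; s controls how sharp the break is.
    return f_star * ((t / t_star) ** (s * alpha1)
                     + (t / t_star) ** (s * alpha2)) ** (-1.0 / s)

# local logarithmic decay slope, -dln(f)/dln(t), before and after the break
for t in (0.1, 2.0):
    eps = 1e-4
    slope = -(np.log(broken_power_law(t * (1 + eps))) -
              np.log(broken_power_law(t))) / np.log(1 + eps)
    print(t, slope)          # ~0.87 at 0.1 d, approaching ~1.97 at 2 d
```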
If $t_*\sim t_j$ then using the formalism and adopting the density and $\gamma$-ray efficiency normalisations of Frail et al.[@fks+01] we estimate the true $\gamma$-ray energy release to be $E_\gamma \sim
3\times 10^{49}$ erg. This estimate is $4\sigma$ lower than the “standard energy” of $5\times 10^{50}$ erg found by Frail et al.[@fks+01] The geometry-corrected X-ray luminosity[@bkf03] would also be the lowest of all X-ray afterglows. If the above interpretation is correct then GRB 030329 may be the missing link between cosmological GRBs and the peculiar GRB 980425[@gvv+98] ($E_{\gamma,\rm iso}\sim 10^{48}\,$erg), which has been associated with SN 1998bw at $z=0.0085$.
Second, the decay of the optical afterglow is marked by bumps and wiggles (e.g. ref. ). These features could be due to inhomogeneities in the circumburst medium or additional shells of ejecta from the central engine. In either case, the bumps and wiggles complicate the simple jet interpretation offered above. We note that if the GRB had been more distant, and hence the afterglow fainter, then the break in the light curve would likely have been interpreted as the jet break without question.
The proximity of GRB 030329 offers us several new opportunities to understand the origin of GRBs. Red bumps in the light curve have been seen in several more-distant ($z\sim 0.3$ to 1) GRB afterglows (e.g. refs ,) and interpreted as underlying SNe that caused the GRBs. While these red bumps appeared to be consistent with a SN light curve, prior to GRB 030329 it had not been unambiguously demonstrated that they were indeed SNe. As this paper was being written, a clear spectroscopic signature of an underlying SN was identified in the optical afterglow of GRB 030329[@smg+03]. Our own spectroscopy at Palomar and Keck confirms the presence of these SN features. This demonstrates once and for all that the progenitors of at least some GRBs are massive stars that explode as SNe.
However, a number of issues remain to be resolved, relating to the physics of GRB afterglows and the environment around the progenitor. The “fireball” model of GRB afterglows predicts a broad-band spectrum from centimetre wavelengths to X-rays that evolves as the GRB ejecta expand and sweep up the surrounding medium[@spn98]. Testing this model in detail has in the past been difficult, primarily due to both low signal-to-noise and interstellar scintillation at the longer wavelengths, from which comes the majority of the spectral and temporal coverage of the afterglow evolution (e.g. refs. , ). GRB 030329, with bright emission at all wavelengths (Figure \[fig:broadband\]) and limited scintillation (due to the larger apparent size), will allow astronomers to test the predictions of the fireball model with unprecedented precision through the time evolution of the broad-band spectrum, the angular size of the fireball, and its proper motion (if any). It has long been predicted that if the progenitors of GRBs are massive stars, the circumburst medium should be rich and inhomogeneous[@cl99], but it has been difficult to find evidence for this. However, for GRB 030329 it should be possible to trace the distribution of circumburst material and determine the environment of the progenitor.
Even in the Swift (launch December 2003) era, we expect only one such nearby ($z < 0.2$) GRB every decade (scaling from ref. ). Thus GRB 030329, the nearest of the cosmological GRBs to date, has given astronomers a rare opportunity to be up close and personal with a GRB and its afterglow. We eagerly await reports of the many experiments that have been and will be conducted to shed new light on the GRB phenomenon.
Acknowledgments {#acknowledgments .unnumbered}
===============
PAP and BPS thank the ARC for supporting Australian GRB research. GRB research at Caltech is supported in part by funds from NSF and NASA. We are, as always, indebted to Scott Barthelmy and the GCN, as well as the [*HETE-II*]{} team for prompt alerts of GRB localisations.
[10]{}
, M. [Luminosities and Space Densities of Gamma-Ray Bursts]{}. , L117–L120 October 1999.
, R., [Crew]{}, G., [Doty]{}, J., [Villasenor]{}, J., [Monnelly]{}, G. [ *et al.*]{} [GRB030329 (=H2652): a long, extremely bright GRB localized by the HETE WXM and SXC.]{} , (2003).
, B. A. & [Price]{}, P. A. [GRB 030329: optical afterglow candidate.]{} , (2003).
, J., [Peimbert]{}, M., [Estaban]{}, C., [Kaufer]{}, A., [Jaunsen]{}, A. [ *et al.*]{} [Redshift of GRB 030329.]{} , (2003).
, K. Z., [Matheson]{}, T., [Garnavich]{}, P. M., [Martini]{}, P., [Berlind]{}, P. [*et al.*]{} [Spectroscopic Discovery of the Supernova 2003dh Associated with GRB 030329]{}. , 4173 April 2003.
, K. [GRB 030329: OT candidate.]{} , (2003).
, D. W., [Yost]{}, S., [Kulkarni]{}, S. R., [Torii]{}, K., [Kato]{}, T. [*et al.*]{} [Early optical emission from the [$\gamma$]{}-ray burst of 4 October 2002]{}. , 284–286 March 2003.
, C., [Balsano]{}, R., [Barthelemy]{}, S., [Bloch]{}, J., [Butterworth]{}, P. [*et al.*]{} [Observation of contemporaneous optical radiation from a gamma-ray burst.]{} , 400–402 (1999).
, E., [Soderberg]{}, A. M. & [Frail]{}, D. A. [GRB 030329: radio observations.]{} , (2003).
, N., [Sato]{}, N. & [Nakanishi]{}, H. [GRB 030329 Radio 23/43/90 GHz observations at Nobeyama]{}. GCN Circular 2089 (2003).
, J. C., [Meijerink]{}, R., [Tilanus]{}, R. P. J. & [Smith]{}, I. A. [GRB 030329: Sub-millimeter detection]{}. GCN Circular 2088 (2003).
, F. & [Swank]{}, J. H. [RXTE detection of GRB 030329 afterglow.]{} , (2003).
, S., [Mazets]{}, E., [Pal’Shin]{}, V., [Frederiks]{}, D. & [Cline]{}, T. [GRB030329a: detection by konus-wind.]{} , (2003).
, D. A., [Kulkarni]{}, S. R., [Sari]{}, R., [Djorgovski]{}, S. G., [Bloom]{}, J. S. [*et al.*]{} [Beaming in Gamma-Ray Bursts: Evidence for a Standard Energy Reservoir]{}. , L55–L58 November 2001.
, E., [Kulkarni]{}, S. R. & [Frail]{}, D. A. [A Standard Kinetic Energy Reservoir in Gamma-Ray Burst Afterglows]{}. , 1268 January 2003.
, P., [Stanek]{}, K. Z. & [Berlind]{}, P. [GRB 030329, optical photometry.]{} , (2003).
, R., [Piran]{}, T. & [Narayan]{}, R. Spectra and light curves of gamma-ray burst afterglows. , L17 (1998).
, T. J., [Vreeswijk]{}, P. M., [van Paradijs]{}, J., [Kouveliotou]{}, C., [Augusteijn]{}, T. [*et al.*]{} [An unusual supernova in the error box of the gamma-ray burst of 25 April 1998.]{} , 670–672 (1998).
, Y., [Ofek]{}, E. O. & [Gal-Yam]{}, A. [GRB030329 - light curve flattening.]{} , (2003).
, J. S., [Kulkarni]{}, S. R., [Djorgovski]{}, S. G., [Eichelberger]{}, A. C., [Cote]{}, P. [*et al.*]{} [The unusual afterglow of the gamma-ray burst of 26 March 1998 as evidence for a supernova connection.]{} , 453–456 (1999).
, P. M., [Stanek]{}, K. Z., [Wyrzykowski]{}, L., [Infante]{}, L., [Bendek]{}, E. [*et al.*]{} [Discovery of the Low-Redshift Optical Afterglow of GRB 011121 and Its Progenitor Supernova SN 2001ke]{}. , 924–932 January 2003.
, E., [Sari]{}, R., [Frail]{}, D. A., [Kulkarni]{}, S. R., [Bertoldi]{}, F. [*et al.*]{} [A Jet Model for the Afterglow Emission from GRB 000301C]{}. , 56–62 December 2000.
, A. & [Kumar]{}, P. [Jet Energy and Other Parameters for the Afterglows of GRB 980703, GRB 990123, GRB 990510, and GRB 991216 Determined from Modeling of Multifrequency Data]{}. , 667–677 June 2001.
, R. A. & [Li]{}, Z.-Y. Gamma-ray burst environments and progenitors. , L29–L32 July 1999.
, A. P., [Chandra]{}, C. H. I. & [Bhattacharya]{}, D. [GRB 030329: Radio observations at GMRT]{}. GCN Circular 2073 (2003).
, G. [GRB 030329 15 GHz radio observation]{}. GCN Circular 2043 (2003).
, R., [Sunyaev]{}, R., [Pavlinsky]{}, M., [Denissenko]{}, D., [Terekhov]{}, O. [*et al.*]{} [GRB 030329: light curve observed during the change of its slope.]{} , (2003).
, S., [Benitez]{}, E., [Torrealba]{}, J. & [Stepanian]{}, J. [GRB 030329: SPM optical observations]{}. GCN Circular 2022 (2003).
, J. B. & [Orosz]{}, J. A. [GRB 030329: optical observations (correction)]{}. , (2003).
, A. [GRB030329, BVRcIc field photometry.]{} , (2003).
[**Figure \[fig:discovery\]:**]{} Discovery of the bright optical afterglow of GRB 030329. The 600-s exposure taken with the Wide-Field Imager at the 40-inch telescope of the Siding Spring Observatory (SSO) using an $R$-filter (a) started at March 29, 13$^{\rm h}$05$^{\rm m}$ UT, 2003, about 1.5 hours after the $\gamma$-ray event, and was strongly affected by clouds. Nevertheless, comparison of the SSO image with the Second Digitised Sky Survey (b) in the $R_F$-band at the telescope allowed us to identify a 12th magnitude afterglow (arrowed) within the 4-arcmin [*HETE-II*]{} SXC error-circle[@vcd+03] (marked). The position of the optical afterglow was determined with respect to USNO-A2.0 and found to be $\alpha_{2000} = 10^{\rm h}44^{\rm m}59^{\rm
s}.95$, $ \delta_{2000} = +21^\circ31^\prime17^{\prime\prime}.8$ with uncertainty of 0.5 arcsec in each axis.
[**Figure \[fig:broadband\]:**]{} A snapshot spectral flux distribution of the afterglow of GRB 030329. This broad-band spectrum of the afterglow at 0.5 days after the GRB[@ksn03; @hmt+03; @ms03; @rcb03; @p03; @bsp+03; @zbt+03] demonstrates both the brightness of the afterglow and the resulting spectral coverage. Solid circles represent measurements made near the nominal time; open circles represent measurements extrapolated to the nominal time assuming evolution appropriate for a constant-density medium; this figure is therefore meant to be illustrative rather than entirely accurate. A simple fit of an afterglow broad-band spectrum yields the following spectral parameters (we use the convention and symbols of ref. ): synchrotron self-absorption frequency, $\nu_a
\sim 25\,$GHz; peak frequency, $\nu_m\sim 1270\,$GHz; cooling frequency, $\nu_c\sim 6.2\times 10^{14}\,$Hz; peak flux, $f_m\sim
65\,$mJy; and electron energy index, $p\sim 2$. The physical parameters inferred are as follows: explosion energy, $E\sim 5.7\times
10^{51}\,$erg; ambient density, $n\sim 5.5$ atom cm$^{-3}$; electron energy fraction, $\epsilon_e\sim 0.16$ and magnetic energy fraction, $\epsilon_B\sim 0.012$.
[**Figure \[fig:spectrum\]:**]{} Spectrum of the optical afterglow. The observation was made with the Low Resolution Imaging Spectrometer on the Keck I telescope, using the 400 lines/mm grism on the blue side, giving an effective resolution of 4.2Å. Our observation consisted of a single 600 second exposure on the afterglow. We reduced and extracted the spectra in the standard manner using IRAF. No standard star observations were available, so we have simply fit and normalised the continuum before smoothing with a 4Å boxcar. We identify narrow emission lines from \[O II\], H$\beta$ and \[O III\], and absorption lines from Mg II at a mean redshift, $z=0.169 \pm 0.001$, making GRB 030329 the lowest-redshift cosmological GRB. These emission lines are typical of star-forming galaxies, whereas the absorption lines are caused by gas in the disk of the galaxy. In addition, we identify Ca II at $z\approx 0$, presumably due to clouds in our own Galaxy.
[**Figure \[fig:lightcurve\]:**]{} Light-curve of the optical afterglow of GRB 030329. This $R$-band light curve spans from our discovery to approximately 1 day after the GRB. Due to the brightness of the GRB, errors in the measurements are smaller than the plotted points. In addition to measurements gleaned from the GCN Circulars[@bsp+03; @fo03], we include the following measurements from observations with the SSO 40-inch telescope with WFI in 2003 March: 29.5491, $R = 12.649 \pm 0.015$ mag; 29.5568, $R = 12.786 \pm
0.017$ mag; 30.5044, $R = 16.181 \pm 0.010$ mag; 30.5100, $R = 16.227
\pm 0.009$ mag. These measurements are relative to field stars calibrated by Henden[@h03].
|
---
address:
- '$^1$Physics Department, Northeastern University, Boston, Massachusetts 02115'
- '$^2$Department of Physics, Columbia University, New York, New York 10027'
- '$^3$Physics Department, City College of the City University of New York, New York, New York 10031'
author:
- 'S. V. Kravchenko$^1$, D. Simonian$^2$, and M. P. Sarachik$^3$'
title: 'Comment on “Charged impurity scattering limited low temperature resistivity of low density silicon inversion layers” (Das Sarma and Hwang, cond-mat/9812216)'
---
[2]{} In a recent preprint, Das Sarma and Hwang[@dassarma98] propose an explanation for the sharp decrease in the $B=0$ resistivity at low temperatures which has been attributed to a transition to an unexpected conducting phase in dilute high-mobility two-dimensional systems (see Refs.\[1-4\] in [@dassarma98]). The anomalous transport observed in these experiments is ascribed in Ref.[@dassarma98] to temperature-dependent screening and energy averaging of the scattering time. The model yields curves that are qualitatively similar to those observed experimentally: the resistivity has a maximum at a temperature $\sim E_F/k_B$ and decreases at lower temperatures by a factor of 3 to 10. The anomalous response to a magnetic field ([*e.g.*]{}, the increase in low-temperature resistivity by orders of magnitude [@simonian97]), is not considered in Ref. [@dassarma98].
Two main assumptions are made in the proposed model[@dassarma98]: (1) the transport behavior is dominated by charged impurity scattering centers with a density $N_i$, and (2) the metal-insulator transition, which occurs when the electron density ($n_s$) equals a critical density ($n_c$), is due to the freeze-out of $n_c$ carriers so that the net free carrier density is given by $n\equiv n_s-n_c$ at $T=0$. The authors do not specify a mechanism for this carrier freeze-out and simply accept it as an experimental fact. Although not included in their calculation, Das Sarma and Hwang also note that their model can be extended to include a thermally activated contribution to the density of “free” electrons.
In this Comment, we examine whether the available experimental data support the model of Das Sarma and Hwang.
(i) Comparison with the experimental data (see Fig. 1 of Ref. [@dassarma98]) is made for an assumed density of charged impurities of $3.5\times10^{9}$ cm$^{-2}$, a value that is too small. In an earlier publication, Klapwijk and Das Sarma [@klapwijk98] explicitly stated that the number of ionized impurities is “$3\times10^{10}$ cm$^{-2}$ for high-mobility MOSFET’s used for the 2D MIT experiments. There is very little room to vary this number by a factor of two”. Without reference to this earlier statement, the authors now use a value for $N_i$ that is one order of magnitude smaller [@sample].
(ii) According to the proposed model, the number of “free” carriers at zero temperature is zero at the “critical” carrier density ($n_s=n_c$) and it is very small ($n=n_s-n_c<<n_c$) near the transition. In this range, the transport must be dominated by thermally activated carriers, which decrease exponentially in number as the temperature is reduced. It is known from experiment that at low temperatures the resistance is independent of temperature[@krav; @hanein98] at $n_c$ (the separatrix between the two phases) and depends weakly on temperature for nearby electron densities. In order to give rise to a finite conductivity $\sim e^2/h$ at the separatrix, an exponentially small number of carriers must have an exponentially large mobility, a circumstance that is rather improbable.
(iii) Recent measurements of the Hall coefficient and Shubnikov-de Haas oscillations yield electron densities that are independent of temperature and equal to $n_s$ rather than a density $n=n_s-n_c$ of “free” electrons [@hanein98; @hall]. This implies that [*all*]{} the electrons contribute to the Hall conductance, including those that are frozen-out or localized. Although this is known to occur in quantum systems such as Hall insulators, it is not clear why it can hold within the simple classical model proposed by Das Sarma and Hwang.
[10]{}
S. Das Sarma and E. H. Hwang, preprint cond-mat/9812216.
D. Simonian, S. V. Kravchenko, M. P. Sarachik, and V. M. Pudalov, Phys. Rev. Lett. [**79**]{}, 2304 (1997); V. M. Pudalov, G. Brunthaler, A. Prinz, and G. Bauer, JETP Lett. [**65**]{}, 932 (1997).
T. M. Klapwijk and S. Das Sarma, preprint cond-mat/9810349.
We note that although sample Si-15 had a particularly high peak mobility at 4.2 K (almost twice that of other samples), this cannot account for a reduction in $N_i$ by a factor of 10. The density of charged traps $N_i$ in samples of comparable quality was estimated to be $1.5\times10^{10}$ [@pudalov93].
V. M. Pudalov, M. D’Iorio, S. V. Kravchenko, and J. W. Campbell, Phys. Rev. Lett. [**70**]{}, 1866 (1993).
S. V. Kravchenko, W. E. Mason, G. E. Bowker, J. E. Furneaux, V. M. Pudalov, and M. D’Iorio, Phys. Rev. B [**51**]{}, 7038 (1995).
Y. Hanein, D. Shahar, J. Yoon, C. C. Li, D. C. Tsui, and H. Shtrikman, Phys. Rev. B [**58**]{}, R7520 (1998).
D. Simonian, K. Mertes, M. P. Sarachik, S. V. Kravchenko, and T. M. Klapwijk, in preparation.
|
---
abstract: 'We generalize and apply the key elements of the Kibble-Zurek framework of nonequilibrium phase transitions to study the non-equilibrium critical cumulants near the QCD critical point. We demonstrate the off-equilibrium critical cumulants are expressible as universal scaling functions. We discuss how to use off-equilibrium scaling to provide powerful model-independent guidance in searches for the QCD critical point.'
address:
- 'Department of Physics, Brookhaven National Laboratory, Upton, New York 11973-5000,'
- 'Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.'
author:
- Swagato Mukherjee
- Raju Venugopalan
- 'Yi Yin[^1]'
title: 'Universality regained: Kibble-Zurek dynamics, off-equilibrium scaling and the search for the QCD critical point'
---
QCD critical point, critical fluctuations, non-Gaussian cumulants, critical slowing down
Introduction {#intro}
============
The search for the conjectured QCD critical point in heavy-ion collision experiments has attracted much theoretical and experimental effort. The central question is how to identify telltale experimental signals of the presence of the QCD critical point. In particular, universality arguments can be employed to locate the QCD critical point.
We begin our discussion with the static properties of QCD critical point, which is argued to lie in the static universality class of the 3d Ising model. The equilibrium cumulants of the critical mode satisfy the static scaling relation: $$\begin{aligned}
\label{eq:kappa-eq}
\kappa^{\rm eq}_{n} \sim \xi_{\rm eq}^{\frac{-\beta+(2-\alpha-\beta)\,(n-1)}{\nu}}\, f^{\rm eq}_{n}(\theta)~\sim \xi_{\rm eq}^{\frac{-1+5\,(n-1)}{2}}\, f^{\rm eq}_{n}(\theta)\, ,
\qquad
n=1,2,3,4\ldots\end{aligned}$$ where $\xi_{\rm eq}$ is the equilibrium correlation length, which grows universally near the critical point, and $\theta$ is the scaling variable. $\alpha, \beta, \nu$ are standard critical exponents. Here and hereafter, we will use the approximate values taken from the 3d Ising model: $\alpha\approx 0, \beta\approx 1/3, \nu\approx 2/3$. The static scaling relation indicates that non-Gaussian cumulants are more sensitive to the growth of the correlation length (e.g. $\kappa^{\rm eq}_{3}\sim \xi^{4.5}_{\rm eq}$ and $\kappa^{\rm eq}_{4}\sim \xi^{7}_{\rm eq}$, while the Gaussian cumulant $\kappa_{2}\sim \xi^{2}_{\rm eq}$). Moreover, for non-Gaussian cumulants, the universal scaling functions $f^{\rm eq}_{n}(\theta)$ can be either positive or negative and depend on $\theta$ only. Since critical cumulants contribute to the corresponding cumulants of the net baryon number multiplicity, the enhanced non-Gaussian fluctuations (of baryon number multiplicities), as well as their change in sign and the associated non-monotonicity as a function of beam energy $\sqrt{s}$, can signal the presence of the critical point in the QCD phase diagram (see Ref. [@Luo:2017faz] for a recent review of experimental measurements).
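Explicitly, inserting the approximate Ising values quoted above into the exponent of Eq. (\ref{eq:kappa-eq}) gives $$\frac{-\beta+(2-\alpha-\beta)\,(n-1)}{\nu}\,\bigg|_{\alpha\approx 0,\;\beta\approx 1/3,\;\nu\approx 2/3} = \frac{-1+5\,(n-1)}{2} = 2,\ \frac{9}{2},\ 7 \qquad (n=2,3,4)\,,$$ which makes the hierarchy $\kappa^{\rm eq}_{2}\sim \xi^{2}_{\rm eq}$, $\kappa^{\rm eq}_{3}\sim \xi^{4.5}_{\rm eq}$ and $\kappa^{\rm eq}_{4}\sim \xi^{7}_{\rm eq}$ explicit.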
However, off-equilibrium effects upset this naive expectation based on the static properties of the critical system. The relaxation time of the critical modes, $\tau_{\rm eff}\sim \xi^{z}$, grows universally with the dynamical critical exponent $z\approx 3$ for the QCD critical point [@Son:2004iv]. Consequently, critical fluctuations inescapably fall out of equilibrium. In Ref. [@Mukherjee:2015swa], we studied the real-time evolution of non-Gaussian cumulants in the QCD critical regime by significantly extending prior work on Gaussian fluctuations [@Berdnikov:1999ph]. We found that off-equilibrium critical cumulants can differ in both magnitude and sign from equilibrium expectations. The resulting off-equilibrium evolution of critical fluctuations depends on a number of non-universal inputs, such as the details of the trajectories in the QCD phase diagram and the mapping between the QCD phase diagram and the 3d Ising model. (See also C. Herold’s, L. Jiang’s, and M. Nahrgang’s talks in this Quark Matter and the plenary talk at the last Quark Matter [@Nahrgang:2016ayr] for related theoretical efforts.)
Is critical universality completely lost in the complexity of off-equilibrium evolution? Are there any universal features of off-equilibrium evolution of critical cumulants? In Ref. [@Mukherjee:2016kyu], we attempted to answer those questions based on the key ideas of the Kibble-Zurek framework of non-equilibrium phase transitions. We demonstrated the emergence of off-equilibrium scaling and universality. Such off-equilibrium scaling and universality open new possibilities to identify experimental signatures of the critical point. The purpose of this proceeding is to illustrate the key idea of Ref. [@Mukherjee:2016kyu] and discuss how to apply this idea to search for QCD critical point.
Kibble-Zurek dynamics: the basic idea and an illustrative example {#idea}
=================================================================
The basic idea of Kibble-Zurek dynamics was pioneered by Kibble in a cosmological setting and was generalized to describe similar problems in condensed matter systems (see Ref. [@Zurek] for a review). Kibble-Zurek dynamics is now considered to be the paradigmatic framework for describing critical behavior out of equilibrium. Its applications cover an enormous variety of phenomena over a wide range of scales, from low-temperature physics to astrophysics. We will use this idea to explore the dynamics of QCD matter near the critical point.
![image](KZxidef.pdf){width="32.00000%"} ![image](BRsol.pdf){width="32.00000%"} ![image](scalingBR.pdf){width="32.00000%"}
The dramatic change in the behavior of the quench time relative to the relaxation time is at the heart of the KZ dynamics. As the system approaches the critical point, the relaxation time grows due to critical slowing down (c.f. dashed curves in Fig. \[fig:KZ-idea\] (left)). In contrast, the quench time $\tau_{\rm quench}$, defined as the timescale over which the equilibrium correlation length changes, becomes shorter and shorter due to the rapid growth of the correlation length (c.f. Fig. \[fig:KZ-idea\] (left)). Therefore, at some proper time, say $\tau^{*}$, $\tau_{\rm eff}$ becomes equal to $\tau_{\rm quench}$. After $\tau^{*}$, the system expands too quickly (i.e. $\tau_{\rm quench} < \tau_{\rm eff}$) for the actual correlation length to follow the growth of the equilibrium correlation length. It is then natural to define an emergent length scale, the Kibble-Zurek length $l_{\rm KZ}$, which is the value of the equilibrium correlation length at $\tau^{*}$. In other words, $l_{\rm KZ}$ is the maximum correlation length the system can develop when passing through the critical point. Likewise, one can introduce an emergent time scale, referred to as the Kibble-Zurek time, $\tau_{\rm KZ}=\tau_{\rm eff}(\tau^{*})$, which is the relaxation time at $\tau^{*}$. If the critical fluctuations were in thermal equilibrium, their magnitudes would be completely determined by the equilibrium correlation length. For off-equilibrium evolution, we expect that the magnitude of the off-equilibrium fluctuations is controlled by $l_{\rm KZ}$ and that their temporal evolution is characterized by $\tau_{\rm KZ}$.
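To make this construction concrete, the sketch below extracts $\tau^{*}$, $l_{\rm KZ}$ and $\tau_{\rm KZ}$ from a given equilibrium correlation length along a trajectory; the microscopic prefactor of the relaxation time and the toy power-law trajectory are assumptions of this illustration, not inputs from a hydrodynamic simulation.

```python
import numpy as np

def kibble_zurek_scales(tau, xi_eq, z=3.0, tau0=0.01):
    # Locate tau* where the relaxation time tau_eff = tau0 * xi_eq**z first
    # exceeds the quench time tau_quench = xi_eq / |d(xi_eq)/d(tau)|, and
    # return the emergent scales (l_KZ, tau_KZ) = (xi_eq(tau*), tau_eff(tau*)).
    tau_eff = tau0 * xi_eq ** z
    tau_quench = xi_eq / np.abs(np.gradient(xi_eq, tau))
    i_star = np.argmax(tau_eff > tau_quench)      # first time the system lags behind
    return xi_eq[i_star], tau_eff[i_star]

# toy trajectory: xi_eq grows as the system approaches tau_c = 1 (nu ~ 2/3)
tau = np.linspace(0.0, 0.99, 2000)
xi_eq = (1.0 - tau) ** (-2.0 / 3.0)
l_KZ, tau_KZ = kibble_zurek_scales(tau, xi_eq)
print(l_KZ, tau_KZ)        # roughly 3.0 and 0.28 for this toy trajectory
```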
The emergence of such a characteristic length scale $l_{\rm KZ}$ and time scale $\tau_{\rm KZ}$ indicates that the cumulants will scale as functions of them. As an illustrative example, we revisited the study of Ref. [@Berdnikov:1999ph]. Fig. \[fig:KZ-idea\] (middle) plots the evolution of the off-equilibrium Gaussian cumulant $\kappa_{2}$, obtained by solving the rate equation proposed in Ref. [@Berdnikov:1999ph] for different choices of non-universal inputs. Those off-equilibrium cumulants differ from the equilibrium one and look different from each other at first glance. What happens if we rescale those Gaussian cumulants by $l^{2}_{\rm KZ}$ and present their temporal evolution as a function of the rescaled time $\tau/\tau_{\rm KZ}$? As shown in Fig. \[fig:KZ-idea\] (right), the rescaled evolutions now look nearly identical, which confirms the expectation from KZ scaling.
Off-equilibrium scaling of critical cumulants {#result}
=============================================
In Ref. [@Mukherjee:2016kyu], we proposed the following scaling hypothesis for the off-equilibrium evolution of critical cumulants: $$\begin{aligned}
\label{eq:scaling}
\kappa_{n}\left(\tau;\Gamma\right) \sim l^{\frac{-1+5\,(n-1)}{2}}_{\rm KZ}\, \bar{f}_{n}\left(\,\frac{\tau}{\tau_{\rm KZ}}, \theta_{\rm KZ}\,\right)\, ,
\qquad
n=1,2, 3, 4\ldots ,\end{aligned}$$ which is motivated in part by the equilibrium scaling relation and in part by the key ideas of Kibble-Zurek dynamics. This hypothesis extends the scaling hypothesis for Gaussian cumulants (see for example Ref. [@Gubser]) to non-Gaussian cumulants, which have thus far received little attention in the literature. We further introduced the Kibble-Zurek scaling variable $\theta_{\rm KZ}$. In analogy with $l_{\rm KZ}$ and $\tau_{\rm KZ}$, $\theta_{\rm KZ}$ is defined as the value of the scaling variable $\theta$ at which the quench time equals the relaxation time, and it can be regarded as the memory of the sign of the non-Gaussian cumulants of the system. The off-equilibrium scaling functions $\bar{f}_{n}$ in Eq. (\ref{eq:scaling}) are universal and do not explicitly depend on the non-universal inputs, which are collectively denoted by $\Gamma$. The dependence of the off-equilibrium cumulants $\kappa_{n}(\tau;\Gamma)$ on the non-universal inputs $\Gamma$ is absorbed in $\tau_{\rm KZ}(\Gamma), l_{\rm KZ}(\Gamma)$ and $\theta_{\rm KZ}(\Gamma)$.
![image](Trajplot.pdf){width="32.00000%"} ![image](kappa4plotB.pdf){width="32.00000%"} ![image](f4plotB.pdf){width="32.00000%"}
Since $\bar{f}_{n}(\tau/\tau_{\rm KZ}, \theta_{\rm KZ})$ in Eq. (\ref{eq:scaling}) is a universal scaling function, the temporal evolutions of the rescaled cumulants should become similar if they share the same $\theta_{\rm KZ}$. In Fig. \[fig:scaling\] (middle), we present the evolution of the fourth cumulant, the kurtosis $\kappa_{4}$, from four different trajectories (c.f. Fig. \[fig:scaling\] (left)). Those evolutions are determined from different non-universal inputs. We however combine those non-universal inputs in such a way that the resulting $\theta_{\rm KZ}$ are the same. The rescaled kurtosis vs $\tau/\tau_{\rm KZ}$ is plotted in Fig. \[fig:scaling\] (right). The rescaled kurtoses look different away from the crossover curve since they are close to their corresponding equilibrium values, which are different for different trajectories. However, in the vicinity of the crossover curve, the rescaled kurtoses merge into a single curve, confirming the validity of our hypothesis. For further numerical tests of the scaling hypothesis as well as analytical insights into the scaling form, see Ref. [@Mukherjee:2016kyu].
The off-equilibrium scaling as a signature of the QCD critical point {#summary}
====================================================================
We now consider the implications of our findings for the beam energy scan (BES) search for the QCD critical point. The critical contribution to the cumulants of baryon number multiplicity $\kappa^{\rm data}_{n}$ measured in experiment is proportional to the critical cumulants on the freeze-out curve. The scaling hypothesis indicates: $$\begin{aligned}
\label{eq:scalingf}
\tilde{\kappa}^{\rm data}_{n}\equiv\frac{\kappa^{\rm data}_{n}\left(\tau_{f};\Gamma\right)}{l^{-\frac{1}{2}+\frac{5}{2}(n-1)}_{\rm KZ}}\propto\frac{\kappa_{n}\left(\tau_{f};\Gamma\right)}{l^{-\frac{1}{2}+\frac{5}{2}(n-1)}_{\rm KZ}} \sim \, \bar{f}_{n}\left(\,\frac{\tau_{f}}{\tau_{\rm KZ}}, \theta_{\rm KZ}\,\right)\, . \end{aligned}$$ Note $\tau_{f}$ is the proper time of the system on the freeze-out curve and the universal scaling function $\bar{f}_{n}$ does not depend on the non-universal inputs collectively denoted by $\Gamma$.
The scaling relation means that the rescaled data $\tilde{\kappa}^{\rm data}_{n}$ will only be sensitive to $\left(\tau_{f}/\tau_{\rm KZ}, \theta_{\rm KZ}\right)$. In other words, if we rescale an ensemble of experimental data from heavy-ion collisions with different beam energies $\sqrt{s}$, impact parameters $b$ (or centralities) and rapidities $\eta$ by the appropriate power of $l_{\rm KZ}$ according to Eq. (\ref{eq:scalingf}), then those collisions with similar $\left(\tau_{f}/\tau_{\rm KZ}, \theta_{\rm KZ}\right)$ will have approximately similar values of the rescaled data, provided the QCD critical regime is probed by those collisions.
A crucial step in testing the scaling outlined above is to determine $l_{\rm KZ}, \tau_{\rm KZ},\theta_{\rm KZ}$ and the freeze-out time $\tau_{f}$ as functions of $\left(\sqrt{s}, b, \eta\, \right)$. We need to use hydrodynamic simulations to determine the trajectories in the phase diagram as well as the expansion rate of the system for a given collision specified by $\left(\sqrt{s}, b, \eta\,\right)$. We then compare the relaxation time and the quench time to determine the corresponding $l_{\rm KZ}\left(\sqrt{s}, b, \eta\, \right)$, $\tau_{\rm KZ}\left(\sqrt{s}, b, \eta\,\right)$, $\theta_{\rm KZ}\left(\sqrt{s}, b, \eta\,\right)$. Since these scaling variables are non-trivial functions of $\left(\sqrt{s}, b, \eta\, \right)$, the success of such theory-data comparisons would provide strong evidence for the existence of the QCD critical point. The study of the procedures outlined above, with examples including mock BES data, is in progress [@search].\
[**Acknowledgments.**]{} This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Contract Number DE-SC0012704 (SM, RV), DE-SC0011090 (YY) and within the framework of the Beam Energy Scan Theory (BEST) Topical Collaboration.
[15]{}
X. Luo and N. Xu, arXiv:1701.02105 \[nucl-ex\].
D. T. Son and M. A. Stephanov, Phys. Rev. D [**70**]{}, 056001 (2004) doi:10.1103/PhysRevD.70.056001 \[hep-ph/0401052\]. S. Mukherjee, R. Venugopalan and Y. Yin, Phys. Rev. C [**92**]{}, no. 3, 034912 (2015) doi:10.1103/PhysRevC.92.034912 \[arXiv:1506.00645 \[hep-ph\]\]. B. Berdnikov and K. Rajagopal, Phys. Rev. D [**61**]{}, 105017 (2000) doi:10.1103/PhysRevD.61.105017 \[hep-ph/9912274\]. M. Nahrgang, Nucl. Phys. A [**956**]{}, 83 (2016) doi:10.1016/j.nuclphysa.2016.02.074 \[arXiv:1601.07437 \[nucl-th\]\]. S. Mukherjee, R. Venugopalan and Y. Yin, Phys. Rev. Lett. [**117**]{}, no. 22, 222301 (2016) doi:10.1103/PhysRevLett.117.222301 \[arXiv:1605.09341 \[hep-ph\]\]. W. H. Zurek, W. H. (1996). Physics Reports, 276(4) ,177.
A. Chandran, A. Erez, S. Gubser, and S. Sondhi, Phys. Rev. B [**86**]{}, 064304, (2012).
S. Mukherjee, B. Schenke, C. Shen, R. Venugopalan and Y. Yin, in progress.
[^1]: Presenter
|
---
abstract: '[Quantum liquids, in which an effective Lorentzian metric and thus some kind of gravity gradually arise in the low-energy corner, are the objects where the problems related to the quantum vacuum can be investigated in detail. In particular, they provide the possible solution of the cosmological constant problem: why the vacuum energy is by 120 orders of magnitude smaller than the estimation from the relativistic quantum field theory. The almost complete cancellation of the cosmological constant does not require any fine tuning and comes from the fundamental “trans-Planckian” physics of quantum liquids. The remaining vacuum energy is generated by the perturbation of quantum vacuum caused by matter (quasiparticles), curvature, and other possible sources, such as smooth component – the quintessence. This provides the possible solution of another cosmological constant problem: why the present cosmological constant is on the order of the present matter density of the Universe. We discuss here some properties of the quantum vacuum in quantum liquids: the vacuum energy under different conditions; excitations above the vacuum state and the effective acoustic metric for them provided by the motion of the vacuum; Casimir effect, etc. ]{}'
author:
- |
G.E. Volovik\
Low Temperature Laboratory, Helsinki University of Technology\
P.O.Box 2200, FIN-02015 HUT, Finland\
and\
L.D. Landau Institute for Theoretical Physics,\
Kosygin Str. 2, 117940 Moscow, Russia\
title: Vacuum in quantum liquids and in general relativity
---
Introduction.
=============
Quantum liquids, such as $^3$He and $^4$He, are systems of strongly interacting and strongly correlated atoms, $^3$He and $^4$He atoms respectively. Even in its ground state, such a liquid is a rather complicated object, whose many-body physics requires extensive numerical simulations. However, when the energy scale is reduced below about 1 K, we can no longer resolve the motion of individual atoms in the liquid. The smaller the energy, the better the liquid is described in terms of collective modes and a dilute gas of particle-like excitations – quasiparticles. This is the Landau picture of the low-energy degrees of freedom in quantum Bose and Fermi liquids. The dynamics of collective modes and quasiparticles is described in terms of what we now call ‘the effective theory’. In superfluid $^4$He this effective theory, which incorporates the collective motion of the ground state – the quantum vacuum – and the dynamics of quasiparticles in the background of the moving vacuum, is known as two-fluid hydrodynamics [@Khalatnikov].
Such an effective theory does not depend on details of microscopic (atomic) structure of the quantum liquid. The type of the effective theory is determined by the symmetry and topology of the ground state, and the role of the microscopic physics is only to choose between different universality classes on the basis of the minimum energy consideration. Once the universality class is determined, the low-energy properties of the condensed matter system are completely described by the effective theory, and the information on the microscopic (trans-Planckian) physics is lost [@LaughlinPines].
In some condensed matter systems the universality class produces an effective theory which very closely resembles relativistic quantum field theory. For example, the collective fermionic and bosonic modes in superfluid $^3$He-A reproduce chiral fermions, gauge fields and even, in many respects, the gravitational field [@PhysRepRev].
This allows us to use the quantum liquids for the investigation of the properties related to the quantum vacuum in relativistic quantum field theories, including the theory of gravitation. The main advantage of the quantum liquids is that in principle we know their vacuum structure at any relevant scale, including the interatomic distance, which plays the part of one of the Planck length scales in the hierarchy of scales. Thus the quantum liquids can provide possible routes from our present low-energy corner of the effective theory to the “microscopic” physics at Planckian and trans-Planckian energies.
One of the possible routes is related to the conserved number of atoms $N$ in the quantum liquid. The quantum vacuum of the quantum liquids is constructed from the discrete elements, the bare atoms. The interaction and zero point motion of these atoms compete and provide an equilibrium ground state of the ensemble of atoms, which can exist even in the absence of external pressure. The relevant energy and the pressure in this equilibrium ground state are exactly zero in the absence of interaction with environment. Translating this to the language of general relativity, one obtains that the cosmological constant in the effective theory of gravity in the quantum liquid is exactly zero without any fine tuning. The equilibrium quantum vacuum is not gravitating.
This route shows a possible solution of the cosmological constant problem: why the estimate of the vacuum energy using relativistic quantum field theory gives a value which is 120 orders of magnitude higher than its experimental upper limit. In quantum liquids there is a similar discrepancy between the exact zero result for the vacuum energy and the naive estimate within the effective theory. We shall also discuss here how different perturbations of the vacuum in quantum liquids lead to a small nonzero energy of the quantum vacuum. Translating this to the language of general relativity, one obtains that in each epoch the vacuum energy density must be either on the order of the matter density of the Universe, or of its curvature, or of the energy density of the smooth component – the quintessence.
Here we mostly discuss the Bose ensemble of atoms: a weakly interacting Bose gas, which experiences the phenomenon of Bose condensation, and a real Bose liquid – superfluid $^4$He. The consideration of the Bose gas allows us to use the microscopic theory to derive the ground state energy of the quantum system of interacting atoms and the excitations above the vacuum state – quasiparticles. We also discuss the main differences between the bare atoms, which comprise the vacuum state, and the quasiparticles, which serve as elementary particles in the effective quantum field theory.
Another consequence of the discrete number of the elements comprising the vacuum state, which we consider here, is related to the Casimir effects. The discreteness of the vacuum – the finite-$N$ effect – leads to mesoscopic Casimir forces, which cannot be derived within the effective theory. For this purpose we consider the Fermi ensembles of atoms: the Fermi gas and the Fermi liquid.
Einstein gravity and cosmological constant problem
==================================================
Einstein action
---------------
Einstein’s geometrical theory of gravity consists of two main elements [@Damour].
\(1) Gravity is related to the curvature of space-time, in which particles move along geodesic curves in the absence of non-gravitational forces. The geometry of space-time is described by the metric $g_{\mu\nu}$, which is the dynamical field of gravity. The action for matter in the presence of a gravitational field, $S_{\rm M}$, which simultaneously describes the coupling between gravity and all other fields (the matter fields), is obtained from the special relativity action for the matter fields by replacing everywhere the flat Minkowski metric by the dynamical metric $g_{\mu\nu}$ and the partial derivative by the $g$-covariant derivative. This follows from the principle that the equations of motion do not depend on the choice of the coordinate system (the so-called general covariance). This also means that motion in a non-inertial frame can be described in the same manner as motion in some gravitational field – this is the equivalence principle. Another consequence of the equivalence principle is that the space-time geometry is the same for all particles: gravity is universal.
\(2) The dynamics of the gravitational field is determined by adding the action functional $S_{\rm G}$ for $g_{\mu\nu}$, which describes propagation and self-interaction of the gravitational field: $$S=S_{\rm G}+S_{\rm M}~.
\label{GravitationalEinsteinAction}$$ The general covariance requires that $S_{\rm G}$ is the functional of the curvature. In the original Einstein theory only the first order curvature term is retained: $$S_{\rm G}= -{1\over
16\pi G}\int d^4x\sqrt{-g}{\cal R}~,
\label{EinsteinCurvatureAction}$$ where $G$ is Newton’s gravitational constant, and ${\cal
R}$ is the Ricci scalar curvature. The Einstein action is thus $$S= -{1\over
16\pi G}\int d^4x\sqrt{-g}{\cal R} +S_{\rm M}
\label{EinsteinAction}$$ Variation of this action over the metric field $g_{\mu\nu}$ gives the Einstein equations: $${\delta S\over \delta g^{\mu\nu}}= {1\over
2}\sqrt{-g}\left[ -{1\over 8\pi G}\left(
R_{\mu\nu}-{1\over 2}{\cal R}g_{\mu\nu} \right) +T^{\rm
M}_{\mu\nu}\right]=0~,
\label{EinsteinEquation1}$$ where $T^{\rm M}_{\mu\nu}$ is the energy-momentum of the matter fields. Bianchi identities lead to the “covariant” conservation law for matter $$T^{\mu {\rm M}}_{\nu;\mu}=0 ~~,~~{\rm or}~~
\partial_\mu \left(T^{\mu {\rm M}}_\nu \sqrt{-g}\right) =
{1\over 2}\sqrt{-g}T^{\alpha\beta {\rm M}} \partial_\nu
g_{\alpha\beta}
\, ,
\label{CovariantConservation}$$ Note that this “covariant” conservation holds by virtue of the field equations for “matter”, irrespective of the dynamics of the gravitational field.
Vacuum energy and cosmological term
-----------------------------------
In particle physics, field quantization allows a zero-point energy, the constant energy when all fields are in their ground states. In the absence of gravity, only differences between zero points can be measured, for example in the Casimir effect, while the absolute value is unmeasurable. However, Einstein’s equations react to $T^{\rm M}_{\mu\nu}$ and thus to the value of the vacuum energy itself.
If the vacuum energy is taken seriously, the energy-momentum tensor of the vacuum must be added to the Einstein equations. The corresponding action is given by the so-called cosmological term, which was introduced by Einstein in 1917 [@EinsteinCosmCon]: $$S_\Lambda = -\rho_\Lambda
\int d^4x\sqrt{-g}~,~~T^\Lambda_{\mu\nu}={2\over \sqrt{-g}}{\delta
S_\Lambda\over
\delta g^{\mu\nu}}=\rho_\Lambda g_{\mu\nu}~.
\label{CosmologicalTerm}$$ The energy-momentum tensor of the vacuum shows that the quantity $\rho_\Lambda\sqrt{-g}$ is the vacuum energy density, and the equation of state of the vacuum is $$P_\Lambda=-\rho_\Lambda~,
\label{EquationOfState}$$ where $P_\Lambda\sqrt{-g}$ is the partial pressure of the vacuum. The Einstein’s equations are modified: $${1\over 8\pi G}\left(
R_{\mu\nu}-{1\over 2}{\cal R}g_{\mu\nu} \right) = T^\Lambda_{\mu\nu}
+T^{\rm M}_{\mu\nu}~.
\label{EinsteinEquation}$$
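For the reader's convenience we spell out the standard variation behind Eq.(\[CosmologicalTerm\]) (a textbook step, added here only to make the origin of the equation of state explicit). Using the identity $\delta\sqrt{-g}=-{1\over 2}\sqrt{-g}\,g_{\mu\nu}\,\delta g^{\mu\nu}$ one finds $$T^\Lambda_{\mu\nu}={2\over \sqrt{-g}}{\delta S_\Lambda\over \delta g^{\mu\nu}}={2\over \sqrt{-g}}\left(-\rho_\Lambda\right)\left(-{1\over 2}\sqrt{-g}\,g_{\mu\nu}\right)=\rho_\Lambda g_{\mu\nu}~,$$ i.e. the vacuum stress has the perfect-fluid form with the equation of state $P_\Lambda=-\rho_\Lambda$ of Eq.(\[EquationOfState\]).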
Cosmological constant problem
-----------------------------
The most severe problem in the marriage of gravity and quantum theory is why the vacuum is not gravitating [@WeinbergReview]. The vacuum energy density can be easily estimated: the positive contribution comes from the zero-point energy of the bosonic fields and the negative one from the occupied negative energy levels in the Dirac sea. Since the largest contribution comes from high momenta, where the energy spectrum of the particles is massless, $E=cp$, the energy density of the vacuum is $$\rho_\Lambda \sqrt{-g}= {1\over V}\left(\nu_{\rm bosons}\sum_{\bf p}{1\over 2}
cp~~ -\nu_{\rm fermions}
\sum_{\bf p} cp\right)
\label{VacuumEnergyPlanck}$$ where $V$ is the volume of the system; $\nu_{\rm bosons}$ is the number of bosonic species and $\nu_{\rm fermions}$ is the number of fermionic species. The vacuum energy is divergent and the natural cut-off is provided by the gravity itself. The cut-off Planck energy is determined by the Newton’s constant: $$E_{\rm Planck}=\left({\hbar c^5\over G}\right)^{1/2}~,
\label{PlanckCutoff}$$ It is on the order of $10^{19}$ GeV. If there is no symmetry between the fermions and bosons (supersymmetry) the Planck energy scale cut-off provides the following estimation for the vacuum energy density: $$\rho_\Lambda \sqrt{-g}\sim
\pm {1\over c^3} E^4_{\rm Planck} =\pm \sqrt{-g}~E^4_{\rm Planck}~,
\label{VacuumEnergyPlanck2}$$ with the sign of the vacuum energy being determined by the fermionic and bosonic content of the quantum field theory. Here we considered the flat space with Minkowski metric $g_{\mu\nu}={\rm
diag}(-1,c^{-2},c^{-2},c^{-2})$.
The “cosmological constant problem” is the huge disparity between the naively expected value in Eq.(\[VacuumEnergyPlanck2\]) and the range of actual values: experimental observations show that $\rho_\Lambda$ is less than or on the order of $10^{-120} E^4_{\rm Planck}$ [@Supernovae]. In the case of supersymmetry, the cut-off is somewhat lower, being determined by the scale at which supersymmetry is violated, but the huge disparity persists.
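As a quick numerical illustration of these scales (added here, not part of the original argument), the short Python sketch below evaluates the Planck energy of Eq.(\[PlanckCutoff\]), the naive vacuum energy density of Eq.(\[VacuumEnergyPlanck2\]), and the bound quoted above, obtained by multiplying the naive value by $10^{-120}$; SI constants are used throughout.

```python
import math

# SI constants
hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11
GeV = 1.602176634e-10   # J

# Planck energy, Eq.(PlanckCutoff): E_Planck = (hbar c^5 / G)^(1/2)
E_planck = math.sqrt(hbar * c**5 / G)
print("E_Planck     ~ %.1e GeV" % (E_planck / GeV))        # ~ 1.2e19 GeV

# Naive vacuum energy density, Eq.(VacuumEnergyPlanck2): ~ E_Planck^4 / (hbar c)^3
rho_naive = E_planck**4 / (hbar * c)**3                    # J / m^3
print("naive rho    ~ %.1e J/m^3" % rho_naive)             # ~ 5e113 J/m^3

# Experimental bound quoted in the text: rho_Lambda < ~ 1e-120 * E_Planck^4
print("quoted bound ~ %.1e J/m^3" % (1e-120 * rho_naive))  # ~ 5e-7 J/m^3
```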
This disparity demonstrates that the vacuum energy in Eq. (\[VacuumEnergyPlanck\]) is not gravitating. This is in apparent contradiction with the general principle of equivalence, according to which the inertial and gravitating masses must coincide. It indicates that the theoretical criteria for setting the absolute zero point of energy are unclear and probably require physics beyond general relativity. To clarify this issue we can consider quantum systems in which the elements of gravitation are at least partially reproduced, but where the structure of the quantum vacuum is known. Quantum liquids are the right systems for this purpose.
Sakharov induced gravity
------------------------
Why is the Planck energy in Eq.(\[PlanckCutoff\]) the natural cutoff in quantum field theory? This is based on the important observation made by Sakharov that the second element of Einstein’s theory can follow from the first one due to the quantum fluctuations of the relativistic matter field [@Sakharov]. He showed that vacuum fluctuations of the matter field induce the curvature term in the action for $g_{\mu\nu}$. One can even argue that the whole Einstein action is induced by vacuum polarization, and thus gravity is not a fundamental force but is determined by the properties of the quantum vacuum.
The magnitude of the induced Newton’s constant is determined by the value of the cut-off: $G^{-1}\sim \hbar
E^2_{\rm cutoff}/c^5$. Thus in Sakharov’s gravity, induced by quantum fluctuations, the causal connection between gravity and the cut-off is reversed: the physical high-energy cut-off determines the gravitational constant. The $E^2_{\rm cutoff}$ dependence of the inverse gravitational constant explains why gravity is so weak compared to the other forces, whose running coupling “constants” have only a mild logarithmic dependence on $E_{\rm cutoff}$.
The same cut-off must be applied to the estimation of the cosmological constant, which thus must be of order $E^4_{\rm cutoff}$. But this is in severe contradiction with experiment. This shows that, though the effective theory is appropriate for the calculation of the Einstein curvature term, it is not applicable to the calculation of the vacuum energy: the trans-Planckian physics must be invoked for that.
The Sakharov theory does not explain the first element of the Einstein’s theory: it does not show how the metric field $g_{\mu\nu}$ appears. This can be given only by the fundamental theory of quantum vacuum, such as string theory where the gravity appears as a low-energy mode. The quantum liquid examples also show that the metric field can naturally and in some cases even emergently appear as the low-energy collective mode of the quantum vacuum.
Effective gravity in quantum liquids
------------------------------------
The first element of the Einstein theory of gravity (that the motion of quasiparticles is governed by the effective curved space-time) arises in many condensed matter systems in the low-energy limit. An example is the motion of acoustic phonons in distorted crystal lattice, or in the background flow field of superfluid condensate. This motion is described by the effective acoustic metric [@Unruh; @Vissersonic; @StoneIordanskii; @StoneMomentum]. For this “relativistic matter field” (acoustic phonons with dispersion relation $E=cp$, where $c$ is a speed of sound, simulate relativistic particles) the analog of the equivalence principle is fulfilled. As a result the covariant conservation law in Eq.(\[CovariantConservation\]) does hold for the acoustic mode, if $g_{\mu\nu}$ is replaced by the acoustic metric.
The second element of Einstein’s gravity is not easily reproduced in condensed matter. In general, the dynamics of the acoustic metric $g_{\mu\nu}$ does not obey the equivalence principle, in spite of the Sakharov mechanism of induced gravity. In the existing quantum liquids the Einstein action induced by the quantum fluctuations of the “relativistic matter field” is much smaller than the non-covariant action induced by the “non-relativistic” high-energy component of the quantum vacuum, which is overwhelming in these liquids. Of course, one can find some very special cases where the Einstein action for the effective metric dominates, but this is not the rule.
Nevertheless, in spite of the incomplete analogy with the Einstein theory, the effective gravity in quantum liquids can be useful for the investigation of the cosmological constant problem.
Microscopic ‘Theory of Everything’ in quantum liquids
=====================================================
Microscopic and effective theories
----------------------------------
There are two ways to study quantum liquids:
\(i) The fully microscopic treatment. It can be realized completely (a) by numerical simulations of the many-body problem; (b) analytically for some special models; (c) perturbatively for some special ranges of the material parameters, for example, in the limit of weak interaction between the particles.
\(ii) The phenomenological approach in terms of effective theories. The hierarchy of effective theories corresponds to the low-frequency, long-wavelength dynamics of quantum liquids in different ranges of frequency. Examples of effective theories: the Landau theory of Fermi liquids; the Landau-Khalatnikov two-fluid hydrodynamics of superfluid $^4$He [@Khalatnikov]; the theory of elasticity in solids; the Landau-Lifshitz theory of ferro- and antiferromagnetism; the London theory of superconductivity; the Leggett theory of spin dynamics in the superfluid phases of $^3$He; the effective quantum electrodynamics arising in superfluid $^3$He-A; etc. The latter example indicates that the existing Standard Model of electroweak and strong interactions, and the Einstein gravity too, are phenomenological effective theories of high-energy physics, which describe its low-energy edge, while the microscopic theory of the quantum vacuum is absent.
Theory of Everything for quantum liquid
---------------------------------------
The microscopic “Theory of Everything” for quantum liquids – “a set of equations capable of describing all phenomena that have been observed” [@LaughlinPines] in these quantum systems – is extremely simple. On the “fundamental” level appropriate for quantum liquids and solids, i.e. for all practical purposes, the $^4$He or $^3$He atoms of these quantum systems can be considered as structureless: the $^4$He atoms are the structureless bosons and the $^3$He atoms are the structureless fermions with spin $1/2$. The Theory of Everything for a collection of a macroscopic number $N$ of interacting $^4$He or $^3$He atoms is contained in the non-relativistic many-body Hamiltonian $${\cal H}= -{\hbar^2\over 2m}\sum_{i=1}^N {\partial^2\over
\partial{\bf r}_i^2} +\sum_{i=1}^N\sum_{j=i+1}^N U({\bf
r}_i-{\bf r}_j)~,
\label{TheoryOfEverythingOrdinary}$$ acting on the many-body wave function $\Psi({\bf
r}_1,{\bf r}_2, ... , {\bf r}_i,... ,{\bf r}_j,...)$. Here $m$ is the bare mass of the atom; $U({\bf r}_i-{\bf r}_j)$ is the pair interaction of the bare atoms $i$ and $j$. When written in the second quantized form it becomes the Hamiltonian of the quantum field theory $${\cal H}-\mu {\cal N}=\int d{\bf x}\psi^\dagger({\bf
x})\left[-{\hbar^2\nabla^2\over 2m}
-\mu
\right]\psi({\bf x}) +{1\over 2}\int d{\bf x}d{\bf y}U({\bf x}-{\bf
y})\psi^\dagger({\bf x})
\psi^\dagger({\bf y})\psi({\bf y})\psi({\bf x})
\label{TheoryOfEverything}$$ In $^4$He, the bosonic quantum field $\psi({\bf x})$ is the annihilation operator of the $^4$He atoms. In $^3$He, $\psi({\bf x})$ is the fermionic field and the spin indices must be added. Here ${\cal N}=\int d{\bf
x}~\psi^\dagger({\bf x})\psi({\bf x})$ is the operator of particle number (number of atoms); $\mu$ is the chemical potential – the Lagrange multiplier which is introduced to take into account the conservation of the number of atoms.
Importance of discrete particle number in microscopic theory
------------------------------------------------------------
This is the main difference from the relativistic quantum field theory, where the number of particles is not restricted: particles and antiparticles can be created from the quantum vacuum. As for the number of particles in the quantum vacuum itself, this quantity is simply not determined today. At the moment we do not know the structure of the quantum vacuum and its particle content. Moreover, it is not clear whether it is possible to describe the vacuum in terms of some discrete elements (bare particles) whose number is conserved. On the contrary, in quantum liquids the analog of the quantum vacuum – the ground state of the quantum liquid – has a known number of atoms. If $N$ is big, this difference between the two quantum field theories disappears. Nevertheless, the mere fact that there is a conservation law for the number of particles comprising the vacuum leads to a definite conclusion on the value of the relevant vacuum energy. Also, as we shall see below in Sec. \[EffectsDiscreteNumberN\], the discreteness of the quantum vacuum can be revealed in the mesoscopic Casimir effect.
Enhancement of symmetry in the low energy corner. Appearance of effective theory.
---------------------------------------------------------------------------------
The Hamiltonian (\[TheoryOfEverything\]) has a very restricted number of symmetries: it is invariant under translations and $SO(3)$ rotations in 3D space; there is a global $U(1)$ group originating from the conservation of the number of atoms: ${\cal H}$ is invariant under the gauge rotation $\psi({\bf x})\rightarrow e^{i\alpha}\psi({\bf
x})$ with constant $\alpha$; in $^3$He in addition, if the relatively weak spin-orbit coupling is neglected, ${\cal
H}$ is also invariant under separate rotations of spins, $SO(3)_S$. At low temperature the phase transition to the superfluid or to the quantum crystal state occurs where some of these symmetries are broken spontaneously. For example, in the $^3$He-A state all of these symmetries, except for the translational symmetry, are broken.
However, when the temperature and energy decrease further the symmetry becomes gradually enhanced in agreement with the anti-grand-unification scenario [@FrogNielBook; @Chadha]. At low energy the quantum liquid or solid is well described in terms of a dilute system of quasiparticles. These are bosons (phonons) in $^4$He and fermions and bosons in $^3$He, which move in the background of the effective gauge and/or gravity fields simulated by the dynamics of the collective modes. In particular, phonons propagating in the inhomogeneous liquid are described by the effective Lagrangian $$L_{\rm effective}=\sqrt{-g}g^{\mu\nu}\partial_\mu\alpha
\partial_\nu\alpha~,
\label{LagrangianSoundWaves}$$ where $g^{\mu\nu}$ is the effective acoustic metric provided by inhomogeneity and flow of the liquid [@Unruh; @Vissersonic; @StoneMomentum].
These quasiparticles serve as the elementary particles of the low-energy effective quantum field theory. They represent the analogue of matter. The type of the effective quantum field theory – the theory of interacting fermionic and bosonic quantum fields – depends on the universality class of the fermionic condensed matter (see review [@PhysRepRev]). The superfluid $^3$He-A, for example, belongs to the same universality class as the Standard Model. The effective quantum field theory describing the low energy phenomena in $^3$He-A, contains chiral “relativistic” fermions. The collective bosonic modes interact with these “elementary particles” as gauge fields and gravity. All these fields emergently arise together with the Lorentz and gauge invariances and with elements of the general covariance from the fermionic Theory of Everything in Eq.(\[TheoryOfEverything\]).
The emergent phenomena do not depend much on the details of the Theory of Everything [@LaughlinPines], in our case on the details of the pair potential $U({\bf x}-{\bf y})$. Of course, the latter determines the universality class in which the system enters at low energy. But once the universality class is established, the physics remains robust to deformations of the pair potential. The details of $U({\bf x}-{\bf y})$ influence only the “fundamental” parameters of the effective theory (“speed of light”, “Planck” energy cut-off, etc.) but not the general structure of the theory. Within the effective theory the “fundamental” parameters are considered as phenomenological.
Weakly interacting Bose gas
===========================
The quantum liquids are strongly correlated and strongly interacting systems. That is why, though it is possible to derive the effective theory from first principles in Eq.(\[TheoryOfEverything\]), if one has enough computer time and memory, this is a rather difficult task. It is instructive, however, to consider the microscopic theory for some special model potentials $U({\bf x}-{\bf y})$. This allow us to solve the problem completely or perturbatively. In case of the Bose-liquids the proper model is the Bogoliubov weakly interacting Bose gas, which is in the same universality class as a real superfluid $^4$He. Such model is very useful, since it simultaneously covers the low-energy edge of the effective theory, and the high-energy “transPlanckian” physics.
Model Hamiltonian
-----------------
Here we mostly follow the book by Khalatnikov [@Khalatnikov]. In the model of the weakly interacting Bose gas the pair potential in Eq.(\[TheoryOfEverything\]) is weak. As a result most of the particles at $T=0$ are in the Bose condensate, i.e. in the state with momentum ${\bf p}=0$. The Bose condensate is characterized by the nonzero vacuum expectation value (vev) of the particle annihilation operator at ${\bf p}=0$: $$\left<a_{{\bf p}=0}\right>=\sqrt{N_0}e^{i\Phi}~, ~\left<a^\dagger_{{\bf
p}=0}\right>=\sqrt{N_0}e^{-i\Phi}.
\label{VEV}$$ Here $N_0$ is the particle number in the Bose condensate, and $\Phi$ is the phase of the condensate. The vacuum is degenerate with respect to the global $U(1)$ rotation of the phase. In what follows we consider the particular vacuum state with $\Phi=0$.
If there is no interaction between the particles (an ideal Bose gas), all the particles at $T=0$ are in the Bose condensate, $N_0=N$. A small interaction induces a small fraction of particles which are not in the condensate; these particles have small momenta ${\bf p}$. As a result only the zero Fourier component of the pair potential is relevant, and Eq.(\[TheoryOfEverything\]) takes the form: $$\begin{aligned}
{\cal H}-\mu {\cal N}=-\mu N_0+{N_0^2U\over 2V}
+\label{TheoryOfEverythingBoseGas1}\\
\sum_{{\bf p}\neq 0}\left({p^2\over
2m}- \mu\right)a^\dagger_{\bf p}a_{\bf p}+{N_0U\over
2V}\sum_{{\bf p}\neq 0}\left(2a^\dagger_{\bf p}a_{\bf
p}+2a^\dagger_{-{\bf p}}a_{-{\bf p}}+a_{{\bf p}}a_{-{\bf
p}}+a^\dagger_{{\bf p}}a^\dagger_{-{\bf p}}\right)
\label{TheoryOfEverythingBoseGas2}\end{aligned}$$ Here $N_0=a^\dagger_0a_0=a_0a^\dagger_0=
a^\dagger_0a^\dagger_0=a_0a_0$ is the particle number in the Bose condensate (we neglect quantum fluctuations of the operator $a_0$ and treat $a_0$ as a $c$-number); $U$ is the matrix element of the pair interaction at zero momenta ${\bf p}$ of the particles. Minimization of the main part of the energy in Eq.(\[TheoryOfEverythingBoseGas1\]) over $N_0$ gives $UN_0/V=\mu$, and one obtains: $$\begin{aligned}
{\cal H}-\mu {\cal N}=-{\mu^2 \over 2U}V
+\sum_{{\bf p}\neq 0}{\cal H}_{\bf p}
\label{TheoryOfEverythingBoseGas3}\\ {\cal H}_{\bf p} =
{1\over 2}\left({p^2\over 2m}+\mu\right)
\left(a^\dagger_{\bf p}a_{\bf p}+a^\dagger_{-{\bf
p}}a_{-{\bf p}}\right)+{\mu\over 2}\left( a_{{\bf
p}}a_{-{\bf p}}+a^\dagger_{{\bf p}}a^\dagger_{-{\bf
p}}\right)
\label{TheoryOfEverythingBoseGas4}\end{aligned}$$
Pseudorotation – Bogoliubov transformation
------------------------------------------
At each ${\bf p}$ the Hamiltonian can be diagonalized using the following consideration. The operators $${\cal L}_3={1\over 2}(a^\dagger_{{\bf p}} a_{{\bf p}}+
a^\dagger_{-{\bf p}} a_{-{\bf p}}+1)~~,~~ {\cal L}_1+i
{\cal L}_2= a^\dagger_{{\bf p}} a^\dagger_{-{\bf p}}~~,~~
{\cal L}_1-i {\cal L}_2=a_{{\bf p}} a_{-{\bf p}}
\label{PseudoRotationOperators}$$ form the group of pseudorotations, $SU(1,1)$ (the group which conserves the form $x_1^2+x_2^2-x_3^2$), with the commutation relations: $$[{\cal L}_3, {\cal L}_1]=i {\cal L}_2~,~[ {\cal L}_2,{\cal L}_3]=i {\cal
L}_1~,~[
{\cal L}_1, {\cal L}_2]=-i{\cal L}_3~,
\label{PseudoRotationOperatorsCommutations}$$ In terms of these pseudorotation operators the Hamiltonian in Eq.(\[TheoryOfEverythingBoseGas4\]) has the form $${\cal H}_{\bf p} = \left({p^2\over
2m}+\mu\right){\cal L}_3 +\mu {\cal L}_1- {1\over 2}\left({p^2\over
2m}+\mu\right)~.
\label{TheoryOfEverythingBoseGas5}$$ In case of the nonzero phase $\Phi$ of the Bose condensate one has $${\cal H}_{\bf p} = \left({p^2\over
2m}+\mu\right){\cal L}_3 +\mu\cos(2\Phi)~{\cal
L}_1+\mu\sin(2\Phi)~ {\cal L}_2- {1\over 2}\left({p^2\over
2m}+\mu\right)~.
\label{BoseGasNonZeroPhi}$$
The diagonalization of this Hamiltonian is achieved first by a rotation by the angle $2\Phi$ around the axis $z$, and then by the Lorentz transformation – a pseudorotation around the axis $y$: $${\cal L}_3=\tilde {\cal L}_3{\rm ch}\chi + \tilde {\cal L}_1{\rm sh}\chi ~~,~~
{\cal L}_1 =\tilde {\cal L}_1{\rm ch}\chi +\tilde {\cal L}_3{\rm sh}\chi
~~,~~{\rm th}\chi ={\mu\over {p^2\over
2m}+\mu}~~.
\label{Pseudorotation}$$ This corresponds to Bogoliubov transformation and gives the following diagonal Hamiltonian: $$\begin{aligned}
{\cal H}_{\bf p} =- {1\over 2}\left({p^2\over
2m}+\mu\right)+\tilde {\cal L}_3 \sqrt{\left({p^2\over
2m}+\mu\right)^2 -\mu^2}=
\label{DiagonalizedHamiltonian1}\\
= {1\over 2} E({\bf p})\left(\tilde a^\dagger_{\bf p}\tilde a_{\bf p}+
\tilde a^\dagger_{-{\bf p}}\tilde a_{-{\bf p}}\right)
+ {1\over
2}
\left(E({\bf p}) -\left({p^2\over 2m}+\mu\right) \right)~,
\label{DiagonalizedHamiltonian2}\end{aligned}$$ where $\tilde a_{\bf p}$ is the operator of annihilation of quasiparticles, whose energy spectrum $E({\bf p})$ is $$E({\bf p})=\sqrt{\left({p^2\over
2m}+\mu\right)^2 -\mu^2}=\sqrt{p^2c^2 +{p^4\over
4m^2}}~~,~~c^2={\mu\over m}~.
\label{BogoliubovEnergySpectrum}$$
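As a small numerical illustration (added, not in the original), the Python sketch below checks that the two forms of the Bogoliubov spectrum in Eq.(\[BogoliubovEnergySpectrum\]) coincide and that the spectrum interpolates between the phonon regime $E\simeq cp$ at $p\ll mc$ and the free-particle regime $E\simeq p^2/2m$ at $p\gg mc$; the values $m=\mu=1$ are arbitrary illustrative units with $\hbar=1$.

```python
import numpy as np

m, mu = 1.0, 1.0                       # illustrative units, hbar = 1
c = np.sqrt(mu / m)                    # speed of sound, c^2 = mu/m

p = np.logspace(-2, 2, 5)              # momenta spanning the crossover at p ~ mc

E1 = np.sqrt((p**2 / (2*m) + mu)**2 - mu**2)   # first form of Eq.(BogoliubovEnergySpectrum)
E2 = np.sqrt(p**2 * c**2 + p**4 / (4*m**2))    # second form
assert np.allclose(E1, E2)                     # the two forms are identical

for pi, Ei in zip(p, E1):
    print("p=%8.2f   E=%10.4f   cp=%10.4f   p^2/2m=%10.4f" % (pi, Ei, c*pi, pi**2/(2*m)))
# For p << mc, E is close to cp (phonon); for p >> mc, E is close to p^2/2m (free particle).
```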
Vacuum and quasiparticles
-------------------------
The total Hamiltonian now represents the ground state – the vacuum – and the system of quasiparticles $${\cal H}-\mu {\cal N}=\left<{\cal H}-\mu {\cal N}\right>_{\rm vac} +
\sum_{\bf p}
E({\bf p})\tilde a^\dagger_{\bf p}\tilde a_{\bf p}
\label{EnergyBoseGas}$$ The lower the energy, the more dilute the system of quasiparticles and thus the weaker the interaction between them. This description in terms of a vacuum state and a dilute system of quasiparticles is generic for condensed matter systems and is valid even if the interaction of the initial bare particles is strong. The phenomenological effective theory in terms of the vacuum state and quasiparticles was developed by Landau both for Bose and Fermi liquids. Quasiparticles (not the bare particles) play the role of elementary particles in such effective quantum field theories.
In the weakly interacting Bose gas of Eq.(\[BogoliubovEnergySpectrum\]), the spectrum of quasiparticles at low energy (i.e. at $p\ll mc$) is linear, $E=cp$. The linear slope coincides with the speed of sound, which can be obtained from the leading term in the energy: $N(\mu)=-d(E-\mu N)/d\mu=\mu V/U$, $c^2=N(d\mu/dN)/m=\mu/m$. These quasiparticles are phonons – quanta of sound waves. The same quasiparticle spectrum occurs in the real superfluid liquid $^4$He, where the interaction between the bare particles is strong. This shows that the qualitative low-energy properties of the system do not depend on the microscopic (trans-Planckian) physics, which determines only the speed of sound $c$. One can say that weakly and strongly interacting Bose systems belong to the same universality class and thus have the same low-energy properties. One cannot distinguish between the two systems if the observer measures only low-energy effects, since they are described by the same effective theory.
Particles and quasiparticles
----------------------------
It is necessary to distinguish between the bare particles and quasiparticles.
Particles are the elementary objects of the system on the microscopic “trans-Planckian” level; these are the atoms of the underlying liquid ($^3$He or $^4$He atoms). The many-body system of interacting atoms forms the quantum vacuum – the ground state. The nondissipative collective motion of the superfluid vacuum with zero entropy is determined by the conservation laws experienced by the atoms and by their quantum coherence in the superfluid state.
Quasiparticles are the particle-like excitations above this vacuum state; they serve as elementary particles in the effective theory. The bosonic excitations in superfluid $^4$He and the fermionic and bosonic excitations in superfluid $^3$He represent the matter in our analogy. In superfluids they form the viscous normal component responsible for the thermal and kinetic low-energy properties of superfluids. Fermionic quasiparticles in $^3$He-A are chiral fermions, which are the counterparts of the leptons and quarks in the Standard Model [@PhysRepRev].
Galilean transformation for particles and quasiparticles {#GalileanTransformation}
--------------------------------------------------------
The quantum liquids considered here are essentially nonrelativistic: under the laboratory conditions their velocity is much less than the speed of light. That is why they obey with great precision the Galilean transformation law. Under the Galilean transformation to the coordinate system moving with the velocity ${\bf u}$ the superfluid velocity – the velocity of the quantum vacuum – transforms as ${\bf v}_{\rm s}\rightarrow {\bf
v}_{\rm s} + {\bf u}$.
As for the transformational properties of bare particles (atoms) and quasiparticles, it appears that they are essentially different. Let us start with bare particles. If ${\bf p}$ and $E({\bf p})$ are the momentum and energy of the bare particle (atom with mass $m$) measured in the system moving with velocity ${\bf u}$, then from the Galilean invariance it follows that its momentum and energy measured by the observer at rest are correspondingly $$\tilde{\bf p} ={\bf p} + m {\bf u}~,~\tilde E({\bf p})
=E({\bf p}+m{\bf u})= E({\bf p}) + {\bf p}\cdot {\bf u} +
{1\over 2}m {\bf u}^2~.
\label{ParticleTransformationLaw}$$ This transformation law contains the mass $m$ of the bare atom.
However, as far as quasiparticles are concerned, one can expect that such a characteristic of the microscopic world as the bare mass $m$ cannot enter their transformation law. This is because quasiparticles in the effective low-energy theory have no information on the trans-Planckian world of the bare atoms comprising the vacuum state. All the information on the quantum vacuum which a low-energy quasiparticle has is encoded in the effective metric $g_{\mu\nu}$. Since the mass $m$ must drop out from the transformation law for quasiparticles, we expect that the momentum of a quasiparticle is invariant under the Galilean transformation: ${\bf p}\rightarrow {\bf p}$, while the quasiparticle energy is simply Doppler shifted: $E({\bf p})\rightarrow
E({\bf p}) +{\bf p}\cdot {\bf u}$. Such a transformation law allows us to write the energy of quasiparticle in the moving superfluid vacuum. If ${\bf p}$ and $E({\bf p})$ are the quasiparticle momentum and energy measured in the coordinate system where the superfluid vacuum is at rest (i.e. ${\bf v}_{\rm s}=0$, we call this frame the superfluid comoving frame), then its momentum and energy in the laboratory frame are $$\tilde {\bf p}={\bf p}~,~\tilde E({\bf p})=E({\bf p})+{\bf p}\cdot {\bf
v}_{\rm s}~.
\label{DopplerShiftInSuperfluids}$$
The difference in the transformation properties of bare particles and quasiparticles comes from their different status. While the momentum and energy of bare particles are determined in “empty” space-time, the momentum and energy of quasiparticles are counted from that of the quantum vacuum. This difference can be easily visualized if one considers the spectrum of quasiparticles in the weakly interacting Bose gas in Eq.(\[BogoliubovEnergySpectrum\]) in the limit of large momentum $p\gg mc$, when the energy spectrum of quasiparticles approaches that of particles, $E\rightarrow
p^2/2m$. In this limit the difference between particles and quasiparticles disappears, and at first glance one may expect that a quasiparticle should obey the same transformation law under the Galilean transformation as a bare isolated particle. To add more confusion, let us consider an ideal Bose gas of noninteracting bare particles, where quasiparticles have exactly the same spectrum as particles. Why, then, are the transformation properties so different for them?
The ground state of the ideal Bose gas has zero energy and zero momentum in the reference frame where the Bose condensate is at rest (the superfluid comoving reference frame). In the laboratory frame the condensate momentum and energy are correspondingly $$\begin{aligned}
\left<{\cal P}\right>_{{\rm vac}}=Nm{\bf v}_{\rm s}~,
\label{GalileanParticleParticleP}\\
\left<{\cal H}\right>_{{\rm vac}}=N{m{\bf v}_{\rm s}^2\over 2} ~.
\label{GalileanParticleParticleE}\end{aligned}$$ The state with one quasiparticle is the state in which $N-1$ particles have zero momenta, ${\bf
p}=0$, while one particle has nonzero momentum ${\bf
p}\neq 0$. In the comoving reference frame the momentum and energy of such state with one quasiparticle are correspondingly $\left<{\cal P}\right>_{{\rm vac}+1qp}={\bf p}$ and $\left<{\cal H}\right>_{{\rm vac}+1qp}=E({\bf p})=p^2/2m$. In the laboratory frame the momentum and energy of the system are obtained by Galilean transformation $$\begin{aligned}
\left<{\cal P}\right>_{{\rm vac}+1qp}=(N-1)m{\bf
v}_{\rm s} + ({\bf p} + m{\bf v}_{\rm s})= \left<{\cal P}\right>_{{\rm vac}}+
{\bf p} ~,
\label{GalileanParticleQuasiparticleP}\\
\left<{\cal H}\right>_{{\rm vac}+1qp}=(N-1){m{\bf v}_{\rm s}^2\over 2} +{({\bf
p} + m{\bf v}_{\rm s})^2\over 2m}=\left<{\cal H}\right>_{{\rm vac}}+ E({\bf
p})+{\bf p}\cdot {\bf v}_{\rm s}~.
\label{GalileanParticleQuasiparticleE}\end{aligned}$$ Since the energy and the momentum of quasiparticles are counted from that of the quantum vacuum, the transformation properties of quasiparticles are different from the Galilean transformation law. The part of the Galilean transformation, which contains the mass of the atom, is absorbed by the Bose-condensate which represents the quantum vacuum.
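The bookkeeping in Eqs.(\[GalileanParticleQuasiparticleP\]) and (\[GalileanParticleQuasiparticleE\]) is simple enough to be verified explicitly; the following Python fragment (an added illustration, with arbitrary values of $N$, $m$, ${\bf p}$ and ${\bf v}_{\rm s}$) does so for the ideal Bose gas, confirming that the quasiparticle momentum is unchanged while its energy acquires the Doppler shift ${\bf p}\cdot{\bf v}_{\rm s}$ of Eq.(\[DopplerShiftInSuperfluids\]).

```python
import numpy as np

N, m = 10, 2.0                  # number of atoms and bare mass (illustrative values)
p  = np.array([0.7, 0.0, 0.0])  # quasiparticle momentum in the comoving frame
vs = np.array([0.3, 0.1, 0.0])  # superfluid (condensate) velocity

E_qp = p @ p / (2*m)            # ideal-gas quasiparticle energy E(p) = p^2/2m

# Laboratory-frame momentum and energy of (N-1) condensate atoms plus one excited atom
P_lab = (N - 1) * m * vs + (p + m * vs)
E_lab = (N - 1) * m * (vs @ vs) / 2 + (p + m * vs) @ (p + m * vs) / (2*m)

# Vacuum (condensate) contribution in the laboratory frame
P_vac = N * m * vs
E_vac = N * m * (vs @ vs) / 2

# Quasiparticle part: momentum invariant, energy Doppler-shifted
assert np.allclose(P_lab - P_vac, p)
assert np.isclose(E_lab - E_vac, E_qp + p @ vs)
print("momentum unchanged; energy shifted by p.v_s, as in Eq.(DopplerShiftInSuperfluids)")
```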
Effective metric from Galilean transformation {#EffectiveMetricGalileanTransformation}
---------------------------------------------
The right hand sides of Eqs.(\[GalileanParticleQuasiparticleP\]) and (\[GalileanParticleQuasiparticleE\]) show that the energy spectrum of quasiparticle in the moving superfluid vacuum is given by Eq.(\[DopplerShiftInSuperfluids\]). Such spectrum can be written in terms of the effective acoustic metric: $$(\tilde E-{\bf p}\cdot {\bf v}_{\rm
s})^2=c^2p^2~~, ~~{\rm or}~~ g^{\mu\nu}p_\mu p_\nu=0~.
\label{PhononEnergySpectrumConformal}$$ where the metric has the form: $$\begin{aligned}
g^{00}=-1 ~,~ g^{0i}=- v_{\rm s}^i ~,~
g^{ij}= c^2\delta^{ij} -v_{\rm s}^i v_{\rm s}^j ~,
\label{ContravarianAcousticMetric}\\
g_{00}=- \left(1-{{\bf v}_{\rm s}^2\over c^2}\right) ~,~
g_{0i}=- {v_{{\rm s} i}\over c^2} ~,~ g_{ij}= {1\over
c^2}\delta_{ij} ~,\label{CovarianAcousticMetric}\\
\sqrt{-g}=c^{-3}~.
\label{DeterminantAcousticMetric}\end{aligned}$$ The Eq.(\[PhononEnergySpectrumConformal\]) does not determine the conformal factor. The derivation of the acoustic metric with the correct conformal factor can be found in Refs.[@Vissersonic; @StoneIordanskii; @StoneMomentum].
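As a consistency check (added here, not in the original), one can verify numerically that the contravariant and covariant forms in Eqs.(\[ContravarianAcousticMetric\]) and (\[CovarianAcousticMetric\]) are inverse to each other, that $\sqrt{-g}=c^{-3}$, and that the null condition $g^{\mu\nu}p_\mu p_\nu=0$ is solved by the Doppler-shifted phonon energy $\tilde E=cp+{\bf p}\cdot {\bf v}_{\rm s}$; the numerical values of $c$, ${\bf v}_{\rm s}$ and ${\bf p}$ below are arbitrary.

```python
import numpy as np

c  = 2.0
vs = np.array([0.3, -0.2, 0.5])        # superfluid velocity (arbitrary, |vs| < c)

# Contravariant acoustic metric, Eq.(ContravarianAcousticMetric)
g_up = np.zeros((4, 4))
g_up[0, 0] = -1.0
g_up[0, 1:] = g_up[1:, 0] = -vs
g_up[1:, 1:] = c**2 * np.eye(3) - np.outer(vs, vs)

# Covariant acoustic metric, Eq.(CovarianAcousticMetric)
g_dn = np.zeros((4, 4))
g_dn[0, 0] = -(1.0 - vs @ vs / c**2)
g_dn[0, 1:] = g_dn[1:, 0] = -vs / c**2
g_dn[1:, 1:] = np.eye(3) / c**2

assert np.allclose(g_up @ g_dn, np.eye(4))                 # the two metrics are inverse
assert np.isclose(np.sqrt(-np.linalg.det(g_dn)), c**-3)    # sqrt(-g) = 1/c^3

# Null condition g^{mu nu} p_mu p_nu = 0 with p_mu = (-E, p)
p = np.array([0.4, 1.1, -0.7])
E = c * np.linalg.norm(p) + p @ vs                         # Doppler-shifted phonon energy
p_mu = np.concatenate(([-E], p))
assert np.isclose(p_mu @ g_up @ p_mu, 0.0)
print("acoustic metric checks passed")
```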
Broken Galilean invariance {#Broken Galilean invariance}
--------------------------
The modified transformation law for quasiparticles is a consequence of the fact that the mere presence of the gas or liquid with a nonzero number $N$ of atoms breaks the Galilean invariance. While for the total system, quantum vacuum + quasiparticles, the Galilean invariance is a true symmetry, it is not applicable to the subsystem of quasiparticles if it is considered independently of the quantum vacuum. This is the general feature of a broken symmetry: the vacuum breaks the Galilean invariance. This means that in the Bose gas and in superfluid $^4$He two symmetries are broken: the global $U(1)$ symmetry and the Galilean invariance.
Momentum vs pseudomomentum {#MomentumPseudomomentum}
--------------------------
On the other hand, due to the presence of quantum vacuum, there are two different types of translational invariance at $T=0$ (see detailed discussion in Ref. [@StoneMomentum]): (i) Invariance under the translation of the quantum vacuum with respect to the empty space; (ii) Invariance under translation of quasiparticle with respect to the quantum vacuum.
The operation (i) leaves the action invariant provided that the empty space is homogeneous. The conserved quantity which comes from the translational invariance with respect to the empty space is the momentum. The operation (ii) is a symmetry operation if the quantum vacuum is homogeneous; this symmetry gives rise to the pseudomomentum. Accordingly, the bare particles in empty space are characterized by the momentum, while quasiparticles – excitations of the quantum vacuum – are characterized by the pseudomomentum. That is why the transformation properties of the momentum of particles in Eq.(\[ParticleTransformationLaw\]) and of quasiparticles in Eq.(\[DopplerShiftInSuperfluids\]) are different.
The Galilean invariance is a symmetry of the underlying microscopic physics of atoms in empty space. It is broken and fails to work for quasiparticles. Instead, it produces the transformation law in Eq.(\[DopplerShiftInSuperfluids\]), in which the microscopic quantity – the mass $m$ of the bare particles – drops out. This is an example of how the memory of the microscopic physics is erased in the low-energy corner. Furthermore, when the low-energy corner is approached and the effective field theory emerges, these modified transformations gradually become part of the more general coordinate transformations appropriate for the Einstein theory of gravity.
Vacuum energy of weakly interacting Bose gas
--------------------------------------------
The vacuum energy of the Bose gas as a function of the chemical potential $\mu$ is $$\left<{\cal H}-\mu {\cal N}\right>_{\rm vac}= -{\mu^2
\over 2U}V+ {1\over 2}\sum_{{\bf p}}\left(E({\bf p}) -
{p^2\over 2m}-mc^2 +{m^3c^4\over p^2}\right)
\label{VacuumEnergyBoseGas}$$ The last term in the round brackets is added to take into account the perturbative correction to the matrix element $U$ [@Khalatnikov]. If the total number of particles is fixed, the corresponding vacuum energy is a function of $N$: $$\begin{aligned}
\left<{\cal H}\right>_{\rm vac}=E_{\rm
vac}(N)={1\over 2}Nmc^2+
\label{VacuumEnergyBoseGas2}\\
{1\over
2}\sum_{{\bf p}}\left(E({\bf p}) - {p^2\over 2m}-mc^2
+{m^3c^4\over p^2}\right)
\label{VacuumEnergyBoseGas1}\end{aligned}$$
Inspection of the vacuum energy shows that it does contain the zero-point energy of the phonon field, ${1\over 2}\sum_{{\bf p}}E({\bf p})$. This divergent term is balanced by three counterterms in Eq.(\[VacuumEnergyBoseGas1\]). They come from the microscopic physics (they explicitly contain the microscopic parameter – the mass $m$ of the atom). This regularization, which naturally arises in the microscopic physics, remains completely obscure within the effective theory. After the regularization, the contribution of the zero-point energy of the phonon field in Eq.(\[VacuumEnergyBoseGas1\]) becomes $${1\over
2}\sum_{{\bf p}}E({\bf p})$. This divergent term is balanced by three counterterms in Eq.(\[VacuumEnergyBoseGas1\]). They come from the microscopic physics (they explicitly contain the microscopic parameter – the mass $m$ of atom). This regularization, which naturally arises in the microscopic physics, is absolutely unclear within the effective theory. After the regularization, the contribution of the zero point energy of the phonon field in Eq.(\[VacuumEnergyBoseGas1\]) becomes $${1\over
2}\sum_{{\bf p}~{\rm reg}}E({\bf p})={1\over
2}\sum_{{\bf p}}E({\bf p}) - {1\over
2}\sum_{{\bf p}}\left( {p^2\over 2m}+mc^2 -{m^3c^4\over
p^2}\right)={8\over 15\pi^2} Nmc^2 {m^3c^3\over n\hbar^3}~,
\label{ZerPointContribution}$$ where $n=N/V$ is particle density in the vacuum. Thus the total vacuum energy $$\begin{aligned}
E_{\rm vac}(N)\equiv \epsilon (n)~V=
\label{VacuumEnergyBoseGas3}\\
{1\over 2}Vmc^2
\left(n+{16\over 15\pi^2} {m^3c^3\over \hbar^3} \right)=
\label{VacuumEnergyBoseGas4}\\
V\left({1\over 2}Un^2+{8\over 15\pi^2 \hbar^3} m^{3/2}
U^{5/2} n^{5/2}\right)
\label{VacuumEnergyBoseGas5}\end{aligned}$$ In the weakly interacting Bose gas the contribution of the phonon zero-point motion (the second terms in Eqs.(\[VacuumEnergyBoseGas4\]) and (\[VacuumEnergyBoseGas5\])) is much smaller than the leading contribution to the vacuum energy, which comes from the interaction (the first terms in Eqs.(\[VacuumEnergyBoseGas4\]) and (\[VacuumEnergyBoseGas5\])). The small parameter which regulates the perturbation theory in the above procedure is $mca/\hbar \ll 1$ (where $a$ is the interatomic distance, $a\sim n^{-1/3}$), or equivalently $mU/(\hbar^2a)\ll 1$. The small speed of sound reflects the smallness of the pair interaction $U$.
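The regularization claim in Eq.(\[ZerPointContribution\]) can be checked by direct numerical integration (an added check, not in the original; it uses only the dimensionless variable $x=p/mc$ and standard scientific-Python routines). Converting the sum to an integral, the claim is that $\int_0^\infty x^2\left[x\sqrt{1+x^2/4}-x^2/2-1+1/x^2\right]dx=32/15$, which reproduces the coefficient $8/15\pi^2$ quoted above.

```python
import numpy as np
from scipy.integrate import quad

# Dimensionless integrand of the regularized zero-point sum, Eq.(ZerPointContribution),
# with x = p/(mc):  x^2 * [ x*sqrt(1 + x^2/4) - x^2/2 - 1 + 1/x^2 ]
def f(x):
    return x**3 * np.sqrt(1.0 + 0.25 * x**2) - 0.5 * x**4 - x**2 + 1.0

X = 50.0
I, _ = quad(f, 0.0, X)
I += 2.0 / X - 5.0 / (3.0 * X**3)   # analytic tail: f(x) ~ 2/x^2 - 5/x^4 for x >> 1
print("integral = %.4f" % I)        # close to 32/15
print("32/15    = %.4f" % (32.0 / 15.0))
# Hence (1/2) * Int d^3p/(2 pi hbar)^3 [ E(p) - p^2/2m - mc^2 + m^3c^4/p^2 ]
#     = (8 / (15 pi^2)) * m^4 c^5 / hbar^3,
# i.e. the (16/(15 pi^2)) m^3 c^3 / hbar^3 term of Eq.(VacuumEnergyBoseGas4).
```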
Planck energy scales
--------------------
The microscopic physics also shows that there are two energy parameters, which play the role of the Planck energy scale: $$E_{\rm Planck~1}=mc^2~~,~~E_{\rm
Planck~2}={\hbar c\over a}~.
\label{TwoPlanckScales}$$ The Planck mass, which corresponds to the first Planck scale $E_{\rm Planck~1}$, is the mass $m$ of the Bose particles that comprise the vacuum. The second Planck scale $E_{\rm Planck~2}$ reflects the discreteness of the vacuum: the microscopic parameter which enters this scale is the mean distance between the particles in the vacuum. The second energy scale corresponds to the Debye temperature in solids. In the given case of weakly interacting particles one has $E_{\rm Planck~1}\ll E_{\rm
Planck~2}$, i.e. the distance between the particles in the vacuum is so small that the quantum effects are stronger than the interaction. This is the limit of strong correlations and weak interactions.
Below the first Planck scale $E\ll E_{\rm Planck~1}=mc^2$, the energy spectrum of quasiparticles is linear, which corresponds to the relativistic field theory arising in the low-energy corner. At this Planck scale the “Lorentz” symmetry is violated. The first Planck scale $E_{\rm Planck~1}=mc^2$ also determines the convergence of the sum in Eq.(\[VacuumEnergyBoseGas1\]). In terms of this scale the Eq.(\[VacuumEnergyBoseGas1\]) can be written as $$V {8\over 15\pi^2}\sqrt{-g}E_{\rm Planck~1}^4 ~,
\label{CosmologicalTermBoseGas1}$$ where $g=-1/c^6$ is the determinant of acoustic metric in Eq.(\[DeterminantAcousticMetric\]). This contribution to the vacuum energy has the same structure as the cosmological term in Eq.(\[VacuumEnergyPlanck2\]). However, the leading term in the vacuum energy, Eq.(\[VacuumEnergyBoseGas2\]), is higher and is determined by both Planck scales: $${1\over 2}V\sqrt{-g}E_{\rm Planck~2}^3E_{\rm Planck~1} ~.
\label{CosmologicalTermBoseGas2}$$
Vacuum pressure and cosmological constant
-----------------------------------------
The relevant vacuum energy of the grand ensemble of particles is the thermodynamic potential at fixed chemical potential: $\left<{\cal H}-\mu {\cal N}\right>_{\rm vac}$. It is related to the pressure of the liquid as (see the proof of this thermodynamic relation below, Eq.(\[VacuumEnergyPressure\])) $$P= -{1\over V} \left<{\cal H}-\mu {\cal N}\right>_{\rm vac} ~.
\label{ThermpPotVsPressure}$$ Such relation between pressure and energy is similar to that in Eq.(\[EquationOfState\]) for the equation of state of the relativistic quantum vacuum, which is described by the cosmological constant.
This vacuum energy for the weakly interacting Bose gas is given by $$\left<{\cal H}-\mu {\cal N}\right>_{\rm vac}= {1\over
2}V\sqrt{-g}\left(-E_{\rm Planck~2}^3E_{\rm Planck~1}
+{16\over 15\pi^2}E_{\rm Planck~1}^4\right)~.
\label{ThermpPotEnergyBoseGas}$$ The two terms in Eq.(\[ThermpPotEnergyBoseGas\]) represent two contributions to the vacuum pressure in the weakly interacting Bose gas. The zero-point energy of the phonon field, the second term in Eq.(\[ThermpPotEnergyBoseGas\]), which coincides with Eq.(\[ZerPointContribution\]), does lead to a negative vacuum pressure, as expected from the effective theory. However, the magnitude of this negative pressure is smaller than the positive pressure coming from the microscopic “trans-Planckian” degrees of freedom (the first term in Eq.(\[ThermpPotEnergyBoseGas\]), which is provided by the repulsive interaction of the atoms). Thus the weakly interacting Bose gas can exist only under positive external pressure.
Quantum liquid {#Quantum liquid}
==============
Real liquid $^4$He {#RealLiquidHelium}
------------------
In the real liquid $^4$He the interaction between the particles (atoms) is not small. It is a strongly correlated and strongly interacting system, where the two Planck scales are of the same order, $mc^2\sim \hbar c/a$. This means that the interaction energy and the energy of zero-point motion of the atoms are of the same order. This is not a coincidence but reflects the stability of the liquid state. Each of the two energies depends on the particle density $n$. One can find the value of $n$ at which the two contributions to the vacuum pressure compensate each other. This means that the system can be in equilibrium even at zero external pressure, $P=0$, i.e. the quantum liquid can exist as a completely autonomous isolated system without any interaction with the environment. This is what we must expect from the quantum vacuum in cosmology, since there is no external environment for the vacuum.
In the case of a collection of a big but finite number $N$ of $^4$He atoms at $T=0$, the atoms do not fly apart, as happens for gases, but are held together to form a droplet of liquid with a finite mean particle density $n$. This density $n$ is fixed by the attractive interatomic interaction and the repulsive zero-point oscillations of the atoms, only a part of this zero-point motion being described in terms of the zero-point energy of the phonon mode.
The only macroscopic quantity which characterizes the homogeneous stationary liquid at $T=0$ is the mean particle density $n$. The vacuum energy density is the function of $n$ $$\epsilon(n)={1\over V}\left<{\cal H}\right>_{\rm vac}~,
\label{EnergyOnDensity}$$ and this function determines the equation of state of the liquid. The relevant vacuum energy density – the density of the thermodynamic potential of grand ensemble $$\tilde\epsilon(n)=\epsilon(n) -\mu n={1\over V}\left<{\cal H}-\mu {\cal
N}\right>_{\rm vac}~.
\label{ThermodynamicPotentialOnDensity}$$ Since the particle number $N=nV$ is conserved, $\tilde\epsilon(n)$ is the right quantity which must be minimized to obtain the equilibrium state of the liquid at $T=0$ (the equilibrium vacuum). The chemical potential $\mu$ plays the role of the Lagrange multiplier responsible for the conservation of bare atoms. Thus an equilibrium number of particles $n_0(\mu)$ is obtained from equation: $${d\tilde\epsilon\over dn}=0~,~{\rm or} ~~{d\epsilon\over dn}=\mu~.
\label{EquationForEquilibriumDensity}$$ Here we discuss only the spatially homogeneous ground state, i.e. with spatially homogeneous $n$, since we know that the ground state of helium at $T=0$ is homogeneous: it is a uniform liquid, not a crystal.
From the definition of the pressure, $$P=-{d(V\epsilon(N/V))\over dV}=-\epsilon(n)+n{d\epsilon\over dn}~,
\label{DefinitionOfPressure}$$ and from Eq.(\[EquationForEquilibriumDensity\]) for the density $n$ in equilibrium vacuum one obtains that in equilibrium the vacuum energy density $\tilde\epsilon$ and the vacuum pressure $P$ are related by $$\tilde \epsilon_{\rm vac~eq} = -P_{\rm vac}~.
\label{VacuumEnergyPressure}$$ The thermodynamic relation between the energy and the pressure in the ground state of the quantum liquid, $P=-\tilde \epsilon$, is the same as that obtained for the vacuum energy and pressure from the Einstein cosmological term. This is because the cosmological term also does not contain derivatives.
Close to the equilibrium state one can expand the vacuum energy in terms of deviations of particle density from its equilibrium value. Since the linear term disappears due to the stability of the superfluid vacuum, one has $$\tilde\epsilon(n)\equiv \epsilon(n)-\mu n=-P_{\rm vac} +{1\over 2} {m
c^2\over n_0(\mu)}
(n-n_0(\mu))^2~.
\label{VacuumCloseToEquil}$$
Gas-like vs liquid-like vacuum {#GasLiquidVacuum}
------------------------------
It is important that the vacuum of real $^4$He is not gas-like but liquid-like, i.e. it can be in equilibrium at $T=0$ without interaction with the environment. Such a property of the collection of atoms at $T=0$ is determined by the sign of the chemical potential, counted from the energy of an isolated $^4$He atom: $\mu$ is positive in the weakly interacting Bose gas, but negative in real $^4$He, where $\mu\sim -7$ K [@Woo].
Due to the negative $\mu$ the isolated atoms are collected together forming the liquid droplet which is self sustained without any interaction with the outside world. If the droplet is big enough, so that the surface tension can be neglected compared to the volume effects, the pressure in the liquid is absent, $P_{\rm vac}=0$, and thus the vacuum energy density $\tilde\epsilon$ is zero in equilibrium: $$\tilde\epsilon_{\rm vacuum~of~self-sustaining~system} \equiv 0~.
\label{ZeroVacuumEnergy}$$ This condition cannot be fulfilled for gas-like states for which $\mu$ is positive and thus they cannot exist without an external pressure.
Model liquid state
------------------
It is instructive to discuss some model energy density $\epsilon(n)$ describing a stable isolated liquid at $T=0$. Such a model must satisfy the following conditions: (i) $\epsilon(n)$ must be attractive (negative) at small $n$ and repulsive (positive) at large $n$ to provide an equilibrium density of the liquid at intermediate $n$; (ii) the chemical potential must be negative to prevent evaporation; (iii) the liquid must be locally stable, i.e. the eigenfrequencies of the collective modes must be real.
All these conditions can be satisfied if we modify Eq.(\[VacuumEnergyBoseGas5\]) in the following way. Let us change the sign of the first term, describing the interaction, and leave the second term, coming from vacuum fluctuations, intact, assuming that it is valid even at high particle density. Due to the attractive interaction at low density the Bose gas collapses, forming the liquid state. Of course, this is a rather artificial construction, but it qualitatively describes the liquid state. So we come to the following model: $$\epsilon(n)= -{1\over 2}\alpha n^2+{2\over 5}\beta n^{5/2}~,
\label{ModelEnergy}$$ though, in addition to $\alpha$ and $\beta$, one can use also the exponents of $n$ as the fitting parameter. An equilibrium particle density in terms of chemical potential is obtained from the minimization of the relevant vacuum energy $\tilde\epsilon=\epsilon -\mu n$ over $n$: $${d\epsilon\over dn}=\mu ~\rightarrow~~ -\alpha n_0 +\beta n_0^{3/2}=\mu
\label{ModelEquilibrium}$$ The equation of state of such a liquid is $$P(n_0)=-\left(\epsilon(n_0) -\mu n_0\right)= -{1\over
2}\alpha n_0^2+{3\over 5}\beta n_0^{5/2}
\label{PressureVsDensity}$$ This equation of state allows the existence of the isolated liquid droplet, for which an external pressure is zero, $P=0$. The equilibrium density, chemical potential and speed of sound in the isolated liquid are $$\begin{aligned}
n_0(P=0)= \left({5\alpha\over 6\beta}\right)^2 ~,
\label{EquilibriumDensityAtZeroP}\\
\mu(P=0)=-{1\over 6} n_0 \alpha ~,
\label{EquilibriumChemicalPotentialAtZeroP}\\
~mc^2=\left({dP\over
dn_0}\right)_{P=0}=\left(n{d^2 \epsilon\over
dn^2}\right)_{P=0}={1\over 4} n_0 \alpha =1.5 ~|\mu|~.
\label{EquilibriumSpeedSoundAtZeroP}\end{aligned}$$ This liquid state is stable: the chemical potential $\mu$ is negative, preventing evaporation, while $c^2$ is positive, i.e. the compressibility is positive, which indicates the local stability of the liquid.
The Eq.(\[PressureVsDensity\]) shows that the quantum zero point energy produces a positive contribution to the vacuum pressure, instead of the negative pressure expected from the effective theory and from Eq.(\[ThermpPotEnergyBoseGas\]) for the weakly interacting Bose gas. Let us now recall that in this model we changed the sign of the interaction term, compared to that in the weakly interacting Bose gas. As a result both terms in Eq.(\[ThermpPotEnergyBoseGas\]) have changed sign.
The equilibrium state of the liquid is obtained due to the competition of two effects: attractive interaction of bare atoms (corresponding to the negative vacuum pressure in Eq.(\[PressureVsDensity\])) and their zero point motion which leads to repulsion (corresponding to the positive vacuum pressure in Eq.(\[PressureVsDensity\])). These effects are balanced in equilibrium, that is why the two “Planck” scales in Eq.(\[TwoPlanckScales\]) become of the same order of magnitude.
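A short numerical sketch (added, not part of the original; the couplings $\alpha=\beta=1$ are arbitrary illustrative values) solves the model of Eq.(\[ModelEnergy\]) for the self-sustained state $P=0$ and recovers the equilibrium relations quoted above directly from $\epsilon(n)$.

```python
import numpy as np
from scipy.optimize import brentq

alpha, beta = 1.0, 1.0                                        # illustrative couplings

eps  = lambda n: -0.5 * alpha * n**2 + 0.4 * beta * n**2.5    # epsilon(n), Eq.(ModelEnergy)
mu_n = lambda n: -alpha * n + beta * n**1.5                   # d(epsilon)/dn
P    = lambda n: n * mu_n(n) - eps(n)                         # pressure, Eq.(DefinitionOfPressure)

# Equilibrium density of the self-sustained droplet: P(n0) = 0
n0 = brentq(P, 0.1, 10.0)
print("n0    = %.4f   (5a/6b)^2 = %.4f" % (n0, (5*alpha/(6*beta))**2))
print("mu    = %.4f   -n0*a/6   = %.4f" % (mu_n(n0), -n0*alpha/6))

# mc^2 = dP/dn at n0 (numerical derivative)
h = 1e-6
mc2 = (P(n0 + h) - P(n0 - h)) / (2*h)
print("mc^2  = %.4f   mc^2/|mu| = %.2f" % (mc2, mc2 / abs(mu_n(n0))))
```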
Quantum liquid from Theory of Everything
----------------------------------------
The parameters of liquid $^4$He at $P=0$ have been calculated in the exact microscopic theory, where the many-body wave function of the $^4$He atoms has been constructed using the “Theory of Everything” in Eq.(\[TheoryOfEverything\]) with a realistic pair potential [@Woo]. For $P=0$ one has $$\begin{aligned}
n_0\sim 2\cdot 10^{22}~{\rm
cm}^{-3}~,~\mu={\epsilon(n_0)\over n_0}
\sim -7~{\rm K}~,~c\sim 2.5\cdot 10^4~{\rm cm}/{\rm sec}~,
\label{ParametersMicroscopicTheory}\\
mc^2\sim
30~{\rm K}~, ~ \hbar cn_0^{1/3}\sim 7~{\rm K}
~,\label{PlanckScaleExactTheory}\\
~\tilde\epsilon
\equiv 0~.
\label{AgainVacuumEnergy}\end{aligned}$$ These derived parameters are in good agreement with their experimental values.
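For orientation, the two “Planck” scales of Eq.(\[TwoPlanckScales\]) can be recomputed from the numbers quoted above (an added back-of-the-envelope check; the second scale is somewhat sensitive to whether one uses $n_0^{-1/3}$ or the actual interatomic spacing for $a$, so only the order of magnitude is meaningful).

```python
hbar = 1.054571817e-34    # J s
kB   = 1.380649e-23       # J / K
m4   = 6.6465e-27         # mass of a 4He atom, kg

n0 = 2.0e28               # particle density, m^-3   (2e22 cm^-3, as quoted above)
c  = 250.0                # speed of sound, m/s      (2.5e4 cm/s, as quoted above)

E1 = m4 * c**2                    # first  "Planck" scale, m c^2
E2 = hbar * c * n0**(1.0/3.0)     # second "Planck" scale, hbar c n0^(1/3)
print("m c^2           ~ %.0f K" % (E1 / kB))    # ~ 30 K
print("hbar c n0^(1/3) ~ %.0f K" % (E2 / kB))    # a few K, same order as quoted
```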
Vacuum energy and cosmological constant
=======================================
Nullification of “cosmological constant” in quantum liquid
----------------------------------------------------------
If there is no interaction with the environment, the external pressure $P$ is zero, and thus in equilibrium the vacuum energy density $\epsilon
-\mu n=-P$ in Eqs.(\[ThermpPotVsPressure\]) and (\[VacuumEnergyPressure\]) is also zero. The energy density $\tilde\epsilon$ is the quantity which is relevant for the effective theory: it is this energy density that enters the effective action for the soft variables, including the effective gravity field, which must be minimized to obtain the stationary states of the vacuum and matter fields. Thus $\tilde\epsilon$ is the proper counterpart of the vacuum energy density which is responsible for the cosmological term in the Einstein gravity.
Nullification of both the vacuum energy density and the pressure in the quantum liquid means that $P_{\Lambda}=-\rho_{\Lambda}=0$, i.e. the effective cosmological constant in the liquid is identically zero. Such nullification of the cosmological constant occurs without any fine-tuning or supersymmetry. Note that the supersymmetry – the symmetry between the fermions and bosons – is simply impossible in $^4$He, since there are no fermionic fields in the Bose liquid. The same nullification occurs in Fermi liquids, in superfluid phases of $^3$He, since these are also the quantum liquids with the negative chemical potential [@PhysRepRev]. Some elements of supersymmetry can be found in the effective theory of superfluid $^3$He [@Nambu; @PhysRepRev], but this is certainly not enough to produce the nullification.
Applying this to the quantum vacuum, the mere assumption that the “cosmological liquid” – the vacuum of the quantum field theory – belongs to the class of states, which can exist in equilibrium without external forces, leads to the nullification of the vacuum energy in equilibrium at $T=0$.
Whether this scenario of nullification of cosmological constant can be applied to the cosmological fluid (the physical vacuum) is a question under discussion (see discussion in Ref. [@WhoIsInflaton], where the inflaton field is considered as the analog of the variable $n$ in quantum liquid).
Role of zero point energy of bosonic and fermionic fields
---------------------------------------------------------
The advantage of the quantum liquid is that we know both the effective theory and the fundamental Theory of Everything in Eq. (\[TheoryOfEverything\]). That is why we can compare the two approaches. The microscopic wave function used for microscopic calculations contains, in principle, all the information on the system, including the quantum fluctuations of the low-energy phonon degrees of freedom, which are considered in the effective theory in Eq.(\[VacuumEnergyEffective\]). That is why a separate treatment of the contribution of the low-energy degrees of freedom, described by the effective theory, to the vacuum energy makes no sense: at best it leads to double counting.
The effective theory in quantum Bose liquid contains phonons as elementary bosonic quasiparticles and no fermions. That is why the analogue of Eq. (\[VacuumEnergyPlanck\]) for the vacuum energy produced by the zero point motion of “elementary particles” is $$\rho_\Lambda = {1\over 2V}\sum_{\rm phonons}cp \sim
{1\over c^3} E^4_{\rm Planck}=\sqrt{-g}~E^4_{\rm Planck}~.
\label{VacuumEnergyEffective}$$ Here $g$ is the determinant of the acoustic metric in Eq. (\[DeterminantAcousticMetric\]). The “Planck” energy cut-off can be chosen either as the Debye temperature $E_{\rm Debye}=\hbar c/a=\hbar
cn_0^{1/3}$ in Eq.(\[PlanckScaleExactTheory\]) with $a$ being the interatomic distance, which plays the role of the Planck length; or as $mc^2$ which has the same order of magnitude.
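Explicitly, replacing the sum over phonon modes by an integral with the ultraviolet cut-off $p<p_{\rm Planck}=E_{\rm Planck}/c$ (the precise numerical prefactor is of no importance for the argument, and $\hbar$ is restored here): $${1\over 2V}\sum_{|{\bf p}|<p_{\rm Planck}}cp={c\over 2}\int_{|{\bf p}|<p_{\rm Planck}}{d^3p\over (2\pi\hbar)^3}\,p={c\,p_{\rm Planck}^4\over 16\pi^2\hbar^3}={E^4_{\rm Planck}\over 16\pi^2\hbar^3c^3}~.$$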
The disadvantages of such a naive calculation of the vacuum energy within the effective field theory are: (i) The result depends on the cut-off procedure; (ii) The result depends on the choice of the zero from which the energy is counted: a shift of the zero level leads to a shift in the vacuum energy.
In the microscopic theory these disadvantages are cured: (i) The cut-off is not required; (ii) The relevant energy density, $\tilde\epsilon =\epsilon - \mu n$, does not depend on the choice of zero level: the shift of the energy $\int d^3r \epsilon$ is exactly compensated by the shift of the chemical potential $\mu$.
At $P=0$ the microscopic results for both vacuum energies characterizing the quantum liquid are: $\tilde\epsilon(n_0)=0$, $\epsilon(n_0)=\mu n_0 <0$. Both energies are in severe disagreement with the naive estimation in Eq.(\[VacuumEnergyEffective\]) obtained within the effective theory: $\rho_\Lambda$ in Eq.(\[VacuumEnergyEffective\]) is nonzero in contradiction with $\tilde\epsilon(n_0)=0$; comparing it with $\epsilon(n_0)$ one finds that $\rho_\Lambda$ is of about the same order of magnitude, but has the opposite sign.
This is an important lesson from condensed matter. It shows that the use of the zero point fluctuations of bosonic or fermionic modes in Eq.(\[VacuumEnergyPlanck\]) in the cis-Planckian effective theory is absolutely irrelevant for the calculation of the vacuum energy density. Whatever the low-energy modes are, fermionic or bosonic, for the equilibrium vacuum their contribution is exactly cancelled by the trans-Planckian degrees of freedom, which are not accessible within the effective theory.
Why is equilibrium vacuum not gravitating?
------------------------------------------
We discussed the condensed matter view of the problem of why the vacuum energy is so small, and found that the answer comes from the “fundamental trans-Planckian physics”. In the effective theory of the low energy degrees of freedom the vacuum energy density of a quantum liquid is of order $E_{\rm Planck}^4$ with the corresponding “Planck” energy appropriate for this effective theory. However, from the exact “Theory of Everything” of the quantum liquid, i.e. from the microscopic physics, it follows that the “trans-Planckian” degrees of freedom exactly cancel the relevant vacuum energy without fine tuning. The vacuum energy density is exactly zero, if the following conditions are fulfilled: (i) there are no external forces acting on the liquid; (ii) there are no quasiparticles (matter) in the liquid; (iii) no curvature or inhomogeneity in the liquid; and (iv) no boundaries which give rise to the Casimir effect. Each of these factors perturbs the vacuum state and induces a nonzero value of the vacuum energy density of the order of the energy density of the perturbation, as we shall discuss below.
Let us, however, mention that the actual problem for cosmology is not why the vacuum energy is zero (or very small when it is perturbed), but why the vacuum is not (or almost not) gravitating. These two problems are not necessarily related since in the effective theory the equivalence principle is not a fundamental physical law, and thus does not necessarily hold when applied to the vacuum energy. That is why one cannot exclude the situation in which the vacuum energy is huge, but is not gravitating. Condensed matter provides an example of such a situation too. The weakly interacting Bose gas discussed above is just the proper object. This gas-like substance can exist only at a positive external pressure, and thus it has a negative energy density. The translation to the relativistic language gives a huge vacuum energy, of the order of the Planck energy scale (see Eq.(\[ThermpPotEnergyBoseGas\])). Nevertheless, the effective theory remains the same as for the quantum liquid, and thus even in this situation the equilibrium vacuum, which exists under an external pressure, is not gravitating, and only small deviations from the equilibrium state are gravitating. Just this situation was discussed in Ref. [@WhoIsInflaton].
In condensed matter the effective gravity appears as an emergent phenomenon in the low energy corner. The gravitational field is not fundamental, but is one of the low energy collective modes of the quantum vacuum. This dynamical mode provides the effective metric (the acoustic metric in $^4$He and weakly interacting Bose gas) for the low-energy quasiparticles which serve as an analogue of matter. This gravity does not exist on the microscopic (trans-Planckian) level and appears only in the low energy limit together with the “relativistic” quasiparticles and the acoustics itself. The bare atoms, which live in the “trans-Planckian” world and form the vacuum state there, do not experience the “gravitational” attraction experienced by the low-energy quasiparticles, since the effective gravity simply does not exist at the microscopic scale (we neglect here the real gravitational attraction of the atoms, which is extremely small in quantum liquids). That is why the vacuum energy cannot serve as a source of the effective gravity field: the pure, completely equilibrium, homogeneous vacuum is not gravitating.
On the other hand, the long-wave-length perturbations of the vacuum are within the sphere of influence of the low-energy effective theory; such perturbations can be the source of the effective gravitational field. Deviations of the vacuum from its equilibrium state, induced by different sources discussed below, are gravitating.
Why is the vacuum energy unaffected by the phase transition?
------------------------------------------------------------
It is commonly believed that the vacuum of the Universe underwent one or several broken symmetry phase transitions. Since each of the transitions is accompanied by a substantial change in the vacuum energy, it is not clear why the vacuum energy is (almost) zero after the last phase transition. In other words, why does the true vacuum have zero energy, while the energies of all the other false vacua are enormously large?
What happens in quantum liquids? According to the conventional wisdom, the phase transition, say, to the broken symmetry vacuum state, is accompanied by the change of the vacuum energy, which must decrease in a phase transition. This is what usually follows from the Ginzburg-Landau description of phase transitions. However, let us compare the energy densities of the false and the true vacuum states. Let us assume that the phase transition is of the first order, and the false vacuum is separated from the true vacuum by a large energy barrier, so that it can exist as a (meta)stable state. Since the false vacuum is stable, the Eq. (\[ZeroVacuumEnergy\]) can also be applied to the false vacuum, and one obtains the paradoxical result: in the absence of external forces the energy density of the false vacuum must be the same as the energy density of the true vacuum, i.e. the relevant energy density $\tilde\epsilon$ must be zero for both vacua. Thus the first order phase transition occurs without the change in the vacuum energy.
To add more confusion, note that the Eq. (\[ZeroVacuumEnergy\]) can be applied even to the unstable vacuum which corresponds to a saddle point of the energy functional, if such a vacuum state can live long enough. Thus the vacuum energy density does not change in the second order phase transition either.
There is no paradox, however: after the phase transition to a new state has occurred, the chemical potential $\mu$ will be automatically adjusted to preserve the zero external pressure and thus the zero energy $\tilde\epsilon$ of the vacuum. Thus the relevant vacuum energy is zero before and after the transition, which means that the $T=0$ phase transitions do not disturb the zero value of the cosmological constant. Thus the scenario of the nullification of the vacuum energy suggested by the quantum liquids survives even if the phase transition occurs in the vacuum. The first order phase transition between the superfluid phases $^3$He-A and $^3$He-B at $T=0$ and $P=0$ gives the proper example [@PhysRepRev].
Why is the cosmological constant nonzero?
-----------------------------------------
We now come to another problem in cosmology: Why is the vacuum energy density presently of the same order of magnitude as the energy density of matter $\rho_M$, as is indicated by recent astronomical observations [@Supernovae]? While the relation between $\rho_M$ and $\rho_\Lambda$ seems to depend on the details of trans-Planckian physics, the order of magnitude estimate can be readily obtained. In equilibrium and without matter the vacuum energy is zero. However, the perturbations of the vacuum caused by matter and/or by the inhomogeneity of the metric tensor lead to an imbalance. As a result the deviations of the vacuum energy from zero must be of the order of the perturbations.
Let us consider how this happens in quantum liquids for different types of perturbations, i.e. how the vacuum energy, which is zero at $T=0$ and in complete equilibrium in the absence of external forces, is influenced by different factors, which lead to small but nonzero value of the cosmological constant.
Vacuum energy from finite temperature
-------------------------------------
A typical example derived from quantum liquids is the vacuum energy produced by temperature. Let us consider for example the superfluid $^4$He in equilibrium at finite $T$ without external forces. If $T\ll -\mu$ one can neglect the exponentially small evaporation and consider the liquid as being in equilibrium. The quasiparticles – phonons – play the role of the hot relativistic matter, and their equation of state is $P_{M}=(1/3)\rho_M$, with $\rho_M=(\pi^2/30\hbar^3 c^3) T^4$ and $c$ being the speed of sound [@Khalatnikov]. In equilibrium the pressure caused by thermal quasiparticles must be compensated by the negative vacuum pressure, $P_{\Lambda}=-P_{M}$, to support the zero value of the external pressure, $P=P_{\Lambda}+P_{M}=0$. In this case one has the following nonzero values of the vacuum pressure and vacuum energy density: $$\rho_{\Lambda}=-P_{\Lambda}=P_{M}={1\over
3}\rho_{M}=\sqrt{-g}\frac{\pi^2}{90\hbar^3} T^4~,
\label{VacuumMatterEnergy}$$ where $g=-c^{-6}$ is again the determinant of the acoustic metric. In this example the vacuum energy density $\rho_{\Lambda}$ is positive and always of the order of the energy density of matter. This indicates that the cosmological constant is not actually a constant but is adjusted to the energy density of matter and/or to the other perturbations of the vacuum discussed below.
Vacuum energy from Casimir effect
---------------------------------
Another example of the induced nonzero vacuum energy density is provided by the boundaries of the system. Let us consider a finite droplet of $^4$He with radius $R$. If this droplet is freely suspended then at $T=0$ the vacuum pressure $P_{\Lambda}$ must compensate the pressure caused by the surface tension due to the curvature of the surface. For a spherical droplet one obtains the negative vacuum energy density: $$\rho_{\Lambda}=-P_{\Lambda}=-{2\sigma\over
R} \sim - {E^3_{\rm Debye}\over\hbar^2c^2 R}\equiv -\sqrt{-g}E^3_{\rm
Planck}{\hbar c\over R}~,
\label{VacuumDropletEnergy}$$ where $\sigma$ is the surface tension. This is an analogue of the Casimir effect, in which the boundaries of the system produce a nonzero vacuum pressure. The strong cubic dependence of the vacuum pressure on the “Planck” energy $E_{\rm Planck}\equiv E_{\rm Debye}$ reflects the trans-Planckian origin of the surface tension $\sigma \sim
E_{\rm Debye}/a^2\sim \hbar c/a^3$: it is the energy (per unit area) related to the distortion of atoms in the surface layer of the size of the interatomic distance $a$.
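In more detail, this is just the Laplace pressure balance at the surface of the droplet (with $P_{\rm in}$ and $P_{\rm out}$ denoting the pressures just inside and outside the surface): $$P_{\rm in}=P_{\rm out}+{2\sigma\over R}~,~~P_{\rm out}=0~~\Rightarrow~~P_{\Lambda}=P_{\rm in}={2\sigma\over R}~,~~\rho_{\Lambda}=-P_{\Lambda}=-{2\sigma\over R}~.$$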
Such a term of order $E^3_{\rm Planck}/R$ in the Casimir energy has been considered in Ref. [@Ravndal]. In Ref. [@Bjorken] such a vacuum energy, with $R$ being the size of the horizon, has been connected to the energy of the Higgs condensate in the electroweak phase transition.
This form of the Casimir energy – the surface energy $4\pi R^2\sigma$ normalized to the volume of the droplet – can also serve as an analogue of the quintessence in cosmology [@Quintessence]. Its equation of state is $P_{\sigma} = -(2/3)\rho_{\sigma}$: $$\rho_{\sigma}={4\pi R^2 \sigma \over {4\over 3} \pi
R^3}={3 \sigma\over R}~~,~~P_{\sigma} =-{2\sigma\over R}= -{2\over 3}
\rho_{\sigma}~.
\label{VacuumDropletEnergy2}$$ The equilibrium condition within the droplet can be written as $P=P_{\Lambda}+P_{\sigma}=0$. In this case the quintessence is related to the wall – the boundary of the droplet. In cosmology the quintessence with the same equation of state, $\langle P_{\sigma}\rangle = -(2/3)\langle\rho_{\sigma}\rangle$, is represented by a wall wrapped around the Universe or by a tangled network of cosmic domain walls [@TurnerWhite]. The surface tension of the cosmic walls can be much smaller than the Planck scale.
Vacuum energy induced by texture
--------------------------------
The nonzero vacuum energy density, with a weaker dependence on $E_{\rm
Planck}$, is induced by the inhomogeneity of the vacuum. Let us discuss the vacuum energy density induced by texture in a quantum liquid. We consider here the twist soliton in $^3$He-A, since such texture is related to the Riemann curvature in general relativity [@PhysRepRev]. Within the soliton the field of the $^3$He-A order parameter – the unit vector $\hat{\bf l}$ – has a form $\hat{\bf l}(z)=\hat{\bf
x}\cos\phi(z)+ \hat{\bf y}\sin\phi(z)$. The energy of the system in the presence of the soliton consists of the vacuum energy $\rho_{\Lambda}(\phi)$ and the gradient energy: $$\rho= \rho_{\Lambda}(\phi)+\rho_{\rm grad}~,~
\rho_{\Lambda}(\phi)=\rho_{\Lambda}(\phi=0)+{K\over \xi_D^2 }\sin^2\phi~,~
\rho_{\rm grad}=K(\partial_z\phi)^2~,
\label{VacuumEnergySoliton}$$ where $\xi_D$ is the so-called dipole length [@VollhardtWolfle]. Here we denoted the energy $\tilde\epsilon$ by $\rho$ to make the connection with general relativity, and omitted $\sqrt{-g}$ assuming that $c=1$.
The solitonic solution of the sine-Gordon equation, $\tan(\phi/2)=e^{z/\xi_D}$, gives the following spatial dependence of vacuum and gradient energies: $$\rho_{\Lambda}(z)-\rho_{\Lambda}(\phi=0)=\rho_{\rm grad}(z) = {K\over \xi_D^2
\cosh^2 (z/\xi_D)}~.
\label{VacuumGradientEnergy1}$$ Let us consider for simplicity the 1+1 case. Then the equilibrium state of the whole quantum liquid with the texture can be discussed in terms of partial pressure of the vacuum, $P_{\Lambda}=-\rho_{\Lambda}$, and that of the inhomogeneity, $P_{\rm grad}=\rho_{\rm grad}$. The latter equation of state describes the so called stiff matter in cosmology. In equilibrium the external pressure is zero and thus the positive pressure of the texture (stiff matter) must be compensated by the negative pressure of the vacuum: $$P=P_{\Lambda}(z)+P_{\rm grad}(z)=0~.
\label{VacuumGradientEquilibrium}$$ This equilibrium condition produces another relation between the vacuum and the gradient energy densities $$\rho_{\Lambda}(z)=-P_{\Lambda}(z)=P_{\rm grad}(z) =\rho_{\rm
grad}(z)~.
\label{VacuumGradientEnergy2}$$ Comparing Eq.(\[VacuumGradientEnergy2\]) with Eq. (\[VacuumGradientEnergy1\]) one finds that in equilibrium $$\rho_{\Lambda}(\phi=0)=0~,
\label{AgainZero}$$ i.e., as before, the main vacuum energy density – the energy density of the bulk liquid far from the soliton – is exactly zero if the isolated liquid is in equilibrium. Within the soliton the vacuum is perturbed, and a vacuum energy of the order of the energy of the perturbation is induced. In this case $\rho_{\Lambda}(z)$ is equal to the gradient energy density of the texture.
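The equality of the vacuum and gradient energies within the soliton can also be seen from the first integral of the corresponding sine-Gordon equation: $$\xi_D\,\partial_z\phi=\sin\phi~,~~\tan{\phi\over 2}=e^{z/\xi_D}~\Rightarrow~\sin\phi={1\over \cosh(z/\xi_D)}~,~~K(\partial_z\phi)^2={K\over \xi_D^2}\sin^2\phi={K\over \xi_D^2\cosh^2(z/\xi_D)}~.$$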
The induced vacuum energy density in Eq. (\[VacuumGradientEnergy1\]) is inversely proportional to the square of the size of the region where the field is concentrated: $$\rho_{\Lambda}(R)\sim
\sqrt{-g} E^2_{\rm Planck} \left({\hbar c\over
R}\right)^2~.
\label{EnergyInverseSquare}$$ In the case of the soliton $R\sim
\xi_D$. Similar behavior for the vacuum energy density in the interior region of the Schwarzschild black hole, with $R$ being the Schwarzschild radius, was discussed in Ref.[@ChaplineLaughlin].
In cosmology, the vacuum energy density obeying Eq.(\[EnergyInverseSquare\]) with $R$ proportional to the Robertson-Walker scale factor has been suggested in Ref. [@Chapline], and with $R$ being the size of the horizon, $R= R_{\rm H}$, in Ref. [@Bjorken]. Following the reasoning of Ref. [@Bjorken], one can state that the vacuum energy density related to the phase transition is determined by Eq.(\[EnergyInverseSquare\]) with $R= R_{\rm H}(t)$ at the cosmological time $t$ when this transition (or crossover) occurred. Applying this to, say, the cosmological electroweak transition, where the energy density of the Higgs condensate is of the order of $T_{\rm ew}^4$, one obtains the relation $T_{\rm ew}^2=E_{\rm Planck}
\hbar c/ R_{\rm H}(t=t_{\rm ew})$. It also follows that the entropy within the horizon volume at any given cosmological temperature $T$ is $S_{\rm H}\sim E_{\rm Planck}^3/T^3$ for the radiation-dominated Universe.
Vacuum energy due to Riemann curvature
--------------------------------------
The vacuum energy $\sim R^{-2}$, with $R$ proportional to the Robertson-Walker scale factor, comes also from the Riemann curvature in general relativity. It appears that the gradient energy of a twisted $\hat{\bf l}$-texture is equivalent to the Einstein curvature term in the action for the effective gravitational field in $^3$He-A [@PhysRepRev]: $$-{1\over 16 \pi G} \int d^3r \sqrt{-g}{\cal R} \equiv K
\int d^3r
(\hat{\bf l}\cdot(
\nabla\times\hat{\bf l}))^2 ~.
\label{EinsteinActionHe}$$ Here ${\cal R}$ is the Riemann curvature calculated using the effective metric experienced by fermionic quasiparticles in $^3$He-A $$ds^2=-dt^2 +c_\perp^{-2}(\hat{\bf l}
\times d{\bf r})^2+c_\parallel^{-2}(\hat{\bf l}\cdot d{\bf
r})^2~.
\label{Metric3HeA}$$ The order parameter vector $\hat{\bf l}$ plays the role of the Kasner axis; $c_\parallel$ and $c_\perp$ correspond to the speed of “light” propagating along the direction of $\hat{\bf l}$ and in transverse direction; $c_\parallel\gg c_\perp$.
The analogy between the textural (gradient) energy in $^3$He-A and the curvature in general relativity allows us to interpret the result of the previous section, Eq.(\[VacuumGradientEnergy2\]), in terms of the vacuum energy induced by the curvature of space. It appears that in cosmology this effect can be described within general relativity. We must consider the stationary cosmological model, since the time dependent vacuum energy is certainly beyond the Einstein theory. The stationary Universe was obtained by Einstein in the work where he first introduced the cosmological term [@EinsteinCosmCon]. It is the closed Universe with positive curvature and with matter, where the effect of the curvature is compensated by the cosmological term, which is adjusted in such a way that the Universe remains static. This is just the correct, and probably unique, example of how the vacuum energy is induced by curvature and matter within general relativity.
Let us recall this solution. In the static state of the Universe two equilibrium conditions must be fulfilled: $$\rho=\rho_M+\rho_\Lambda+ \rho_{\cal R}=0~,~ P=P_M+P_\Lambda+ P_{\cal
R}=0~.
\label{CurvatureEnergy}$$ The first equation in (\[CurvatureEnergy\]) reflects the gravitational equilibrium, which requires that the total mass density must be zero: $\rho=\rho_M+\rho_\Lambda+ \rho_{\cal R}=0$ (actually the “gravineutrality” corresponds to the combination of two equations in (\[CurvatureEnergy\]), $\rho+3P=0$, since $\rho+3P$ serves as a source of the gravitational field in the Newtonian limit). This gravineutrality is analogous to the electroneutrality in condensed matter. The second equation in (\[CurvatureEnergy\]) is equivalent to the requirement that for the “isolated” Universe the external pressure must be zero: $P=P_M+P_\Lambda+ P_{\cal R}=0$. In addition to matter density $\rho_M$ and vacuum energy density $\rho_{\Lambda}$, the energy density $\rho_{\cal R}$ stored in the spatial curvature is added: $$\rho_{\cal R}=-{{\cal R}\over 16\pi G}=-{3k\over 8\pi
GR^2}~,~ P_{\cal R} =-{1\over 3}\rho_{\cal R}~,
\label{CurvatureMass}$$ Here $R$ is the cosmic scale factor in the Friedmann-Robertson-Walker metric $$ds^2=-dt^2+ R^2\left({dr^2\over 1-kr^2}
+r^2d\theta^2 +r^2
\sin^2\theta d\phi^2\right)~,
\label{FriedmannRobertsonWalker}$$ where the parameter $k=(-1,0,+1)$ corresponds to an open, flat, or closed Universe, respectively; and we have again removed the factor $\sqrt{-g}$ from the definition of the energy densities.
For the cold Universe with $P_M=0$, the Eqs. (\[CurvatureEnergy\]) give $$\rho_\Lambda= {1\over 2} \rho_M =-{1\over 3}\rho_{\cal R}= {k\over 8\pi
GR^2} ~,
\label{EinsteinSolution1}$$ and for the hot Universe with the equation of state $P_M=(1/3)\rho_M$, $$\rho_\Lambda= \rho_M =-{1\over 2}\rho_{\cal R}= {3k\over 16\pi
GR^2}~.
\label{EinsteinSolution2}$$ Since the energy of matter is positive, the static Universe is possible only for positive curvature, $k=+1$, i.e. for the closed Universe.
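For instance, the cold-Universe result in Eq.(\[EinsteinSolution1\]) follows at once from Eqs.(\[CurvatureEnergy\]) and (\[CurvatureMass\]): with $P_M=0$, $P_\Lambda=-\rho_\Lambda$ and $P_{\cal R}=-(1/3)\rho_{\cal R}$ one has $$P_\Lambda+P_{\cal R}=0~\Rightarrow~\rho_{\cal R}=-3\rho_\Lambda~,~~~\rho_M+\rho_\Lambda+\rho_{\cal R}=0~\Rightarrow~\rho_M=2\rho_\Lambda~,$$ so that $\rho_\Lambda=\rho_M/2=-\rho_{\cal R}/3=k/(8\pi GR^2)$; the hot case is treated in the same way with $P_M=(1/3)\rho_M$.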
This is the unique solution which describes an equilibrium static state of the Universe, where the vacuum energy is induced by matter and curvature. This solution is obtained within the effective theory of general relativity without invoking trans-Planckian physics, and thus it does not depend on the details of that physics.
Necessity of Planck physics for time-dependent cosmology
--------------------------------------------------------
The condensed matter analog of gravity provides a natural explanation of why the cosmological constant is zero to great accuracy, compared with the result based on the naive estimation of the vacuum energy within the effective theory. It also shows how a small effective cosmological constant of the relative order of $10^{-120}$ naturally arises as the response to different perturbations. We considered the time-independent perturbations, where the minimum energy consideration and the equilibrium condition provided the solution of the problem.
For the time-dependent situation, such as the expansion of the Universe, the calculation of the vacuum response is not as simple even in quantum liquids. One must solve self-consistently the coupled dynamical equations for the motion of the vacuum and matter fields. In the case of general relativity this requires an equation of motion for the vacuum energy $\rho_\Lambda$, but this is certainly beyond the effective theory, since the time dependence of $\rho_\Lambda$ violates the Bianchi identities. Probably some extension of general relativity towards a scalar-tensor theory of gravity (such as discussed in Ref. [@StarobinsyPolarski]) will be more relevant for that.
On the other hand the connection to the Planck physics can help to solve other cosmological problems. For example, there is the flatness problem: to arrive at the Universe we see today, the Universe must have begun extremely flat, which means that the parameter $k$ in the Robertson-Walker metric must be zero. In quantum liquids the general Robertson-Walker metric in Eq.(\[FriedmannRobertsonWalker\]) describes the spatially homogeneous space-time as viewed by the low-energy quasiparticles within the effective theory. However, for the external or high-energy observer the quantum liquid is not homogeneous if $k\neq 0$. The same probably happens in gravity: if general relativity is the effective theory, the invariance under coordinate transformations exists only at low energy. For the “Planck” observer the Robertson-Walker metric in Eq.(\[FriedmannRobertsonWalker\]) is viewed as space dependent if $k\neq 0$. That is why the condition that the Universe must be spatially homogeneous not only on the level of the effective theory but also on the fundamental level requires that $k=0$. Thus, if general relativity is the effective theory, the truly homogeneous Universe must be flat.
Effects of discrete number $N$ of particles in the vacuum {#EffectsDiscreteNumberN}
=========================================================
Casimir effect in quantum liquids
---------------------------------
Until now we used the conservation law for the particle number $N$, the number of bare atoms in the quantum vacuum, to derive the nullification of the vacuum energy in the grand ensemble of particles. Now we consider another possible consequence of the discrete nature of the quantum vacuum in quantum liquids. This is related to the Casimir effect.
The attractive force between two parallel metallic plates in vacuum induced by the quantum fluctuations of the electromagnetic field has been predicted by Casimir in 1948 [@Casimir]. The calculation of the vacuum pressure is based on the regularization schemes, which allow one to separate the effect of the low-energy modes of the vacuum from the huge diverging contribution of the high-energy degrees of freedom. There are different regularization schemes: Riemann’s zeta-function regularization; introduction of the exponential cutoff; dimensional regularization, etc. People are happy when different regularization schemes give the same results. But this is not always so (see e.g. [@Kamenshchik; @Ravndal; @Falomir], and in particular the divergencies occurring for spherical geometry in odd spatial dimensions are not cancelled [@Milton; @CasimirForSphere]). This raises some criticism against the regularization methods [@Hagen] or even some doubts concerning the existence and the magnitude of the Casimir effect.
The same type of Casimir effect arises in condensed matter, due to thermal (see the review paper [@Kardar]) and/or quantum fluctuations. When considering the analog of the Casimir effect in condensed matter, the following correspondence must be taken into account, as we discussed above. The ground state of the quantum liquid corresponds to the vacuum of quantum field theory. The low-energy bosonic and fermionic excitations above the vacuum – quasiparticles – correspond to the elementary particles forming the matter. The low energy modes with linear spectrum $E=cp$ can be described by the relativistic-type effective theory. The analog of the Planck energy scale $E_{\rm Planck}$ is determined either by the mass $m$ of the atom of the liquid, $E_{\rm Planck}\equiv
mc^2$, or by the Debye energy, $E_{\rm Planck}\equiv
\hbar c/a$ (see Eq.(\[TwoPlanckScales\])).
The traditional Casimir effect deals with the low energy massless modes. The typical massless modes in a quantum liquid are sound waves. The acoustic field is described by the effective theory in Eq.(\[LagrangianSoundWaves\]) and corresponds to the massless scalar field. The walls provide the boundary conditions for the sound wave modes; usually these are the Neumann boundary conditions. Because of the quantum hydrodynamic fluctuations there must be a Casimir force between two parallel plates immersed in the quantum liquid. Within the effective theory the Casimir force is given by the same equation as the Casimir force acting between conducting walls due to quantum electromagnetic fluctuations. The only modifications are: (i) the speed of light must be substituted by the speed of sound $c$; (ii) the factor $1/2$ must be added, since we have the scalar field of the longitudinal sound wave instead of the two polarizations of light. If $d$ is the distance between the plates and $A$ is their area, then the $d$-dependent contribution to the ground state energy of the quantum liquid at $T=0$ which follows from the effective theory must be $$E_{\rm C}= -{\hbar c\pi^2 A\over 1440 d^3}
\label{CasimirForceSound}$$ Such microscopic quantities of the quantum liquid as the mass of the atom $m$ and the interatomic spacing $a$ do not enter Eq.(\[CasimirForceSound\]) explicitly: the traditional Casimir force is completely determined by the “fundamental” parameter $c$ of the effective scalar field theory.
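The corresponding attractive force per unit area follows by differentiating Eq.(\[CasimirForceSound\]) with respect to the distance: $${F_{\rm C}\over A}=-{1\over A}{dE_{\rm C}\over dd}=-{\pi^2\hbar c\over 480\, d^4}~,$$ which is one half of the familiar electromagnetic result $-\pi^2\hbar c/240 d^4$, in accordance with the counting of polarizations mentioned above.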
Finite-size vs finite-$N$ effect
--------------------------------
However, we shall show that Eq.(\[CasimirForceSound\]) is not always true. We shall give here an example where the effective theory is not able to predict the Casimir force, since the microscopic high-energy degrees of freedom become important. In other words the “trans-Planckian physics” shows up and the “Planck” energy scale explicitly enters the result. In this situation the Planck scale is physical and cannot be removed by any regularization.
Eq.(\[CasimirForceSound\]) gives a finite-size contribution to the energy of the quantum liquid. It is inversely proportional to the linear dimension of the system, $E_{\rm C} \propto 1/R$ for a sphere of radius $R$. However, for us it is important that it is not only a finite-size effect, but also a finite-$N$ effect, $E_{\rm C}\propto N^{-1/3}$, where $N$ is the number of atoms in the liquid in the slab. As distinct from $R$, the quantity $N$ is discrete. Since the main contribution to the vacuum energy is $\propto R^3 \propto N$, the relative correction of order $N^{-4/3}$ means that the Casimir force is a mesoscopic effect. We shall show that in quantum liquids essentially larger mesoscopic effects, of the relative order $N^{-1}$, can be more pronounced. This is a finite-$N$ effect, which reflects the discreteness of the vacuum and cannot be described by the effective theory dealing with the continuous medium, even if the theory includes realistic boundary conditions with the frequency dependence of the dielectric permittivity.
We shall start with the simplest quantum vacuum – the ideal one-dimensional Fermi gas – where the mesoscopic Casimir forces can be calculated exactly without invoking any regularization procedure.
Vacuum energy from microscopic theory
-------------------------------------
We consider a system of $N$ bare particles, each of which is a one-dimensional massless fermion, whose continuous energy spectrum is $E(p)=cp$, with $c$ playing the role of the speed of light. We assume that these fermions are either “spinless” (this means that they all have the same direction of spin and thus the spin degrees of freedom can be neglected) or the 1+1 Dirac fermions. If the fermions are not interacting the microscopic theory is extremely simple: in the vacuum state the fermions simply occupy all the energy levels below the chemical potential $\mu$. In the continuous limit, the total number of particles $N$ and the total energy of the system in the one-dimensional “cavity” of size $d$ are expressed in terms of the Fermi momentum $p_F=\mu/c$ in the following way $$\begin{aligned}
N=nd~,~n=\int_{-p_F}^ {p_F}{dp\over 2\pi\hbar} ={ p_F\over
\pi\hbar}~,
\label{FermiMomentum1}\\
E=\epsilon(n) d~,~\epsilon(n)=\int_{-p_F}^ {p_F}{dp\over
2\pi\hbar}cp ={cp_F^2\over 2\pi\hbar}={\pi\over 2}
\hbar c n^2~.
\label{FermiMomentum2}\end{aligned}$$ Here $\epsilon(n)$ is the vacuum energy density as a function of the particle density. The relation between the particle density and chemical potential $\mu=\pi
\hbar c n=p_F c$ also follows from minimization of the relevant vacuum energy: $d(\epsilon(n) -\mu n)/dn=0$. In the vacuum state the relevant vacuum energy density and the pressure of the Fermi gas are $$\tilde\epsilon=\epsilon(n) -\mu n=
-{\pi\over 2} \hbar c n^2~,~P=-\tilde\epsilon={\pi\over
2} \hbar c n^2~.
\label{FermionicVacuumEnergy}$$ The Fermi gas can exist only at a positive external pressure provided by the walls.
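For completeness, the minimization mentioned above is elementary: $${d\over dn}\left(\epsilon(n)-\mu n\right)=\pi\hbar c\,n-\mu=0~\Rightarrow~\mu=\pi\hbar c\,n=cp_F~,~~~\tilde\epsilon={\pi\over 2}\hbar cn^2-\pi\hbar cn^2=-{\pi\over 2}\hbar cn^2~.$$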
Vacuum energy in effective theory
---------------------------------
As distinct from the microscopic theory, which deals with bare particles, the effective theory deals with the quasiparticles – fermions living at the level of the chemical potential $\mu=cp_F$. There are four different quasiparticles: (i) quasiparticles and quasiholes living in the vicinity of the Fermi point at $p=+p_F$ have the spectrum $E_{qp}(p_+)=|E(p)
-\mu|=c|p_+|$, where $p_+=p-p_F$; (ii) quasiparticles and quasiholes living in the vicinity of the other Fermi point at $p=-p_F$ have the spectrum $E_{qp}(p_-)=|E(p)
-\mu|= c|p_-|$, where $p_-=p+p_F$. In the effective theory the energy of the system is the energy of the Dirac vacuum $$E=-\sum_{p_+}c|p_+| -\sum_{p_-}c|p_-| ~.
\label{FermionicQuasiVacuumEnergy}$$ This energy is divergent and requires the cut-off. With the proper cut-off provided by the Fermi-momentum, $p_{\rm
Planck} \sim p_F$, the negative vacuum energy density $\tilde\epsilon$ in Eq.(\[FermionicVacuumEnergy\]) can be reproduced. This is a rather rare situation in which the effective theory gives the correct sign of the vacuum energy.
Vacuum energy as a function of discrete $N$
-------------------------------------------
Now let us discuss the Casimir effect – the change of the vacuum pressure caused by the finite size effects in the vacuum. We must take into account the discreteness of the spectrum of bare particles or quasiparticles (depending on which theory we use, microscopic or effective) in the slab. Let us start with the microscopic description in terms of bare particles (atoms). We can use two different boundary conditions for particles, which give two kinds of discrete spectrum: $$\begin{aligned}
~E_k=k{\hbar c\pi\over d}~,
\label{LinearSpectrum1}\\
~E_k=\left(k+{1\over 2}\right){\hbar c\pi\over d}~.
\label{LinearSpectrum2}\end{aligned}$$ Eq.(\[LinearSpectrum1\]) corresponds to the spinless fermions with Dirichlet boundary conditions at the walls, while Eq.(\[LinearSpectrum2\]) describes the energy levels of the 1+1 Dirac fermions with no particle current through the wall; the latter case with the generalization to the $d+1$ fermions has been discussed in [@Paola].
The vacuum is again represented by the ground state of the collection of the $N$ noninteracting particles. We know the structure of the vacuum completely, and thus the vacuum energy in the slab is well defined: it is the energy of $N$ fermions in the 1D box of size $d$ $$\begin{aligned}
E(N,d)=\sum_{k=1}^N E_k={\hbar c\pi\over 2d}N(N+1)
~~,~{\rm for}~~E_k=k{\hbar c\pi\over d}~,
\label{TotalEnergy1}\\
E(N,d)=\sum_{k=0}^{N-1}E_k={\hbar c\pi\over 2d}N^2
~~,~~{\rm for}~~E_k=\left(k+{1\over 2}\right){\hbar
c\pi\over d}~.
\label{TotalEnergy21}\end{aligned}$$
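For completeness, these totals follow from the elementary sums $$\sum_{k=1}^{N}k={N(N+1)\over 2}~,~~~\sum_{k=0}^{N-1}\left(k+{1\over 2}\right)={N(N-1)\over 2}+{N\over 2}={N^2\over 2}~.$$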
Leakage of vacuum through the wall.
-----------------------------------
To calculate the Casimir force acting on the wall, we must introduce the vacuum on both sides of the wall. Thus let us consider three walls: at $z=0$, $z=d_1<d$ and $z=d$. Then we have two slabs with sizes $d_1$ and $d_2=d-d_1$, and we can find the force acting on the wall separating the two slabs, i.e. on the wall at $z=d_1$. We assume the same boundary conditions at all the walls. But we must allow the exchange of particles between the slabs, otherwise the main force acting on the wall between the slabs will be determined simply by the difference in bulk pressure between the two slabs. This can be done due to, say, very small holes (tunnel junctions) in the wall, which do not violate the boundary conditions and thus do not disturb the particle energy levels, but still allow particle exchange between the two vacua.
This situation can be compared with the traditional Casimir effect. The force between the conducting plates arises because the electromagnetic fluctuations of the vacuum in the slab are modified due to the boundary conditions imposed on the electric and magnetic fields. In reality these boundary conditions are applicable only in the low-frequency limit, while the wall is transparent for the high-frequency electromagnetic modes, as well as for the other degrees of freedom of the real vacuum (fermionic and bosonic), which can easily penetrate through the conducting wall. In the traditional approach it is assumed that those degrees of freedom, which produce the divergent terms in the vacuum energy, must be cancelled by the proper regularization scheme. That is why, though the dispersion of the dielectric permittivity does weaken the real Casimir force, nevertheless in the limit of large distances, $d_1\gg c/\omega_0$, where $\omega_0$ is the characteristic frequency at which the dispersion becomes important, the Casimir force does not depend on how easily the high-energy vacuum leaks through the conducting wall.
We consider here just the opposite limit, when (almost) all the bare particles are totally reflected. This corresponds to the case when the penetration of the high-energy modes of the vacuum through the conducting wall is highly suppressed, and thus one must certainly have the traditional Casimir force. Nevertheless, we shall show that due to the mesoscopic finite-$N$ effects the contribution of the diverging terms to the Casimir effect becomes dominant. They produce a highly oscillating vacuum pressure in quantum liquids. The amplitude of the mesoscopic fluctuations of the vacuum pressure in this limit exceeds by the factor $p_{\rm Planck}d/\hbar$ the value of the conventional Casimir pressure. For their description the continuous effective low-energy theories are not applicable.
Mesoscopic Casimir force in 1d Fermi gas
----------------------------------------
The total vacuum energy in two slabs for spinless and Dirac fermions is correspondingly $$\begin{aligned}
E(N,d_1,d_2)={\hbar c\pi\over 2}\left({N_1(N_1+1) \over
d_1}+{N_2(N_2+1)
\over d_2}\right)~,
\label{TotalEnergyTwoBoxes1}\\
E(N,d_1,d_2)={\hbar c\pi\over 2}\left({N_1^2 \over
d_1}+{N_2^2
\over d_2}\right)~,
\label{TotalEnergyTwoBoxes2}\end{aligned}$$ where $N_1$ and $N_2$ are the particle numbers in each of the two slabs: $$N_1+N_2=N~,~d_1+d_2=d
\label{ParticleConservation}$$ Since particles can transfer between the slabs, the global vacuum state in this geometry is obtained by minimization over the discrete particle number $N_1$ at fixed total number $N$ of particles. If the mesoscopic $1/N$ corrections are ignored, one obtains $N_1\approx(d_1/d)N$ and $N_2\approx (d_2/d)N$; the two vacua have the same pressure, and thus there is no force acting on the wall between the two vacua.
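Indeed, treating $N_1$ as a continuous variable and using, say, Eq.(\[TotalEnergyTwoBoxes2\]) together with the constraint (\[ParticleConservation\]), one finds $${\partial E\over\partial N_1}\bigg|_{N_1+N_2=N}=\pi\hbar c\left({N_1\over d_1}-{N_2\over d_2}\right)=0~\Rightarrow~{N_1\over d_1}={N_2\over d_2}={N\over d}~,$$ i.e. equal particle densities, and hence equal bulk pressures, in the two slabs.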
However, $N_1$ and $N_2$ are integer valued, and this leads to mesoscopic fluctuations of the Casimir force. The global vacuum with given values of $N_1$ and $N_2$ is realized only within a certain range of the parameter $d_1$. If $d_1$ increases, it reaches some threshold value above which the vacuum with the particle numbers $N_1+1$ and $N_2-1$ has lower energy and becomes the global vacuum. The same happens if $d_1$ decreases and reaches some threshold value below which the vacuum with the particle numbers $N_1-1$ and $N_2+1$ becomes the global vacuum. The force acting on the wall in the state ($N_1$, $N_2$) is obtained by variation of $E(N_1,N_2,d_1,d-d_1)$ over $d_1$ at fixed $N_1$ and $N_2$: $$F(N_1,N_2,d_1,d_2)=-{dE(N_1,N_2,d_1,d_2)\over
dd_1}+{dE(N_1,N_2,d_1,d_2)\over dd_2}~.
\label{TotalForce}$$ When $d_1$ increases and reaches the threshold, where $E(N_1,N_2,d_1,d_2)=E(N_1+1,N_2-1,d_1,d_2)$, one particle must cross the wall from the right to the left. At this critical value the force acting on the wall changes abruptly (we do not discuss here the interesting physics arising just at the critical values of $d_1$, where the degeneracy occurs between the states ($N_1,N_2$) and ($N_1+ 1,N_2- 1$); at these positions of the wall (or membrane) the particle numbers $N_1$ and $N_2$ are undetermined and are actually fractional due to the quantum tunneling between the slabs [@Andreev]). Using for example the spectrum in Eq.(\[TotalEnergyTwoBoxes2\]) one obtains for the jump of the Casimir force: $$F(N_1\pm 1,N_2\mp 1)-F(N_1,N_2) =\hbar c\pi\left({\pm
2N_1 +1\over 2d_1^2}+{\pm 2N_2-1 \over
2d_2^2}\right)\approx \pm {\hbar c\pi N\over d_1d_2} ~.
\label{ChaoticForceChange}$$ The same result for the amplitude of the mesoscopic fluctuations is obtained if one uses the spectrum in Eq.(\[TotalEnergyTwoBoxes1\]).
In the limit $d_1\ll d$ the amplitude of the mesoscopic Casimir force is $$|\Delta F_{\rm meso}|= {\hbar c\pi n\over d_1}={\hbar
c\pi n^2\over N_1}\equiv { E_{Planck}\over d_1}~.
\label{AmplitudeForceChange}$$ It is by the factor $1/N_1=
\pi\hbar/(d_1p_{F})\equiv \pi\hbar/(d_1p_{\rm Planck})$ smaller than the vacuum energy density in Eq.(\[FermiMomentum2\]). On the other hand it is by the factor $p_Fd_1/\hbar\equiv p_{\rm Planck}d_1/\hbar$ larger than the traditional Casimir pressure, which in the one-dimensional case is $P_{\rm C}\sim \hbar c/ d_1^2$. The divergent term which linearly depends on the Planck momentum cutoff $p_{\rm Planck}$, as in Eq.(\[AmplitudeForceChange\]), has been revealed in many different calculations (see e.g. [@CasimirForSphere]), and attempts have been made to invent a regularization scheme which would cancel the divergent contribution.
Mesoscopic Casimir pressure in quantum liquids
----------------------------------------------
The equation (\[AmplitudeForceChange\]) for the amplitude of the mesoscopic fluctuations of the vacuum pressure can be immediately generalized for the $d$-dimensional space: if $V_1$ is the volume of the internal region separated by almost impenetrable walls from the outside vacuum, then the amplitude of the mesoscopic vacuum pressure must be of order $$|P_{\rm meso}|\sim {E_{Planck}\over V_1}~.
\label{MesoscopicVacuumPressure}$$ The mesoscopic random pressure comes from the discrete nature of the underlying quantum liquid, which represents the quantum vacuum. The integer value of the number of atoms in the liquid leads to the mesoscopic fluctuations of the pressure: when the volume $V_1$ of the vessel changes continuously, the equilibrium number $N_1$ of particles changes in a step-wise manner. This results in abrupt changes of pressure at some critical values of the volume: $$P_{\rm meso}\sim P(N_1\pm 1)-P(N_1)=\pm {dP\over dN_1}
=\pm {mc^2\over V_1}\equiv \pm { E_{Planck}\over V_1}~,
\label{ChaoticForceChangeGeneral}$$ where again $c$ is the speed of sound, which plays the role of the speed of light. The mesoscopic pressure is determined by microscopic “trans-Planckian” physics, and thus such a microscopic quantity as the mass $m$ of the atom, the “Planck mass”, enters this force.
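The middle equality here simply expresses the sound velocity, $mc^2=dP/dn$, at a fixed volume of the vessel: $${dP\over dN_1}\bigg|_{V_1}={dP\over dn}\,{1\over V_1}={mc^2\over V_1}~,~~~n={N_1\over V_1}~.$$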
For the spherical shell of radius $R$ immersed in the quantum liquid the mesoscopic pressure is $$P_{\rm meso}\sim \pm {mc^2\over
R^3}\equiv
\pm \sqrt{-g}E_{\rm Planck}\left({\hbar
c\over R}\right)^3 ~.
\label{ChaoticForceChangeGeneralSpherical}$$
Mesoscopic vacuum pressure vs conventional Casimir effect.
----------------------------------------------------------
Let us compare the mesoscopic vacuum pressure in Eq.(\[ChaoticForceChangeGeneralSpherical\]) with the traditional Casimir pressure obtained within the effective theory for the same spherical shell geometry. In the effective theory (such as the electromagnetic theory in the case of the original Casimir effect, and the low-frequency quantum hydrodynamics in quantum liquids) the Casimir pressure comes from the bosonic and fermionic low-energy modes of the system (electromagnetic modes in the original Casimir effect or quanta of sound waves in quantum liquids). In superfluids, in addition to phonons, other low-energy sound-like collective modes are possible, such as spin waves. These collective modes with linear (“relativistic”) spectrum in quantum liquids play the role of the relativistic massless scalar field. They typically obey the Neumann boundary conditions, corresponding to the (almost) vanishing mass or spin current through the wall (almost, because the vacua inside and outside the shell must be connected).
If we believe in the traditional regularization schemes which cancel out the divergent terms, the Casimir pressure for the spherical shell given by the effective theory is $$P_{\rm C}=-{dE_{\rm C}\over dV}={K
\over 8\pi}\sqrt{-g}\left({\hbar c\over R}\right)^4 ~,
\label{TradCasSpherical}$$ where $K=-0.4439$ for the Neumann boundary conditions; $K=0.005639$ for the Dirichlet boundary conditions [@CasimirForSphere]; and $c$ is the speed of sound or of spin waves. The traditional Casimir pressure is completely determined by the effective low-energy theory; it does not depend on the microscopic structure of the liquid: only the “speed of light” $c$ enters this force. The same pressure will be obtained in the case of the pair-correlated fermionic superfluids, if the fermionic quasiparticles are gapped and their contribution to the Casimir pressure is exponentially small compared to the contribution of the collective massless bosonic modes.
However, at least in our case, the result obtained within the effective theory is not correct: the real Casimir pressure is given by Eq.(\[ChaoticForceChangeGeneralSpherical\]): (i) it essentially depends on the Planck cut-off parameter, i.e. it cannot be determined by the effective theory; (ii) it is much bigger, by the factor $p_{\rm Planck}R/\hbar$, than the traditional Casimir pressure in Eq.(\[TradCasSpherical\]); and (iii) it is highly oscillating. The regularization of these oscillations (by, say, averaging over many measurements, by noise, or due to quantum or thermal fluctuations of the shell, etc.) depends on the concrete physical conditions of the experiment.
This shows that in some cases the Casimir vacuum pressure is not within the responsibility of the effective theory, and the microscopic (trans-Planckian) physics must be invoked. If two systems have the same low-energy behavior and are described by the same effective theory, this does not mean that they necessarily experience the same Casimir effect. The result depends on many factors, such as the discrete nature of the quantum vacuum, and the ability of the vacuum to penetrate through the boundaries. It is not excluded that even the traditional Casimir effect, which comes from the vacuum fluctuations of the electromagnetic field, is renormalized by the high-energy degrees of freedom.
Of course, the extreme limit which we consider is not applicable to the original (electromagnetic) Casimir effect, since the situation in the electromagnetic Casimir effect is just the opposite. The overwhelming part of the fermionic and bosonic vacuum easily penetrates the conducting wall, and thus the mesoscopic fluctuations are small. But are they negligibly small? In any case this example shows that the cut-off problem is not a mathematical but a physical one, and the Planck physics dictates the proper regularization scheme or the proper choice of the cut-off parameters.
Conclusion.
===========
We discussed the problems related to the properties of quantum vacuum in general relativity using the known properties of the quantum vacuum in quantum liquids, where some elements of the Einstein gravity arise in the low-energy corner. We found that in both systems there are similar problems, which arise if the effective theory is exploited. In both systems the naive estimation of the vacuum energy density within the effective theory gives $\rho_{\Lambda}\sim E_{\rm Planck}^4$ with the corresponding “Planck” energy appropriate for each of the two systems. However, as distinct from the general relativity, in quantum liquids the fundamental physics, “The Theory of Everything”, is known, and it shows that the “trans-Planckian” degrees of freedom exactly cancel this divergent contribution to the vacuum energy. The relevant vacuum energy is zero without fine tuning, if the vacuum is stable (or metastable), isolated and homogeneous.
Quantum liquids also demonstrate how a small vacuum energy is generated if the vacuum is disturbed. In particular, thermal quasiparticles – which represent the matter in general relativity – induce a vacuum energy of the order of the energy of the matter. This example shows a possible answer to the question of why the present cosmological constant is of the order of the present matter density in our Universe. It follows that in each epoch the vacuum energy density must be of the order of either the matter density of the Universe, or of its curvature, or of the energy density of the smooth component – the quintessence. However, a complete understanding of the dynamics of the vacuum energy in the time-dependent regime of the expanding Universe cannot be achieved within general relativity and requires an extension of this effective theory.
In principle, one can construct an artificial quantum liquid in which all the elements of general relativity are reproduced in the low energy corner. The effective metric $g^{\mu\nu}$ acting on “relativistic” quasiparticles arises as one of the low-energy collective variables of the quantum vacuum, while the Sakharov mechanism leads to the Einstein curvature and cosmological terms in the action for this dynamical variable. In such a liquid the low energy phenomena will obey the Einstein equations (\[EinsteinEquation\]), with probably one exception: the dynamics of the cosmological “constant” will be included. It would be extremely interesting to realize this programme, and thus to find out the possible extension of general relativity which takes into account the properties of the quantum vacuum.
The most important property of the quantum vacuum in quantum liquids is that this vacuum consists of discrete elements – bare atoms. The interaction and zero point oscillations of these elements lead to the formation of the equilibrium vacuum, and in this equilibrium vacuum state the cosmological constant is identically zero. Thus the discreteness of the quantum vacuum can be a possible source of the (almost complete) nullification of the cosmological constant in our present Universe. If so, one can try to exploit the other possible consequences of the discrete nature of the quantum vacuum, such as the mesoscopic Casimir effect discussed in Sec. \[EffectsDiscreteNumberN\].
The analogy with the quantum vacuum in quantum liquids allows us to discuss other problems related to the quantum vacuum in general relativity: the flatness problem; the problem of the large entropy of the present Universe; the horizon problem, etc.
This work was supported in part by the Russian Foundation for Fundamental Research and by the European Science Foundation.
[20]{}
I.M. Khalatnikov: [*An Introduction to the Theory of Superfluidity*]{}, Benjamin, New York, 1965.
R.B. Laughlin and D. Pines, The Theory of Everything, Proc. Natl. Acad. Sc. USA [**97**]{}, 28-31 (2000).
G.E. Volovik, Superfluid analogies of cosmological phenomena, to appear in Physics Reports, gr-qc/0005091.
---
abstract: 'Topologically nontrivial states, the solitons, emerge as elementary excitations in $1D$ electronic systems. In a quasi $1D$ material the topological requirements give rise to spin- or charge- roton like excitations with charge- or spin- kinks localized in the core. They result from the spin-charge recombination due to confinement and the combined symmetry. The rotons possess semi-integer winding numbers which may be relevant to configurations discussed in connection with quantum computing schemes. Practically important is the case of the spinon functioning as the single electronic $\pi -$ junction in a quasi $1D$ superconducting material. (Published in [@moriond].)'
address:
- |
$^1$ Laboratoire de Physique Théorique et des Modèles Statistiques, CNRS;\
Bât.100, Université Paris-Sud, 91405 Orsay cedex, France;\
http://ipnweb.in2p3.fr/ lptms/membres/brazov/, e-mail: brazov@ipno.in2p3.fr
- '$^2$ L.D. Landau Institute, Moscow, Russia.'
author:
- 'S. Brazovskii$^{1,2}$'
title: |
Topological Confinement of Spins and Charges:\
Spinons as ${\large\pi-}$ junctions.
---
Introduction to solitons.
=========================
Topological defects: solitons, vortices, anyons, etc., are currently discussed, see [@ivanov], in connection with new trends in the physics of quantum devices, see [@esteve]. Closest to applications, and particularly addressed at this conference [@ryazanov], are the $\pi -$ junctions which, linking two superconductors, provide degeneracy of their states with phase differences equal to $0$ and $2\pi$. The final goal of this publication is to show that in quasi one-dimensional ($1D$) superconductors the $\pi -$ junctions are produced already at the single electronic level, extendible to a finite spin polarization. The effect results from reconciliation of the spin and the charge which have been separated at the single chain level. The charge and the spin of the single electron reconfine as soon as $2D$ or $3D$ long range correlations are established due to interchain coupling. The phenomenon is much more general, taking place also in such a common system as the Charge Density Wave (CDW) and in such a popular system as the doped antiferromagnet or the Spin Density Wave as its quasi $1D$ version [@braz-00]. In this article we shall first consider, in greater detail, the CDW, which may be less familiar to the mesoscopic community. The applications to superconductors will become apparent afterwards. We shall concentrate on the effects of interchain coupling at $D>1$: confinement, topological constraints, combined symmetry, spin-charge recombination. A short review and basic references on the history of solitons and related topics in correlated electronic systems (like holes moving within the antiferromagnetic medium) can be found in [@braz-00].
Solitons in superconducting wires were considered very early [@AL], within the macroscopic regime of the Ginzburg - Landau theory, for the phase slip problem. Closer to our goals is the microscopic solution for the solitonic lattice in quasi $1D$ superconductors [@buzdin] at the Zeeman magnetic field. This successful application of results from the theory of CDWs, see [@SB-84], to superconductors also provides a link between pair breaking effects in these different systems. The solitonic structures in quasi $1D$ superconductors appear as a $1D$ version of the well known FFLO (Fulde, Ferrel, Larkin, Ovchinnikov, see [@buzdin]) inhomogeneous state near the pair breaking limit. Being very weak in $3D$, this effect becomes quite pronounced in systems with nested Fermi surfaces, which is the case in the $1D$ limit.
To extend the physics of solitons to the higher $D$ world, the most important problem one faces is the effect of *confinement* (S.B. 1980): as topological objects connecting degenerate vacuums, the solitons at $D>1$ acquire an infinite energy unless they reduce or compensate their topological charges. The problem is generic to solitons but it becomes particularly interesting at the electronic level, where the spin-charge reconfinement appears as the result of topological constraints. The topological effects of $D>1$ ordering reconfine the charge and the spin *locally*, while still with essentially different distributions. Nevertheless *integrally* one of the two is screened again, being transferred to the collective mode, so that in transport the massive particles carry either only charge or only spin, as in $1D$, see reviews [@SB-84; @SB-89].
Confinement and combined excitations.
=====================================
The classical commensurate CDW: confinement of phase solitons and of kinks.
---------------------------------------------------------------------------
The CDWs were always considered as the most natural electronic systems in which to observe solitons. We shall devote to them some more attention because the CDWs also became the subject of studies in mesoscopics [@delft]. Being a case of spontaneous symmetry breaking, the CDW order parameter $O_{cdw}\sim\Delta \cos [{\bf{Qr}}+\varphi ]$ possesses a manifold of degenerate ground states. For the $M-$ fold commensurate CDW the energy $\sim \cos [M\varphi]$ reduces the allowed positions to multiples of $2\pi /M$, $M>1$. Connecting trajectories $\varphi \rightarrow \varphi \pm 2\pi /M$ are phase solitons, or “$\varphi -$ particles” after Bishop *et al*. Particularly important is the case $M=2$ for which solitons are clearly observed e.g. in polyacetylene [@PhToday] or in organic Mott insulators [@monceau].
Above the $3D$ or $2D$ transition temperature $T_{c}$, the symmetry is not broken and solitons are allowed to exist as elementary particles. But in the symmetry broken phase at $T<T_{c}$, any local deformation must return the configuration to the same (modulo $2\pi $ for the phase) state. Otherwise the interchain interaction energy (with the linear density $F\sim \left\langle\Delta _{0}\Delta _{n}\cos [\varphi _{0}-\varphi _{n}]\right\rangle $) is lost when the effective phase $\varphi _{0}+\pi \, sign(\Delta _{0})$ at the soliton bearing chain $n=0$ acquires a finite (and $\neq 2\pi )$ increment with respect to the neighboring chain values $\varphi _{n}$. The $1D$ allowed solitons do not satisfy this condition, which originates a constant *confinement force* $F$ between them, hence the infinitely growing confinement energy $F|x|$. E.g. for $M=2$ the kinks should be bound in pairs or aggregate into macroscopic complexes, with a particular role played by Coulomb interactions [@teber].
Especially interesting is the more complicated case of [*coexisting discrete and continuous symmetries*]{}. As a result of their interference the topological charge of solitons originated by the discrete symmetry can be compensated by gapless degrees of freedom originated by the continuous one. We shall discuss this scenario throughout the rest of the article.
The incommensurate CDW:\
confinement of Amplitude Solitons with phase wings.
---------------------------------------------------
The difference between ground states with even and odd numbers of particles is a common issue in mesoscopics. In CDWs it also shows up in a spectacular way (S.B. 1980, see [@SB-84; @SB-89]). Thus any pair of electrons or holes is accommodated to the extended ground states for which the overall phase difference becomes $\pm 2\pi$. Phase increments are produced by phase slips which provide the spectral flow [@yak] from the upper $+\Delta_0$ to the lower $-\Delta_0$ rims of the single particle gap. The phase slip requires for the amplitude $\Delta(x,t)$ to pass through zero, at which moment the complex order parameter has the shape of the amplitude soliton (AS, the kink $\Delta (x=-\infty)\leftrightarrow -\Delta (x=\infty)$). Curiously, this instantaneous configuration becomes the stationary ground state for the case when only one electron is added to the system or when the total spin polarization is controlled to be nonzero, see Figure 1. The AS carries the singly occupied mid-gap state, thus having spin $1/2$, but its charge is compensated to zero by a local dilatation of singlet vacuum states [@SB-84; @SB-89].
As a nontrivial topological object ($O_{cdw}$ does not map onto itself) the pure AS is prohibited in a $D>1$ environment. Nevertheless it becomes allowed even there if it acquires phase tails with the total increment $\delta\varphi =\pi $, see Figure 2. The length of these tails $\xi _{\varphi }$ is determined by the weak interchain coupling, thus $\xi _{\varphi }\gg \xi _{0}$. As in $1D$, the sign of $\Delta$ changes within the scale $\xi _{0}$ but further on, at the scale $\xi _{\varphi }$, the factor $\cos [Qx+\varphi ]$ also changes sign, thus leaving the product in $O_{cdw}$ invariant. As a result the $3D$ allowed particle is formed with the AS core $\xi _{0}$ carrying the spin and the two phase $\pi /2$ twisting wings stretched over $\xi _{\varphi }$, each carrying the charge $e/2$.
![Phase tails adapting the AS.](moriond-1.eps "fig:"){width="5.5cm"}
![Phase tails adapting the AS.](moriond-2.eps "fig:"){width="5.5cm"}
Spin-Gap cases: the quantum CDW and the superconductivity.
----------------------------------------------------------
We shall omit from consideration the case of repulsion, which is relevant to the incommensurate Spin Density Wave or to a hole within the antiferromagnetic medium and is important for doped Mott insulators. These cases were emphasized in previous publications [@braz-00]. Here we shall concentrate upon systems with attraction, which originate the gap and the discrete degeneracy in the spin channel. Firstly we shall generalize the description of the CDW solitons to the quantum model. Secondly we shall use the accumulated experience to arrive at our final goal: the spin carrier in the SC medium.
$1D$ electronic systems are efficiently treated within the boson representation, see [@emery] for a review. The variables can be chosen as $\varphi$ which is the analog of the CDW phase and $\theta $ which is the angle of the $SU2$ spin rotation. These phases are normalized in such a way that their increments divided by $\pi$ count the electronic charge and spin.
For the incommensurate electronic systems the Lagrangian can be written as $${\cal L}_{atr}\sim \{C_1(\partial \theta )^{2}+V\cos(2\theta )\}+C_2(\partial \varphi )^{2} \, ; \,\, C_1,C_2=\mathrm{const}$$ where $V$ is the backward exchange scattering and $(\partial f)^{2}=v^{-2}(\partial_t f)^{2}-(\partial_x f)^{2}$, $v\sim v_F$ is the velocity. Elementary excitations in $1D$ are the *spinon*, a soliton $\theta=0 \rightarrow \theta =\pm\pi $, hence carrying the spin $\pm 1/2$, and the gapless charge sound in $\varphi $. It is important to recall the alternative description in terms of conjugated phases. We shall need only the one for the charge channel, which is the standard gauge phase $\chi$ of superconductivity. The phases $\varphi$ and $\chi$ are related (Efetov and Larkin 79) since their derivatives determine the same quantity, the current $j$: $\partial_t \varphi/\pi=j\sim\partial_x\chi $. The term $C_2(\partial \varphi)^{2}$ is dual to $\tilde C_2(\partial \chi)^{2}$ with $\tilde C_2\sim 1/C_2$.
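As a guide to the eye, a minimal classical sketch of such a spin soliton may be kept in mind; it assumes the static limit and a sign convention in which the minima of the backward-scattering term lie at $\theta =n\pi $, and it is not needed beyond illustration: $$U(\theta )\sim V\,(1-\cos 2\theta )\,,\qquad \theta _{s}(x)=2\arctan \bigl(e^{x/\xi }\bigr)\,,\qquad \xi \sim \sqrt{C_{1}/2V}\,,$$ so that $\theta _{s}$ interpolates between the degenerate minima $\theta =0$ and $\theta =\pi $ and represents the spinon with spin $1/2$.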
### The Quantum CDW.
The CDW order parameter is $
O_{cdw}\sim \exp [i(Qx+\varphi )]\cos \theta
$. The spin operator $\cos \theta $ stands for what was the amplitude in the quasi-classical description, and in the presence of the spinon it changes sign as $\Delta$ did. Hence for the CDW ordered state in a quasi $1D$ system the allowed configuration must be composed of two components: the spin soliton $\theta \rightarrow \theta +\pi $ and the phase wings $\varphi \rightarrow \varphi +\pi $ where the charge $e=1$ is concentrated. Beyond the low dimensionality, a general view is: *the spinon as a soliton bound to the semi-integer dislocation loop*.
![ Motion of the topologically combined excitation in a spin-gap medium. The string of the amplitude reversal of the order parameter created by the spinon is cured by the semi-vortex pair (the loop in $3D$) of the phase circulation. For the CDW case the curls are displacement contours for the half integer dislocation pair. For superconductivity the curls are lines of electric currents circulating through the normal core carrying the unpaired spin. ](moriond-3.eps "fig:"){width="5.5cm"}
### The Singlet Superconductivity.
For the Singlet Superconductivity the order parameter is $O_{sc}\sim \exp [i{\chi}]\cos \theta $. In $D>1$ the elementary spin excitation is composed of the *spin soliton* $\theta \rightarrow \theta +\pi$ supplied with *current wings* $\chi \rightarrow \chi +\pi $. The quasi $1D$ interpretation is that *the spinon works as a Josephson $\pi -$ junction* in the superconducting wire. The $2D$ view is a pair of superconducting $\pi -$ [*vortices sharing the common core where the unpaired spin is localized*]{}. The $3D$ view is a [*half flux vortex loop whose collapse is prevented by the spin confined in its center*]{}.
The solitonic nature of the spinon in the quasi $1D$ picture corresponds to the string of reversed sign of the order parameter left behind in the course of the spinon motion. The spin soliton becomes an elementary fragment of the stripe pattern near the pair breaking limit (FFLO phase). In a wire the $\pi$ wings of the spinon motion become the persistent current [@yak].
For this combined particle the electronic quantum numbers are reconfined (though with different scales of localization). But integrally over the cross-section the local electric current induced by the spinon is compensated exactly by the back-flows at distant chains. This is a general property of the vortex dipole configuration constructed above. Finally, the soliton as a state of the coherent medium will not carry a current and will not itself be driven by a homogeneous electric field.
Conclusions.
============
Our conclusions have been derived for weakly interacting SC or CDW chains. Since the results are characterized by symmetry and topology, they can be extrapolated to isotropic systems with strong coupling where a clear microscopic derivation is not available. Here the hypothesis is that, instead of normal carriers excited or injected above the gap, the lowest states are the symmetry broken ones described above as the semiroton - spinon complex. This construction can be approached from another side by considering a vortex configuration bound to an unpaired electron. Without the assistance of the quasi one-dimensionality, a short coherence length is required to leave only a small number of intragap levels in the vortex core.
The existence of complex spin excitations in superconductors is ultimately related to the robustness of the FFLO phase at finite spin polarization. It must withstand fragmentation due to quantum or thermal melting at small spin polarization. Then any termination point of a stripe within the regular pattern (the dislocation) will be accompanied by the phase semiroton, in accordance with the quasi $1D$ picture.
[99]{}
Proceedings of the Moriond meeting [*Electronic Correlations: From meso- to nano-physics*]{}, T. Martin and G. Montambaux eds., EDP Sciences (2001), p. 315.
A.Yu Kitaev, **quant-ph**/9707021; D.A. Ivanov, in [@moriond] and **cond-mat**/0005069
D. Esteve in [@moriond].
V.V. Ryazanov, et al in [@moriond] and **cond-mat**/0008364,0103240.
S. Brazovskii, **cond-mat**/0006355; N. Kirova, **cond-mat**/0004313
J.S. Langer and V. Ambegaokar, Phys. Rev. [**164**]{}, 498 (1967).
A. Buzdin et al, Sov. Phys. JETP [**66**]{}, 422 (1987); Physics Letters [**A225**]{}, 341 (1997); Physica C [**316**]{}, 89 (1999).
S. Brazovskii and N. Kirova, *Sov. Sci. Reviews.*, I.M. Khalatnikov ed., **A5**, Harwood Ac. Publ., 1984.
S. Brazovskii, in [*Modern Problems in Condensed Matter Science*]{}, v. [**25**]{}, p. 425 (Elsevier Sci. Publ., Amsterdam,1989).
H.S.J. van der Zant [*et al*]{}, J. de Physique IV, [**9**]{} (1999) Pr10 -157.
Physics Today, [**53**]{}, \#12, 19 (2000).
P. Monceau, F. Ya. Nad, S. Brazovskii, Phys. Rev. Lett. [ **86**]{}, 4080 (2001)
S. Teber *et al*, J. Phys.: Condens. Matter [**13**]{}, 4015 (2001).
V. Yakovenko, private communication.
S. Brazovskii and S. Matveenko, Sov. Phys.: JETP, [**72**]{}, 492, 860 (1991).
V.J. Emery in *Highly Conducting One-dimensional Solids*, Plenum Press, NY
---
abstract: 'In this paper the old problem of determining the discrete spectrum of a multi-particle Hamiltonian is reconsidered. The aim is to bring a fermionic Hamiltonian for an arbitrary number N of particles by analytical means into a shape such that modern numerical methods can successfully be applied. For this purpose the Cook-Schroeck Formalism is taken as the starting point. This includes the use of the occupation number representation. It is shown that the N-particle Hamiltonian is determined in a canonical way by a fictional 2-particle Hamiltonian. A special approximation of this 2-particle operator delivers an approximation of the N-particle Hamiltonian, which is the orthogonal sum of finite dimensional operators. A complete classification of the matrices of these operators is given. Finally the method presented here is formulated as a work program for practical applications. The connection with other methods for solving the same problem is discussed.'
address: |
Department of Physics, University of Paderborn,\
33098 Paderborn, Germany,\
j@schroe.de
author:
- Joachim Schröter
title: 'A New Approach to the N-Particle Problem in QM'
---
Introduction {#intro}
============
One of the central problems of many-particle quantum mechanics, if not its main problem, is calculating the spectral representation of a many-particle Hamiltonian, which typically has the form $$\label{1.1} \mathbf {H}_N = \sum^{N}_{j} K_j + \frac{1}{2}
\sum^N_{j\not= k} W_{jk}\; .$$ Here $K_j$ contains the kinetic energy of particle $j$ and the external fields acting upon $j$, and $W_{jk}$ is the interaction of the particles $j$ and $k$. As is well-known, this problem has a solution if $W_{jk} = 0$. On the other hand, if $W_{jk}$ does not vanish, the problem is “almost” unsolvable in a strict sense. But the situation is not hopeless, for what is really needed for practical purposes is a “good” approximate solution.
In this last field tremendous work has been done, both analytically and numerically. Its main streams are well-known under the labels Thomas-Fermi method ([@Thom],[@Ferm]), Hartree-Fock method ([@Hart],[@Fock]), density functional theory ([@hoko],[@kosh]), configuration interaction method, Haken’s method and others. With respect to these methods and their applications and refinements I refer e.g. to the books [@Kohan], [@Ohno], [@Haken]. There, in addition, an abundance of papers and monographs is cited in which the methods are also described in detail.
A common feature of these procedures is that they contain one step in which a one-particle approximation of the N-particle problem is carried through. With the Thomas-Fermi and Hartree-Fock methods this is all that is done. With the other methods this first step is followed by further ones, thereby improving the accuracy of the approximation. Especially by combining analytical and numerical mathematics great progress has been achieved. Today problems can be solved which were regarded as unsolvable a few decades ago.
Nevertheless, the obvious question is whether there are other approaches to a solution of the $N$-particle problem in quantum mechanics than those mentioned above. It is the aim of this paper to present such a new procedure. For this purpose I need some mathematical tools which, though widely known, are briefly described in Appendix A.1. In particular the reader will find there all the notation which is used throughout the text. (More details can be found in [@Cook], [@Schroeck].) The basic idea of the procedure as well as the main results are sketched in Section 2.3.
The Structure of $N$-Particle Hamiltonians
==========================================
In what follows only systems of particles of the same kind are considered. When one starts studying a concrete system, its Hamiltonian is usually defined using the position-spin representation, i.e. the Hamiltonian is an operator in the Hilbert space $\bigotimes^N (L^2(\mathbb{R}^{3}) \otimes
\mathcal{S}^1)$, where $\bigotimes$ and $ \otimes$ denote tensor products, and where $\mathcal{S}^1$ is the complex vector space of spin functions (cf. Section A.2.1). For explicit calculations this representation is very useful. But the aim of this paper is primarily a structural analysis of the Hamiltonians of a certain class of systems, and in this case a more abstract formalism is adequate. It turns out that the Cook-Schroeck formalism (cf. Appendix A.1) is very useful for this purpose.\
Then our starting point is an arbitrary initial Hamiltonian of the shape (1.1), which is denoted $ \bar H_N $ and defined in a Hilbert space ${\bar{ \mathcal H}}^N := \bigotimes^N {\bar {\mathcal H }^ 1}$, where $\bar{\mathcal H }^1$ is the Hilbert space of the corresponding one-particle system.\
Now let $K$ be the operator defined in $\bar{\mathcal
H}^1$ which contains the kinetic energy of one particle of a certain kind together with the action of the external fields. Moreover, let $W$ be that operator in $\bar{\mathcal H}^2$ which represents the interaction of two particles of the kind considered. Then, using Formula (\[A1.25\]), $ \bar {H}_N$ defined in $ \bar{\mathcal H}^N$ is given by $$\label{2.1}
\bar H_N = \Omega_N (K) + \Omega_N (W),$$ where $$\begin{array}{c}
\Omega_N (K) := ((N-1)!)^{-1}
\sum_{P \in {\mathcal{S}}_N} U(P) (K \otimes 1 \otimes \ldots
\otimes 1) U^\star(P),\\ [2ex]
\Omega_N (W) := (2 (N-2)!)^{-1}
\sum_{P \in {\mathcal{S}}_N} U(P) (W \otimes 1 \otimes \ldots
\otimes 1) U^\star(P) ,
\end{array}$$ and $U(P)$ is the unitary permutation operator defined by the particle permutation P. Thus, using Formula (\[A1.27\]), the operator $\bar H_N$ specified for Bosons or Fermions reads $$\label{2.2} \bar{H}^\pm_N = \Omega^\pm_N (K) +
\Omega^\pm_N(W).$$ Here the definition $ A^\pm := S^\pm_N AS^\pm_N$ for an arbitrary operator $A$ in $\bar{\mathcal H}^N$ is applied, where $S^\pm_N $ is the symmetrizer (+) resp. the antisymmetrizer (-). Then $A^\pm$ is defined in the Hilbert space $\bar{\mathcal H}^N_\pm = S^\pm_N [\bar
{\mathcal{H}}^N]$.\
It is well-known that the structure of $\bar{H}^\pm_N$ given by (\[2.2\]) is not helpful for studying its spectral problem, because the operators $\Omega^\pm_N (K)$ and $\Omega^\pm_N(W)$ do not commute. This suggests the question whether it is possible to find an operator $T$ acting in $\bar{\mathcal H}^M,\; 1
\leq M < N$ such that $$\label{2.3} \bar{H}^\pm_N = \Omega^\pm_N (T).$$ Because the two-particle operator $ W $ cannot be represented by a one-particle operator it holds that $M\geq 2$. If the Hamiltonians as well as the operators $ K$ and $W$ are selfadjoint, it turns out that $M=2$ is possible as shown by the following\
[**Proposition 2.1:**]{} Let $$\label{2.4} \tilde{H}_2 (\gamma) = \gamma (K \otimes
1) + \gamma (1 \otimes K) + W \;$$ so that ${\tilde H}_2 (\gamma)$ is defined in $ \bar{\mathcal H}^2$, and let $\gamma_0:=(N-1)^{-1}$. Then $$\label{2.5} \bar{H}^\pm_N = \Omega^\pm_N ({\tilde H}_2
(\gamma_0)) \; \text {and}\;\; \bar{H}^\pm_N \not= \Omega^\pm_N ({\tilde
H}_2 (\gamma)), \; \gamma \not= \gamma_0.$$ [**Proof:**]{} Using (\[A1.28\]) yields $$\label{2.6}
\begin{array}{ll} \Omega^\pm_N (K) &= N S^\pm_N (K \otimes 1 \otimes
\ldots \otimes 1) S^\pm_N \\[3ex] &={{\displaystyle}\frac{1}{N-1} \binom N2
S^\pm_N ((K \otimes 1) \otimes \ldots \otimes 1) S^\pm_N} \\ [3ex] & {
+ {\displaystyle}\frac{1}{N-1} \binom N 2 S^\pm_N ((1 \otimes K) \otimes \ldots
\otimes 1) S^\pm_N} \\ [3ex] &= { {\displaystyle}\frac{1}{N-1} (\Omega^\pm_N (K
\otimes 1) + \Omega^\pm_N (1 \otimes K))}.
\end{array}$$ Since by supposition ${\tilde H}_2(\gamma), K $ and $W$ are selfadjoint, (\[A1.30\]) can be applied so that with the help of (\[2.6\]) the following relation holds: $$\begin{array} {ll} \label{2.7} \Omega^\pm_N ({\tilde H}_2 (\gamma))
&\supset \gamma \Omega^\pm_N (K \otimes 1) + \gamma \Omega^\pm_N (1
\otimes K) + \Omega^\pm_N (W) \\ [2ex] &= \gamma (N-1) \Omega^\pm_N
(K) + \Omega^\pm_N (W).
\end{array}$$ The term in the last line of (\[2.7\]) is selfadjoint because it is the Hamiltonian of a (possibly fictional) $N$-particle system. Since also $\Omega^\pm_N ({\tilde H_2}(\gamma))$ is selfadjoint (cf. Proposition A.1.11), relation (\[2.7\]) is an equation, from which the proposition follows immediately.\
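Since Proposition 2.1 is the pivot of all that follows, it may be checked independently in a small finite-dimensional toy setting. The following sketch (Python with NumPy; the toy dimensions and all names are our choice and not part of the formalism) draws a random Hermitian $K$ on $\mathbb{C}^d$ and a random exchange-symmetric Hermitian $W$ on $\mathbb{C}^d\otimes \mathbb{C}^d$, builds $\bar{H}^-_N$ from (\[2.1\])-(\[2.2\]) and compares it with $\binom N2 S^-_N ({\tilde H}_2(\gamma_0) \otimes 1) S^-_N = \Omega^-_N({\tilde H}_2(\gamma_0))$ according to (\[A1.28\]):

```python
import numpy as np
from itertools import permutations
from math import factorial, comb

d, N = 3, 3                                  # toy one-particle dimension and particle number
rng = np.random.default_rng(0)

def herm(n):
    """Random Hermitian n x n matrix."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

def parity(p):
    """Sign of the permutation p of (0, ..., N-1)."""
    s, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, cyc = i, 0
            while not seen[j]:
                seen[j], j, cyc = True, p[j], cyc + 1
            s *= (-1) ** (cyc - 1)
    return s

def U(p):
    """Unitary permuting the N tensor factors of (C^d)^{(x)N} according to p."""
    op = np.zeros((d ** N, d ** N))
    for col, idx in enumerate(np.ndindex(*([d] * N))):
        row = int(np.ravel_multi_index(tuple(idx[p[k]] for k in range(N)), [d] * N))
        op[row, col] = 1.0
    return op

I1 = np.eye(d)
K = herm(d)                                  # one-particle operator (kinetic + external field)
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[j * d + i, i * d + j] = 1.0     # exchange of the two factors of C^d (x) C^d
W = herm(d * d)
W = (W + SWAP @ W @ SWAP) / 2                # two-particle interaction, exchange symmetric

perms = list(permutations(range(N)))
S_minus = sum(parity(p) * U(p) for p in perms) / factorial(N)      # antisymmetrizer S_N^-

K1 = np.kron(np.kron(K, I1), I1)             # K (x) 1 (x) 1
W12 = np.kron(W, I1)                         # W (x) 1
Omega_K = sum(U(p) @ K1 @ U(p).T for p in perms) / factorial(N - 1)
Omega_W = sum(U(p) @ W12 @ U(p).T for p in perms) / (2 * factorial(N - 2))
H_N_minus = S_minus @ (Omega_K + Omega_W) @ S_minus                # \bar H_N^-, cf. (2.1)-(2.2)

gamma0 = 1.0 / (N - 1)
H2_dummy = gamma0 * (np.kron(K, I1) + np.kron(I1, K)) + W          # dummy Hamiltonian (2.4)
rhs = comb(N, 2) * S_minus @ np.kron(H2_dummy, I1) @ S_minus       # Omega_N^-(H_2(gamma_0))

print(np.allclose(H_N_minus, rhs))           # expected: True, in accordance with (2.5)
```

The symmetrization of $W$ merely encodes that the interaction of identical particles does not distinguish the two partners.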
[**2.2:**]{} This result is somewhat surprising. The initial Hamiltonian $\bar{H}^\pm_N$ is not determined by ${\tilde H_2} (1)$, i.e. by the Hamiltonian of a system of two particles of the same kind as those described by $\bar{H}^\pm_N$. Rather $\bar{H}^\pm_N$ is determined by ${\tilde H}_2 (\gamma_0), \gamma^{-1}_0 = N-1$, which is a two-particle Hamiltonian for particles of mass $(N-1)m_0$ and external fields weakened by a factor $(N-1)^{-1}$, but with the same interaction $W$ as the particles described by $\bar{H}^\pm_N$, which are supposed to have mass $m_0$.\
The system described by ${\tilde H}_2 (\gamma_0)$ is fictional. I call it *dummy system* and the operator ${\tilde
H}_2 (\gamma_0)$ *dummy Hamiltonian*.\
In Appendix A2 two simple examples are given describing dummy helium and a solid with two dummy electrons.\
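To fix ideas (a hypothetical but typical setting: $N$ electrons of mass $m_0$ in an external potential $V$ with Coulomb repulsion; the examples of Appendix A2 are of this kind), the dummy Hamiltonian takes the form $${\tilde H}_2 (\gamma_0) \;=\; \frac{1}{N-1}\Bigl(-\frac{\hbar^2}{2m_0}\Delta_1 + V(x_1)\Bigr) + \frac{1}{N-1}\Bigl(-\frac{\hbar^2}{2m_0}\Delta_2 + V(x_2)\Bigr) + \frac{e^2}{|x_1-x_2|}\,,$$ i.e. it is a genuine two-particle operator in which only the one-particle parts are rescaled, while the interaction $W$ enters unchanged.\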
In what follows, the operator ${\tilde H}_2 (\gamma), \gamma \not= \gamma_0$ is not needed anymore. Therefore it is convenient to use the notation ${\tilde
H}_2 (\gamma_0)= \bar {H}_{20}$.\
[**Corollary 2.2:**]{} Because of (\[A1.31\]) it follows that $$\label{2.8} \bar{H}^\pm_N = \Omega^\pm_N (\bar{H}_{20}) =
\Omega^\pm_N (\bar{H}^\pm_{20})\;.$$ [**2.3:**]{} *Formula (\[2.8\]) suggests the basic idea of this paper: find an approximation of $\bar{H}^\pm_N$ via an approximation of $\bar{H}^\pm_{20}$ such that the spectral problem of $\bar{H}^\pm_N$ can be solved approximately.*\
The details of this program are carried out for fermions in four steps, which correspond to the Chapters 3 to 6.\
Chapter 3 contains a formal analysis of a restriction $H^-_N$ of the initial Hamiltonian $\bar{H}^-_N$. The aim is to express the matrix elements of $H^-_N$ in terms of the matrix elements of the restricted dummy operator $H^-_{20}$, which is bounded. The results are summarized in Proposition 3.10.\
In order to use them for the present purposes, a properly chosen orthonormal system $\mathcal O_1$ in the one-particle Hilbert space $\bar{\mathcal H}^1 $ is needed. In Chapter 4 arguments are given that such a system is obtained via the Hartree-Fock procedure applied to $\bar{H}^-_{20}$. Thus the restrictions used in Chapter 3 can be justified. Moreover, a heuristic argumentation suggests that $H^-_{20} $ can be “truncated” such that an operator $ {\hat H}^-_{20} $ results, which is, depending on a parameter $\alpha \in \mathbb{N}$, an approximation of $H^-_{20}$.\
In Chapter 5 it is shown that the operators $ {\hat
H}^-_{20} $ converge strongly to $H^-_{20}$, if $\alpha \rightarrow
\infty$. This has the consequence that the operators ${\hat H}^-_N =
{\Omega}^-_N({\hat H}^-_{20} )$ also converge strongly to $H^-_N$. Therefore it is possible to apply the results of the theory of spectral approximation (cf. e.g. [@Chat],[@Kato]).\
Finally, in Chapter 6 an analysis of the operators $ {\hat H}^-_N $ is given. It is shown that they are block-diagonal, i.e. their matrices with respect to the chosen orthogonal basis are orthogonal sums of finite dimensional matrices, the structure of which is analyzed in detail. At this point numerical methods can come into play.\
In Chapter 7 the results of the previous chapters are summarized in the form of a work program, which can be regarded as the main result of this paper.
The Hamiltonian $H^-_N$ and its Matrix
======================================
Preliminary remarks
-------------------
[**3.1.1:**]{} Since in what follows only systems consisting of fermions of the same kind are considered, the notation introduced in Appendix A.1 can be used throughout. The Hamiltonians of these systems usually have the following *properties*:\
1.) They are unbounded, but bounded from below.\
2.) Their spectrum below a certain value $\epsilon_0$ is discrete; above it the spectrum is continuous, with possibly embedded discrete values.
Since in this paper we are only interested in the discrete spectrum outside the continuum, i.e. in the bound states of the system, the obvious question is whether it is possible to restrict the spectral problem to the discrete eigenvalues. In other words: is it possible to realize the following\
[**Assumption 3.1:**]{} There is a subspace ${\mathcal H}^1 \subset
{\bar{\mathcal H}^1} $ such that the restriction ${{H}^-_{20}}$ of ${\bar{H}^-_{20}}$ to the subspace ${\mathcal H}^2_- \subset
{\bar{\mathcal H}^2_-} $ is bounded (so that it can be defined on ${\mathcal H}^2_-$), and has the same discrete eigenvalues as ${\bar{H}^-_{20}}$ outside its continuous spectrum.\
In this chapter it is assumed that such a subspace ${\mathcal H}^1
\subset {\bar{\mathcal H}^1} $ exists. Then $H^-_N = \Omega^-_N
(H^-_{20})$ is bounded and defined on ${\mathcal H}^N_-$. This operator is the subject studied in the following sections. In chapter 4 arguments are given that the assumption can be realized.\
[**3.1.2:**]{} The starting point for the further considerations is the following\
[**Notation 3.2:**]{} 1.) Let ${\mathcal B}_1 = \{\phi_\kappa : \kappa \in
\mathbb{N} \}$ be an arbitrary ONB in ${\mathcal{H}}^1$, and let $$\phi_{\kappa_1 \cdots \kappa_M} :=
\phi_{\kappa_1} \otimes \cdots \otimes \phi_{\kappa_M} \in \mathcal{H}^M$$ with $2 \leq M \leq N$ and $\kappa_j \in \mathbb{N}, j = 1,....,M.$ Then an ONB ${\mathcal
B}^-_M \subset {\mathcal H}^M_-$ is defined by the vectors $$\label{B1.10} \Psi^-_{\kappa_1 \ldots \kappa_M} :=
\sqrt{M!} S^-_M \phi_{\kappa_1 \ldots \kappa_M}$$ with $ S^-_M$ being the antisymmetrizer (cf. (\[A1.6\])).\
2.) For each sequence ${\kappa_1 \cdots \kappa_M}$ of indices there is an infinite sequence $\hat{k} := (k_1, k_2, k_3, \cdots)$ of so-called occupation numbers $k_\kappa$ defined by $$\label{B1.11} k_\kappa = \sum^M_{j=1} \delta_{\kappa
\kappa_j}.$$ Hence $k_\kappa = 1$ or $0$. Moreover there is a one-to-one correspondence $$\hat{k} \longleftrightarrow {\kappa_1 \cdots \kappa_M}.$$ Thus we can write $$\label{B1.13} \Psi^\pm_{\kappa_1 \ldots \kappa_{M}}
=: \Psi^\pm_M ({\hat k}).$$ 3.)The term “sequence of occupation numbers” is abbreviated by $bzf$ and the set of all $bzf$, which have exactly $M$ numbers $1$ is denoted $BZF_M$. The set $BZF$ comprises all $bzf$.
Now let $\langle \; \cdot,\cdot \;\rangle_2$ be the inner product in ${\mathcal H}^2_-$ and let $$\label{3.1} E (\hat k, \hat m): = \langle \Psi^-_2
(\hat k), H^-_{20}\Psi^-_2 (\hat m) \rangle_2$$ be the matrix elements of the dummy Hamiltonian. Then the matrix representation of $H^-_{20}$ reads: $$\label{3.2} H^-_{20} = \sum_{\hat k} \sum_{\hat m} E
(\hat k, \hat m) \Psi^-_2 (\hat k) \langle \Psi^-_2 (\hat m),\; \cdot
\;\rangle_2 \;.$$ Using the abbreviation $$\label{3.3} {\displaystyle}{\Psi^-_2 (\hat k) \langle \Psi^-_2
(\hat m), \; \cdot\; \rangle_2 =: T (\hat k, \hat m)}$$ together with Formula (\[A1.28\]) yields: $$\label{3.4}
\begin{array} {ll} H^-_N &= \binom N2 S^-_N (H^-_{20} \otimes 1
\otimes \cdots \otimes 1) S^-_N \\ [3ex] &= \binom N2 \sum_{\hat
k} \sum_{\hat m} E ({\hat k}, {\hat m}) S^-_N (T (\hat k, \hat
m)\otimes 1 \otimes \cdots \otimes 1) S^-_N \; .
\end{array}$$ [**Remark 3.3:**]{} Here and in what follows the sums $\sum_{\hat k}$ and $\sum_{\hat m}$ are understood to run over all $bzf$ which occur in the elements of ${\mathcal B}^-_2$. Each of these sums can be arbitrarily ordered because each ordering of the ${\hat k}$ or the ${\hat m}$ yields an ONB. Since $H^-_{20}$ is assumed to be bounded the sums $\sum_{\hat k}$ and $\sum_{\hat m}$ can be interchanged.\
[**Notation 3.4:**]{} 1.) As usual the abbreviation $$\label{3.5} \langle {\hat n}' | H^-_N | {\hat n}
\rangle:= \langle \Psi^-_N ({\hat n'}), H^-_N \Psi^-_N (\hat n)
\rangle$$ is used.\
2.) Let $\hat n \in {B Z F}_j$. In the present case ${ j} = 2$ or ${ j} = N$, and in the next section also ${
j} = M$ is used with $2 \le M < N$. But irrespective of these special choices, for any two $bzf$ an addition and a subtraction can be defined by adding, respectively subtracting, their components. Since these operations on two $bzf$ do not necessarily result in a $bzf$, the following notation is used (cf. \[A1.3\]): “$\hat n \pm \hat m$ is a $bzf$” or “$\hat n \pm \hat m \in {BZF}$”. These expressions indicate that the sequence $\hat{n}\pm \hat{m}$ does not contain the numbers 2 or -1.
The basic lemma
---------------
[**3.2.1:**]{} It will be shown that the following proposition holds.\
[**Lemma 3.5:**]{} There is a function $C$ such that for each triple $(\hat n, \hat k, \hat m)$ of $bzf$ with $\sum_\alpha n_\alpha = N, \sum_\beta k_\beta = \sum_\beta m_\beta =
2$ the relations $$\label{3.6}
\begin{array}{llll} C (\hat{n}, \hat k, \hat m) &= \pm 1, &\mathit{if}
\; \hat n - \hat m \in {BZF} \; \mathit{and} \; &\hat n + \hat k -
\hat m \in {BZF}_N \\ &= 0, &\mathit{if} \; \hat n - \hat m \notin
{BZF}\; \mathit{or} \; &\hat n + \hat k - \hat m \not\in {BZF}_N \\
\end{array}$$ hold, and that moreover $$\label{3.7} {\displaystyle}{\langle \hat n' | H^-_N | \hat n
\rangle = \sum_{\hat k} \sum_{\hat m} C ({\hat n}, {\hat k}, {\hat m})
E ({\hat k}, {\hat m}) \delta ({\hat n}', {\hat n} + {\hat k} - {\hat
m})},$$ where $\delta$ is the Kronecker symbol and where $E
(\hat k, \hat m)$ is defined by (\[3.1\]) (cf. also Remark 3.3).\
From Lemma 3.5 one can draw the following\
[**Conclusion 3.6:**]{} If $\hat n$ and $\hat n'$ are given, $\langle \hat n' |H^-_N| \hat n
\rangle$ can be unequal zero only if $\hat n - \hat m \in {{BZF}_
{N-2}}$ and $\hat n' - \hat k \in {{BZF}_{N-2}}$. These relations can be satisfied only for $\binom N2$ $bzf$ $\hat m$ and $\hat k$. Hence the sums in (\[3.7\]) have finitely many summands.\
[**3.2.2:**]{} Though the lemma is used in this paper solely in the above version, for later purposes a generalization of it will be proved in the next sections. (The effort is the same in both cases.) In order to do so, some [*notation*]{} is introduced.
Let $2 \leq M < N$ and let $A_M$ be a bounded operator defined on ${\mathcal{H}}^-_M$ such that $$\label{3.8} A_N = \Omega^-_N (A_M)$$ is defined on ${\mathcal{H}}^N_-$. The matrix elements of $A_M$ with respect to ${\mathcal B}^-_M$ are again denoted $E(\hat
k, \hat m)$ so that here $$\label{3.9} \sum_\alpha k_\alpha = \sum_\alpha
m_\alpha =M \; .$$ Moreover let $$\label{3.10} \langle \hat n' |A_N| \hat n \rangle: =
\langle \Psi^-_N (\hat n'), A_N \Psi^-_N (\hat n) \rangle.$$ Finally, the function $C$ is defined as in Lemma 3.5 but with condition (\[3.9\]).\
[**Proposition 3.7:**]{} The relation $$\label{3.11} \langle \hat n' | A_N | \hat n \rangle =
\sum_{\hat k} \sum_{\hat m} C(\hat n, \hat k, \hat m) E ( \hat k, \hat
m) \delta (\hat n', \hat n + \hat k - \hat m)$$ holds. (Cf. also Remark 3.3 and Conclusion 3.6.)
Proof of Formula (3.11)
-----------------------
[**3.3.1:**]{} The starting point is Formula (\[A1.28\]) and the analogue to Formula (\[3.4\]). Thus $$\label{3.12} \langle \hat n' | A_N| \hat n \rangle
= \binom NM \sum_{\hat k} \sum_{\hat m} E(\hat k, \hat m) Z (\hat n', \hat n, \hat k, \hat m) ,$$ where $$\label{3.13} Z (\hat n', \hat n, \hat k, \hat m) =
n', \hat n + \hat k - \hat m) ,$$ where $$\label{3.13} Z (\hat n', \hat n, \hat k, \hat m) =
\langle \Psi^-_N (\hat n'), (T (\hat k, \hat m) \otimes 1 \otimes
\cdots \otimes 1 ) \Psi^-_{N} ({\hat n}) \rangle.$$ Here the operator $T (\hat k, \hat m)$ is defined by strict analogy with (\[3.3\]). Thus $$\label{3.14}
\begin{array}{lll} T (\hat k, \hat m) &= \Psi^-_M (\hat k) \langle
\Psi^-_M (\hat m), \;\cdot \;\rangle_M \\[3ex] &=
(M!)^{-{\frac{1}{2}}} \sum_{Q \in {\mathcal S}_M} \sigma^- (Q)
\Psi^-_M ({\hat k}) \langle \phi_{\mu_{Q^{-1} (1)} \cdots \mu_{Q^{-1}
(M)}},\; \cdot \;\rangle_M\; ,
\end{array}$$ where $\mu_1, \cdots , \mu_M \leftrightarrow {\hat m}$ with $\mu_1 < \ldots < \mu_M$ is the correspondence defined by (\[A1.13\]). Now using the correspondence $\nu_1, \ldots \nu_N
\leftrightarrow {\hat n}$ with $\nu_1 < \ldots <\nu_N$ one finds that $$\label{3.15} (T ({\hat k}, \hat m) \otimes 1 \otimes
\ldots \otimes 1) \Psi^-_N (\hat n) = \Psi^-_M (\hat k) \otimes
{\chi(\hat n, \hat m)} ,$$ where $$\begin{aligned}
\label{3.16}
\chi(\hat n, \hat m) &= (N! M!)^{-\frac{1}{2}}\, \sum_{\substack{P \in
\mathcal S_N\\ Q \in \mathcal S_M}} \sigma^- (P) \sigma^-
(Q)\\\notag
&\phantom{\chi(\hat n, \hat m) = (N!M!)^{-\frac{1}{2}}}
{\displaystyle}{(\Pi^M_{j=1} \langle \phi_{\mu_{Q^{-1}(j)}} ,
\phi_{\nu_{P^{-1}(j)}} \rangle_1) \phi_{\nu_{P^{-1} (M+1)} \ldots
\nu_{P^{-1} (N)}}}.\end{aligned}$$ Thus finally we obtain the relation $$\label{3.17} Z (\hat n', \hat n, \hat k, \hat m) =
\langle \Psi^-_N (\hat n'), \Psi^-_M (\hat k) \otimes \chi (\hat n,
\hat m) \rangle.$$ [**3.3.2:**]{} In this subsection the following *proposition* is proved: $$\label{3.18} \chi (\hat n, \hat m) \not= 0 ,$$ if and only if $\hat n - \hat m \in BZF$.
Firstly it is assumed that $\hat n - \hat m \notin BZF$. Then there is a number $\alpha$ such that $m_\alpha = 1$ and $n_\alpha =
0$. Consequently, for each $Q \in {{\mathcal{S}}_M}$ there is an $r$ for which $Q^{-1}(r) = \alpha$ holds. But for each $P \in
{{\mathcal{S}}_N}$ the relation $P^{-1} (r) \not= \alpha$ is true. Thus for each pair $P,Q$ $$\label{3.19} \Pi^M_{j=1} \langle \phi_{\mu_{Q^{-1}
(j)}}, \phi_{\nu_{P^{-1}(j)}} {\rangle}_1 = 0$$ so that also $\chi (\hat n, \hat m) = 0$.
Secondly let us assume that $\hat n - \hat m \in BZF$. Then for each $\alpha$ with $m_\alpha = 1$ also $n_\alpha = 1$ holds. Consequently one has to look for all pairs $Q,P$ such that $$\label{3.20} \Pi^M_{j=1} \langle \phi_{\mu_{Q^{-1}
(j)}}, \phi_{\nu_{P^{-1} (j)}} \rangle_1 = 1.$$ For all other pairs $Q, P$ the product in (\[3.20\]) is zero because $\langle \phi_\mu, \phi_\nu \rangle_1 = \delta_{\mu
\nu}$. Thus (\[3.20\]) is equivalent to $$\label{3.21} \mu_{Q^{-1}(j)} = \nu_{P^{-1} (j)},\quad
j = 1, \cdots, M.$$ In order to satisfy (\[3.21\]), for a given $Q \in
{\mathcal{S}}_M$ the permutation $P \in {\mathcal{S}}_N$ must be such that the $\mu_1, \cdots, \mu_M$, which by presumption occur in $\nu_1,
\cdots, \nu_M$, occupy the places $1, \cdots, M$ being ordered by $Q$. All these pairs $Q, P$ can be explicitly indicated by the following procedure.
Let $S \in {\mathcal{S}}_N$ be that permutation for which $$\label{3.22} (\nu_{S^{-1}(1)}, \cdots,
\nu_{S^{-1}(N)}) = (\mu_1, \cdots, \mu_M, \varrho_1, \cdots,
\varrho_{N-M}),$$ where $\varrho_1, \cdots, \varrho_{N-M}$ are all those $\nu_1, \cdots, \nu_N$ which are unequal $\mu_1, \cdots, \mu_M$, and in addition let $\varrho_1 < \cdots < \varrho_{N-M}$. Then with the help of (\[A1.7\]) one finds $$\label{3.23}
\begin{array} {ll} \Psi^-_N (\hat n) &= \sigma^- (S) \sqrt{N!} S^-_N
U(S) \phi_{\nu_1 \cdots \nu_N} \\ [3ex] &= \sigma^- (S) \sqrt{N!}
S^-_N \phi_{\mu_1 \cdots \mu_M \varrho_1 \cdots \varrho_{N-M}}
\end{array}$$ so that (\[3.16\]) now reads $$\label{3.24}
\begin{array}{ll} \chi (\hat n, \hat m) = (N! M!)^{-\frac{1}{2}}
\sigma^- (S) \sum_{P,Q} \sigma^- (P) \sigma^- (Q)\quad \quad
\quad\\[3ex] \phantom{\chi (\hat n, \hat m) = N! M!}(\Pi^M_{j=1}
\langle \phi_{\mu_{Q^{-1}(j)}}, \phi_{\mu_{P^{-1} (j)}} \rangle)
\phi_{\varrho_{P^{-1}(M+1)-M} \cdots \varrho_{P^{-1}(N-M)}} \; .
\end{array}$$ It follows from (\[3.24\]) that only those pairs $Q,P$ give nonzero summands for which $P$ has the form: $$\label{3.25} P = \left(Q, \begin{array}{cc} M + 1,
\cdots, N\\ M +1, \cdots, N \end{array} \right) \left(
\begin{array} {cc} 1, \cdots, M \\ 1, \cdots, M
\end{array}, R \right),$$ where $R \in {\mathcal{S}}_{N-M}$ is an arbitrary permutation. Hence, for a given $Q$ there are $(N-M)!$ permutations $P$ of the form (\[3.25\]) such that (\[3.21\]) is satisfied.
Since $R$ acts on $(\nu_{S^{-1}(M+1)}, \cdots \nu_{S^{-1}(N)})=
(\varrho_1, \cdots, \varrho_{N-M})$ one finally obtains $$\begin{array}{ll} \label{3.26} \chi (\hat n, \hat m) &= (N!
M!)^{-\frac{1}{2}} \sigma^- (S) \sum_{RQ} \sigma^- (R) \sigma^- (Q)^2
\phi_{\varrho_{R^{-1} (1)} \cdots\varrho_{R^{-1}(N-M)}} \\ [3ex] &=
{\binom NM}^{-\frac{1}{2}} \sigma^- (S) \Psi^-_{N-M} (\hat n - \hat m)
\end{array}$$ for all pairs $\hat n, \hat m$ with $\hat n - \hat m
\in BZF$. Hence the proof of relation (\[3.18\]) is complete, and in addition the explicit form of $\chi (\hat n, \hat m)$ is obtained.\
[**3.3.3:**]{} Inserting (\[3.26\]) into (\[3.15\]) and (\[3.13\]) yields $$\label{3.27} Z (\hat n', \hat n, \hat k, \hat m) =
{\binom NM}^{-\frac{1}{2}} \sigma^- (S) \langle \Psi^-_N (\hat n'),
S^-_N (\Psi^-_M (\hat k) \otimes \Psi^-_{N-M} (\hat n - \hat m)) \rangle
.$$ With the help of (\[A1.7\]) and by the correspondence $\hat k \leftrightarrow (\kappa_1, \cdots, \kappa_M)$ it follows that $$\label{3.28}
\begin{array} {ll} S^-_N (\Psi^-_M (\hat k) \otimes \Psi^-_{N-M} (\hat
n - \hat m)) = \\ [3ex] = (M! (N-M)!)^{-\frac{1}{2}} S^-_N
(\phi_{\kappa_1 \ldots, \kappa_M} \otimes \phi_{\varrho_1 \cdots
\varrho_{N-M}})\\ [3 ex] ={\binom NM}^{-\frac{1}{2}} \sigma^- (T)
\Psi^-_N (\hat n + \hat k - \hat m),
\end{array}$$ where $T \in \mathcal{S}_N$ is the permutation which lines up the sequence $(\kappa_1, \cdots, \kappa_M, \varrho_1,
\cdots, \varrho_{N-M})$ in its natural order. Thus one obtains $$\label{3.29}
\begin{array} {ll} Z (\hat n', \hat n, \hat k, \hat m) = \\ [3ex] =
{\binom NM}^{-1} \sigma^- (S \cdot T) \langle \Psi^-_N (\hat
n'),\Psi^-_N ( \hat n + \hat k-\hat m) \rangle\\ [3ex] =
{\binom NM}^{-1} \sigma^- (S \cdot T) \delta (\hat n', \hat n +
\hat k - \hat m).
\end{array}$$ It follows that $Z (\hat n', \hat n, \hat k, \hat m )
\not= 0$ exactly if $\hat n - \hat m \in BZF, \hat n + \hat k - \hat m
\in BZF_N$ and $\hat n' = \hat n + \hat k - \hat m$.\
[**3.3.4:**]{} Since the permutations $S$ and $T$ are uniquely defined by the sequences of indices $(\nu_1, \cdots, \nu_N)$, $(\mu_1, \cdots,
\mu_M)$ and $(\kappa_1, \ldots, \kappa_M)$ or equivalently by $\hat n,
\hat m$ and $\hat k$ it is obvious to define the function $C$ by $$\label{3.30}
\begin{array}{ll} C (\hat n, \hat k, \hat m) \;= \sigma^- (T \cdot S)
= \pm 1, \mathit{if} \; \hat n - \hat m \in BZF \;\mathit{and} \; \hat
n + \hat k - \hat m \in BZF_N\\ \quad\quad\quad\quad\quad\; =
0,\quad\quad\quad\quad\quad\quad \text{otherwise}.
\end{array}$$ Now inserting (\[3.29\]) together with (\[3.30\]) into (\[3.12\]), Formula (\[3.11\]) is seen to hold, and thus also Lemma 3.5. With respect to the sums $\sum_{\hat k}$ and $\sum_{\hat
m}$ I refer to Remark 3.3 and Conclusion 3.6.
An algorithm for $ C(\hat n, \hat k, \hat m)$
---------------------------------------------
[**3.4.1:**]{} The question to be answered in this section reads: is there a finite procedure for calculating $C (\hat n, \hat k, \hat m)$ if $\hat n,
\hat k, \hat m$ are given $bzf$? As in Section 3.3 the more general case $2 \leq M < N$ is considered.
Since by definition $C (\hat n, \hat k, \hat m) = 0$ if the condition $$\label{3.31} \hat n - \hat m \in BZF\quad
\mathit{and} \quad \hat n + \hat k - \hat m \in BZF_N$$ does not hold, only the case needs to be considered that (\[3.31\]) is true. Then $$\label{3.32} C(\hat n, \hat k, \hat m): = \sigma^- (T
\cdot S) = (-1)^{J(T)} (-1)^{J(S)},$$ where $S$ is defined by (\[3.22\]) and $T$ by (\[3.28\]). Moreover, $J(P)$ here means the number of inversions of a permutation $P$ (cf. e.g. (\[A1.16\])).\
[**3.4.2:**]{} To begin with, $J(S)$ is to be calculated. Let $\hat n$ be given. Then exactly $N$ numbers $\nu_i, i=1, \cdots, N$ exist such that $n_{\nu_i} = 1$ and $\nu_1 < \cdots <\nu_N$. Hence $\hat n \leftrightarrow (\nu_1
\cdots, \nu_N)$. Likewise, if $\hat m$ is given, exactly $M$ numbers $\mu_j, j=1, \cdots M$ exist such that $m_{\mu_j} = 1$ and $\mu_1 <
\cdots < \mu_M$.
Because of $\hat n - \hat m \in BZF$ for each $j \in \{1, \cdots M\}$ there is an $r_j$ such that $$\label{3.33} \mu_j = \nu_{r_j} \; \mathit{and} \; j
\leq r_j \;.$$ The permutation $S$ is defined by (\[3.22\]), i.e. $$\label{3.34} (\nu_{S^{-1}(1)}), \cdots,
\nu_{S^{-1}(N)}) = (\mu_1, \cdots, \mu_M, \varrho_1, \cdots,
\varrho_{N-M})$$ and $\varrho_1 < \cdots < \varrho_{N-M}$. Then the right-hand side of (\[3.34\]) can be generated from $(\nu_1, \cdots,
\nu_N)$ by the following procedure.
First, $\mu_1 = \nu_{r_1}$ is positioned at the $r_1$-th place in $(\nu_1, \cdots, \nu_N)$. Therefore one needs $r_1 - 1$ inversions to bring $\mu_1$ to the first place. Thereby the positions of $\mu_2, \cdots, \mu_M$ in $(\nu_1, \cdots,\nu_N)$ are not changed.
Second, $\mu_2 = \nu_{r_2}$ is positioned at the $r_2$-th place in $(\nu_1, \cdots, \nu_N)$ so that one needs $r_2 - 2$ inversions to bring $\mu_2$ to the second place. Again the positions of $\mu_3,
\cdots, \mu_M$ are not changed.
Thus, in order to bring $\mu_j = \nu_{r_j}$ to position $j$ one needs $r_j - j$ inversions. Therefore the total number of inversions, which realize the permutation $S$ in (\[3.34\]), is given by $$\label{3.35} J(S) = \sum_{j=1}^M r_j - \frac{1}{2} M
(M+1).$$ [**3.4.3:**]{} Now, $J(T)$ is to be determined. This task is the following. Let $\hat n - \hat m$ and $\hat k$ be given. Then $\hat n - \hat m \in BZF$ corresponds to the sequence of indices $(\varrho_1, \cdots, \varrho_{N-M})$ and $\hat k$ to the sequence $(\kappa_1, \cdots, \kappa_M,)$. Thus the sequence of indices $(\kappa_1, \cdots, \kappa_M, \varrho_1 \cdots, \varrho_{N-M})$, brought to its natural order by the permutation $T$ and denoted $(\nu_1', \cdots, \nu'_N)$, corresponds to $\hat n + \hat k - \hat m
\in BZF$. Hence $$\label{3.36} (\nu'_{T(1)}), \cdots, \nu'_{T(N)}) =
(\kappa_1, \cdots, \kappa_M, \varrho_1, \cdots, \varrho_{N-M}).$$ Therefore, because $\hat n + \hat k - \hat m$ is given, also $(\nu'_1, \cdots, \nu'_N)$ is determined so that for each $j \in
\{1, \ldots, M\}$ the position $s_j$ of $\kappa_j$ in $(\nu'_1,
\cdots, \nu'_N)$ can be read off with the help of the relations $$\label{3.37} \kappa_j = \nu'_{s_j} \; \mbox{and}\; j
\leq s_j \;.$$ Using the same arguments as in Section 3.4.2 one obtains for $T^{-1}$ the result $$\label{3.38} J(T) = J(T^{-1}) = \sum_{j=1}^M s_j
-\frac{1}{2} M (M+1)\;.$$ [**3.4.4:**]{} Finally, the algorithm for $C (\hat n,
\hat k, \hat m)$ can be formulated thus:\
[**1$^{st}$ step:**]{} Take $bzf\;\hat n, \hat k\ \text{and}\ \hat m$ which fulfil the equations $\sum_\alpha n_\alpha = N$, $\sum_\alpha k_\alpha = \sum_\alpha m_\alpha = M$, and test Condition (\[3.31\]). If it is satisfied go to the next step. If it is not, define $C (\hat n, \hat k, \hat m) = 0$, so that the task has been done.\
[**2$^{nd}$ step:**]{} Take $\hat n, \hat m$ and determine the corresponding sequences of indices $(\nu_1, \cdots,
\nu_N)$ and $(\mu_1, \cdots, \mu_M)$. Then from (\[3.33\]) read off the numbers $r_j, j=1, \cdots, M$, and calculate $J(S)$ with the help of (\[3.35\]).\
[**3$^{rd}$ step:**]{} Take $\hat n + \hat k - \hat
m$ and $\hat k$, and determine the corresponding sequences $(\nu'_1,
\cdots, \nu'_N)$ and $(\kappa_1, \cdots, \kappa_M)$. Then from (\[3.37\]) read off the numbers $s_j, j=1, \cdots, M$ and calculate $J(T)$ with the help of (\[3.38\]).\
[**4$^{th}$ step:**]{} Calculate $C (\hat n, \hat k, \hat m)$ using (\[3.32\]).\
The coefficients $C (\hat n, \hat k, \hat m)$ do not depend on the specific physical system for which they are used; rather, they are purely combinatorial. In other words, they result solely from the algebraic structure imposed on the set $BZF$. Therefore they can be computed once and for all. A trivial special result is the following.\
If $\hat k = \hat m$, then $T^{-1} = S$ so that $$\label{3.39} C (\hat n, \hat m, \hat m) = 1.$$
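The four steps admit a direct transcription into a small routine. The following sketch (Python; the function names are ours, the $bzf$ are represented by equally long 0/1 lists as in the fragment given after Notation 3.2, and no claim to optimality is made) implements Condition (\[3.31\]) together with Formulas (\[3.32\]), (\[3.35\]) and (\[3.38\]):

```python
def to_indices(bzf):                          # as in the earlier fragment
    return [i + 1 for i, occ in enumerate(bzf) if occ == 1]

def is_bzf(seq):
    return all(x in (0, 1) for x in seq)

def coefficient_C(n, k, m):
    """The combinatorial sign C(n, k, m) of (3.30)/(3.32); n, k, m are equally long
    0/1 lists with sum(n) = N and sum(k) = sum(m) = M."""
    d_nm = [a - b for a, b in zip(n, m)]              # n - m
    nkm = [a + b - c for a, b, c in zip(n, k, m)]     # n + k - m
    if not (is_bzf(d_nm) and is_bzf(nkm)):            # 1st step: Condition (3.31)
        return 0
    M = sum(k)
    nu, mu = to_indices(n), to_indices(m)             # 2nd step: r_j from mu_j = nu_{r_j}
    J_S = sum(nu.index(x) + 1 for x in mu) - M * (M + 1) // 2           # formula (3.35)
    nu_prime, kappa = to_indices(nkm), to_indices(k)  # 3rd step: s_j from kappa_j = nu'_{s_j}
    J_T = sum(nu_prime.index(x) + 1 for x in kappa) - M * (M + 1) // 2  # formula (3.38)
    return (-1) ** (J_S + J_T)                        # 4th step: formula (3.32)

# consistency with (3.39): C(n, m, m) = 1 whenever n - m is a bzf
assert coefficient_C([1, 1, 1, 0], [1, 1, 0, 0], [1, 1, 0, 0]) == 1
```

Since, as remarked above, the coefficients are purely combinatorial, a table of them can be precomputed once and reused for any Hamiltonian of the form (\[2.2\]).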
The final form of $\langle\hat n' |H^-_N| \hat n \rangle$
---------------------------------------------------------
[**3.5.1:**]{} For the sake of simplicity, in this section only the special case $M = 2$ is considered. This does not entail any loss, because the Hamiltonians considered in this paper are supposed to have two-particle interactions. The starting point for this section therefore is (\[3.7\]). Moreover it is assumed that the matrix elements $E(\hat k, \hat m)$ of the dummy Hamiltonian $H^-_{20}$ and the coefficients $C (\hat n, \hat k, \hat m)$ are given.
Now the [*problem*]{} to be solved reads as follows. *Let the pair $\hat n', \hat n$ be given. Then determine those pairs $\hat k,
\hat m$ for which the summands in $\sum_{\hat k}$ and $\sum_{\hat m}$ do not vanish on general grounds.*
It is known from the previous considerations that for given $\hat n',
\hat n$ only those $\hat k$ and $\hat m$ in (\[3.7\]) are relevant which satisfy the condition $$\label{3.40} \hat n - \hat m \in BZF, \quad \hat n +
\hat k - \hat m \in BZF_N,\quad \hat n' = \hat n + \hat k - \hat m,$$ hence also $$\label{3.41} \hat n' - \hat k \in BZF, \quad \hat n'
- \hat n = \hat k - \hat m .$$ Consequently, the sums in (\[3.7\]) have only finitely many summands, as already remarked in Conclusion 3.6.\
[**3.5.2:**]{} In this section a disjoint dissection of all possible pairs $\hat k, \hat m$ for given $\hat n',
\hat n$ will be defined. For this purpose it is useful to introduce some new\
[**Notation 3.8:**]{} Let $\hat k, \hat m \in BZF_2$ so that $\sum k_\alpha =\sum m_\beta = 2$ is satisfied. Then the sequence $\hat d:= \hat k - \hat m$ is called a difference sequence. The set of all difference sequences is denoted $\mathcal{D}$.
Consequently there are only three types ${\mathcal{D}}_{\varrho},
\varrho = 0, 1, 2$ of possible $\hat d$ generated by $\hat d = \hat k
- \hat m:$\
${\mathcal{D}}_0$ contains only one element $\hat o: =
(0, \cdots, 0,\cdots)$, and $\hat o$ is generated by all $\hat k, \hat
m$ with $\hat k = \hat m$.\
${\mathcal{D}}_1$ contains $\hat d = (1,
-1, 0, \cdots)$ and all permutations of it. They are generated by all $\hat k, \hat m$ such that for exactly one $\alpha$ the relation $k_\alpha = m_\alpha = 1$ holds.\
${\mathcal{D}}_2$ contains $\hat d
= (1, 1, -1,-1, 0, \cdots)$ and all permutations of it. They are generated by all $\hat k, \hat m$ for which no $\alpha$ exists such that $k_\alpha = m_\alpha = 1$.\
It follows immediately that the sets ${\mathcal{D}}_0, {\mathcal{D}}_1, {\mathcal{D}}_2$ are disjoint and that $$\label{3.42} {\mathcal{D}} = {\mathcal{D}}_0 \cup
{\mathcal{D}}_1 \cup {\mathcal{D}}_2.$$ Many results of the next sections and chapters are based on the following\
[**[Proposition 3.9:]{}**]{} For each pair $\hat
n', \hat n \in BZF_N$ there is a set $$\label{3.43} \{\hat d_1, \ldots, \hat d_L \}
\subset{\mathcal{D}}_1$$ such that $$\label{3.44} \hat n' = \hat n + \hat d_1 + \ldots
\hat d_L.$$ The number $L$ is uniquely determined by $\hat n'$ and $\hat n$, but the difference sequences $\hat d_j, j = 1, \ldots, L$ are not.\
[**[Proof:]{}**]{} From the pair $\hat n', \hat n$ one forms the matrix $$\label{3.45} X: = \left( \begin{array}{c} \hat n\\
\;\hat n'
\end{array} \right) = \left(
\begin{array}{c} n_1, n_2, \cdots \\ n'_1, n'_2, \cdots
\end{array} \right).$$ For each column of $X$ the following alternative holds: $$\label{3.46} \binom{n_\varrho}{n'_\varrho} = \binom
00\ \text{or}\ {\binom 11}\ \text{or}\ \binom 01\ \text{or}\ \binom 10.$$ Because $\hat n$ and $\hat n'$ both contain $N$ numbers 1, there are equally many columns $\binom 01$ and $\binom10$. Let $L$ be the number of each of the two kinds. Moreover let $\varrho_j$ and $\sigma_j$, $j=1, \cdots, L$ be numberings of the indices of these columns such that
= \binom 01 \quad \text{and} \quad \binom{n_{\sigma_j}}{n'_{\sigma_j}}
=\binom 10.$$ Then for each pair of indices $\varrho_j, \sigma_j, j=
1, \cdots, L$ define $\hat d_j = (d_{j1}, d_{j2}, \cdots )$ by $d_{j
\varrho_j} = 1, d_{j \sigma_j} = -1$ and $d_{j\alpha} = 0, \alpha
\not= \varrho_j, \sigma_j$.\
It follows from (\[3.47\]) that $$\label{3.48} \hat n + \hat d_j = (\cdots, n_\alpha,
\cdots, n'_{\varrho_j}, \cdots, n'_{\sigma_j}, \cdots, n_\beta,
\cdots)\;,$$ if $\varrho_j < \sigma_j$, and analogously, if $\sigma_j < \varrho_j$.\
Since $j \not= i$ implies $\varrho_j \not=
\varrho_i, \sigma_j \not= \sigma_i$ the addition of $\hat d_i$ to $\hat n + \hat d_j$ can be carried through without altering $n'_{\varrho_j}$ and $n'_{\sigma_j} $ in (\[3.48\]). Thus, finally one ends up with (\[3.44\]) so that the proposition is proved.\
Three immediate [*consequences*]{} are useful later on.
1. For each pair $i, j$ with $i \not= j$ the relation $\hat
d_i + \hat d_j \in {\mathcal{D}}_2$ holds.
2. $\hat n' - \hat n \in {\mathcal {D}}_\varrho$, exactly if $L = \varrho,\; \varrho = 0, 1, 2$.
3. $\hat n' - \hat n \notin {\mathcal D}$ exactly if $L \geq 3$.
[**3.5.3:**]{} Using the results of the previous sections the problem formulated in 3.5.1 now can be solved by giving a disjoint classification of the matrix elements defined by (\[3.7\]). According to (\[3.42\]) four cases have to be taken into account.\
[**1$^{st}$ case:**]{} $\hat n' - \hat n \notin
{\mathcal{ D}}$. It follows immediately from (\[3.7\]) that $$\label{3.49} \langle \hat n' | H^-_N | \hat n\rangle
= 0.$$ [**2$^{nd}$ case:**]{} $\hat n' - \hat n \in {\mathcal
D}_0$, i.e. $\hat n' = \hat n$. Then $\langle \hat n | H^-_N | \hat
n\rangle$ is nonzero only if also $\hat k = \hat m$. Hence the double sum $\sum_{\hat k} \sum_{\hat m}$ reduces to a single sum $\sum_{\hat m}$. This sum runs over all $\hat m$ for which $\hat n -
\hat m \in BZF$ holds. Since $\hat n$ contains $N$ numbers 1 there are exactly $\binom N 2$ different sequences $\hat m$ such that this condition is satisfied. Because of $C (\hat n, \hat m, \hat m) = 1$, define $$\label{3.50} {\mathcal{E}} (\hat n, \hat o) =
\sum_{\hat m} E (\hat m, \hat m)$$ for all $\hat m$ with $\hat n - \hat m \in BZF$. Thus finally $$\label{3.51} \langle \hat n | H^-_N| \hat n \rangle =
{\mathcal{ E}} (\hat n, \hat o)\;.$$ [**3$^{rd}$ case:**]{} $\hat n' - \hat n = \hat d_1 \in
{\mathcal{ D}}_1$. Hence $\hat n + \hat d_1 \in BZF$. Then $\langle
\hat n' |H^-_N| \hat n \rangle$ is nonzero only if $\hat k =
\hat m + \hat d_1$. Therefore the double sum $\sum_{\hat k}
\sum_{\hat m}$ again reduces to a single sum $\sum_{\hat m}$ which runs over all $\hat m$ so that $\hat n - \hat m \in BZF$ and $\hat m +
\hat d_1 \in BZF$. These $\hat m$ can be characterized as follows.
Let $\hat d_1$ be given by $d_{1\kappa} = 1, d_{1\mu} = -1$ and $
d_{1\beta} = 0, \beta \not= \kappa, \mu$. Hence $n_\kappa = 0$ and $n_\mu = 1$. Then $\hat m + \hat d_1 \in BZF$ if and only if $m_\kappa
= 0$ and $m_\mu = 1$. In order to satisfy the condition $\hat n - \hat
m \in BZF$ it is necessary and sufficient that $n_\mu = 1$ and that there is an $\alpha \not= \kappa, \mu$, for which $n_\alpha = 1$ and $m_\alpha = 1$ holds. Since $n_\kappa = 0$ and $n_\mu = 1$ there are $N-1$ numbers $\alpha \not= \kappa,\mu$ for which $n_\alpha = 1$ so that $\sum_{\hat m}$ runs over all $\hat m$ for which $m_\alpha =
m_\mu = 1$. Now define $$\label{3.52} {\mathcal{E}} (\hat n, \hat d_1) =
\sum_{\hat m} C(\hat n, \hat m + \hat d_1, \hat m) E (\hat m + \hat
d_1, \hat m).$$ Then $$\label{3.53} \langle \hat n' | H^-_N| \hat{n} \rangle
= {\mathcal{E}} (\hat{n}, \hat{d_1}),$$ if $\hat n' = \hat{n} + \hat{d_1},\: \hat{d_1} \in
{\mathcal{D}}_1$\
[**4$^{th}$ case:**]{} $\hat n' - \hat n = \hat d_2
\in {\mathcal{ D}}_2$. Let $\hat d_2$ be defined by $d_{2 \kappa} =
d_{2 \lambda} = 1, d_{2\mu} = d_{2\nu} = -1$. Then $\hat k$ and $\hat
m$ with $\hat k - \hat m = \hat d_2$ are uniquely determined by $k_\kappa = k_\lambda = 1$ and $m_\mu = m_\nu =1$. Hence the double sum $\sum_{\hat k} \sum_{\hat m}$ reduces to a single term. Now define $$\label{3.54} {\mathcal{E}} (\hat n, \hat d_2) =
C(\hat{n}, \hat m + \hat d_2, \hat m) E (\hat m + \hat d_2, \hat m)\;,$$ where $\hat m$ is determined by $ m_\mu = m_\nu =
1$. Then, if $\hat{n}'-\hat{n} = d_2 \in {\mathcal{D}}_2$ $$\label{3.55} \langle \hat n' | H^-_N | \hat n \rangle
= {\mathcal{E}} (\hat{n}, \hat{d_2}).$$ [**3.5.4:**]{} Summing up one obtains\
[**Proposition 3.10:**]{} 1.) The matrix elements of the fermionic Hamiltonian $H^-_N$ defined in Assumption 3.1 (Section 3.1.1.) are given by $$\label{3.56}
\begin{array}{lll} \langle \hat n' |H^-_N| \hat{n} \rangle &=
{\mathcal{E}} (\hat{n}, \hat n' - \hat{n}), &\mbox{if} \quad \hat n' -
\hat n \in \mathcal{D} \\ &= 0 , &\mbox{if} \quad \hat n' - \hat n
\notin {\mathcal{D}}
\end{array}$$ and $$\label{3.57} {\mathcal{E}} (\hat n', \hat n - \hat
n') = \bar{\mathcal{E}} (\hat n, \hat n' - \hat n).$$ 2.) Let be given $\hat n \in BZF$ and $\hat d \in
{\mathcal{D}}$ so that $\hat n + \hat d \in BZF.$ Then it follows from Formula (\[3.7\]) that $$\label{3.58} {\mathcal{E}} (\hat n, \hat d) =
\sum_{\hat m} C (\hat n, \hat m + \hat d, \hat m) E (\hat m + \hat d,
\hat m)\; ,$$ where the sum runs over all $\hat m$ for which $\hat n
- \hat m \in BZF$ and $\hat m + \hat d =: \hat k \in BZF_2$. Hence, the matrix elements $ \langle \hat n' |H^-_N| \hat{n} \rangle $ are determined solely by the matrix elements $E (\hat k , \hat m)$ of $
H^-_{20} $, for which $ \hat k -\hat m = \hat d = \hat n' -\hat n
$. The sum in (\[3.58\]) is finite.
For bosons a result holds which is formally identical to (\[3.7\]), (\[3.56\]) and (\[3.58\]), but with the terms $C, E, {\mathcal{E}}$ defined differently.
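The bookkeeping behind Proposition 3.10 can be made concrete by a short Python sketch. It is not part of the formal development: it only reproduces the case distinction of Section 3.5.3 by listing, for two given $bzf$ $\hat n, \hat n'$ (represented by the sets of their occupied orbitals), the pairs $(\hat k, \hat m)$ over which the sum in (\[3.58\]) runs; the sign factors $C$ and the matrix elements $E$ of $H^-_{20}$ are deliberately left out.

```python
from itertools import combinations

def contributing_pairs(n_occ, n_occ_prime):
    """List the two-particle index pairs (k, m) entering Formula (3.58).

    A bzf is represented by the set of its occupied orbitals, so
    |n_occ| = |n_occ_prime| = N.  Returned pairs are 2-element frozensets;
    the sign factor C and the elements E(k, m) of H^-_20 are not evaluated
    here -- only the bookkeeping of Section 3.5.3 is reproduced.
    """
    A, B = frozenset(n_occ), frozenset(n_occ_prime)
    removed, added = A - B, B - A          # positions of the -1's / +1's of d = n' - n
    L = len(removed)                       # d lies in D_L (Proposition 3.9)
    if L > 2:                              # 1st case: n' - n not in D
        return []
    if L == 0:                             # 2nd case: d = o, binom(N, 2) diagonal terms
        return [(frozenset(m), frozenset(m)) for m in combinations(A, 2)]
    if L == 1:                             # 3rd case: N - 1 terms
        (mu,), (kappa,) = removed, added
        return [(frozenset({kappa, a}), frozenset({mu, a}))
                for a in A if a != mu]
    # 4th case: d in D_2, exactly one term, k and m uniquely determined
    return [(frozenset(added), frozenset(removed))]

# Example: N = 3 particles in the orbitals {1, 2, 3}; n' differs by one jump 3 -> 5.
print(contributing_pairs({1, 2, 3}, {1, 2, 5}))        # N - 1 = 2 pairs
print(len(contributing_pairs({1, 2, 3}, {1, 2, 3})))   # binom(3, 2) = 3 diagonal terms
```

In particular, the number of returned pairs is $\binom N2$, $N-1$, $1$ or $0$ for $L = 0, 1, 2$ and $L \geq 3$, respectively.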
Heuristic Considerations
========================
The result of Chapter 3, summarized in Formula (\[3.56\]), is completely formal up to now. This is due to two unsolved questions connected with it. They read as follows.\
*First, can Assumption 3.1 be verified? More concretely, is it possible to find a one-particle Hilbert space ${\mathcal{H}}^1$ such that the dummy Hamiltonian $H^-_{20}$ is defined on ${\mathcal{H}}^2_-$ and is bounded, and such that in addition the spectrum of $H^-_{20}$ contains the discrete eigenvalues of the dummy Hamiltonian $\bar{H}^-_{20}$, which is the operator one starts from?* (Cf. e.g. (\[A2.1\]), (\[A2.4\]), (\[A2.6\]).)\
*Second, is Formula (\[3.56\]) of any advantage for the spectral problem of $H^-_N$?*\
[**4.2:**]{} To begin with, let us look for an answer to the first question. As is already described in Subsection 2.1, any investigation of the Hamiltonian $\bar H_N$ of an $N$-particle system starts with a more or less informal specification of the external fields acting upon the particles and of their interactions. Customarily this is done using the position-spin representation. Then $\bar H_N$ is of the form (\[1.1\]) or, what is the same, (\[2.1\]). It is densely defined in a Hilbert space $\bar
{\mathcal{H}}^N$. Likewise the corresponding dummy Hamiltonian $\bar{H}^-_{20}$ can be immediately written down as is shown for two examples in appendix A2. It is defined in a dense linear submanifold of $\bar {\mathcal{H}}^2$.
In order to verify Assumption 3.1 the space $\bar{\mathcal{H}}^1$ has to be properly restricted to a subspace ${\mathcal{H}}^1$. Such a restriction in turn can be carried through by finding a proper orthonormal system ${\mathcal{O}}_1$ in $\bar{ \mathcal{H}}^1$ so that ${\mathcal{H}}^1 = \mathit{span}\; {\mathcal{O}}_1$. Keeping in mind the physical meaning of ${\mathcal{H}}^1$ suggests using the Hartree-Fock procedure for $\bar H^-_{20}$ to determine ${\mathcal{O}}_1$. (For the details cf. Appendix A.3.) This is because that procedure is based on the Ritz variational principle, which guarantees optimal approximation. Disregarding the fact that the Hartree-Fock procedure generally is an infinite task, let us assume that it is completely carried through for the dummy Hamiltonian $\bar H^-_{20}$ in $\bar{\mathcal{H}}^2_-$. Thus one has obtained ${\mathcal{O}}_1$ and an orthonormal system ${\mathcal{O}}_2 \subset \bar{\mathcal{H}}^2_-$ of vectors $$\label{4.1} \Psi^-_{\kappa \lambda} =
\frac{1}{\sqrt{2}} (\phi_{\kappa} \otimes \phi_\lambda - \phi_\lambda
\otimes \phi_\kappa),\; \kappa < \lambda .$$\
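As a small numerical illustration (not needed for the argument), the following Python/NumPy sketch builds the vectors (\[4.1\]) from an arbitrary orthonormal system and checks that they are orthonormal; the dimension $d$ and the concrete one-particle system are illustrative choices.

```python
import numpy as np
from itertools import combinations

d = 4                                           # illustrative one-particle dimension
phi = np.linalg.qr(np.random.randn(d, d))[0]    # columns form an orthonormal system O_1

def psi_minus(k, l):
    """Antisymmetrized two-particle vector of Formula (4.1), kappa < lambda."""
    return (np.kron(phi[:, k], phi[:, l]) - np.kron(phi[:, l], phi[:, k])) / np.sqrt(2)

pairs = list(combinations(range(d), 2))
G = np.array([[psi_minus(*p) @ psi_minus(*q) for q in pairs] for p in pairs])
print(np.allclose(G, np.eye(len(pairs))))       # Gram matrix is the identity: O_2 is orthonormal
```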
By definition, ${\mathcal{O}}_1$ is an ONB of ${\mathcal{H}}^1$; therefore ${\mathcal{O}}_2$ is an ONB of ${\mathcal{H}}^2_-$. The corresponding energy levels $E_{\kappa \lambda} = E( \hat m, \hat m)$ for $\kappa, \lambda \leftrightarrow \hat m$ approximate the discrete eigenvalues of $\bar H^-_{20}$ outside its continuous spectrum. Since the Hamiltonians considered in this paper are supposed to have a bounded discrete spectrum, the set of the energy levels $\{E_{\kappa \lambda}\}$ is also bounded.
Then the restriction $H^-_{20}$ of $\bar{H}^-_{20}$ to the space ${\mathcal{H}}^2_-$ is bounded because its spectrum is approximated by the set $\{E_{\kappa \lambda}: \kappa < \lambda\}$ and its eigenvectors by the set ${\mathcal{O}}_2$. As usual $H^-_{20}$ can be defined on the whole space ${\mathcal{H}}^2_-$ using its matrix representation with respect to the ONB ${\mathcal{O}}_2$.\
In most cases of practical application the complete Hartree-Fock procedure cannot be carried out, because it is infinite. Therefore one has to content oneself with a finite section of this procedure. But even such a finite procedure can be complicated.\
Thus other methods were invented which are equivalent to the Hartree-Fock procedure or approximate it.
[**[4.3:]{}**]{} Let us now come to the second question. From (\[3.56\]) one draws immediately a simple consequence.\
[**Proposition 4.1:**]{} The matrix of the Hamiltonian $H^-_N = \Omega^-_N(H^-_{20})$ is diagonal if the matrix of the dummy Hamiltonian $H^-_{20}$ is diagonal. Unfortunately this result cannot be used to obtain the exact discrete eigenvalues of a realistic $N$-particle system, because the exact eigenvectors of a dummy Hamiltonian $H^-_{20}$ containing interaction are not elements of any ONB $\mathcal{B}^-_2$. But, if one contents oneself with a Hartree-Fock approximation of the eigenvalues of $H^-_{20}$, Proposition 4.1 delivers a Hartree-Fock-like approximation for the eigenvalues of $H^-_N$.
Thus, if one wants to obtain better approximations, one has to solve the following *problem*. Since the results of Chapter 3 are valid for arbitrary orthonormal bases $\mathcal {B}_1 \subset \mathcal {H}^1$, one first has to choose such an ONB and to calculate the matrix elements $E (\hat k, \hat m)$ of $H^-_{20}$ for the ONB $\mathcal{B}^-_2$. The second part of the problem then is the question whether the choice of $\mathcal{B}_1$ is helpful for a reasonable approximation of $H^-_{20}$ and also of $H^-_N$.\
A heuristic idea to cope with this problem is the following. Choose the ONBs $\mathcal{B}_1$ and ${\mathcal{B}}^-_2$ such that the matrix $E$ of $H^-_{20}$ with respect to ${\mathcal{B}}^-_2$, i.e. the matrix defined by the elements $E (\hat k, \hat m)$, is “as diagonal as possible”. This means the elements of ${\mathcal{B}}^-_2$ should approximate the eigenvectors of $H^-_{20}$ optimally. Hence we end up again with the Hartree-Fock method, an equivalent of it or an approximation. Therefore $\mathcal{B}_1 = \mathcal{O}_1$ and ${\mathcal{B}}^-_2 = {\mathcal{O}}_2$ with $\mathcal{O}_1$ and ${\mathcal{O}}_2$ being defined in Section 4.2 and in Appendix A.3.
In order to get further insight into the general structure of the matrix $E$ I will give some purely heuristic arguments, which are based on physical intuition. For this purpose let the Hartree-Fock energy levels $E_{\mu \lambda}$ be numbered such that $E_{\mu \nu}
\leq E_{\mu \lambda}$ if $\mu < \nu < \lambda$. Then, if $\lambda$ is large enough, i.e. if it exceeds a certain value $\bar \alpha$, one expects that the particle in state $\lambda$ is “almost” free, so that the interaction between the two particles in the states $\mu$ and $\lambda$ is “almost” zero. Thus the two particles with states $\mu$ and $\lambda$ are “almost” free if one particle of this pair is “almost” free. This implies that the vector $\Psi^-_{\mu \lambda} \in
{\mathcal{H}}^2_-$ is “almost” an eigenvector of $H^-_{20}$. Now let $\mu, \lambda$ correspond to a sequence of occupation numbers $\hat
m$. Then $E(\hat m, \hat m)$ is “almost” an eigenvalue of $H^-_{20}$ for the eigenvector $\Psi^-_{\mu \lambda } = \Psi^-_{2} (\hat m)$, so that $E (\hat k, \hat m)$ is “almost” equal to $E (\hat m, \hat m)
\delta (\hat k, \hat m)$. Consequently $E (\hat k, \hat m)$ is “small”, i.e. it is “almost” zero, if $\hat k \not= \hat m$.
Now let us suppose that the term “small” has been made precise. Then the above considerations can be summarized in the following [*assumption*]{}.
[*There is a natural number $\bar \alpha$ such that $E (\hat k, \hat m)$ is small for each pair $\hat k, \hat m$ with $\hat k \not= \hat m$ for which there exists a $\sigma > \bar \alpha$ with $k_\sigma=1$ or a $\varrho > \bar \alpha$ with $m_\varrho = 1$.*]{}
This suggests truncating the matrix $E$ by substituting zeros for its small elements. Then intuitively one conjectures that the truncated matrix leads to an approximate solution of the spectral problem of $H^-_N$ which is the goal of this paper. The precise meaning of the conjecture will be given in Section 5.1.\
[**4.4:**]{} The first step is defining the truncated dummy Hamiltonian $\hat H^-_{20}$ and drawing some consequences. The operator $\hat H^-_{20}$ is determined by its matrix $\hat E$, which is given by the elements: $$\label{4.3}
\begin{array}{l} \hat E (\hat k, \hat m)= 0,\; \mbox{if} \; \hat k
\not= \hat m \;\mbox{and if} \; \sigma, \varrho \; \mbox{exist such
that} \\ \phantom{\hat E (\hat k, \hat m)= 0,\;\; } k_\sigma = 1,\;
\sigma > \bar{ \alpha}\;\; \mbox{or} \;\; m_\varrho = 1,\; \varrho >
\bar {\alpha}, \\ \hat E (\hat k, \hat m)= E (\hat k, \hat m), \;
\quad\mbox{otherwise} .
\end{array}$$ According to this definition $\hat E$ consists of a finite, in general nondiagonal matrix of order $\binom{\bar \alpha} 2$ together with an infinite diagonal tail of elements $E (\hat m, \hat m)$, where $\hat m$ contains at least one $m_\varrho = 1,\; \varrho > \bar \alpha$.
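The truncation rule (\[4.3\]) is purely mechanical, as the following Python sketch shows; it is an illustration only, with the two-particle matrix $E$ encoded as a dictionary over pairs of index tuples and with illustrative numerical entries.

```python
def truncate(E, alpha_bar):
    """Truncation rule of Formula (4.3).

    E is a dict mapping pairs of bzf -- here encoded as sorted index tuples
    (kappa_1, kappa_2) with kappa_1 < kappa_2 -- to matrix elements of H^-_20.
    Off-diagonal entries with any index beyond alpha_bar are replaced by zero;
    all diagonal entries are kept, which produces the infinite diagonal tail.
    """
    E_hat = {}
    for (k, m), value in E.items():
        off_diagonal = (k != m)
        beyond = any(idx > alpha_bar for idx in k + m)
        E_hat[(k, m)] = 0.0 if (off_diagonal and beyond) else value
    return E_hat

# Illustrative entries: the element ((1,2),(3,5)) is dropped for alpha_bar = 4.
E = {((1, 2), (1, 2)): -1.3, ((1, 2), (3, 4)): 0.2, ((1, 2), (3, 5)): 0.1}
print(truncate(E, alpha_bar=4))
```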
Consequently, $\hat H^-_{20}$ is bounded so that $\hat H^-_{N} =
\Omega^-_N (\hat H^-_{20})$ is also bounded and has the form (\[3.56\]), but with $\hat {\mathcal{E}}(\hat n, \hat d)$ defined by $\hat E (\hat k, \hat m)$ the same way as ${\mathcal{E}} (\hat n, \hat
d)$ is determined by $E (\hat k, \hat m)$. Then we obtain the following\
[**Proposition 4.2:**]{} Let us consider $\langle\hat n'
|\hat H^-_N| \hat n \rangle$.\
1.) If $\hat n' - \hat n \notin
{\mathcal{D}}$, one concludes from (\[3.7\]) that $\hat n' - \hat n
\not= \hat k - \hat m$ which implies (cf. (\[3.44\])) $$\label{4.4} \langle\hat n' |\hat H^-_N| \hat n
\rangle = \langle \hat n' | H^-_N| \hat n\rangle = 0.$$ 2.) If $\hat n' - \hat n = \hat d \in {\mathcal{D}}$ and if there is $\lambda > \bar \alpha$ such that $d_\lambda \not= 0$, it follows from $\hat d = \hat k - \hat m$ that $k_\lambda = 1$ or $m_\lambda = 1$. Hence $\hat E (\hat k, \hat m)=0$, and consequently $$\label{4.5} \langle\hat n' |\hat H^-_N| \hat n
\rangle = 0.$$ 3.) Now let $\hat n' - \hat n = \hat d \in
{\mathcal{D}}$ and suppose that $d_\lambda = 0$, if $\lambda >\bar{
\alpha}$. Then if $\hat d \in {\mathcal{D}}_2$, the $bzf$ $\hat k$ and $\hat m$ are uniquely determined and one obtains (cf. Formula (\[3.54\])) $$\label{4.6} \langle\hat n' |\hat H^-_N| \hat n\rangle
= \hat{\mathcal{E}} (\hat n, \hat d) = {\mathcal{E}} (\hat n, \hat d)
= \langle\hat n' | H^-_N| \hat n\rangle.$$ If $\hat d \in {\mathcal{D}}_1$, things are more complicated. One has to apply the general method described in Section 3.5.3 for several special cases. The matrix element\
$\langle\hat n'
|\hat H^-_N| \hat n\rangle$ can be zero or nonzero, and it need not be equal to $\langle\hat n' |H^-_N| \hat n\rangle$.\
[**4.5:**]{} By these considerations the problem of determining the eigenvalues of $H^-_N$ is transformed into the following two problems.\
*First, do the eigenvalues of $\hat H^-_N$ approximate those of $H^-_N$?*\
*Second, can the eigenvalues of $\hat H^-_N$ be calculated, at least partially?*\
The first problem is treated in Chapter 5, the second one in Chapter 6.
Spectral Approximation
======================
The dummy operators $H^-_{20}$ and $\hat H^-_{20}$ are defined on ${\mathcal{H}}^2_-$ and are bounded so that $D: = H^-_{20} -
{\hat{H}}^-_{20}$ is also bounded and defined on ${\mathcal{H}}^2_-$. Moreover ${\hat{H}}^-_{20}$ and $D$ depend on the fixed number $\bar \alpha$. In what follows these operators are understood to be functions of a parameter $\alpha \in \mathbb{N}$ and $\alpha \geq 2$ so that they are written $D(\alpha)$ and $\hat
H^-_{20} (\alpha)$, and for the matrix elements of $\hat E$ we write $\hat E_\alpha (\hat k, \hat m)$. Then by definition $$\label{5.1} \langle \hat{k} |D (\alpha)| \hat{m}
\rangle = E (\hat{k}, \hat{m}) - \hat{E}_\alpha (\hat{k}, \hat{m}),\;
\alpha \geq 2.$$ Therefore $$\label{5.2}
\begin{array}{l}
\langle \hat{k} |D (\alpha)| \hat{m} \rangle = 0, \;\mbox{if} \; \hat{k} = \hat{m} \;\mbox{or if}\; \hat k \leftrightarrow \kappa_1, \kappa_2 \leq \alpha \;\mbox{and}\; \hat m \leftrightarrow \mu_1, \mu_2 \leq \alpha, \\ [2ex]
\langle \hat{k} |D (\alpha)| \hat{m} \rangle = \langle \hat{k} |D (2)| \hat{m} \rangle, \;\mbox{if}\; \hat{k} \not= \hat{m} \;\mbox{and}\; \kappa_1 > \alpha \;\mbox{or}\; \kappa_2 > \alpha \;\mbox{or}\; \mu_1 > \alpha \;\mbox{or}\; \mu_2 > \alpha.
\end{array}$$ These properties have the following consequence.\
[**Proposition 5.1:**]{} The sequence $(D (\alpha) : \alpha \in
\mathbb{N}, \alpha \geq 2) $ converges strongly to $0$.\
[**Proof.**]{} Let the projections $F_\alpha$ and $F'_\alpha$ be defined by $$\label{5.3} \displaystyle{F_\alpha =
\sum^\infty_{\kappa_1, \kappa_2 > \alpha } \Psi^-_{\kappa_1 \kappa_2}
\langle \Psi^-_{\kappa_1, \kappa_2}, \;\cdot\; \rangle }$$ and $F'_\alpha = 1 - F_\alpha$. Then $F_\alpha$ converges strongly to $0$, if $\alpha \rightarrow \infty$, and $F'_\alpha$ to $1$. By a simple calculation using Formula (\[5.2\]) one verifies that for each $f \in {\mathcal{H}}^2_-$: $$\label{5.4}
\begin{array}{ll} F_{\alpha} D (\alpha)f = F_\alpha D (2) f, \\ [3ex]
F'_\alpha D (\alpha)f = F'_\alpha D (2) F_\alpha f.
\end{array}$$ Therefore $$\label{5.5} \| D(\alpha) f \| \leq || F_\alpha D(2) f
\| + \| F'_{\alpha} D (2) F_\alpha f \|.$$ Because of $$\label{5.6} \| F'_\alpha D(2) F_\alpha f \| \leq \| D
(2) \| \| F_\alpha f \|$$ and because $F_\alpha$ converges to $0$, the proposition is seen to hold.
Now, in order to transfer the last result to $\hat H^-_N (\alpha) : =
\Omega^-_N (\hat H^-_{20} (\alpha))$ one needs the following theorem.\
[**Proposition 5.2:**]{} Let $(A_M (r): r \in \mathbb{N})$ be a sequence of bounded operators defined on ${\mathcal{H}}^M$. If $A_M (r)$ converges strongly to $0_M$ for $r \rightarrow \infty$, then also $$\label{B1.33} s\mbox{-}\lim_{r \rightarrow \infty} \Omega^-_N (A_M (r)) = 0_N,$$ i.e. $\Omega^-_N (A_M (r))$ converges strongly to $0_N$.\
[**Proof:**]{} 1.) Let $g \in {\mathcal H}^N$, then $$g = \sum_{\kappa_1 \dots \kappa_N} b_{\kappa_1 \dots \kappa_N} \phi_{\kappa_1 \dots \kappa_N}.$$\
If $$\chi_{\kappa_{M+1} \ldots \kappa_N} :=
\sum_{\kappa_1 \dots \kappa_M} b_{\kappa_1 \dots \kappa_N}
\phi_{\kappa_1 \dots \kappa_M}\;,$$\
it follows that $$g = \sum_{{\kappa_{M+1}} \dots \kappa_N} \; \chi_{\kappa_{M+1} \dots \kappa_N} \; \otimes \phi_{\kappa_{M+1} \dots \kappa_N}$$\
and $$\| g \|^2 = \sum_{\kappa_{M+1} \dots \kappa_N} \| \chi_{\kappa_{M+1}\dots \kappa_N} \|^2.$$If $B$ is a bounded operator on ${\mathcal H}^M$, then $$\label{B1.34} \| (B \otimes 1 \otimes \dots \otimes
1) g \|^2 = \sum_{\kappa_{M+1}\ldots \kappa_N} \| B \chi_{\kappa_{M+1}
\dots \kappa_N} \|^2.$$ 2.) Now, let us consider the operators $A_M (r)$. By supposition $A_M(r)$ converges strongly to $0_M $. Hence by the principle of uniform boundedness (cf. e.g. [@Kato], p. 150) there is a number $K$ such that for all $r \in \mathbb{N}$: $$\| A_M (r) \| \leq K.$$\
Thus, one obtains for all $r \in \mathbb{N}$: $$\label{B1.35} \sum_{\kappa_{M+1}\ldots \kappa_N} \|
A_M (r) \chi_{\kappa_{M+1} \ldots \kappa_N} \|^2 \leq \quad K^2
\sum_{\kappa_{M+1} \dots \kappa_N} \| \chi_{\kappa_{M+1} \ldots
\kappa_N} \|^2.$$ Hence, by the criterion of [Weierstraß]{}, the series on the left-hand side converges uniformly with respect to the variable $r$. Therefore the limit $r \rightarrow \infty$ can be interchanged with the sum, so that $$\label{B1.36}
\begin{array} {ll} \lim_{r \rightarrow \infty} \| (A_M (r) \otimes 1
\otimes \dots \otimes 1) g \|^2 \\ [4ex]= \sum_{\kappa_{M+1} \ldots
\kappa_N} \lim_{r \rightarrow \infty} \| A_M (r)
\chi_{\kappa_{M+1}\ldots \kappa_N} \|^2 &= 0\;.
\end{array}$$ Thus, because for all $g \in {\mathcal H}^N_-$ the relation $$\|\Omega^-_N (A_M (r)) g \| \leq {\binom NM} \| (A_M (r)\otimes 1
\otimes \dots) g \|$$ holds, Formula (\[B1.33\]) is proved.
Then, with the help of the Propositions 5.1 and 5.2 one obtains the following consequences.\
[**Proposition 5.3:**]{} 1.) The sequence ($\hat H^-_N (\alpha): \alpha \in \mathbb{N}, \alpha \geq 2$) converges strongly to $H^-_N$. In other words, the operators $\hat
H^-_N (\alpha)$ approximate $H^-_N$ (cf. [@Chat], p. 228, Formula \[5.1\]). Hence all results concerning the approximation of one operator by a strongly converging sequence of operators are valid for $H^-_N$ and $\hat H^-_N (\alpha)$. (Cf. [@Chat], Chapter 5.). Here only two of these properties are sketched.\
2.) If $e$ is an isolated eigenvalue of $H^-_N$, then there is a sequence ($e_\alpha: \alpha \in
\mathbb{N}$) such that $e_\alpha$ is an eigenvalue of $\hat H^-_N
(\alpha)$ and such that $e_\alpha \rightarrow e$. (Cf. [@Chat], p. 239, Theorem 5.12).\
3.) Let $P$ be the spectral projection belonging to an eigenvalue $e$ of $H^-_N$. Then an $\alpha_0$ exists such that for each $\alpha > \alpha_0$ there is a spectral projection $P_\alpha$ belonging to $\hat H^-_N (\alpha)$ and such that $P_\alpha$ converges strongly to $P$ for $\alpha \rightarrow
\infty$. (Cf. [@Chat], p. 240, Theorem 5.13.)\
[**5.2:**]{} Besides the approximation of $H^-_N$ by $ \hat H^-_N(\alpha)$ sketched above there are results, which are based on the norm of $D (\alpha)$, i.e. on $\delta (\alpha):= \| D (\alpha) \|$. Using the Formulae (\[A1.29\]) and (\[A1.30\]) one obtains the relation $$\label{5.7} \| H^-_N - \hat H^-_N (\alpha) \| =
\parallel \Omega^-_N (D (\alpha))\parallel \leq {\binom N2} \delta
(\alpha).$$ Thus it follows that $\hat H^-_N (\alpha)$ converges in norm to $H^-_N$ if $$\label{5.8} \lim_{\alpha \rightarrow \infty} \delta
(\alpha) = 0.$$ Theorems concerning spectral approximation based on convergence in norm can be found in [@Chat], p. 291, Theorem 4.10, p. 362, Theorem 5.10 and in [@Kato], p. 249, Proposition 5.28.
The operator $\hat H^-_N$ and its matrix
========================================
Preliminary remarks
-------------------
[**6.1.1:**]{} The aim of this chapter is to prove\
[**Proposition 6.1:**]{} 1.) The operator $\hat{H}^-_N$ defined on ${\mathcal{H}}^N_-$ is an orthogonal sum of operators defined on finite dimensional (orthogonal) subspaces of ${\mathcal{H}}^N_-$.\
2.) Moreover, using the matrix of ${\hat H}^-_N$ the matrices of the suboperators can be determined explicitly.
If this proposition is verified, we have obtained a block-diagonalization of ${\hat H}^-_N$. This opens a way, depending on the numbers $\bar \alpha$ and $N$, to calculate the eigenvalues of $\hat H^-_N$ with the help of numerical methods. Purely analytical solutions of the eigenvalue problem of $\hat H^-_N$ are also possible if $\bar \alpha = 2, 3, 4.$ But in these cases one cannot expect that $\hat H^-_N$ is a good approximation of a realistic Hamiltonian $H^-_N$.\
[**6.1.2:**]{} In order to verify Proposition 6.1 some further notation is used, which is provided by\
[**Definition 6.2:**]{} 1.) Let an $\hat n \in BZF_N$ and a natural number $\alpha$ be given, where $\alpha$ for the moment is completely arbitrary. Then $$\label{6.1} (\hat n,\alpha) := (n_1, \cdots ,
n_\alpha) \quad \mbox{and} \quad (\alpha, \hat n): = (n_{\alpha+1},
n_{\alpha + 2}, \cdots).$$ For the infinite second part of $\hat n$ also the abbreviation $(\alpha, \hat n) = : \hat r$ is used.\
2.) Let $\hat{r}$ be given. Then ${\mathcal{N}}_\beta (\hat r)$ denotes the set of all $\hat n \in BZF_N$, for which $(\alpha, \hat n) = \hat r,
\; \sum^{\infty}_{\varrho = \alpha +1} n_\varrho = N- \beta$ and $0
\leq \beta \leq \mbox{min} \{\alpha, N \}$. Hence the finite sequence $(\hat n, \alpha)$ contains exactly $\beta$ numbers 1 and $\alpha -
\beta$ numbers $0$. Because $\beta$ is determined by $\hat r$, the notation is a bit redundant, but it turns out to be useful.
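The splitting of Definition 6.2 is illustrated by a short Python sketch (an illustration only); a $bzf$ is encoded as a finite 0/1 tuple, all further occupation numbers being zero, and the chosen sequence is arbitrary.

```python
def split(n_hat, alpha):
    """Split of Definition 6.2: (n_hat, alpha) and (alpha, n_hat) =: r_hat.

    n_hat is a finite 0/1 tuple; entries beyond its length are understood
    to be zero (only finitely many occupation numbers are 1 anyway).
    """
    head = n_hat[:alpha]                 # (n_hat, alpha)
    tail = n_hat[alpha:]                 # (alpha, n_hat) = r_hat
    beta = sum(head)                     # number of 1's in the head
    return head, tail, beta

# N = 4 particles, alpha = 5: beta = 3 ones sit in the first five slots.
n_hat = (1, 0, 1, 1, 0, 0, 1, 0)
head, r_hat, beta = split(n_hat, 5)
print(head, r_hat, beta)                 # (1, 0, 1, 1, 0) (0, 1, 0) 3
```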
Now it is supposed that $\alpha$ and $N$ are fixed numbers. Then, Definition 6.2 yields the following\
[**Consequence 6.3:**]{} 1.) The set ${\mathcal{N}}_\beta (\hat r),\; 0 \leq \beta \leq min \{\alpha,
N\}$ is finite, more precisely, card ${\mathcal{N}}_\beta (\hat r) =
{\binom \alpha \beta}$. Therefore the subspace ${\mathcal{H}}^N_-
(\hat r)$ of ${\mathcal{H}}^N_-$ spanned by the $\Psi^-_N (\hat n)$ for $\hat n \in {\mathcal{N}}_\beta (\hat r)$ has dimension $\binom\alpha
\beta$.\
2.) The sets $\mathcal{N}_\beta (\hat r)$ and $\mathcal{N}_{\beta'}(\hat r')$ are disjoint if $\hat r \not= \hat
r'$. Moreover $\beta = \beta'$, if and only if the sequences $\hat r ,
\hat r'$ contain the same number of elements 1. Then for each $ \hat n
\in \mathcal{N}_\beta (\hat r)$ there is an $ \hat n' \in
\mathcal{N}_\beta (\hat r')$ such that $(\hat n',\alpha) = ( \hat n,
\alpha)$.\
3.) Let $\mathcal{B}^-_N$ be the ONB of ${\mathcal{H}^N_-}$ defined by (\[A1.13\]), and let $\Psi^-_N (\hat
n) \in \mathcal{B}^-_N$. Then there is exactly one $\beta$ so that $\hat n \in \mathcal{N}_\beta (\alpha, \hat n)$. Hence the sets $\mathcal{N}_\beta (\hat r)$ with $\hat r = (n_{\alpha + 1}, n_{\alpha
+ 2}, \cdots)$ containing $N-\beta$ numbers 1 and $ 0 \leq \beta \leq
min \{\alpha, N\}$ form a complete disjoint dissection of the set of all $\hat n \in BZF$ for which $\sum_\varrho n_\varrho =N$.\
4.) From the above parts 2 and 3 one concludes that the spaces $\mathcal{H}^N_- (\hat r)$ are orthogonal for different $\hat r$, and that they span $\mathcal{H}^N_-$, i.e. $$\label{6.2} \mathcal{H}^N_- = \bigoplus_{\hat r}
\mathcal{H}^N_- (\hat r).$$ Later on a restriction of the set $\mathcal{D}$ of all difference sequences (cf. Notation 3.8) is needed.\
[**Definition 6.4:**]{} $\mathcal{D}_\alpha$ is the set of all $\hat d \in \mathcal{D}$, for which $d_\varrho = 0$ if $\varrho > \alpha$. In addition let $\mathcal{D}_{j \alpha}: = {\mathcal{D}}_j \cap {\mathcal{D}}_\alpha,
j =0, 1,2$.\
Therefore the sets $\mathcal{D}_{j\alpha}$ are again disjoint, and ${\mathcal{D}}_{0 \alpha} = {\mathcal{D}}_0 = \{ \hat
o\}$.\
[**[6.1.3:]{}**]{} Finally a lemma is proved which is basic for the further considerations.\
[**Proposition 6.5:**]{} If $\hat n \in
\mathcal{N}_\beta (\hat r), 0 \leq \beta \leq \mathit{ min} \{\alpha,
N\} $ and if $\hat d \in \mathcal{D}_\alpha,\hat d \not= \hat o$, then either $\hat n + \hat d \in \mathcal{N}_\beta (\hat r)$ or $\hat n +
\hat d \notin BZF$.\
[**Proof:**]{} The proof is complete if one can show that $\hat n + \hat d \in \mathcal{N}_{\beta} (\hat r)$ is equivalent to $\hat n + \hat d \in BZF$. First, if $\hat n + \hat d
\in \mathcal{N}_\beta (\hat r)$, then $\hat n + \hat d \in BZF$ holds. Second, it follows from $\hat n + \hat d \in BZF$, that the (two or one) numbers 1 in $\hat d$ must be at positions where there are $0$ in $\hat n$. Likewise, the (two or one) numbers -1 in $\hat d$ must be at positions, where numbers 1 are in $\hat n$. Because of $\hat d \in \mathcal{D}_\alpha$, the sequence $(\hat n + \hat d,
\alpha)$ has the same quantity $\beta$ of numbers 1 as $(\hat n,
\alpha)$ has, and $\hat r:= (\alpha, \hat n) = (\alpha, \hat n + \hat
d)$ because $\hat d$ does not affect $\hat r$. Thus $\hat n + \hat d
\in \mathcal{N}_\beta (\hat r)$, so that the proof is complete.
General properties of the matrix of $\hat H^-_N$
------------------------------------------------
[**6.2.1:**]{} In what follows the definitions and results of Section 6.1 are applied for the special choice $\alpha =\bar \alpha$ with a properly chosen $\bar \alpha$. Moreover, the ONBs ${\mathcal{B}}^-_2$ and $\mathcal{B}^-_N$ are those which are defined via the Hartree-Fock procedure in the Sections 4.2 and 4.3.\
[**6.2.2:**]{} In this subsection the first part of Proposition 6.1 is proved. In order to do so, the matrix representation of $\hat H^-_N$ with respect to $\mathcal{B}^-_N$ is used. The [*proof*]{} is complete, if one shows that for $\hat n \in \mathcal{N}_\beta (\hat
r)$ and $\hat n' \in \mathcal{N}_{\beta'} (\hat r')$ with $\hat r
\not= \hat r'$: $$\label{6.3} \langle \hat n' |\hat H^-_N| \hat
n\rangle = 0.$$ There are three possibilities for the pair $\hat n',
\hat n$.\
If $\hat n' - \hat n \notin \mathcal{D}$, it follows from Formula (\[4.4\]) that (\[6.3\]) holds.\
If $\hat n' - \hat n \in
{\mathcal{D}} \diagdown {\mathcal{D}}_{\bar \alpha}$, by Formula ([\[4.5\]]{}) it is seen that (\[6.3\]) holds, too.\
If $\hat n' -\hat n \in {\mathcal{D}}_{\bar \alpha}$, then $\hat n' = \hat n + \hat
d \in BZF$ , $\hat d \in \mathcal{D}_{\bar\alpha} $ and $\hat d \not= \hat
o$. Thus it follows from Proposition 6.5 that $\hat n' \in
{\mathcal{N}}_\beta (\hat r)$. This result contradicts the supposition $\hat n' \in \mathcal{N}_{\beta'} (\hat r')$ and $\hat r' \not= \hat
r$. Therefore $\hat n' - \hat n \notin \mathcal{D}_{\bar \alpha}$ is true.\
Hence (\[6.3\]) holds if $(\bar {\alpha}, \hat n') \not=
(\bar { \alpha}, \hat n)$.
If one denotes the restriction of $\hat H^-_N$ to the space $\mathcal{H}^N_- (\hat r)$ by $\hat H^-_N (\hat r)$ one obtains $$\label{6.4} \hat H^-_N = \bigoplus_{\hat r} \hat
H^-_N (\hat r).$$ Thus, part one of Proposition 6.1 has been proved. The proof of the second part is postponed to Section 6.3.\
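The block structure (\[6.4\]) can also be exhibited by brute force for finitely many orbitals. The following Python sketch (an illustration, not part of the proof) groups all $bzf$ with $N$ ones among the orbitals $1, \dots, R$ by their tail $\hat r$; the numbers $N = 3$, $\alpha = 4$, $R = 6$ are arbitrary illustrative choices.

```python
from itertools import combinations
from collections import defaultdict

def blocks(N, alpha, R):
    """Group all bzf with N ones among the orbitals 1..R by their tail r_hat.

    Each group spans one of the subspaces H^N_-(r_hat) of Formula (6.2);
    by (6.4) the matrix of the truncated Hamiltonian is block diagonal
    with respect to this grouping.
    """
    groups = defaultdict(list)
    for occ in combinations(range(1, R + 1), N):
        n_hat = tuple(1 if j in occ else 0 for j in range(1, R + 1))
        r_hat = n_hat[alpha:]                      # (alpha, n_hat)
        groups[r_hat].append(n_hat)
    return groups

for r_hat, members in sorted(blocks(N=3, alpha=4, R=6).items()):
    print(r_hat, len(members))   # block sizes are binom(alpha, beta), beta = 3 - sum(r_hat)
```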
[**6.2.3:**]{} In this subsection some preparatory work for this proof is done. Since the dimension of $\mathcal{H}^N_- (\hat r)$ is $\binom{\bar\alpha}\beta$ with $0 \leq \beta \leq {\mathit{min}} \{\bar \alpha, N\}$, it can become very large depending on $N, \bar \alpha$ and $\beta$. Therefore, aiming at the diagonalization of $\hat H^-_N (\hat r)$, it is of vital interest to know how many matrix elements $\langle\hat n' |\hat H^-_N (\hat r)| \hat n\rangle$ vanish on general grounds, i.e. how many matrix elements of $\hat H^-_N$ are zero for arbitrary dummy Hamiltonians $\hat H^-_{20}$ and hence for arbitrary matrices $\hat E$. In addition, further matrix elements of $\hat H_N^-$ can be zero for special $\hat H^-_{20}$, but this last aspect will not be considered in this paper.
Now, from Consequence 4.3 one immediately draws\
[**Conclusion 6.6:**]{} Let $\hat n', \hat n \in \mathcal{N}_\beta (\hat r)$ and let $\hat n' - \hat n \notin \mathcal{D}_{\bar \alpha}$, then $$\label{6.5} \langle \hat n' | \hat H^-_N | \hat n
\rangle = 0.$$ Since by Proposition 3.9 the relation $$\hat n' = \hat n + \hat d_1 + \cdots + \hat d_L, \quad \hat d_j \in \mathcal{D}_1$$\
holds, one has a simple criterion to decide whether $\hat n' - \hat n
\notin \mathcal{D}_{\bar \alpha}$ or not. In particular, if $L > 2$, Formula (\[6.5\]) is true.\
[**[6.2.4:]{}**]{} Finally the nondiagonal matrix elements with $\hat n' - \hat n = \hat d \in \mathcal{D}_{\bar
\alpha}, \hat d \not= \hat o$ are considered. For this purpose let us introduce the following\
[**Notation 6.7:**]{} Let $\hat n', \hat n \in BZF_N$. Then $\hat n', \hat n$ are called $\mathcal{D}_{j \bar
\alpha}-$concatenated, $j = 0,1,2$, if there is a $\hat d \in
\mathcal{D}_{j \bar \alpha}$ so that $\hat n' = \hat n + \hat d$. The $bzf \;\hat n', \hat n$ are simply called $\mathcal{D}_{\bar\alpha}-$ concatenated if they are ${\mathcal{D}}_{j \bar \alpha}-$ concatenated for $j=0$ or 1 or 2.\
With the help of this notation we arrive at the\
[**Result 6.8:**]{} 1.) For each $\hat n \in \mathcal{N}_\beta (\hat
r)$, there are exactly
$$\tau_1 (\bar \alpha, \beta) := \beta (\bar \alpha - \beta)$$
$\mathcal{D}_{1 \bar \alpha}-$concatenated $\hat n' \in
\mathcal{N}_\beta (\hat r)$. This is because each number 1 out of the $\beta$ numbers 1 in $(\hat n,\bar \alpha)$ can be moved to each position of the $ \bar \alpha - \beta$ numbers 0 by a $\hat d \in
\mathcal{D}_{1 \bar \alpha}$.\
2.) For each $\hat n \in
\mathcal{N}_\beta (\hat r)$ there are exactly $$\tau_2 (\bar \alpha, \beta) := \binom{\beta}{2} \binom{\bar \alpha - \beta}{2}$$ $\mathcal{D}_{2 \bar \alpha}$-concatenated $\hat n' \in \mathcal{N}_\beta (\hat r)$, where $\binom{\varrho}{2} = 0$ if $\varrho = 0,1$. This holds because each pair of numbers 1 out of the $\beta$ numbers 1 in $(\hat n,\bar \alpha)$ can be moved to the positions of each pair out of the $\bar \alpha - \beta$ numbers 0 by a $\hat d \in \mathcal{D}_{2 \bar \alpha}$.\
3.) $\hat n', \hat n \in
\mathcal{N}_\beta (\hat r)$ are $\mathcal{D}_{0 \bar \alpha}$-concatenated if and only if $\hat n' = \hat n$.\
4.) For each $\hat n \in
\mathcal{N}_\beta (\hat r)$ there are exactly $$\label{6.6} \tau (\bar \alpha, \beta):= \tau_2 (\bar \alpha, \beta) + \tau_1 (\bar \alpha, \beta) = \binom{\beta}{2} \binom{\bar \alpha - \beta}{2} + \beta (\bar \alpha - \beta)$$ $\mathcal{D}_{\bar\alpha}\text{-concatenated}\ \hat n'
\not= \hat n$.\
5.) Finally, let us consider the matrix of $\hat
H^-_N (\hat r)$ with the elements $\langle\hat n' |\hat H^-_N| \hat
n\rangle$, $\hat n', \hat n \in \mathcal{N}_\beta (\hat{r})$. Then, in the $\hat n'$-row there are exactly $\tau (\bar \alpha, \beta)$ nondiagonal elements which can be nonzero, and similarly for the $\hat n$-columns. Consequently, the number $Z (\bar \alpha, \beta)$ of zero nondiagonal elements in each $\hat n'$-row or $\hat n$-column is $$\label{6.7} Z (\bar \alpha, \beta) = \binom{\bar \alpha}\beta - \tau (\bar \alpha, \beta) - 1.$$ According to Conclusion 6.6 $Z (\bar \alpha, \beta)$ is the number of $bzf \;\hat n$ for which $\hat n' - \hat n \notin
\mathcal{D}_{\bar \alpha}$ in any $\hat n'$-row, and likewise for the $\hat n$-columns.
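The counts of Result 6.8 are easily verified by brute force. The following Python sketch (illustrative only) enumerates the heads $(\hat n, \bar \alpha)$ with $\beta$ ones and checks $\tau_1$, $\tau_2$ and $Z$ against the closed formulas; the values $\bar \alpha = 7$, $\beta = 3$ are arbitrary.

```python
from itertools import combinations
from math import comb

def check_counts(alpha_bar, beta):
    """Brute-force verification of tau_1, tau_2 and Z of Result 6.8 / (6.7).

    Only the heads (n_hat, alpha_bar) matter, so bzf are represented by
    beta-element subsets of {1, ..., alpha_bar}; the tail r_hat is fixed.
    """
    heads = [frozenset(c) for c in combinations(range(1, alpha_bar + 1), beta)]
    n = heads[0]                                       # any fixed element of N_beta(r_hat)
    moved = [len(n - m) for m in heads if m != n]      # number of transposed 1's
    tau1 = sum(1 for L in moved if L == 1)             # D_{1 alpha_bar}-concatenated n'
    tau2 = sum(1 for L in moved if L == 2)             # D_{2 alpha_bar}-concatenated n'
    Z = sum(1 for L in moved if L > 2)                 # structurally zero off-diagonal entries
    assert tau1 == beta * (alpha_bar - beta)
    assert tau2 == comb(beta, 2) * comb(alpha_bar - beta, 2)
    assert Z == comb(alpha_bar, beta) - (tau1 + tau2) - 1
    return tau1, tau2, Z

print(check_counts(alpha_bar=7, beta=3))   # (12, 18, 4)
```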
The matrices of the operators $\hat H^-_N (\hat r)$
----------------------------------------------------
In this section the second part of Proposition 6.1 will be proved. This runs as follows.\
[**6.3.1:**]{} $\beta = \bar \alpha \leq N$. The number $z$ of elements $\hat n \in \mathcal{N}_{\bar \alpha} (\hat r)$ is $\binom{\bar \alpha}{\bar \alpha} = 1$, and the element $\hat n$ has the form $$\label{6.8} \hat n = (1, \cdots, 1, n_{\bar \alpha +
1}, \cdots).$$ Consequently the matrix of $\hat H^-_N (\hat r)$ is of order one and its element is $$\label{6.9} \langle\hat n |\hat H^-_N| \hat n \rangle
= \hat{\mathcal{E}} (\hat n, \hat o) = \mathcal{E} (\hat n, \hat o).$$ [**6.3.2:**]{} $\beta = \bar \alpha - 1 \leq N.$ 1.) The number $z$ of elements $\hat n \in \mathcal{N}_{\bar \alpha - 1} (\hat
r)$ is $\binom{\bar \alpha}{\bar\alpha - 1} = \bar \alpha$, and the $(\hat n, \bar \alpha)$ for $\hat n \in \mathcal{N}_{\bar \alpha-1}
(\hat r)$ contain only one 0 and $\bar \alpha - 1$ numbers 1.\
2.) The $\hat n \in \mathcal{N}_{\bar \alpha - 1} (\hat r)$ are numbered by $ \hat n
=: \hat n_\kappa$ if $0$ is at position $\bar \alpha - \kappa$ in $(\hat n, \bar \alpha)$, and $\kappa = 0, \cdots, \bar \alpha - 1$.\
3.) Any two elements $\hat n', \hat n \in \mathcal{N}_{\bar \alpha -1}
(\hat r)$ with $\hat n' \not= \hat n$ are $\mathcal{D}_{1 \bar
\alpha}$-concatenated. This is because the number of $\hat n \in
\mathcal{N}_{\bar \alpha - 1}(\hat r)$, which are $\mathcal{D}_{\bar
\alpha}$-concatenated with $\hat n' \not= \hat n$, according to (\[6.6\]) is $$\label{6.10} \tau (\bar \alpha, \bar \alpha - 1) =
\tau_1 (\bar \alpha, \bar \alpha -1) = \bar \alpha - 1 = z - 1.$$ 4.) The matrix of $\hat H^-_N (\hat r)$ in the present case is a $z \times z= \bar \alpha \times \bar \alpha$ matrix, which has the elements $$\label{6.11} \langle\kappa' |\hat H^-_N| \kappa
\rangle: = \langle\hat n_{\kappa'} |\hat H^-_N| \hat n_\kappa\rangle =
\hat{\mathcal{E}} (\hat n_\kappa, \hat n_{\kappa'} - \hat n_{\kappa}
).$$ [**6.3.3:**]{} $\beta = \bar \alpha - 2 \leq N$. 1.) The number $z$ of elements $\hat n \in \mathcal{N}_{\bar \alpha - 2} (\hat
r)$ is $\binom{\bar \alpha}{\bar \alpha - 2} = \frac{1}{2} \bar
\alpha (\bar \alpha -1)$, and the ($\hat n, \bar \alpha$) for $\hat n
\in \mathcal{N}_{\bar \alpha - 2} (\hat r)$ contain two numbers 0 and $\bar \alpha - 2$ numbers 1.\
2.) The $\hat n \in \mathcal{N}_{\bar
\alpha-2} (\hat r)$ are numbered by $\hat n =: \hat n_{\kappa
\lambda}, \kappa < \lambda$ if the two $0$ are at the positions $\bar
{\alpha}-\lambda $ and $\bar \alpha - \kappa,\; \;0 \leq \kappa <
\lambda \leq \bar \alpha - 1$.\
3.) Any two elements $\hat n', \hat n
\in \mathcal{N}_{\bar \alpha - 2} (\hat r)$ with $\hat n' \not= \hat
n$ are $\mathcal{D}_{\bar \alpha}$-concatenated. This is because the number of $\hat n \in \mathcal{N}_{\bar \alpha - 2}(\hat r)$, which are $\mathcal{D}_{\bar \alpha}$-concatenated with $\hat n' \not= \hat
n$, according to (\[6.6\]) is $$\label{6.12} \tau (\bar \alpha, \bar \alpha - 2) =
\binom{\bar \alpha}{\bar \alpha - 2} - 1 = z - 1.$$ 4.) The matrix of $\hat H^-_N (\hat r)$ then is a $z
\times z$ matrix with $z = \frac{1}{2} \bar \alpha (\bar \alpha-1)$, which has the elements $$\label{6.13} \langle\kappa', \lambda' |\hat H^-_N|
\kappa, \lambda\rangle : = \langle\hat n_{\kappa' \lambda'} |\hat
H^-_N| \hat n_{\kappa \lambda}\rangle = \hat{\mathcal{E}} (\hat
n_{\kappa \lambda}, \hat n_{\kappa' \lambda'} - \hat n_{\kappa
\lambda}).$$ [**6.3.4:**]{} $\beta = 2 < N$. 1.) The number $z$ of elements $\hat n \in \mathcal{N}_2 (\hat r)$ is $\binom{\bar \alpha}2
= \frac{1}{2} \bar \alpha (\bar \alpha - 1)$, and the $(\hat n,
\bar\alpha)$ for $\hat n \in \mathcal{N}_2 (\hat r)$ contain two numbers 1 and $\bar \alpha - 2$ numbers 0.\
2.) The $\hat n \in
\mathcal{N}_2 (\hat r)$ are numbered by $\hat n =: \hat n_{\kappa
\lambda}, \kappa < \lambda$, if the two 1 are at positions $\kappa$ and $\lambda$, $1 \leq \kappa < \lambda \leq \bar \alpha$.\
3.) Any two $\hat n', \hat n \in \mathcal{N}_2 (\hat r)$ are $\mathcal{D}_{\bar \alpha}$-concatenated. This follows via the same argument as in 6.3.3.\
4.) Like in 6.3.3 one obtains the $z \times z$ matrix of $\hat H^-_N (\hat r)$. It has the elements $$\label{6.14} \langle\kappa' \lambda' |\hat H^-_N|
\kappa \lambda \rangle := \langle\hat n_{\kappa' \lambda'} |\hat
H^-_N| \hat n_{\kappa \lambda}\rangle = \hat{\mathcal{E}} (\hat
n_{\kappa \lambda}, \hat n_{\kappa' \lambda'}- \hat n_{\kappa
\lambda}).$$ [**6.3.5:**]{} $\beta = 1, N>2$. 1.) The number $z$ of elements $\hat n \in \mathcal{N}_1 (\hat r)$ is ${\bar \alpha \choose
1} = \bar\alpha$, and the $(\hat n, \bar \alpha)$ for $\hat n \in
\mathcal{N}_1 (\hat r)$ contain one number 1 and $\bar \alpha - 1$ numbers 0.\
2.) The $\hat n \in \mathcal{N}_1 (\hat r)$ are numbered by $\hat n =: \hat n_{\kappa}$, if $1$ is at position $\kappa$, $1
\leq \kappa \leq \bar \alpha$.\
3.) Any two $\hat n', \hat n \in
\mathcal{N}_1 (\hat r)$ are $\mathcal{D}_{1 \bar
\alpha}$-concatenated. The argument is the same as in 6.3.2.\
4.) Also as in 6.3.2 the matrix $ \hat H^-_N (\hat r)$ is obtained. It is a $z \times z = \bar \alpha \times \bar \alpha$ matrix having the elements $$\label{6.15} \langle\kappa' |\hat H^-_N|
\kappa\rangle:= \langle\hat n_{\kappa'} |\hat H^-_N| \hat
n_\kappa\rangle = \hat{\mathcal{E}} (\hat n_\kappa, \hat n_{\kappa'} -
\hat n_\kappa).$$ [**6.3.6:**]{} $\beta = 0, N >2 $. The number $z$ of elements $\hat n \in \mathcal{N}_0 (\hat r)$ is ${\bar \alpha \choose
0} = 1$, and the element $\hat n$ has the form $$\label{6.16} \hat n= (0, \cdots, 0, n_{\bar \alpha
+1}, \cdots ).$$ Consequently the matrix of $ \hat H^-_N (\hat r)$ is of order 1 and its element is $$\label{6.17} \langle\hat n |\hat H^-_N| \hat n\rangle
= {\hat {\mathcal{E}}} (\hat n, \hat o) = \mathcal{E} (\hat n, \hat
o).$$ [**6.3.7:**]{} $2 < \beta < \bar \alpha - 2,\; \beta \leq
N$. 1.) The number $z$ of elements $\hat n \in \mathcal{N}_\beta\
(\hat r)$ is $\bar \alpha \choose \beta$. It is larger than the numbers $z$ in the previous cases. Each element $\hat n \in
\mathcal{N}_\beta (\hat r)$ contains in $(\hat n, \bar \alpha)$ at least three numbers 1 and three numbers 0.\
2.) For each such $\beta$ not all pairs $\hat n', \hat n \in \mathcal{N}_\beta (\hat r)$ are $\mathcal{D}_{\bar\alpha}$-concatenated. To prove this assertion it suffices to give an example. Thus, let $$\label{6.18} \hat n' = (1,1,1, \cdots, 0,0,0, n_{\bar
\alpha+1}, \cdots),\; \hat n = (0,0,0, \cdots, 1,1,1, n_{\bar \alpha +
1} \cdots),$$ and let $\hat d_j \in \mathcal{D}, j = 1,2,3$ be defined by $d_{j \varrho} = \delta_{j \varrho} - \delta_{\bar \alpha +
j - 3, \varrho},\; \varrho \in \mathbb{N}$. Then $$\label{6.19} \hat n' = \hat n + \hat d_1 + \hat d_2 +
\hat d_3$$ so that $\hat n' - \hat n \notin \mathcal{D}_{\bar
\alpha}$. The actual number of non-concatenated elements can be calculated from $Z(\bar \alpha, \beta)$ as defined by Formula (\[6.7\]).\
3.) The elements $\hat n \in \mathcal{N}_\beta (\hat
r)$ are numbered by $\hat n = \hat n_\kappa, \kappa = 1, \cdots, z$ arbitrarily. Then the matrix elements of $ \hat H^-_N (\hat r)$ in the present case are $$\label{6.20}
\begin{array} {lll} \langle\kappa' |\hat H^-_N| \kappa \rangle: &=
\langle \hat n_{\kappa'} |\hat H^-_N| \hat n_\kappa\rangle \\ &=
\mathcal{\hat E} (\hat n_\kappa, \hat n_{\kappa'} - \hat n_\kappa) &,
\hat n_{\kappa'} - \hat n_\kappa \in \mathcal{D}_{\bar \alpha} \\ &= 0
&, \hat n_{\kappa'} - \hat n_\kappa \notin \mathcal{D}_{\bar\alpha}\;.
\end{array}$$ [**6.3.8:**]{} Besides the above properties of the matrices of $\hat H^-_N (\hat r)$ the following result is of practical relevance.\
[**Proposition 6.9:**]{} Let the sequences $\hat r$ and $\hat r'$ have the same number $N-\beta$ of elements $1$. If $ \hat
n_1 ,\hat n_2 \in \mathcal{N}_\beta\ (\hat r)$ and $\hat n_1 \neq \hat
n_2$, there are $\hat n'_1 ,\hat n'_2 \in \mathcal{N}_\beta\ (\hat
r'), \hat n'_1 \neq \hat n'_2$ such that $$\label{6.21} \langle \hat n'_1 |\hat H^-_N (\hat r')|
\hat n'_2\rangle = \langle \hat n_1 |\hat H^-_N (\hat r)| \hat
n_2\rangle,$$ and vice versa. Thus, the matrices of $\hat H^-_N (\hat r)$ and $\hat H^-_N (\hat r')$ have the same nondiagonal elements.\
[**Proof:**]{} For given $ \hat n_1 ,\hat n_2$ the bzf $\hat
n'_1 ,\hat n'_2$ are chosen according to Consequence 6.3 as follows: $(\hat n'_j, \alpha)=(\hat n _j, \alpha), j=1,2.$ Thus, $\hat n'_2 -
\hat n'_1 = \hat n_2 - \hat n_1$.\
If $\hat n_2 - \hat n_1 \not\in
\mathcal{D}_{\bar\alpha}$, it follows from the proof in Subsection 6.2.2. that Formula (\[6.21\]) holds, because both sides are zero.\
Now let us assume that $ \hat d := \hat n_2 - \hat n_1 \in
\mathcal{D}_{\bar\alpha}$. Then applying Formula (\[3.58\]) yields $$\label{6.22} \langle \hat n_1 |\hat H^-_N (\hat r)|
\hat n_2\rangle = \langle \hat n_1 |\hat H^-_N | \hat n_2\rangle =
\sum_{\hat m} C (\hat n_2, \hat m + \hat d, \hat m) E (\hat m + \hat
d, \hat m),$$ where the sum runs over all $\hat m$ with $\hat n_2
-\hat m \in BZF$ and $\hat k := \hat m + \hat d \in BZF_2 $. Because $\hat E(\hat m + \hat d, \hat m) = 0$, if $k_\lambda = 1, \lambda >
\bar \alpha$ or $ m_\kappa = 1, \kappa > \bar \alpha $, in Formula (\[6.22\]) only such components $k_\rho, m_\sigma $ are relevant, for which $\rho, \sigma \leq \bar \alpha.$ Therefore, if the condition $ \hat n_2 - \hat m \in BZF$ is satisfied, then also $\hat n'_2 - \hat
m \in BZF $ holds. The other condition is also satisfied, because $\hat d = \hat n'_2 - \hat n'_1$. Finally, the inversions, which determine $ C(\hat n_2, \hat m + \hat d, \hat m)$, only refer to the elements of $(\hat n'_2, \alpha)=(\hat n _2, \alpha)$. Thus one obtains $$\label{6.23} C(\hat n_2, \hat m + \hat d, \hat m) =
C(\hat n'_2, \hat m + \hat d, \hat m).$$ Hence the last term in (\[6.22\]) is equal to $$\label{6.24} \sum_{\hat m} C (\hat n'_2, \hat m +
\hat d, \hat m) E (\hat m + \hat d, \hat m) = \langle \hat n'_1 |\hat
H^-_N | \hat n'_2\rangle = \langle \hat n'_1 |\hat H^-_N (\hat r')|
\hat n'_2\rangle.$$ The proof of the inverse runs the same way.
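As a schematic illustration of the Sections 6.2 and 6.3 (and of nothing beyond them), the following Python sketch assembles the pattern of one block $\hat H^-_N (\hat r)$: entries between $\mathcal{D}_{\bar\alpha}$-concatenated $bzf$ are delegated to a user-supplied function standing in for $\hat{\mathcal{E}}(\hat n, \hat d)$, while all other off-diagonal entries are structurally zero. The stand-in `e_hat_dummy` and the values $\bar\alpha = 6$, $\beta = 3$ are illustrative assumptions, not part of the paper's construction.

```python
import numpy as np
from itertools import combinations

def block_matrix(alpha_bar, beta, e_hat):
    """Assemble the submatrix of hat H^-_N on H^N_-(r_hat) (Section 6.3).

    Rows/columns are labelled by the beta-element subsets of {1,...,alpha_bar}
    (the heads of the bzf in N_beta(r_hat)).  Off-diagonal entries whose
    difference is not in D_{alpha_bar} vanish on general grounds; the
    remaining entries are delegated to e_hat(n, d), a user-supplied stand-in
    for hat{cal E}(n_hat, d_hat).
    """
    heads = [frozenset(c) for c in combinations(range(1, alpha_bar + 1), beta)]
    z = len(heads)
    H = np.zeros((z, z))
    for i, n_prime in enumerate(heads):
        for j, n in enumerate(heads):
            if len(n_prime - n) <= 2:                 # d = n' - n lies in D_{alpha_bar}
                H[i, j] = e_hat(n, (n_prime - n, n - n_prime))
    return H

# Illustrative stand-in: it only serves to expose the sparsity pattern.
e_hat_dummy = lambda n, d: 1.0
pattern = block_matrix(alpha_bar=6, beta=3, e_hat=e_hat_dummy)
print(int(pattern.sum(axis=0)[0]))   # 1 + tau(6, 3) = 1 + 9 + 9 = 19 nonzeros per column
```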
Conclusion
----------
The decomposition of the matrix of $\hat
H^-_N$ into orthogonal finite matrices as described in the Sections 6.2 and 6.3 now makes it possible, depending on $N$ and $\bar \alpha$, to determine parts of the spectrum of $\hat H^-_N$. Thus, it is only a question of the capacity of the computers available which parts of the spectrum one can calculate, and a question of physical relevance which parts one wants to calculate. Intuitively, the case $\beta = \bar \alpha$ is a Hartree-Fock approximation. This suggests that better approximations are achieved if $\bar \alpha$ is greater than $N$, because then the case $\beta = \bar \alpha$ cannot occur.
Final Remarks
=============
Summary of the results
----------------------
[**7.1.1:**]{} The way of approaching the eigenvalue problem of the Hamiltonian $H^-_N$ presented in the Chapters 3 to 6 may at first glance seem complicated. Therefore it is useful to recognize the simple core of the procedure. I will present it in the form of a work program which comprises six steps.\
$\mathbf 1^{st}$ [**step:**]{} As a starting point one formulates the Hamiltonian $\bar H_N$ to be considered. This is usually done making use of the position-spin representation, i.e. $\bar H_N$ is an operator in the Hilbert space $\bar{ \mathcal{H}}^N = \bigotimes^N (L^2(\mathbb{R}^{3}) \otimes
\mathcal{S}^1)$. The most general form of $\bar {H}_N$ for charged particles is due to Breit. It can be found in the literature, e.g. in [@Ludwig] p. 247.\
[**2$^{nd}$ step:**]{} According to Formula (\[2.4\]) one forms the dummy Hamiltonian $\bar H^-_{20}: =\bar H^-_2 (\gamma_ 0), \gamma^{-1}_0 = N-1$, belonging to $\bar H_N$. Then one determines the orthonormal system ${\mathcal{O}}_1$ via the Hartree-Fock procedure for $\bar H^-_{20}$ (cf. Appendix A.3), an equivalent method or an approximation. Finally, the operators $\bar H^-_N$ and $\bar
H^-_{20}$ are restricted to the spaces $ \mathcal{H}^N_-,
\mathcal{H}^2_-$ with $\mathcal{H}^1= \mathit{span}\ {\mathcal{O}}_1$ and are denoted $H^-_N$ and $H^-_{20}$.\
$\mathbf{3}^{rd}$ [**step:**]{} In order to determine the matrix $E$ of $H^-_{20}$ one has to choose an ONB $\mathcal{B}_1 $ of $\mathcal{H}^1$. As explained in Section 4.3, the best choice is $\mathcal{B}_1 = \mathcal{O}_1.$ Then, using the ONB $\mathcal{B}^-_2 \subset \mathcal{H}^2_-$, the matrix elements $E (\hat k, \hat m)$ are calculated (cf. e.g. (\[A1.10\]), (\[A1.13\])).\
[**4$^{th}$ step:**]{} From the matrix $E$ one obtains the truncated matrix $\hat{E}$ by replacing the “small” elements $E (\hat k, \hat m), \hat k \not=
\hat m$ of $E$ by zeros as described in Section 4.4. The matrix $\hat
E$ depends on a number $\bar \alpha$ and determines the operator $\hat H^-_{20}$.\
[**5$^{th}$ step:**]{} The matrix elements $\hat{\mathcal{E}}(\hat n, \hat d) $ of $\hat H^-_N = \Omega^-_N (\hat
H^-_{20})$ are calculated from the matrix elements $\hat E (\hat k,
\hat m)$ the same way as the $\mathcal{E} (\hat n, \hat d)$ are calculated from $E (\hat k, \hat m)$ in Section 3.5. In this connection the general results of the Sections 6.1 and 6.2 are useful.\
[**6$^{th}$ step:**]{} One determines the orthogonal submatrices of the matrix of $\hat H^-_N$ according to Section 6.3 and diagonalizes as many of them as possible, numerically or analytically.\
Then one can try to obtain error estimates by applying suitable results of the theory of spectral approximation.\
[**7.1.2:**]{} As mentioned at the end of Section 6.4, the lowest energy levels of $\hat{H}^-_N$ are not simply Hartree-Fock-like approximations of the true values if $N$ is smaller than $\bar \alpha$. Thus in this case the method presented here also reproduces results of the kind obtained by other methods such as density functional theory (DFT) or the configuration interaction (CI) method. Summing up, it is intuitively clear that the approximation of $H^-_N$ by $\hat H^-_N$ improves as the number $N$ gets smaller and the parameter $\bar \alpha$ gets larger, the latter being limited in turn by the capacity of the available computers. My colleague Arno Schindlmayr is preparing an application of the proposed method.
Finite procedures
-----------------
[**7.2.1:**]{} The program described in Section 7.1.1 is a [*work*]{} program in a strict sense only if all its steps can be carried out in finite time. Thus, the critical points are found in those steps which contain infinite tasks. The first and decisive one is the determination of the infinite ONB $\mathcal{O}_1$ in the second step. Indeed, one has to expect that in most cases ${\mathcal{O}}_1$ can be calculated completely neither analytically nor numerically. Hence, what can be performed is the determination of a finite part $\mathcal{O}_{f1}$ of $\mathcal{O}_1$, i.e. of its first $R$ elements.
However, if only $\mathcal{O}_{f1}$ is available, the method described here does not break down. Rather, the work program formulated in Section 7.1.1 can also be carried through for $\mathcal{O}_{f1}$ instead of $\mathcal{O}_1$. The only question is which of the obtained results are of physical interest.
In order to get an answer let us use the following obvious [*notation*]{}: $$\mathcal{B}_{f1} = \mathcal{O}_{f1},\mathcal{H}^1_f , \mathcal{H}^2_{f-}, \mathcal{B}^-_{f2},
\mathcal{B}^-_{fN}, \mathcal{H}^N_{f-}, H^-_{f20}, \hat H^-_{f20},
H^-_{fN}, \hat H^-_{fN}, \hat E_f, \hat{\mathcal{E}}_f .$$\
Moreover, as already introduced above, $R$ is the number of elements of $\mathcal{B}_{f1} $.
Now, because the reduced work program depends on the three parameters $N, R$ and $\bar \alpha$, the above question can be answered as follows.\
1.) If $N > R$, there are no nonzero vectors in $\mathcal{H}^N_{f-}$. Therefore this case has to be excluded.\
2.) If $N = R$, the Hilbert space $\mathcal{H}^N_{f-}$ is $1$-dimensional, so that the $N$-particle Hamiltonian $\hat H^-_{fN} = H^-_{fN}$ has one eigenvalue of Hartree-Fock type. This result is of minor interest.
Thus, in order to get better results one needs an $R$ which is “sufficiently” larger than $N$.\
3.) Let $\bar \alpha \leq N <
R$. Then, according to the classification described in Section 6.3, for each $\beta$ satisfying $max \{0, \bar \alpha + N - R\} \leq
\beta \leq \bar \alpha$ at least one submatrix of the matrix of $\hat H^-_{fN}$ exists, which in principle can be diagonalized numerically.\
4.) Let $N < \bar \alpha < R$. Then, as in point 3, for each $\beta$ with $max \{0, \bar \alpha + N - R\} \leq \beta \leq N$ there is again at least one submatrix of the matrix of $\hat
H^-_{fN}$, which in principle can be diagonalized numerically.\
5.) If $N < \bar \alpha = R$, then $\beta = N$. Therefore, there is only one submatrix of the matrix of $\hat H^-_{fN}$, which is identical with the matrix of $\hat H^-_{fN}$. Moreover $\hat H^-_{fN} =
H^-_{fN}$. Hence, this case is optimal, but it possibly cannot be treated numerically because $R$ is too large.
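The admissible values of $\beta$ in the points 3.) and 4.) and the sizes $\binom{\bar\alpha}{\beta}$ of the corresponding submatrices are easily tabulated. The following Python helper is an illustration only; the values of $N$, $R$ and $\bar\alpha$ are arbitrary.

```python
from math import comb

def admissible_blocks(N, R, alpha_bar):
    """Admissible beta and block sizes binom(alpha_bar, beta) for the finite program.

    The tail (positions alpha_bar+1, ..., R) must carry N - beta ones, which
    forces beta >= alpha_bar + N - R; the upper bound is min(alpha_bar, N).
    """
    lo = max(0, alpha_bar + N - R)
    hi = min(alpha_bar, N)
    return {beta: comb(alpha_bar, beta) for beta in range(lo, hi + 1)}

print(admissible_blocks(N=4, R=10, alpha_bar=6))   # beta = 0,...,4 with sizes 1, 6, 15, 20, 15
```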
*The result of the above considerations now reads: only the cases 3., 4. and 5. can be of physical interest.*\
[**7.2.2:**]{} Thus, the question arises whether they *are*. In other words, what can be said about the spectrum of $\hat H^-_N$ by studying $\hat H^-_{fN}$? The answer is given by\
[**Proposition 7.1**]{}: The spectrum of $\hat H^-_{fN}$ is contained in the spectrum of $\hat
H^-_N$. Thus, the finite work program does not change the eigenvalues of $\hat H^-_N$, rather it delivers only a subset of them.\
The [**proof**]{} runs as follows. It suffices to show that the matrices of $\hat
H^-_{fN}$ and $\hat H^-_N$ with respect to the ONB $\mathcal{B}^-_{fN}$ are identical.
Let $ {\mathcal{M}}_{fL}$ be the set of all $\hat l \in BZF_L$ such that $l_\varrho = 0$, if $\varrho >R$. Then $\hat n \in
\mathcal{M}_{fN}$ if and only if $\Psi^-_N (\hat n) \in
\mathcal{B}^-_{fN}$. Now let $\hat k, \hat m \in BZF_2$ and $\hat n',
\hat n \in \mathcal{M}_{fN}$. If $\hat n - \hat m \in BZF$ and $\hat
n' - \hat k \in BZF$, then $\hat k, \hat m \in
{\mathcal{M}}_{f2}$. This implies $$\label{7.1} \hat E_f (\hat k, \hat m) = \hat E (\hat
k, \hat m) .$$ In addition let $\mathcal{D}_f$ be the set of all $\hat
d:= \hat k - \hat m,\; \hat k, \hat m \in \mathcal{M}_{f2}$. Thus $d_\varrho = 0$, if $\varrho >R$, for each $\hat d \in \mathcal{D}_f$.
By definition, $\hat{\mathcal{E}}_f (\hat n, \hat d),$ where $\hat n
\in \mathcal{M}_{fN}$ and $\hat d \in \mathcal{D}_f$, is constructed from $\hat E_f (\hat k, \hat m)$ via Formula (\[3.58\]) like $\hat{\mathcal{E}} (\hat n, \hat d)$ from $\hat E (\hat k, \hat m)$ (or $\mathcal{E} (\hat n, \hat d)$ from $E (\hat k, \hat
m))$. Therefore, by Formula (\[7.1\]) one obtains $$\label{7.2} \hat{\mathcal{E}}_f (\hat n, \hat d) =
\hat{\mathcal{E}} (\hat n, \hat d)$$ so that by use of (\[3.56\]) $$\label{7.3} \langle\hat n' | \hat H^-_{fN}| \hat
n\rangle = \langle\hat n' |\hat H^-_N| \hat n\rangle.$$ Finally, if $\hat n \in \mathcal{M}_{fN}$ and $\hat n
\in \mathcal{N}_\beta (\hat r),\; \hat r = (\bar \alpha, \hat n)$, then $\mathcal{N}_\beta (\hat r) \subset \mathcal{M}_{fN}$. Therefore, corresponding submatrices of $\hat H^-_{fN}$ and $\hat H^-_N$ have the same shape. Hence, they are identical.\
This result guarantees the practical applicability of the finite work program.
Appendix
========
Glossary
--------
[**A.1.1:**]{}\[A.1.1\] The formalism briefly described in this section was mainly developed by Cook [@Cook] and by Schroeck [@Schroeck]. The purpose of this appendix is to fix notation and to formulate a few results which are used throughout the paper.\
The starting point is an axiom of $QM$ that reads: Let $\mathcal{H}^1$ be the Hilbert space of a system containing only one particle of a certain kind. Then the Hilbert space of a system containing $N$ particles of the same kind is the symmetric or the antisymmetric subspace of the $N-$fold tensor product $\mathcal{H}^N: = \bigotimes^N \mathcal{H}^1$. Likewise the Hilbert spaces of systems composed of different kinds of particles are subspaces of appropriate tensor products of one-particle Hilbert spaces.
In what follows the tensor product $\otimes$ of Hilbert spaces is understood to be a complete space. But the noncomplete tensor product of linear manifolds is a noncomplete linear manifold. This product is denoted $\underline{\otimes}$.
The inner product in $\mathcal{H}^N$, denoted $\langle\;\cdot,
\;\cdot\;\rangle$ or $\langle\;\cdot, \cdot\;\rangle_N$, is defined as usual by the inner product $\langle\;\cdot, \cdot\;\rangle_1$ in $\mathcal{H}^1$ in the following way. If $f = f_1 \otimes \cdots
\otimes f_N$ and $ g = g_1 \otimes \cdots \otimes g_N$, then $$\label{A1.1} \langle f, g \rangle_N = \langle f_1,
g_1 \rangle_1 \cdots \langle f_N, g_N \rangle_1 \;.$$ By linear and continuous extension $\langle\cdot,
\cdot\rangle_N$ is defined on $\mathcal{H}^N$.
The tensor structure of the $N$-particle Hilbert spaces implies the following\
[**Proposition A.1.1:**]{} Let $\mathcal{B}_1 :=
\{\phi_\lambda : \lambda \in \mathbb{N}\}$ be an orthonormal basis (ONB) in $\mathcal{H}^1$. Then $$\label{A1.2} {\mathcal{B}}_N := \{\phi_{\lambda_1}
\otimes \cdots \otimes \phi_{\lambda_N}: \lambda_j \in \mathbb{N},
j=1, \ldots, N \}\\$$ is an ONB in $\mathcal{H}^N$.\
Throughout this paper the abbreviation is used: $$\label{A1.3} \phi_{\lambda_1 \cdots \lambda_N} =
\phi_{\lambda_1} \otimes \cdots \otimes \phi_{\lambda_N} .$$ [**A.1.2:**]{} Let ${\mathcal S}_N$ be the symmetric group, and let $P \in {\mathcal S}_N$. Then the operator $U (P)$ of the exchange of particles is *defined* by $$\label{A1.4} U (P) \phi_{\kappa_1 \cdots \kappa_N} =
\phi_{\kappa_{P^{-1}(1)} \cdots \kappa_{P^{-1}(N)}}.$$ and by continuous linear extension.\
The operator $U
(P)$ has the following properties.\
[**Proposition A.1.2:**]{} 1.) $U
(P)$ is invariant under a change of the ONB.\
2.) $U(P)$ is defined on $\mathcal{H}^N$ and is unitary, i.e. $U (P) U^\star (P) =
1$. Moreover $$\label{A1.5} U^\star (P) = U(P^{-1}),\; U(PQ) = U (P)
U (Q).$$ With the help of $ U(P), P \in {\mathcal S}_N$ the symmetrizer and the antisymmetrizer are [*defined*]{} by $$\label{A1.6} S^\pm_N = \frac{1}{N!} \sum_{P \in
{\mathcal S}_N} \sigma^\pm (P) U (P)$$ with $\sigma^+ (P) = 1$ and $\sigma^- (P) =
(-1)^{J(P)}$, where $J(P)$ is either the number of inversions of $P$ or equivalently the number of transpositions forming $P$.
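A short NumPy sketch (illustrative only) shows the antisymmetrizer (\[A1.6\]) at work for $N = 3$: applied to a product of three distinct one-particle basis vectors and multiplied by $\sqrt{N!}$, it yields a normalized vector, in accordance with Proposition A.1.4 below. The dimension of the one-particle space is an arbitrary choice.

```python
import numpy as np
from math import factorial
from itertools import permutations

def antisymmetrize(vectors):
    """S^-_N applied to phi_{kappa_1} x ... x phi_{kappa_N}, cf. Formula (A1.6).

    vectors is a list of N one-particle vectors; sigma^-(P) is computed as the
    parity of the permutation, and the tensor product is built with np.kron.
    """
    N = len(vectors)
    result = 0.0
    for perm in permutations(range(N)):
        # parity of the permutation via the number of inversions
        inversions = sum(perm[i] > perm[j] for i in range(N) for j in range(i + 1, N))
        prod = vectors[perm[0]]
        for p in perm[1:]:
            prod = np.kron(prod, vectors[p])
        result = result + (-1) ** inversions * prod
    return result / factorial(N)

# Three distinct orbitals of a 4-dimensional one-particle space.
phi = np.eye(4)
psi = np.sqrt(factorial(3)) * antisymmetrize([phi[0], phi[1], phi[2]])
print(np.isclose(np.linalg.norm(psi), 1.0))   # the vector of (A1.10) is normalized
```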
Some useful properties of the operators $S^\pm_N$ are summarized in the next [**Proposition A.1.3:**]{} $S^\pm_N$ are projections defined on ${\mathcal{H}}^N$. Moreover $$\label{A1.7}
\begin{array}{ll} U (P) S^\pm_N = S^\pm_N U(P) = \sigma^\pm (P)
S^\pm_N, \\ [3ex] S^\pm_{M+K} (S^\pm_M \phi_{\mu_1 \ldots \mu_M}
\otimes S^\pm_K \phi_{\kappa_1 \ldots \kappa_K}) = S^\pm_{M+K}
(\phi_{\mu_1 \ldots \mu_M} \otimes \phi_{\kappa_1 \ldots \kappa_K}).
\end{array}$$ Then the physically relevant subspaces of ${\mathcal{H}}^N$ are ${\mathcal{H}}^N_\pm = S^\pm_N [
{\mathcal{H}}^N]$ where $+$ stands for bosons and $-$ for fermions. Thus $$\label{A1.8} {\mathcal{H}}^N = {\mathcal{H}}^N_+
\oplus {\mathcal{H}}^N_- \oplus {\mathcal{H}}^N_r ,$$ where $\oplus$ is the orthogonal sum as usual.
Some orthonormal bases of ${\mathcal{H}}^N_\pm$ play a special role in this paper; they are defined by\
[**Proposition A.1.4:**]{} 1.) The set ${\mathcal{B}}^+_N$ of all vectors $$\label{A1.9} {\displaystyle}{\Psi^+_{\kappa_1 \ldots \kappa_N} :=
\frac{\sqrt{N!}}{\sqrt{\prod_j n_{\kappa_j}!}} S^+_N \phi_{\kappa_1
\ldots \kappa_N}}$$ with $\kappa_1 \leq \ldots \leq \kappa_N$ and $n_{\kappa_j} = \sum^N_{\alpha = 1} \delta_{\kappa_j\kappa_\alpha}$ is an ONB in ${\mathcal{H}}^N_+$. Moreover $\sum_j n_{\kappa_j} = N$.\
2.) The set ${\mathcal{B}}^-_N$ of all vectors $$\label{A1.10} \Psi^-_{\kappa_1 \ldots \kappa_N} :=
\sqrt{N!} S^-_N \phi_{\kappa_1 \ldots \kappa_N}$$ with $\kappa_1 < \ldots < \kappa_N$ is an ONB in ${\mathcal{H}}^N_-$.\
[**A.1.3:**]{} For the problems to be treated in this paper the notation of (\[A1.9\]) and (\[A1.10\]) is not optimal; it can be improved by introducing sequences of occupation numbers via the following\
[**Definition A.1.5:**]{} 1.) Let be given a sequence of indices $\kappa_1, \cdots, \kappa_N$ as in (\[A1.9\]) or in (\[A1.10\]), where $\kappa_j \in \mathbb{N}, j = 1, \cdots, N.$ Then define the occupation number of $\kappa \in \mathbb{N}$ by $$\label{A1.11} n_\kappa = \sum^N_{j=1} \delta_{\kappa
\kappa_j},$$ and the sequence of all $n_\kappa, \kappa \in
\mathbb{N}$, abbreviated $bzf$, by $$\label{A1.12} (n_1, n_2, \cdots) = : \hat n.$$ 2.) Moreover let us denote the set of all $bzf$, for $+$ or for $-$, by $BZF$. Then the proposition “$\hat n$ is a sequence of occupation numbers” is abbreviated by “$\hat n$ is a $bzf$” or by “$\hat n \in BZF$”. Sometimes it is useful to write $BZF_L$ for the set of all $bzf \;\hat l$ with $\sum_\varrho l_\varrho = L$.\
[**Consequence A.1.6:**]{} 1.) Each sequence of indices $\kappa_1, \cdots,
\kappa_N$ determines uniquely a $bzf$, and vice versa.\
2.) The elements of the ONB ${\mathcal{B}}^\pm_N$ can be written this way: $$\label{A1.13} \Psi^\pm_{\kappa_1 \ldots \kappa_{N}}
=: \Psi^\pm_N ({\hat n}).$$ This notation turns out to be very useful.\
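The correspondence of Consequence A.1.6 between index sequences and occupation number sequences is made explicit by the following Python sketch (an illustration only; the bosonic case with occupation numbers larger than 1 is included).

```python
def to_bzf(indices):
    """Occupation numbers (A1.11)/(A1.12) of an index sequence kappa_1 <= ... <= kappa_N.

    The returned tuple is cut off after the largest occupied orbital; all
    further occupation numbers are zero.
    """
    top = max(indices)
    return tuple(sum(1 for k in indices if k == j) for j in range(1, top + 1))

def to_indices(n_hat):
    """Inverse map of Consequence A.1.6: recover the ordered index sequence."""
    return tuple(j for j, n in enumerate(n_hat, start=1) for _ in range(n))

print(to_bzf((1, 3, 3, 6)))            # (1, 0, 2, 0, 0, 1)
print(to_indices((1, 0, 2, 0, 0, 1)))  # (1, 3, 3, 6)
```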
[**A.1.4:**]{} In the next step the question to be answered is which operators are physically relevant, i.e. which are the relevant observables in ${\mathcal{H}}^N$. Obviously only those operators are relevant which leave the spaces ${\mathcal{H}}^N_\pm$ invariant.
Therefore we [*define*]{}: Let ${\mathcal{D}}_A \subset
{\mathcal{H}}^N$ be the domain of a selfadjoint operator $A$. Then, if for each $f \in {\mathcal{D}}_A \cap {\mathcal{H}}^N_\pm$ the relation $A f \in {\mathcal{H}}^N_\pm$ holds, the operator $A$ is called a physically relevant observable.\
[**Consequence A.1.7:**]{} 1.) $A$ is physically relevant if and only if $$\label{A1.14} A = S^+_N A S^+_N + S^-_N A S^-_N +
S^r_N A S^r_N \quad \mbox{with} \quad S^r_N = 1- S^+_N - S^-_N .$$ 2.) $A$ is physically relevant if for each $P \in
{\mathcal{S}} _N$ $$\label{A1.15} A= U(P) A U^\star (P)$$ holds, i.e. if $A$ is invariant under permutations of particles.\
[**A.1.5:**]{} Many relevant physical observables are defined using tensor products of operators in Hilbert spaces. In order to avoid unnecessary complications here only the tensor product of two operators is introduced, because the extension to more than two factors is straightforward.
Thus let be given two Hilbert spaces ${\mathcal{H}}_1$ and ${\mathcal{H}}_2$ and two densely defined closed operators $A_1$ and $A_2$. Then $A_1^\star$ and $A^\star_2$ exist having domains ${\mathcal{D}}_{A^\star_1}$ and ${\mathcal{D}}_{A^\star_2}$.
Now form the (noncomplete) tensor product ${\mathcal{D}}_0 : =
{\mathcal{D}}_{A^\star_1} \underline{\otimes }
{\mathcal{D}}_{A^\star_2} \subset {\mathcal{H}}_1 \otimes
{\mathcal{H}}_2$. It is dense in ${\mathcal{H}}_1 \otimes
{\mathcal{H}}_2 $ and contains only finite linear combinations of the form $$\label{A1.16} f = \sum a_j \varphi^j_1 \otimes
\varphi^j_2, \quad \varphi^j_\kappa \in {\mathcal{D}}_{A^\star_\kappa},
\kappa = 1,2.$$ Then the operator $T_0 (A^\star_1, A^\star_2)$ is defined by $$\label{A1.17} T_0 (A^\star_1, A^\star_2) f = \sum a_j
(A^\star_1 \varphi^j_1) \otimes (A^\star_2 \varphi^j_2).$$ Finally the tensor product of $A_1$ and $A_2$ is [*defined*]{} by $$\label{A1.18} A_1 \otimes A_2 = T_0 (A^\star_1,
A^\star_2)^\star.$$ Hence, for selfadjoint operators definition (\[A1.18\]) reads $$A_1 \otimes A_2 = T_0 (A_1, A_2) ^\star .$$
A very useful tool is given by\
[**Proposition A.1.8:**]{} For bounded operators $A_1, A_2$ the above definition of $A_1 \otimes A_2$ is equivalent to $$\label{A1.19} (A_1 \otimes A_2) g =
\sum^\infty_{\lambda, \kappa = 1} a_{\lambda \kappa} (A_1 \phi^1_\lambda)
\otimes (A_2 \phi^2_\kappa)\; ,$$ where $\{ \phi^\varrho_\lambda : \lambda \in \mathbb{N}
\}$ is an ONB in ${\mathcal{H}}_\varrho, \varrho = 1,2$, and $g =
\sum_{\lambda, \kappa = 1}^\infty a_{\lambda \kappa} \phi^1_\lambda \otimes
\phi^2_\kappa \in {\mathcal{H}}_1 \otimes {\mathcal{H}}_2$.\
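For finite-dimensional spaces, the statement of Proposition A.1.8 reduces to the familiar Kronecker product of matrices. The following numpy sketch is only a finite-dimensional illustration (the dimensions and random matrices are arbitrary choices); it checks (\[A1.19\]) by expanding a vector $g$ in product-basis coefficients $a_{\lambda\kappa}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 3, 4                                # dimensions of H_1 and H_2 (toy choice)
A1 = rng.standard_normal((d1, d1))           # bounded operators = arbitrary matrices
A2 = rng.standard_normal((d2, d2))

# Coefficients a_{lambda kappa} of g in the product basis phi^1_lambda (x) phi^2_kappa
a = rng.standard_normal((d1, d2))
g = a.reshape(-1)                            # vectorised g in H_1 (x) H_2

# Left-hand side of (A1.19): (A_1 (x) A_2) g via the Kronecker product
lhs = np.kron(A1, A2) @ g

# Right-hand side: sum_{lambda,kappa} a_{lambda kappa} (A_1 phi^1_lambda) (x) (A_2 phi^2_kappa)
rhs = np.zeros(d1 * d2)
for lam in range(d1):
    for kap in range(d2):
        rhs += a[lam, kap] * np.kron(A1[:, lam], A2[:, kap])

assert np.allclose(lhs, rhs)
```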
[**A.1.6:**]{} The results of the last section are now applied to transfer observables of $M$-particle systems into observables of $N$-particle systems, $M<N$. Some basic results in this connection are contained in the following\
[**Proposition A.1.9:**]{} 1.) Let $A_M$ be selfadjoint in ${\mathcal{H}}^M$ and let 1 be the identity operator in ${\mathcal{H}}^{N-M}$. Then $$\label{A1.20} (A_M \otimes 1)^\star = A_M \otimes 1
\;.$$ 2.) If $A_M$ is bounded, then $$\label{A1.21}
\parallel A_M \otimes 1\parallel = \parallel A_M \parallel \;.$$ 3.) If $(A_M + B_M)^\star = A^\star_M + B^\star_M$, then $$\label{A1.22} (A_M + B_M) \otimes 1 \supset (A_M
\otimes 1) + (B_M \otimes 1)\; .$$ The $=$-sign holds if the domains of both sides are equal, which is the case if $A_M, B_M$ are bounded and have domain ${\mathcal{H}}^M$.\
4.) If $A_M$ or $B_M$ is bounded, then $$\label{A1.23} (A_M \otimes 1) (B_M \otimes 1) = (A_M
B_M \otimes 1)\;.$$ 5.) From (\[A1.7\]) one concludes that $$\label{A1.24} S^\pm_N (S^\pm_M \otimes 1) = S^\pm_N =
(S^\pm_M \otimes 1) S^\pm_N\;.$$ [**A.1.7:**]{} The operator which defines the physically relevant transfer from ${\mathcal{H}}^M$ to ${\mathcal{H}}^N, M < N$ is given by $$\label{A1.25} \Omega_N (A_M) := (M! (N-M)!)^{-1}
\sum_{P \in {\mathcal{S}}_N} U(P) (A_M \otimes 1 \otimes \ldots
\otimes 1) U^\star(P) ,$$ where $A_M$ is a densely defined closed linear operator in ${\mathcal{H}}^M$ and where 1 is the identity operator on ${\mathcal{H}}^1$. It has the following properties.\
[**Proposition A.1.10:**]{} 1.) The equation $$\label{A1.26} U(Q) \Omega_N (A_M) U^\star (Q) =
\Omega_N (A_M)$$ holds for each $Q \in {\mathcal{S}}_N$ so that $\Omega_N (A_M)$ is indeed physically relevant if it is selfadjoint.\
2.) If $A_M$ is bounded and selfadjoint with domain ${\mathcal{H}}^M$ the operator $\Omega_N (A_M)$ is bounded and selfadjoint with domain ${\mathcal{H}}^N$.\
3.) If $A_M$ is unbounded and selfadjoint then $\Omega_N (A_M)$ is not necessarily selfadjoint. But it is symmetric if it is densely defined.\
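Definition (\[A1.25\]) and the invariance property (\[A1.26\]) can be checked numerically in a small toy setting. The sketch below takes $N=3$, $M=1$ and a $d$-dimensional one-particle space (all sizes and the random operator are illustrative assumptions), builds $U(P)$ by permuting tensor factors, and verifies that $\Omega_3(A_1)$ coincides with the sum of the one-particle operator acting on each factor.

```python
import itertools
import math

import numpy as np

d, N, M = 2, 3, 1                              # toy sizes: one-particle dimension d, N particles
rng = np.random.default_rng(1)
A = rng.standard_normal((d, d))                # a one-particle operator A_M (here M = 1)

def U(P):
    """Matrix of U(P) on the N-fold tensor product of C^d: permutes the factors according to P."""
    dim = d ** N
    mat = np.zeros((dim, dim))
    for idx in itertools.product(range(d), repeat=N):
        col = np.ravel_multi_index(idx, (d,) * N)
        row = np.ravel_multi_index(tuple(idx[P[a]] for a in range(N)), (d,) * N)
        mat[row, col] = 1.0
    return mat

A_ext = np.kron(A, np.eye(d ** (N - M)))       # A_M tensored with identities on the remaining factors

# Omega_N(A_M) = (M!(N-M)!)^{-1} sum_P U(P) (A_M x 1 x ... x 1) U(P)^*   (Eq. A1.25)
norm = 1.0 / (math.factorial(M) * math.factorial(N - M))
Omega = norm * sum(U(P) @ A_ext @ U(P).T for P in itertools.permutations(range(N)))

# For M = 1 this is just "A acting on each particle in turn"
expected = sum(np.kron(np.kron(np.eye(d ** k), A), np.eye(d ** (N - 1 - k))) for k in range(N))
assert np.allclose(Omega, expected)

# Permutation invariance (A1.26): U(Q) Omega U(Q)^* = Omega for every Q in S_N
for Q in itertools.permutations(range(N)):
    assert np.allclose(U(Q) @ Omega @ U(Q).T, Omega)
```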
[**A.1.8:**]{} Though the operators $\Omega_N
(A_M)$ are physically relevant, they are not of interest in a strict sense, if one wants to consider only systems with one kind of particle, as is the case in this paper. Then only the spaces ${\mathcal{H}}^N_\pm \subset {\mathcal{H}}^N$ are of interest and the operators defined therein. Thus the following [*definition*]{} is natural for the operators of proper physical relevance: $$\label{A1.27} \Omega^\pm_N (A_M) = S^\pm_N \Omega_N
(A_M) S^\pm_N\;.$$ As in the preceding sections here $\Omega^\pm_N (A_M)$ is studied only for selfadjoint $A_M$. But the definition itself is much more general. The operators $\Omega^\pm_N (A_M)$ have some properties, which are of special interest in this paper.\
[**Proposition A.1.11:**]{} 1.) For each $P\in {\mathcal{S}}_N $ the following relations hold: $$\label{A1.28}
\begin{array} {ll} \Omega^\pm_N (A_M) &= {N \choose M} S^\pm_N (A_M
\otimes 1 \cdots \otimes 1)S^\pm_N \\ [3ex] &= {N \choose M} S^\pm_N
U(P) (A_M \otimes 1 \otimes \cdots \otimes 1) U^\star (P) S^\pm_N \; .
\end{array}$$ 2.) $\Omega^\pm_N (A_M)$ is selfadjoint, because $A_M$ is selfadjoint by supposition and because $S^\pm_N$ are projections, i.e. are bounded.\
3.) If $A_M$ is bounded, then $\Omega^\pm_N (A_M)$ is bounded and $$\label{A1.29}
\parallel \Omega^\pm_N (A_M)\parallel \leq {N \choose M} \parallel A_M
\parallel\; .$$ 4.) If $A_M + B_M,\; A_M$ and $B_M$ are selfadjoint, Formula (\[A1.22\]) implies the relation $$\label{A1.30} \Omega^\pm_N (A_M + B_M) \supset
\Omega^\pm_N (A_M) + \Omega^\pm_N (B_M)\;.$$ If $A_M, B_M$ are bounded with domain ${\mathcal{H}}^M$, the $=$ sign holds.\
5.) Let $A_M + B_M$ and $\Omega^\pm_N (A_M) + \Omega^\pm_N (B_M)$ be selfadjoint. Then (\[A1.30\]) is an equation, because a selfadjoint operator cannot have a proper selfadjoint extension.\
6.) Finally, from (\[A1.24\]) and (\[A1.28\]) one concludes that $$\label{A1.31} \Omega^\pm_N (\Omega^\pm_M (A_M)) =
\Omega^\pm_N (A_M)\; .$$
Dummy Hamiltonians
------------------
[**A.2.1:**]{} In order to formulate explicitly the dummy Hamiltonians for electronic systems it is advisable to work with representations of Hilbert spaces instead of the abstract versions used elsewhere in this paper. For our purposes the position-spin representation is most useful. Therefore the Hilbert spaces we are working with are $$\label{A2.1} {\mathcal{H}}^N_- \subset{\bar
{\mathcal{H}}^N} = \bigotimes^N (L^2(\mathbb{R}^{3}) \otimes
\mathcal{S}^1),\quad {\mathcal{H}}^2_- \subset {\bar {\mathcal{H}}^2}
= \bigotimes^2 (L^2 (\mathbb{R}^{3}) \otimes \mathcal{S}^1) ,$$ where $\mathcal{S}^1$ is the complex vector space of spin functions $u : \{ 1, -1 \} \rightarrow \mathbb{C}$ which is spanned by the ONB $\{\delta_{1,s}, \delta_{-1, s}\}$. (Cf. also Subsection 3.1.1.) This choice fits into the abstract formulations by the following definition of the tensor product. Let $x \in
{\mathbb{R}}^3$, $s \in \{1, -1\}$, and $z:= (x,s)$. Then, if $f_1, f_2
\in {\bar{\mathcal{H}}}^1 = L^2 (\mathbb{R}^3) \otimes \mathcal
{S}^1$, one defines $f_1 \otimes f_2$ by $$\label{A2.2} (f_1 \otimes f_2) (z_1, z_2) = f_1 (z_1)
f_2 (z_2)\; .$$ For the sake of simplicity let us assume, in the following two examples, that the external fields and the interactions are electrostatic. This means that all influences of magnetism and spin are disregarded.\
[**A2.2:**]{} On the above assumptions the Hamiltonian for the $N$ electrons in an atom reads $$\label{A2.3} H^-_N \supset \frac{1}{2 m_0}
\sum^N_{j=1} P^2_j - N e_0^2 \sum^N_{j=1} \frac{1}{r_j} + \frac{1}{2}
e^2_0 \sum^N_{j\not= k} \frac{1}{r_{jk}} \;,$$ where $r_j = |x_j|$ and $r_{jk} = |x_j -
x_k|$. The domain of $H^-_N$ is ${\mathcal{H}}^N_-$ as defined in Section 3.1.1. Consequently, the atomic dummy Hamiltonian describes “dummy helium” and is defined on ${\mathcal{H}}^2_-$. It reads explicitly $$\label{A2.4} H^-_{20} \supset \frac{\gamma_0}{2m_0}
(P^2_1 + P^2_2) - 2 \gamma_0 e^2_0 (\frac{1}{r_1} + \frac{1}{r_2}) +
e^2_0 \frac{1}{r_{12}}$$ with $\gamma_0 = (N-1)^{-1}.$\
[**A2.3:**]{} Now let us consider $N$ electrons in a finite lattice, the points of which are given by $y_\alpha, \alpha = 1, \cdots, N$ each carrying the charge $e_0$. Then the Hamiltonian is defined by $$\label{A2.5} H^-_N \supset \frac{1}{2 m_0}
\sum^N_{j=1} P^2_j - e^2_0 \sum^N_{j=1} \sum^N_{\alpha = 1}
\frac{1}{r'_{j\alpha}} + \frac{1}{2} e^2_0 \sum^N_{j \not= \kappa}
\frac{1}{r_{j \kappa}}\;,$$ where $r'_{j \alpha} = |x_j - y_\alpha|$. Thus the dummy solid has a Hamiltonian given by $$\label{A2.6} H^-_{20} \supset \frac{\gamma_0}{2m_0}
(P^2_1 + P^2_2) - \gamma_0 e^2_0 \sum^N_{\alpha=1}
(\frac{1}{r'_{1\alpha}} + \frac{1}{r'_{2\alpha}}) + e^2_0
\frac{1}{r_{12}}$$ and is defined on $\mathcal{H}^2_-.$
Hartree-Fock Procedure
----------------------
[**A.3.1**]{}: In what follows a system of two Fermions is considered. For this purpose it is useful to introduce some [*notation*]{}.\
1.) Let $\bar{\mathcal{H}}^1 = L^2 (\mathbb{R}^3) \otimes \mathcal{S}^1$, where $\mathcal{S}^1$ is the space of spin functions spanned by the ONB $\{\delta_{1,s}, \delta_{-1,s}\}$. Thus the general two-particle space is $\bar{\mathcal{H}}^2 = \bar{\mathcal{H}}^1 \otimes \bar{\mathcal{H}}^1$ and the space for Fermions is $\bar{\mathcal{H}}^2_-$.\
2.) It is assumed that the Hamiltonian of the system has the form
$$\label{A.3.1}
H_2 = K \otimes 1 + 1 \otimes K + W,$$
\
where $W$ is a multiplication operator densely defined in $\bar{\mathcal{H}}^2_-$ by a real function $V (x, s, x', s'), x, x' \in \mathbb{R}^3$ and $s, s' \in \{1, -1\}$. The operator $K$ contains the kinetic energy and the external fields. The dummy Hamiltonian is an example of the operators considered here.\
3.) The inner product in $\bar{\mathcal{H}}^1$ is defined as usual by $$\label{A.3.2}
<f, g > = \sum^1_{s = -1} \int \bar{f} (x,s) g(x,s) dx$$\
for $f, g \in \bar{\mathcal{H}}^1$.\
4.) Some special forms of the inner product appear in the context of the Hartree-Fock procedure. Let $\Psi \in \bar{\mathcal{H}}^2$ and $f, g \in \bar{\mathcal{H}}^1$. Then
$$\label{A.3.3}
< g, W \Psi >_1 (x,s) = \sum_{s'} \int \bar{g} (x', s') V (x', s', x,s) \Psi (x', s', x, s) dx'$$
and
$$\label{A.3.4}
< g, Wf >_1 (x,s) = \sum_{s'} \int \bar{g} (x', s') V(x', s', x,s) f (x', s') dx'.$$
\
Therefore one obtains
$$\label{A.3.5}
<g, W (f \otimes h) >_1 = <g, W f>_1h.$$
\
5.) From (\[A.3.4\]) one concludes that
$$\label{A.3.6}
<g, Wf>_1 = < Wg, f>_1.$$
\
If $g=f$, the term $<f, Wf>_1$ is the action of the “charge density” $|f|^2$ on one particle. Moreover, if the function $V$ is bounded, the term $<g, Wf>_1$ is defined for all $g, f \in \bar{\mathcal{H}}^1$. For realistic Hamiltonians $\bar H_2$ of the form (\[A.3.1\]) the function $V$ is symmetric, i.e. $V (x,s,x',s') = V(x', s', x,s)$.\
[**A.3.2:**]{} An essential part of the Hartree-Fock procedure is given by the following\
[**Definition A.3.1**]{}: The linear operator $$\label{A.3.7}
F ( \chi) = K + < \chi, W \chi >_1 - < \chi, W \cdot >_1 \chi$$ is defined for all $\chi \in \bar{\mathcal{H}}^1$, for which the domain of $F(\chi)$ is dense in $\bar{\mathcal{H}}^1 \ominus span \{\chi\} = : \bar{\mathcal{H}}^1_\chi$. It is called the Fock operator belonging to $\chi$.\
Then for each $\chi$, for which $F(\chi)$ is defined, the following result holds.\
[**Proposition A.3.2:**]{} $F(\chi)$ is symmetric in $\bar{\mathcal{H}}^1_\chi$.\
[**Proof:**]{} Let $f,g$ be elements of the domain of $F(\chi)$. Then
$$\nonumber
\begin{array} {lll}
<f, F(\chi) g>
&=&<f, Kg> + < \chi \otimes f, W \chi \otimes g>- <\chi \otimes f, Wg \otimes \chi>\\
&=& <Kf,g> + <W\chi \otimes f, \chi \otimes g> - <W \chi \otimes f, g \otimes \chi>\\
&=& < F(\chi) f, g>\, .
\end{array}$$
\
[**A.3.3:**]{} Now the Hartree-Fock procedure can be described by the following two steps.\
[**1st step:**]{} Determine two elements $\phi_1, \phi_2$ of $\bar{\mathcal{H}}^1$ and two real numbers $e_{12}$ and $e_{21}$ such that the equations $$\label{A.3.8}
\begin{array} {lll}
F (\phi_2) \phi_1 &=& e_{21} \phi_1 \\
F(\phi_1) \phi_2 &=& e_{12} \phi_2
\end{array}$$\
are satisfied. In addition let $e_{12} \le e_{21}$.\
[**2nd step:**]{} Determine normed elements $\phi_\kappa \in \bar{\mathcal{H}}^1_{\phi_1}$, $ \kappa = 3,4, \ldots$ and real numbers $e_{1 \kappa}$ such that the equations $$\label{A.3.9}
F (\phi_1) \phi_\kappa = e_{1 \kappa} \phi_\kappa$$ hold.\
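The coupled eigenvalue problems (\[A.3.8\]) and (\[A.3.9\]) are naturally attacked by an alternating, self-consistent iteration. The following Python sketch shows how the Fock operator of Definition A.3.1 translates into matrices for a spin-free toy discretisation on $d$ grid points; the matrices $K$ and $V$, the starting guesses, the projection trick and the fixed number of iterations are all illustrative assumptions, not the procedure applied to any concrete Hamiltonian of this paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 40                                          # number of grid points (toy discretisation)

# One-particle part K (kinetic energy + external field) as a symmetric matrix
K = np.diag(np.linspace(0.0, 4.0, d)) + 0.05 * np.eye(d, k=1) + 0.05 * np.eye(d, k=-1)

# Interaction table V(x, x'), real and symmetric, as assumed for realistic Hamiltonians
V = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(d), np.arange(d))))

def fock(chi):
    """Matrix of F(chi) = K + <chi, W chi>_1 - <chi, W . >_1 chi  (Definition A.3.1)."""
    direct = V.T @ (chi ** 2)                   # <chi, W chi>_1 acting by multiplication
    exchange = np.outer(chi, chi) * V.T         # f |-> <chi, W f>_1(x) * chi(x)
    return K + np.diag(direct) - exchange

def lowest_vector(F, orth_to=None):
    """Lowest eigenvector of the symmetric matrix F, optionally restricted to a complement."""
    if orth_to is not None:
        P = np.eye(d) - np.outer(orth_to, orth_to)
        F = P @ F @ P + 1.0e6 * np.outer(orth_to, orth_to)   # push orth_to out of the low end
    w, U = np.linalg.eigh(F)
    v = U[:, 0]
    return v / np.linalg.norm(v)

phi1 = lowest_vector(K)                         # crude starting guesses
phi2 = lowest_vector(K, orth_to=phi1)

for _ in range(100):                            # alternating iteration for (A.3.8)
    phi1 = lowest_vector(fock(phi2))
    phi2 = lowest_vector(fock(phi1), orth_to=phi1)

e21 = phi1 @ fock(phi2) @ phi1                  # orbital energies e_21 and e_12
e12 = phi2 @ fock(phi1) @ phi2
print(e21, e12, abs(phi1 @ phi2))               # the overlap should be close to 0
```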
The Hartree-Fock procedure is usually derived via the Ritz variational principle. But this derivation is not of interest in the present context; rather, the following consequence is.\
[**Proposition A.3.3:**]{} The set ${\mathcal{O}}_1$ of vectors $\phi_\kappa \in \bar{\mathcal{H}}^1, \kappa = 1,2,3, \ldots$ obtained from (\[A.3.8\]) and (\[A.3.9\]) is an orthonormal system in $\bar{\mathcal{H}}^1$.\
[**Proof:**]{} From the definition of the Fock operator and from Formulae (\[A.3.8\]), (\[A.3.9\]) it follows that $\phi_\kappa \in \bar{\mathcal{H}}^1_{\phi_1}, \kappa \geq 2$. Hence $<\phi_1, \phi_\kappa> = 0$ for all $\kappa \geq 2$. Since the Fock operator $F(\phi_1)$ is a symmetric linear operator in $\bar{\mathcal{H}}^1_{\phi_1}$, all its eigenspaces are orthogonal for different eigenvalues. Therefore the set of eigenvectors $\phi_\kappa, \kappa \geq 2$ can be chosen such that it is an orthonormal system in $\bar{\mathcal{H}}^1_{\phi_1}$. Hence, ${\mathcal{O}}_1$ is an orthonormal system in $\bar{\mathcal{H}}^1$.\
[**A.3.4:**]{} The Hartree-Fock procedure is not only a method for determining the set ${\mathcal{O}}_1$; it is also used for an approximate diagonalization of $\bar H_2$. This runs as follows. Let\
$$\label{A3.10}
\Psi^-_{\kappa \lambda} = \frac{1}{\sqrt{2}}
(\phi_\kappa \otimes \phi_\lambda - \phi_\lambda \otimes \phi_\kappa)$$\
for $\kappa < \lambda $ and let\
$$\label{A3.11}
E_{\kappa \lambda} = <\Psi^-_{\kappa \lambda}, H_2 \Psi^-_{\kappa \lambda}> .$$\
Then the Hartree-Fock approximation of $\bar H_2$ is given by the diagonal operator\
$$\label{A3.12}
\hat{H}^-_2 = \sum_{\kappa \lambda} E_{\kappa \lambda}
\Psi^-_{\kappa \lambda} <\Psi^-_{\kappa \lambda}, \cdot >.$$\
Because normally the discrete spectrum of $\bar H_2$ is bounded, the operator $
\hat{H}^-_2$ is bounded too. $\hat{H}^-_2$ is the best approximation of $\bar H_2$ using only the Ritz variational principle for vectors of the shape (\[A3.10\]).\
By Proposition 4.1 the Hartree-Fock diagonalization of an N-particle Hamiltonian is reduced to Hartree-Fock diagonalizing its dummy Hamiltonian.
[**Acknowledgment**]{}\
I want to thank my colleagues Arno Schindlmayr, Uwe Gerstmann, Thierry Jecko and Jörg Meyer for valuable discussions and critical remarks, and Mr. Wolfgang Rothfritz for correcting my English.
|
---
abstract: 'We review some aspects of the current state of data-intensive astronomy, its methods, and some outstanding data analysis challenges. Astronomy is at the forefront of “big data” science, with exponentially growing data volumes and data rates, and an ever-increasing complexity, now entering the Petascale regime. Telescopes and observatories from both ground and space, covering a full range of wavelengths, feed the data via processing pipelines into dedicated archives, where they can be accessed for scientific analysis. Most of the large archives are connected through the Virtual Observatory framework, which provides interoperability standards and services, and effectively constitutes a global data grid of astronomy. Making discoveries in this overabundance of data requires the application of novel machine learning tools. We describe some recent examples of such applications.'
address: |
1-Department of Physics, University Federico II, via Cintia 6, Napoli, Italy\
2-INAF-Astronomical Observatory of Capodimonte, via Moiariello 16, Napoli, Italy\
3-Center for Data Driven Discovery, California Institute of Technology, Pasadena, CA 91125, USA
title: Data Driven Discovery in Astrophysics
---
Astronomy, Virtual Observatory, data mining
Introduction {#sec:intro}
============
Like most other sciences, astronomy is being fundamentally transformed by the Information and Computation Technology (ICT) revolution. Telescopes both on the ground and in space generate streams of data, spanning all wavelengths, from radio to gamma-rays, and non-electromagnetic windows on the universe are opening up: cosmic rays, neutrinos, and gravitational waves. The data volumes and data rates are growing exponentially, reflecting the growth of the technology that produces the data. At the same time, we see also significant increases in data complexity and data quality as well. This wealth of data is greatly accelerating our understanding of the physical universe.
It is not just the data abundance that is fueling this ongoing revolution, but also Internet-enabled data access and data re-use. The informational content of the modern data sets is so high as to make archival research and data mining not merely profitable, but practically obligatory: in most cases, researchers who obtain the data can only extract a small fraction of the science that is enabled by it. Furthermore, numerical simulations are no longer just a crutch of an analytical theory, but are increasingly becoming the dominant or even the only way in which various complex phenomena (e.g., star formation or galaxy formation) can be modeled and understood. These numerical simulations produce copious amounts of data as their output; in other words, theoretical statements are expressed not as formulae, but as data sets. Since physical understanding comes from the confrontation of experiment and theory, and both are now expressed as ever larger and more complex data sets, science is truly becoming data-driven in ways that are both quantitatively and qualitatively different from the past. The situation is encapsulated well in the concept of the “fourth paradigm” [@Hey2009], which complements experiment, analytical theory, and numerical simulations as the fourth pillar of modern science. This profound, universal change in the ways we do science has been recognized for over a decade now, sometimes described as e-Science, cyber-science, or cyber-infrastructure.
Data Overabundance, Virtual Observatory and Astroinformatics {#sec:overabundance}
============================================================
A confluence of several factors pushed astronomy to the forefront of data-intensive science. The first one was that astronomy as a field readily embraced, and in some cases developed, modern digital detectors, such as the CCDs or digital correlators, and scientific computing as a means of dealing with the data, and as a tool for numerical simulations. The culture of e-Science was thus established early (circa 1980s), paving the way for the bigger things to come. The size of data sets grew from Kilobytes to Megabytes, reaching Gigabytes by the late 1980s, Terabytes by the mid-1990s, and currently Petabytes (see Fig. 1). Astronomers adopted early universal standards for data exchange, such as the Flexible Image Transport System (FITS; [@wells1981]).
The second factor, around the same time, was the establishment of space mission archives, mandated by NASA and other space agencies, with public access to the data after a reasonable proprietary period (typically 12 to 18 months). This had a dual benefit of introducing the astronomical community both to databases and other data management tools, and to the culture of data sharing and reuse. These data centers formed a foundation for the subsequent concept of a Virtual Observatory [@hanish2001]. The last element was the advent of large digital sky surveys as the major data sources in astronomy. Traditional sky surveys were done photographically, ending in the 1990s; those were digitized using plate-scanner machines in the 1990s, thus producing the first Terabyte-scale astronomical data sets, e.g., the Digital Palomar Observatory Sky Survey (DPOSS; [@GSD1999]). They were quickly superseded by the fully digital surveys, such as the Sloan Digital Sky Survey (SDSS; [@york2000]), and many others (see, e.g. [@GSD2012c] for a comprehensive review and references). Aside from enabling new science, these modern sky surveys changed the social psychology of astronomy: traditionally, observations were obtained (and still are) in a targeted mode, covering a modest set of objects, e.g., stars, galaxies, etc. With modern sky surveys, one can do first-rate observational astronomy without ever going to a telescope. An even more powerful approach uses data mining to select interesting targets from a sky survey, and pointed observations to study them in more detail.
This new wealth of data generates many scientific opportunities, but poses many challenges as well: how to best store, access, and analyze these data sets, which are several orders of magnitude larger than what astronomers are used to handling on their desktops? A typical sky survey may detect $\sim 10^8 - 10^9$ sources (stars, galaxies, etc.), with $\sim 10^2 - 10^3$ attributes measured for each one. Both the scientific opportunities and the technological challenges are then amplified by data fusion, across different wavelengths, temporal, or spatial scales.
Virtual Observatory {#virtualobs}
-------------------
The Virtual Observatory (VO, [@brunner2001a; @GSD2002a; @hanish2001]) was envisioned as a complete, distributed (Web-based) research environment for astronomy with large and complex data sets, by federating geographically distributed data and computing assets, and the necessary tools and expertise for their use. VO was also supposed to facilitate the transition from the old data poverty regime, to the regime of overwhelming data abundance, and to be a mechanism by which the progress in ICT can be used to solve the challenges of the new, data-rich astronomy. The concept spread world-wide, with a number of national and international VO organizations, now federated through the International Virtual Observatory Alliance (IVOA; http://ivoa.net). One can regard the VO as an integrator of heterogeneous data streams from a global network of telescopes and space missions, enabling data access and federation, and making such value-added data sets available for a further analysis. The implementation of the VO framework over the past decade was focused on the production of the necessary data infrastructure, interoperability, standards, protocols, middleware, data discovery services, and a few very useful data federation and analysis services (see [@hanish2007; @graham2007], for quick summaries and examples of practical tools and services implemented under the VO umbrella).
Most astronomical data originate from sensors and telescopes operating in some wavelength regime, in one or more of the following forms: images, spectra, time series, or data cubes. A review of the subject in this context was given in [@brunner2001b]. Once the instrumental signatures are removed, the data typically represent signal intensity as a function of the position on the sky, wavelength or energy, and time. The bulk of the data are obtained in the form of images (in radio astronomy, as interferometer fringes, but those are also converted into images).
The sensor output is then processed by the appropriate custom pipelines, that remove instrumental signatures and perform calibrations. In most cases, the initial data processing and analysis segments the images into catalogs of detected discrete sources (e.g., stars, galaxies, etc.), and their measurable attributes, such as their position on the sky, flux intensities in different apertures, morphological descriptors of the light distribution, ratios of fluxes at different wavelengths (colors), and so on. Scientific analysis then proceeds from such first-order data products. In the case of massive data sets such as sky surveys, raw and processed sensor data, and the initial derived data products such as source catalogs with their measured attributes are provided through a dedicated archive, and accessible online.
The Virtual Observatory (VO) framework aims to facilitate seamless access to distributed heterogeneous data sets, for example, combining observations of the same objects from different wavelength regimes to understand their spectral energy distributions or interesting correlations among their properties. The International Virtual Observatory Alliance (IVOA) is charged with specifying the standards and protocols that are required to achieve this. A common set of data access protocols ensures that the same interface is employed across all data archives, no matter where they are located, to perform the same type of data query. Although common data formats may be employed in transferring data, individual data providers usually represent and store their data and metadata in their own way. Common data models define the shared elements across data and metadata collections and provide a framework for describing relationships between them so that different representations can interoperate in a transparent manner. Most of the data access protocols have an associated data model, e.g., the Spectral data model defines a generalized model for spectrophotometric sequences and provides a basis for a set of specific case models, such as Spectrum, SED and TimeSeries. There are also more general data models for spatial and temporal metadata, physical units, observations and their provenance, and characterizing how a data set occupies multidimensional physical space. When individual measurements of arbitrarily named quantities are reported, either as a group of parameters or in a table, their broader context within a standard data model can be established through the IVOA Utypes mechanism. Namespaces allow quantities/concepts defined in one data model to be reused in another one. Data models can only go so far in tackling the heterogeneity of data sources since they provide a way to identify and refer to common elements but not to describe how these are defined or related to each other. Concept schemes, from controlled vocabularies to thesauri to ontologies, specify in increasing levels of detail the domain knowledge that is ultimately behind the data models. It then becomes possible, for example, to automatically construct a set of extragalactic sources with consistent distances, even if each source initially has its distance specified in a different way; the Tully-Fisher relationship can be used for those with HI line widths whereas surface brightness and velocity dispersion can be used for elliptical galaxies. Finally, the IVOA provides a Registry tool where descriptions of available data archives and services can be found, e.g., catalogs of white dwarfs or photometric redshift services.
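As a minimal illustration of how uniform such standardized protocols make data access, the snippet below issues an IVOA Simple Cone Search request (standard parameters RA, DEC and SR, in degrees) over plain HTTP and reads the returned VOTable. The service URL is a placeholder, and the snippet assumes that the requests and astropy packages are available; any cone-search service discovered through a VO Registry could be substituted.

```python
import io

import requests
from astropy.io.votable import parse_single_table

# Placeholder endpoint: any IVOA Simple Cone Search service found through a VO Registry
# query can be substituted here.
SCS_URL = "https://example.org/vo/conesearch"

def cone_search(ra_deg, dec_deg, radius_deg):
    """Query a Simple Cone Search service and return the result as an astropy Table."""
    params = {"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg}   # standard SCS parameters
    resp = requests.get(SCS_URL, params=params, timeout=60)
    resp.raise_for_status()
    votable = parse_single_table(io.BytesIO(resp.content))      # SCS responses are VOTables
    return votable.to_table()

# Example (not executed here, since the URL above is only a placeholder):
# table = cone_search(180.0, 2.5, 0.1)   # sources within 0.1 deg of RA=180.0, Dec=+2.5
# print(table.colnames, len(table))
```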
Beyond VO: Astroinformatics {#astroinfo}
---------------------------
While much still remains to be done, data discovery and access in astronomy have never been easier, and the established structure can at least in principle expand and scale up to the next generation of sky surveys, space missions, etc. What is still lacking is a powerful arsenal of widely available, scalable tools needed to extract knowledge from these remarkable data sets. The key to further progress in this area is the availability of data exploration and analysis tools that can operate on the Terascale data sets and beyond. Progress in this arena is being made mainly by individual research groups in universities, or associated with particular observatories and surveys.
Thus we now have an emerging field of Astroinformatics, a bridge field between astronomy on one side, and ICT and applied CS on the other (see, e.g., [@borne2009]). The idea behind Astroinformatics is to provide an informal, open environment for the exchange of ideas, software, etc., and to act as a “connecting tissue” between the researchers working in this general arena. The motivation is to engage a broader community of researchers, both as contributors and as consumers of the new methodology for data-intensive astronomy, thus building on the data-grid foundations established by the VO framework.
A good introduction to Astroinformatics is provided by the talks and discussions at the series of international Astroinformatics conferences, starting with http://astroinformatics2010.org.
Data Mining and Knowledge Discovery {#sec:datamining}
===================================
Data – no matter how great – are just incidental to the real task of scientists, knowledge discovery. Traditional methods of data analysis typically do not scale to the data sets in the Terascale regime, and/or with a high dimensionality. Thus, adoption of modern data mining (DM) and Knowledge Discovery in Databases (KDD) techniques becomes a necessity. Large data volumes tend to preclude direct human examination of all data, and thus an automation of these processes is needed, requiring use of Machine Learning (ML) techniques. Astronomical applications of ML are still relatively recent and restricted to a handful of problems. This is surprising, given the data richness and a variety of possible applications in data-driven astronomy. Sociological challenges aside, there are some technical ones that need to be addressed. First, a large family of ML methods (the so-called supervised ones) requires the availability of relatively large and well characterized knowledge bases (KB), e.g., reliable (“ground truth”) training data sets of examples from which the ML methods can learn the underlying patterns and trends. Such KBs are relatively rare and are available only for a few specific problems. Second, most ML algorithms used so far by the astronomers cannot deal well with missing data (i.e., no measurement was obtained for a given attribute) or with upper limits (a measurement was obtained, but there is no detection at some level of significance). While in many other fields (e.g., market analysis and many bioinformatics applications) this is only a minor problem since the data are often redundant and/or can be cleaned of all records having incomplete or missing information, in astronomy this is usually not so, and all data records, including those with incomplete information, are potentially scientifically interesting and cannot be ignored.
Finally, scalability of algorithms can be an issue. Most existing ML methods scale badly with an increasing number of records and/or with dimensionality (i.e., the number of input variables or features): the very richness of our data sets makes them difficult to analyze. This can be circumvented by extracting subsets of data, performing the training and validation of the methods on these manageable data subsets, and then extrapolating the results to the whole data set. This approach obviously does not use the full informational content of the data sets, and may introduce biases which are often difficult to control. Typically, a lengthy fine tuning procedure is needed for such sub-sampling experiments, which may require tens or sometimes hundreds of experiments to be performed in order to identify the optimal DM method for the problem at hand, or, for a given method, the optimal architecture or combination of parameters.
Examples of uses of modern ML tools for analysis of massive astronomical data sets include: automated classification of sources detected in sky surveys as stars (i.e., unresolved) vs. galaxies (resolved morphology) using Artificial Neural Nets (ANN) or Decision Trees (DT) (e.g., [@weir1995; @odewan2004; @donalek2006]). Brescia et al. [@brescia2012] have recently used an ML method for a different type of resolved/unresolved object separation, namely the identification of globular clusters in external galaxies. Another set of ML applications is in classification or selection of objects of a given type in some parameter space, e.g., colors. This is particularly well suited for the identification of quasars and other active galactic nuclei, which are morphologically indistinguishable from normal stars, but represent vastly different physical phenomena ([@dabrusco2009; @dabrusco2012; @richards2009]). Yet another application is estimates of photometric redshifts, which are derived from colors rather than from spectroscopy ([@tagliaferri2002; @firth; @hildebrandt2010; @cavuoti2012]). Laurino et al. [@laurino2011] implemented a hybrid procedure based on a combination of unsupervised clustering and several independent classifiers that improved the accuracy for both normal galaxies and quasars.
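A schematic example of such a supervised classification is sketched below: a random forest separating resolved from unresolved sources on purely synthetic catalogue attributes. The feature set, class fractions and distributions are invented for illustration and do not reproduce any of the surveys or classifiers cited above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Synthetic stand-ins for catalogue attributes (magnitude, concentration index,
# half-light radius); real applications use tens to hundreds of measured features.
is_galaxy = rng.random(n) < 0.5
mag = rng.normal(20.0, 1.5, n)
concentration = np.where(is_galaxy, rng.normal(2.9, 0.4, n), rng.normal(2.2, 0.3, n))
radius = np.where(is_galaxy, rng.lognormal(0.5, 0.4, n), rng.lognormal(0.0, 0.2, n))
X = np.column_stack([mag, concentration, radius])
y = is_galaxy.astype(int)          # 1 = resolved (galaxy), 0 = unresolved (star)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy on held-out synthetic sources:", clf.score(X_test, y_test))
```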
The rapidly developing field of time-domain astronomy poses some new challenges. A new generation of synoptic sky surveys produces data streams that correspond to many repeated passes of the traditional, one-pass sky surveys [@GSD2012c; @GSD2012a; @GSD2013a; @GSD2012d; @GSD2011e]. In addition to the dramatic increase of data rates and the resulting data volumes, and all of the challenges already posed by the single-pass sky surveys, there is a need to identify, characterize, classify, and prioritize for follow-up observations any transient events or highly variable sources that are found in the survey data streams. Since many such events are relatively short in duration, this analysis must be performed as close to real time as possible. This entails challenges that are not present in the traditional automated classification approaches, which are usually done in some feature vector space, with an abundance of self-contained data derived from homogeneous measurements. In contrast, measurements generated in the synoptic sky surveys are generally sparse and heterogeneous: there are only a few initial measurements, their types differ from case to case, and the values have differing variances; the contextual information is often essential, and yet difficult to capture and incorporate; many sources of noise, instrumental glitches, etc., can masquerade as transient events; as new data arrive, the classification must be iterated dynamically. We also require a high completeness (capture all interesting events) and a low contamination (minimize the number of false alarms). Since only a small fraction of the detected transient events can be followed up with the available resources, at any given stage, the current best classification should be used to make automated decisions about the follow-up priorities. Both the classification and the availability of resources change in time, the former due to the new measurements, and the latter due to the time allocations, weather, day/night cycle, etc. These formidable challenges require novel approaches to a robust and flexible (near)real-time mining of massive data streams. Reviewing the ongoing work in this domain is beyond the scope of this paper, but some examples can be found in [@GSD2008; @GSD2009b; @GSD2010b; @GSD2010c; @GSD2010d; @donalek2013z; @richards2011a; @bloom2011a; @GSD2008e; @GSD2011z].
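The iterative, prior-driven character of this problem can be illustrated with a minimal Bayesian update, in which the class probabilities of an event are revised as each new (here, discretised) measurement arrives. The event classes, prior and likelihood tables below are made up for illustration and do not correspond to any of the cited classification systems.

```python
import numpy as np

classes = ["SN", "CV", "AGN", "artifact"]               # illustrative event classes
prior = np.array([0.05, 0.15, 0.30, 0.50])              # e.g. from historical event rates

# Hypothetical per-class likelihoods for discretised follow-up measurements.
# Rows: classes; columns: outcome bins of a given measurement type.
likelihoods = {
    "amplitude_bin": np.array([[0.1, 0.3, 0.6],
                               [0.2, 0.5, 0.3],
                               [0.6, 0.3, 0.1],
                               [0.4, 0.4, 0.2]]),
    "galaxy_nearby": np.array([[0.8, 0.2],
                               [0.1, 0.9],
                               [0.9, 0.1],
                               [0.5, 0.5]]),
}

def update(posterior, measurement, outcome):
    """One Bayesian update step as a new (discretised) measurement arrives."""
    post = posterior * likelihoods[measurement][:, outcome]
    return post / post.sum()

posterior = prior.copy()
for measurement, outcome in [("amplitude_bin", 2), ("galaxy_nearby", 0)]:
    posterior = update(posterior, measurement, outcome)
    print(dict(zip(classes, np.round(posterior, 3))))   # follow-up priorities can track this
```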
There are various free DM/KDD packages commonly used in the academic community that would be suitable for adoption by the astronomical community, although their uptake has also been relatively slow. Several of them have been evaluated in this context by Donalek et al. [@donalek2011], including Orange, Rapid Miner, Weka, VoStat and DAMEWARE.
Multidimensional Data Visualization Challenges
----------------------------------------------
Effective visualization is a key component of data exploration, analysis and understanding, and it must be an integral part of a DM process. It is fair to say that visualization represents the bridge between the quantitative content of the data and the intuitive understanding of it. While astronomy can be intrinsically very visual, with images of the sky at different wavelengths and their composites, visualization of highly-dimensional parameter spaces presents some very non-trivial challenges. For a relevant discussion, see [@goodman2012].
This is not about the images of the sky, but about a visualization of highly-dimensional parameter spaces of measurements from large sky surveys. How do we effectively visualize phenomena that are represented in parameter spaces whose dimensionality is $D >> 3$? For example, a feature space of measured properties of sources in a sky survey, or a federation of several surveys, may have a dimensionality $D \sim 10^2 - 10^3$. Meaningful structures (correlations, clustering with a non-trivial topology, etc.), representing new knowledge may be present in such hyper-dimensional parameter spaces, and not be recognizable in any low-dimensional projection thereof. This problem is not unique to astronomy, but it affects essentially all of “big data” science. There are fundamental limitations of the human visual perception and visual pattern recognition. Various tricks exist that can be used to represent up to a dozen dimensions in a pseudo-3D graph, but going to many tens, hundreds, or thousands of dimensions that characterize some of the modern data sets represents a fundamental barrier to their intuitive understanding. This problem may be one of the key bottlenecks for data-intensive science in general.
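A common partial remedy is to project the data onto a small number of derived axes before visualizing them; the sketch below does this with a principal component analysis of a synthetic feature matrix. The cluster structure and dimensions are invented, and PCA is used here purely as an example of such a projection, with the caveat stated above that structure confined to other combinations of features would be missed.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_sources, n_features = 5000, 200               # e.g. measured attributes per source

# Synthetic feature matrix with two hidden clusters plus noise, standing in for a
# federated survey catalogue.
centers = rng.normal(0.0, 1.0, (2, n_features))
labels = rng.integers(0, 2, n_sources)
X = centers[labels] + 0.5 * rng.standard_normal((n_sources, n_features))

pca = PCA(n_components=3)
Y = pca.fit_transform(X)                        # 3-D projection suitable for plotting
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
# Y[:, :2] (or Y in a 3-D scatter) can now be inspected visually; structure that lives
# only in higher-order combinations of features would be missed by such a projection.
```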
We have experimented with a novel approach to this challenge, using immersive virtual reality (VR) as a scientific collaboration and data visualization platform [@farr; @donalek2014]. This approach offers a more intuitive perception of data and relationships present in the data (clusters, correlations, outliers, etc.), as well as a possibility of an interactive, collaborative data visualization and visual data exploration. As the commercially-driven VR technology improves, this may become an indispensable methodology for an effective visualization of high-dimensionality data spaces.
DAMEWARE: A New Tool for Knowledge Discovery {#sec:typestyle}
============================================
A working example of a DM/KDD platform deployed in the astronomical context is the Data Mining and Exploration Web Application REsource (DAMEWARE; http://dame.dsf.unina.it/) [@brescia2014b], a joint effort between the Astroinformatics groups at University Federico II, the Italian National Institute of Astrophysics, and the California Institute of Technology. DAMEWARE aims at solving in a practical way some of the above listed DM problems, by offering a completely transparent architecture, a user-friendly interface, and the possibility to seamlessly access a distributed computing infrastructure. It adopts VO standards in order to facilitate interoperability of data; however, at the moment, it is not yet fully VO compliant. This is partly due to the fact that new standards need to be defined for data analysis, DM methods and algorithm development. In practice, this implies a definition of standards in terms of an ontology and a well-defined taxonomy of functionalities to be applied to the astrophysical use cases. DAMEWARE offers asynchronous access to the infrastructure tools, thus allowing the running of jobs and processes outside the scope of any particular web application, and independent of the user connection status. The user, via a simple web browser, can access application resources and can keep track of his jobs by recovering related information (partial/complete results) without having to keep open a communication socket. Furthermore, DAME has been designed to run both on a server and on a distributed computing infrastructure (e.g., Grid or Cloud). The front end is a GUI (Graphical User Interface) that contains dynamical web pages that are used by the end users to interact with the applications, models, and facilities to launch scientific experiments. The interface includes an authentication procedure that redirects the user to a personal session environment, where he can find uploaded data, check the experiment status and driven procedures, configure and execute new scientific experiments. This mechanism was also required for Grid access security reasons. A detailed technical description of the various components can be found in [@brescia2014b]. In the currently available DAMEWARE release, DAME offers Multi-Layer Perceptron ANNs, trained by three different learning rules (Back Propagation, Genetic Algorithm, Quasi-Newton), Random Forest and Support Vector Machines (SVM) as supervised models; Self Organizing Feature Maps (SOFM), Principal Probabilistic Surfaces (PPS) and K-Means as unsupervised models. In addition, in cases where specific problems may require applications of DM tools or models that are not yet available in the main release, DAME also includes a Java-based plugin wizard for custom experiment setup (DMPlugin). This allows any user to upload into the DAME suite their own DM code and run it on the computing infrastructure. Even though still under development, DAME has already been tested against several specific applications. These include the already mentioned globular cluster identification problem [@brescia2012], the evaluation of photometric redshifts for galaxies and quasars [@dabrusco2007; @laurino2011; @brescia2013], the identification of candidate quasars from multiband survey data [@dabrusco2009], and finally the identification of candidate emission line galaxies [@cavuoti2014el]. In what follows we shall summarize some of these applications.
A use case: photometric redshifts with DAMEWARE {#sec:photoz}
-----------------------------------------------
Photo-z are essential for constraining dark matter and dark energy through weak gravitational lensing, for the identification of galaxy clusters and groups, for type Ia supernova studies, and for studying the mass function of galaxy clusters, just to quote a few applications.
The physical mechanism responsible for the correlation between the photometric features and the redshift of an astronomical source is the change in the contribution to the observed fluxes caused by the prominent features of the spectrum shifting through the different filters as the spectrum of the source is redshifted. This mechanism implies a non-linear mapping between the photometric parameter space of the galaxies and the redshift values.
When accurate and multi-band photometry for a large number of objects is complemented by spectroscopic redshifts for a statistically significant sub-sample of the same objects, the mapping function can be inferred using supervised machine learning methods such as Neural Networks.
As an example let us consider the recent application of the MLPQNA neural network (for details on this method see [@brescia2013]) applied to the Sloan Digital Sky Survey Data Release 9 [@brescia2014]. The SDSS-DR9 provides an ideal data set for this type of applications since it contains both very extensive multiband photometry for more than 300 million objects as well as accurate spectroscopic redshift for a fair subsample of them. After an extensive series of experiments, the best results were obtained with a two hidden layer network, using a combination of the four SDSS colors (obtained from the SDSS $psfMag$) plus the pivot magnitude $psfMag$ in the $r$ band. This yields a normalized overall uncertainty of $\sigma=0.023$ with a very small average bias of $\sim 3\times 10^{-5}$, a low $NMAD$, and a low fraction of outliers ($\sim5\%$ at $2\sigma$ and $\sim0.1\%$ at $0.15$). After the rejection of catastrophic outliers, the residual uncertainty is $\sigma=0.0174$.
The trained network was then used to process the galaxies in the SDSS-DR9, matching the above outlined selection criteria, and to produce the complete photometric catalog. This catalog consists of photo-z estimates for more than $143$ million SDSS-DR9 galaxies. The distribution of the spectroscopic versus photometric redshifts in the SDSS-DR9 test set used to derive these results is given in Fig. 2.
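The supervised workflow just described, together with the usual quality statistics (bias, $\sigma$, NMAD and outlier fraction of the normalised residuals), can be illustrated with the short sketch below. It uses a generic multi-layer perceptron regressor rather than MLPQNA, and an invented, noisy color-redshift relation; none of the numbers it produces should be compared with the SDSS-DR9 results quoted above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 30_000
z = rng.uniform(0.0, 0.7, n)                          # "spectroscopic" redshifts

# Invented, noisy mapping from redshift to four colors plus a pivot magnitude,
# mimicking the non-linear color-redshift relation described above.
colors = np.column_stack([np.sin(2.0 * z) + 0.8 * z,
                          0.5 * z ** 2 + 0.3 * z,
                          np.log1p(z) - 0.2 * z,
                          0.7 * z]) + 0.05 * rng.standard_normal((n, 4))
pivot_mag = 18.0 + 3.0 * z + 0.3 * rng.standard_normal(n)
X = np.column_stack([colors, pivot_mag])

X_tr, X_te, z_tr, z_te = train_test_split(X, z, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=2000, random_state=0)
model.fit(X_tr, z_tr)

dz = (model.predict(X_te) - z_te) / (1.0 + z_te)      # normalised residuals
sigma = dz.std()
bias = dz.mean()
nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
outliers = np.mean(np.abs(dz) > 0.15)                 # one common outlier definition
print(f"sigma={sigma:.4f} bias={bias:.1e} NMAD={nmad:.4f} outlier fraction={outliers:.3%}")
```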
Future Prospects
================
The preceding discussion gives just a flavor of the data processing and analysis challenges in modern, data-intensive astronomy. We are now entering the Petascale regime in terms of data volumes, but the exponential growth continues. One important recent development is the advent of synoptic sky surveys, which cover large areas of the sky repeatedly, thus escalating the challenges of data handling and analysis from massive data sets to massive data streams, with all of the added complexities. This trend is likely to continue, pushing astronomy towards the Exascale regime.
The astronomical community has responded well and in a timely manner to the challenges of massive data handling, by embracing Internet-accessible archives, databases, interoperability, standard formats and protocols, and a virtual scientific organization, the Virtual Observatory, which is now effectively a global data grid of astronomy. While this complex and necessary infrastructure represents a solid foundation for big data science, it is just a start. The real job of science, data analysis and knowledge discovery, starts after all the data processing and data delivery through the archives. This requires some powerful new approaches to data exploration and analysis, leading to knowledge discovery and understanding. Many good statistical and data mining tools and methods exist, and are gradually permeating the astronomical community, although their uptake has been slower than one might hope.
One tangible technical problem is the scalability of DM tools: most of the readily available ones do not scale well to the massive data sets that are already upon us. The key problem is not so much the data volume (expressible, e.g., as a number of feature vectors in some data set), but their dimensionality: most algorithms may work very well in 2 or 3 or 6 dimensions, but are simply impractical when the intrinsic dimensionality of the data sets is measured in tens, hundreds, or thousands. The effective, scalable software and methodology needed for knowledge discovery in modern, large and complex data sets typically do not exist yet, at least in the public domain.
SGD and CD acknowledge a partial support from the U.S. NSF grants AST-0834235, AST-1313422, AST-1413600, and IIS-1118041. GL, MB and SC acknowledge a partial support from the PRIN MIUR “The obscure universe and the evolution of baryons” as well as the European COST action TD-1403 “Big data era in Sky-Earth Observations”. We thank numerous colleagues for useful discussions and collaborations over the years, in particular M. Graham, A. Mahabal, A. Drake, K. Polsterer, M. Turmon, G. Riccio, D. Vinkovic, and many others.
[99]{} J. Bloom and J. Richards. “Data Mining and Machine-Learning in Time-Domain Discovery & Classification”, , arXiv/1104.3142, 2011.
K. Borne, et al. “Astroinformatics: A 21st Century Approach to Astronomy”, , Position Papers, no. 6, available at http://arxiv.org/abs/0909.3892, 2009.
M. Brescia, S. Cavuoti, M. Paolillo, G. Longo, T. Puzia. “The Detection of Globular Clusters in Galaxies as a Data Mining Problem”, , 421, 1155, 2012.
M. Brescia, S. Cavuoti, R. D’Abrusco, G. Longo, A. Mercurio. “Photometric redshifts for Quasars in multi band Surveys”, , 772, 2, 140, 2013.
M. Brescia, S. Cavuoti, G. Longo, V. De Stefano. “A catalogue of photometric redshifts for the SDSS-DR9 galaxies”, , 568, 126, 2014.
M. Brescia, S. Cavuoti, G. Longo, et al. “DAMEWARE: A web cyberinfrastructure for astrophysical data mining”, , 126, 783, 2014.
R. Brunner, S.G. Djorgovski, Szalay, A. eds. “Virtual Observatories of the Future”, , p. 225. Provo, UT: Astronomical Society of the Pacific, 2001.
R. Brunner, S.G. Djorgovski, T. Prince, A. Szalay. “Massive Data Sets in Astronomy”, , eds. J. Abello, P. Pardalos, M. Resende, p. 931, Boston: Kluwer, 2001.
S. Cavuoti, M. Brescia, G. Longo, A. Mercurio. “Photometric Redshifts With Quasi-Newton Algorithm (MLPQNA). Results in the PHAT1 Contest”, , 546, 13, 1, 2012.
S. Cavuoti, M. Brescia, R. D’Abrusco, G. Longo, M. Paolillo. “Photometric classification of emission line galaxies with Machine Learning methods”, , 473, 968, 2014.
R. D’Abrusco, A. Staiano, G. Longo, M. Brescia, et al. “Mining the SDSS Archive. I. Photometric Redshifts in the Nearby Universe”, , 663, 752, 2007.
R. D’Abrusco, G. Longo, N.A. Walton. “Quasar candidate selection in the Virtual Observatory era”, , 396, 223, 2009.
R. D’Abrusco, G. Fabbiano, S.G. Djorgovski, C. Donalek, et al. “CLASPS: A New Methodology for Knowledge Extraction From Complex Astronomical Data Sets”, , 755, 92, 2012.
S.G. Djorgovski, R. Gal, S. Odewahn, R.R. de Carvalho, R. Brunner, G. Longo, R. Scaramella. “The Palomar Digital Sky Survey (DPOSS)”, , eds. S. Colombi et al., Gif sur Yvette: Editions Frontieres, p. 89, 2012.
S.G. Djorgovski, and the NVO Science Definition Team. “Towards the National Virtual Observatory”, report available on line at http://www.us-vo.org/sdt/, 2002.
S.G. Djorgovski, et al. “Towards an Automated Classification of Transient Events in Synoptic Sky Surveys”, , eds. A. Srivasatva, et al., NASA publ., p. 174, 2011.
S.G. Djorgovski, A. Mahabal, A. Drake, M. Graham, C. Donalek. “Sky Surveys”, , ed. H. Bond, Vol.2 of Planets, Stars, and Stellar Systems, ser. ed. T. Oswalt, Berlin: Springer Verlag, 2012.
S.G. Djorgovski, A.A. Mahabal, A.J. Drake, M.J. Graham, C. Donalek, C., R. Williams. , New Horizons in Time Domain Astronomy, eds. E. Griffin et al., p. 141. Cambridge: Cambridge Univ. Press, 2012.
C. Donalek. “Mining Astronomical Data Sets”, , Italy, 2006.
C. Donalek et al. “New Approaches to Object Classification in Synoptic Sky Surveys”, , 1082, 252, 2008.
C. Donalek, M. Graham, A. Mahabal, S.G. Djorgovski, R. Plante. “Tools for Data to Knowledge”, , 2011.
C. Donalek, et al. “Feature Selection Strategies for Classifying High Dimensional Astronomical Data Sets”, , IEEE BigData 2013.
C. Donalek, Djorgovski, S.G. et al. “Immersive and Collaborative Data Visualization Using Virtual Reality Platforms”, , IEEE press, 2014.
W.M. Farr, P. Hut, J. Ames, and A. Johnson. “Immersive and Collaborative Data Visualization Using Virtual Reality Platforms”, , Journal of Virt. Worlds Res., 2 (3), 2009.
A.E. Firth, O. Lahav, R.S. Somerville. “Estimating Photometric Redshifts with Artificial Neural Networks”, 339, 1195, 2003.
A. Goodman. “Principles of High-Dimensional Data Visualization in Astronomy”, , 333, 505, 2012.
M.J. Graham, et al. “Data Challenges of Time Domain Astronomy”, , eds. Qiu, X., Gannon, D.,30 (5-6), 371, 2012.
M.J. Graham, et al. “Connecting the Time Domain Community with the Virtual Astronomical Observatory”, , eds., Peck, A., Seaman, R., Comeron, F., Proc. SPIE, 8448, 2013.
M. Graham, M. Fitzpatrick, T. Mc. Glynn eds. “The National Virtual Observatory: Tools and Techniques for Astronomical Research”, , 382, 2007.
R. Hanish. “A Foundation for the Virtual Observatory”, , eds. R. Brunner et al., 225, 97, 2001.
R. Hanisch. “The Virtual Observatory in Transition”, , C. Gabriel et al., Astron. Soc. Pacific Conf. Ser., 351, 765, 2007.
T. Hey, S. Tansley, K. Tolle, eds. “The Fourth Paradigm: Data-Intensive Scientific Discovery”, , 2009.
H. Hildebrandt, et al. “PHAT: PHoto-z Accuracy Testing”, , 523, 31, 2010.
O. Laurino, R. D’Abrusco, G. Longo, G. Riccio. “Astroinformatics of Galaxies and Quasars: A New General Method for Photometric Redshifts Estimation”, , 418, 2165, 2011.
A.A. Mahabal, and the PQ Survey Team. “Automated probabilistic classification of transients and variables”, , 329, 288, 2008.
A. Mahabal, P. Wozniak, C. Donalek and S.G. Djorgovski. “Transients and Variable Stars in the Era of Synoptic Imaging”, , Ch. 8.4, p. 261; available at http://www.lsst.org/lsst/scibook, 2009.
A.A. Mahabal, et al. “Mixing Bayesian Techniques for Effective Real-time Classification of Astronomical Transients”, ed. Y. Mizumoto, ASP Conf. Ser., 434, 115, 2010.
A.A. Mahabal, et al. “Classification of Optical Transients: Experiences from PQ and CRTS Surveys”, , eds. C. Turon, et al., EAS Publ. Ser. 45, 173, Paris: EDP Sciences, 2010.
A.A. Mahabal, et al. “The Meaning of Events”, , eds. S. Emery Bunn, et al., Lulu Enterprises Publ. http://www.lulu.com/, p. 31, 2010.
A.A. Mahabal, et al. “Discovery, classification, and scientific exploration of transient events from the Catalina Real-Time Transient Survey”, , 39, 38, 2011.
S. Odewahn, R.R. de Carvalho, R. Gal, S.G. Djorgovski, R. Brunner, A. Mahabal, P. Lopes, J. Kohl Moreira, B. Stalder. “The Digitized Second Palomar Observatory Sky Survey (DPOSS). III. Star-Galaxy Separation”, 128, 3092, 2004.
G. Richards et al. “Eight-Dimensional Mid-Infrared/Optical Bayesian Quasar Selection”, , 137, 3884, 2009.
G. Richards et al. “On Machine-learned Classification of Variable Stars with Sparse and Noisy Time-series Data”, , 733, 10, 2011.
R. Tagliaferri, G. Longo, S. Andreon, S. Capozziello, C. Donalek, G. Giordano. “Neural Networks and Photometric Redshifts”, 0203445, 2002.
N. Weir, U. Fayyad, S.G. Djorgovski. “Automated Star/Galaxy Classification for Digitized POSS-II”, , 109, 2401, 1995.
D. Wells, E. Greisen, R. Harten. “FITS - a Flexible Image Transport System”, , 44, 363, 1981.
D.G. York, and the SDSS team. “The Sloan Digital Sky Survey: Technical Summary”, , 120, 1579, 2000.
|
---
abstract: 'We present a new grid of model photospheres for the SDSS-III/APOGEE survey of stellar populations of the Galaxy, calculated using the ATLAS9 and MARCS codes. New opacity distribution functions were generated to calculate ATLAS9 model photospheres. MARCS models were calculated based on opacity sampling techniques. The metallicity (\[M/H\]) spans from $-$5 to 1.5 for ATLAS and $-$2.5 to 0.5 for MARCS models. There are three main differences with respect to previous ATLAS9 model grids: a new corrected H$_{\rm 2}$O linelist, a wide range of carbon (\[C/M\]) and $\alpha$ element (\[$\alpha$/M\]) variations, and solar reference abundances from @asplund01. The added range of varying carbon and $\alpha$ element abundances also extends the previously calculated MARCS model grids. Altogether 1980 chemical compositions were used for the ATLAS9 grid, and 175 for the MARCS grid. Over 808 thousand ATLAS9 models were computed spanning temperatures from 3500K to 30000K and log $g$ from 0 to 5, with the higher temperatures computed only for high gravities. The MARCS models span from 3500K to 5500K, and log $g$ from 0 to 5. All model atmospheres are publicly available online.'
author:
- 'Sz. M[é]{}sz[á]{}ros, C. Allende Prieto, B. Edvardsson, F. Castelli, A. E. Garc[í]{}a P[é]{}rez, B. Gustafsson, S. R. Majewski, B. Plez, R. Schiavon, M. Shetrone, A. de Vicente'
title: 'NEW ATLAS9 AND MARCS MODEL ATMOSPHERE GRIDS FOR THE APACHE POINT OBSERVATORY GALACTIC EVOLUTION EXPERIMENT (APOGEE)'
---
Introduction
============
The Apache Point Observatory Galactic Evolution Experiment (APOGEE; Allende Prieto et al. 2008) is a large-scale, near-infrared, high-resolution spectroscopic survey of Galactic stars, and it is one of the four experiments in the Sloan Digital Sky Survey-III (SDSS-III; Eisenstein et al. 2011; Gunn et al. 2006, Aihara et al. 2011). APOGEE will obtain high S/N, R$\sim$22,500 spectra for 100,000 stars in the Milky Way Galaxy, for which accurate chemical abundances, radial velocities, and physical parameters will be determined. APOGEE data will shed new light on the formation of the Milky Way, as well as its chemical and dynamical evolution. To achieve its science goals, APOGEE needs to determine abundances for about 15 elements to an accuracy of 0.1 dex. To attain this precision, a large model photosphere database with up-to-date solar abundances is required. We chose to build the majority of APOGEE’s model photosphere database on ATLAS9 and MARCS calculations.
ATLAS [@kurucz05] is widely used as a universal LTE 1$-$D plane-parallel atmosphere modeling code, which is freely available from Robert Kurucz’s website[^1]. ATLAS9 [@kurucz01] handles the line opacity with opacity distribution functions (ODFs), which greatly simplifies and reduces the computation time [@strom01; @kurucz04; @castelli02]. ATLAS uses the mixing-length scheme for convective energy transport. The ODF approach consists in pretabulating the line opacity as a function of temperature and gas pressure in a given number of wavelength intervals which cover the whole wavelength range from the far ultraviolet to the far infrared. For computational reasons, in each interval the line opacities are rearranged according to strength rather than wavelength. For each selected metallicity and microturbulent velocity, an opacity distribution function table has to be computed. While the computation of the ODFs is very time consuming, extensive grids of model atmospheres and spectrophotometric energy distributions can be computed in a short time once the required ODF tables are available.
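The idea can be made concrete with a small toy calculation: within each wavelength interval the fine-grid opacities are sorted by strength and only a few representative sub-step values are tabulated. The line list, Gaussian profiles and numbers of intervals and sub-steps below are arbitrary choices for illustration, not the actual ATLAS9 opacity treatment.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy line list (central wavelengths in nm and strengths) for a single (T, P) point.
wavelengths = np.linspace(400.0, 500.0, 10_001)
line_centers = rng.uniform(400.0, 500.0, 300)
line_strengths = 10.0 ** rng.uniform(-2.0, 2.0, 300)

# Total line opacity on the fine wavelength grid (Gaussian profiles, toy width)
width = 0.05
kappa = np.zeros_like(wavelengths)
for c, s in zip(line_centers, line_strengths):
    kappa += s * np.exp(-0.5 * ((wavelengths - c) / width) ** 2)

# ODF step: split the range into coarse intervals and, inside each interval, replace the
# wavelength dependence by the sorted distribution of opacity values, tabulated at a few
# representative sub-step fractions.
n_intervals = 20
substeps = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 0.99])
odf = np.empty((n_intervals, substeps.size))
for i, chunk in enumerate(np.array_split(kappa, n_intervals)):
    sorted_kappa = np.sort(chunk)                  # rearranged by strength, not wavelength
    idx = (substeps * (sorted_kappa.size - 1)).astype(int)
    odf[i] = sorted_kappa[idx]

print(odf.shape)   # one small table per interval instead of thousands of fine-grid points
```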
ATLAS12 [@kurucz04; @castelli03] uses the opacity sampling (OS) method to calculate the opacity at 30,000 wavelength points. A high-resolution synthetic spectrum at a selected resolution can then be obtained by running SYNTHE [@kurucz03]. More recently, @lester01 developed SATLAS\_ODF and SATLAS\_OS, the spherical versions of ATLAS9 and ATLAS12, respectively. No extensive grids of models have been published so far with either ATLAS12 or the two versions of SATLAS.
Instead, extensive grids of ATLAS9 ODF model atmospheres for several metallicities were calculated by @castelli01. These grids are based on solar (or scaled solar) abundances from @grevesse01. Recently, @kirby01 provided a new ATLAS9 grid, but he used abundances from @anders01. The calculations presented in this paper are based on the more recent solar composition from @asplund01. This updated abundance table required new ODFs and Rosseland mean opacity calculations as well. Abundances from @asplund01 were chosen instead of those from newer studies [@asplund02] to match the composition of the MARCS models described below, and those available from the MARCS website.
The MARCS model atmospheres [@gustafsson01; @plez01; @gustafsson02] were developed and have been evolving in close connection with applications primarily to spectroscopic analyses of a wide range of late-type stars with different properties. The models are one-dimensional, plane-parallel or spherical, and computed in LTE assuming the mixing-length scheme for convective energy transport, as formulated by @henyey01. For luminous stars (giants), where the geometric depth of the photosphere is a non-negligible fraction of the stellar radius, the effects of the radial dilution of the energy transport and of the depth-varying gravitational field are taken into account. Initially, spectral line opacities were economically treated by the ODF approximation, but later the more flexible and realistic opacity sampling scheme was adopted. In the OS scheme, line opacities are directly tabulated for a large number of wavelength points (10$^{5}$) as a function of temperature and pressure.
The shift in the MARCS code from using ODFs to the OS scheme avoided the sometimes unrealistic assumption that the line opacities of certain relative strengths within each ODF wavelength interval overlap in wavelength irrespective of depth in the stellar atmosphere. This assumption was found to lead to systematically erroneous models, in particular when polyatomic molecules add important opacities to surface layers [@ekberg01]. The current version of the MARCS code used for the present project and for the more extensive MARCS model atmosphere data base[^2] was presented and described in detail by @gustafsson02. The model atmospheres presented in this paper add a large variety of \[C/M\] and \[$\alpha$/M\] abundances to the already existing grids by covering these abundances systematically from $-$1 to +1 for each metallicity.
Our main purpose is to update the previous ATLAS9 grid and publish new MARCS models to provide a large composition range to use in the APOGEE survey and future precise abundance analysis projects. These new ATLAS models were calculated with a corrected H$_{2}$O linelist. The abundances used for the MARCS models presented in this paper are from @grevesse02, which are nearly identical to @asplund01; the only significant difference is an abundance of scandium 0.12 dex higher than in @asplund01. The range of stellar parameters (T$_{\rm eff}$, log $g$ and \[M/H\]) spanned by the models covers most stellar types found in the Milky Way.
This paper is organized as follows. In Section 2 we describe the parameter range of our ODFs and model atmospheres and give details of the calculation method of ATLAS9 we implemented. Section 3 contains the parameter range and calculation procedure for MARCS models. In Section 4 we compare MARCS and ATLAS9 models with @castelli01, and illustrate how different C and $\alpha$ contents affect the atmosphere. Section 5 contains the conclusions. The grid of ODFs and model atmospheres will be periodically updated in the future and available online[^3].
![The \[C/M\] or \[$\alpha$/M\] content as a function of \[M/H\] of ATLAS9 models (filled circles, Table 1) and MARCS models (open circles, Table 4). \[C/M\] and \[$\alpha$/M\] change independently of each other, and the small steps in metallicity give altogether 1980 different compositions for the ATLAS9 models and 175 compositions for the MARCS models. The number of acceptable models may vary for each composition; for details see the ATLAS-APOGEE website (http://www.iac.es/proyecto/ATLAS-APOGEE/). For missing metal-rich compositions of ATLAS9 models see Table 2.](figure01.eps){width="2.5in"}
ATLAS9 Model Atmospheres
========================
Parameters
----------
The metallicity (\[M/H\]) of the grid varies from $-$5 to 1.5, covering the full range of chemical compositions scaled to solar abundances[^4]. For each of these solar-scaled compositions we also vary the \[C/M\] and \[$\alpha$/M\] abundances from $-$1.5 to 1 (Figure 1).
ODFs and Rosseland opacity files were calculated with microturbulent velocities v$_{\rm t}$ = 0, 1, 2, 4, 8 km s$^{-1}$, while the model atmospheres were produced only with v$_{\rm t}$ = 2 km s$^{-1}$. The metallicity grids were the same for all effective temperatures, and the range can be seen in Table 1. Some metal-rich compositions with high C but low $\alpha$ content were not calculated due to excessive computation time; these are listed in Table 2. The $\alpha$ elements considered when varying \[$\alpha$/M\] were the following: O, Ne, Mg, Si, S, Ca, and Ti. The temperature and gravity parameter grid for each composition and spectral type is given in Table 3. The T$_{\rm eff}$ $-$ log $g$ distribution is plotted in Figure 2. Extreme metal-poor and metal-rich compositions were also included.
![The gravity (log $g$) as a function of effective temperature (T$_{\rm eff}$) of ATLAS9 and MARCS models calculated for each composition. Acceptable models are denoted by filled circles, while models not acceptable (and missing) are denoted by open circles for the solar composition. Models with C/O $>$ 1.7 and T$_{\rm eff} < 4000$K are not published (see Section 4.2). ](figure02.eps){width="2.5in"}
All the ATLAS codes use atomic and molecular linelists made available by Kurucz on a series of CD-ROMs [@kurucz02]. They can now be found at the Kurucz website[^5]. The molecular line lists for TiO and H$_{2}$O were provided by @schwenke01 and @partridge01, respectively, and reformatted by Kurucz in ATLAS format. These are also available for download at the Kurucz website. For these models, we used the same linelists as @castelli01, except for H$_{2}$O, for which a new Kurucz release of the @partridge01 data was adopted[^6].
The solar reference abundance table was adopted from @asplund01. Convection was turned on with the mixing length parameter set to $l/H_{p} = 1.25$, but convective overshooting was turned off. All the models have the same 72 layers from log $\tau_{\rm Ross}$ = $-$6.875 to 2, with a step of log $\tau_{\rm Ross}$ = 0.125. These parameters remained the same as in @castelli01 for easy comparison.
All computations were performed on the Diodo cluster at the Instituto Astrofisico de Canarias. Diodo consists of 1 master node and 19 compute nodes, for a total of 80 cores and 256 GB of RAM, communicating through two independent Gigabit Ethernet networks. 16 of the compute nodes host 2 Intel Xeon 3.20 GHz EM64T processors each, with 4 GB of RAM (2GB per core); the remaining 3 compute nodes host each 16 Intel Xeon (E7340) 2.40 GHz EM64T processors, with 64 GB of RAM (4GB per core). On this cluster, about 3 months of computer time was required for ODF and model atmosphere calculations, using all 80 processors.
Calculation Method
------------------
Two separate scripts were developed, one for the ODF and Rosseland opacity calculations, and one for the ATLAS9 calculations. The ODF and Rosseland opacity calculations followed exactly the procedure described by @castelli01 and @castelli02, using the DFSYNTHE code for the ODFs, the KAPPA9 code for the Rosseland opacities, and ATLAS9 for the model atmosphere calculations [@sbordone02; @sbordone01]. These codes were compiled in Linux with the Intel Fortran compiler version 11.1.
Table 1: The ATLAS9 chemical composition grid. Each block lists the minimum, maximum, and step size of the abundance parameters for a given metallicity range.

| Parameter | Minimum | Maximum | Step |
|:---|---:|---:|---:|
| \[M/H\] | $-$5 | $-$3.5 | 0.5 |
| \[C/M\] | $-$1 | 1 | 0.5 |
| \[$\alpha$/M\] | $-$1 | 1 | 0.5 |
| \[M/H\] | $-$3 | 0.5 | 0.25 |
| \[C/M\] | $-$1.5 | 1 | 0.25 |
| \[$\alpha$/M\] | $-$1.5 | 1 | 0.25 |
| \[M/H\] | 1 | 1.5 | 0.5 |
| \[C/M\] | $-$1.5 | 1 | 0.5 |
| \[$\alpha$/M\] | $-$1.5 | 1 | 0.5 |
Table 2: Metal-rich compositions not calculated for the ATLAS9 grid.

| \[M/H\] | \[C/M\] | \[$\alpha$/M\] |
|---:|---:|---:|
| 1 | 1 | $-$1.5 |
| 1 | 1 | $-$1 |
| 1.5 | 0.5 | $-$1.5 |
| 1.5 | 1 | $-$1.5 |
| 1.5 | 1 | $-$1 |
| 1.5 | 1 | $-$0.5 |
| 1.5 | 1 | 0 |
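As a consistency check on Tables 1 and 2, the following Python sketch (not part of the production scripts) enumerates the three abundance blocks of Table 1 and removes the seven excluded metal-rich compositions of Table 2, recovering the 1980 ATLAS9 compositions quoted above.

```python
import itertools

def grid(lo, hi, step):
    """Inclusive list of abundance values, built with an integer loop to avoid
    floating-point round-off."""
    n = round((hi - lo) / step)
    return [round(lo + i * step, 2) for i in range(n + 1)]

# The three blocks of Table 1: ([M/H] values, [C/M] values, [alpha/M] values).
blocks = [
    (grid(-5.0, -3.5, 0.5),  grid(-1.0, 1.0, 0.5),  grid(-1.0, 1.0, 0.5)),
    (grid(-3.0,  0.5, 0.25), grid(-1.5, 1.0, 0.25), grid(-1.5, 1.0, 0.25)),
    (grid( 1.0,  1.5, 0.5),  grid(-1.5, 1.0, 0.5),  grid(-1.5, 1.0, 0.5)),
]
compositions = set()
for mh, cm, am in blocks:
    compositions.update(itertools.product(mh, cm, am))

# Metal-rich compositions skipped because of excessive computation time (Table 2).
excluded = {(1.0, 1.0, -1.5), (1.0, 1.0, -1.0), (1.5, 0.5, -1.5), (1.5, 1.0, -1.5),
            (1.5, 1.0, -1.0), (1.5, 1.0, -0.5), (1.5, 1.0, 0.0)}

print(len(compositions - excluded))  # 1980
```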
Table 3: Effective temperature and surface gravity grid of the ATLAS9 models.

| Spectral types | T$_{\rm eff}$ min (K) | T$_{\rm eff}$ max (K) | T$_{\rm eff}$ step (K) | log $g$ min | log $g$ max | log $g$ step |
|:---|---:|---:|---:|---:|---:|---:|
| M, N, R, K, G | 3500 | 6000 | 250 | 0 | 5 | 0.5 |
| F | 6250 | 8000 | 250 | 1 | 5 | 0.5 |
| A | 8250 | 12000 | 250 | 2 | 5 | 0.5 |
| B | 12500 | 20000 | 500 | 3 | 5 | 0.5 |
| B, O | 21000 | 30000 | 1000 | 4 | 5 | 0.5 |
Our algorithm sets up the initial starting models from the grid provided by @castelli01[^7]. The algorithm chooses the model that has the closest composition, effective temperature, and log $g$ to the desired output, and an initial ATLAS9 model is calculated. The result must then be checked to see whether the output model satisfies the convergence parameters provided by the user for each layer in the model atmosphere. These parameters were set to 1$\%$ for the flux errors or 10$\%$ for the flux derivative errors after 30 iterations in each run, as recommended in the ATLAS cookbook[^8]. A model is considered converged if these criteria are satisfied at all depths.
We then considered an atmospheric model acceptable if one of the following criteria was satisfied: 1. the model converged through the whole atmosphere; 2. no more than one non-converged layer exists between log $\tau_{\rm Ross} = -4$ and log $\tau_{\rm Ross} = 1$, while additional non-converged layers are allowed for log $\tau_{\rm Ross} < -4$. A model was considered unacceptable in all other cases. Only the region from log $\tau_{\rm Ross} = -4$ to log $\tau_{\rm Ross} = 1$ was used to check the convergence, because most of the lines in the optical and H band form there. If the output was not acceptable, we first restarted the calculation with more iterations. If the output was still unacceptable, we selected a starting model with a different log $g$ from the initial starting model and restarted the calculation; the convergence test was then repeated and more restarts were performed if necessary. If the output remained unacceptable, the target effective temperature was changed by 10, 50, and 100K; if any of these runs produced an acceptable model, it was used as an input model to calculate the atmosphere with the original effective temperature.
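The acceptability rule can be summarised in a few lines of Python; this is an illustrative sketch with hypothetical array names, interpreting the convergence criteria exactly as stated above (flux error below 1$\%$ or flux derivative error below 10$\%$ per layer), and is not the script actually used.

```python
import numpy as np

def model_acceptable(log_tau_ross, flux_err, dflux_err):
    """Acceptability rule described above: a layer counts as converged if its
    flux error is below 1% or its flux-derivative error is below 10% (errors
    expressed as fractions); at most one non-converged layer is tolerated in
    the window -4 <= log tau_Ross <= 1, and layers above that region may
    remain non-converged."""
    log_tau_ross = np.asarray(log_tau_ross)
    converged = (np.abs(flux_err) < 0.01) | (np.abs(dflux_err) < 0.10)
    in_window = (log_tau_ross >= -4.0) & (log_tau_ross <= 1.0)
    return np.count_nonzero(in_window & ~converged) <= 1
```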
To test our model atmospheres, we replicated the Rosseland opacity calculations of @castelli01. For this we used Castelli’s scripts without any modification to calculate the ODFs and Rosseland opacities for the @grevesse01 abundances. These calculations agreed perfectly within numerical precision. We then attempted to reproduce model atmospheres with the same parameters found on Castelli’s website with our scripts, using the ODFs and Rosseland opacity files generated with the @grevesse01 abundances. This test also gave near perfect agreement, with maximum differences of only 0.1$-$0.2K arising from the different compiler versions used.
MARCS Model Atmospheres
=======================
Parameters
----------
Models were computed for seven overall metallicities, \[M/H\] from $-$2.5 to 0.5, with a step size of 0.5 dex. For each of these seven overall \[M/H\] mixtures, 25 combinations of modified carbon and $\alpha$ element abundances were adopted: The modifications to the logarithmic C and $\alpha$ abundances are $-$1, $-$0.5, 0, 0.5, and 1 dex. This format resulted in a total of 175 subgrids with unique chemical compositions (Table 4). The $\alpha$ elements in MARCS are O, Ne, Mg, Si, S, Ar, Ca, and Ti. This composition scheme is exactly the same as the previous models found on the MARCS website. The systematic $\alpha$ abundance changes were chosen to overlap the scheme (Section 2.1) used in the ATLAS9 model calculations.
Table 4: The MARCS chemical composition grid (minimum, maximum, and step size of each abundance parameter).

| Parameter | Minimum | Maximum | Step |
|:---|---:|---:|---:|
| \[M/H\] | $-$2.5 | 0.5 | 0.5 |
| \[C/M\] | $-$1 | 1 | 0.5 |
| \[$\alpha$/M\] | $-$1 | 1 | 0.5 |
Table 5: Effective temperature and surface gravity grid of the MARCS models.

| Spectral types | T$_{\rm eff}$ min (K) | T$_{\rm eff}$ max (K) | T$_{\rm eff}$ step (K) | log $g$ min | log $g$ max | log $g$ step |
|:---|---:|---:|---:|---:|---:|---:|
| M, N, R, K | 3500 | 4000 | 100 | 0 | 3 | 0.5 |
| K, G | 4250 | 5500 | 250 | 0 | 3 | 0.5 |
| M, N, R, K | 3500 | 4000 | 100 | 3.5 | 5 | 0.5 |
| K, G | 4250 | 5500 | 250 | 3.5 | 5 | 0.5 |
For each of these abundance subgrids, models with 12 values of effective temperature from 3500 to 5500K and 11 values of logarithmic surface gravities from 0 to 5 were computed (see Figure 2 and Table 5). Models with logarithmic surface gravities lower than 3.5 (giants) were computed in spherical geometry and with a microturbulence parameter of 2 km s$^{-1}$, while the remaining (dwarf) models adopted v$_{\rm t}$ = 1 km s$^{-1}$ and plane-parallel geometry. In the end, 86% of the 23,140 models converged satisfactorily. Convergence was particularly poor for cool dwarfs that are simultaneously $\alpha$ rich and carbon poor.
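A minimal sketch of the MARCS grid layout described above, with the geometry and microturbulence assignment by surface gravity made explicit (illustrative code, not the MARCS input generator):

```python
def marcs_setup(log_g):
    """Geometry and microturbulence adopted for each MARCS model: giants
    (log g < 3.5) are spherical with v_t = 2 km/s, dwarfs are plane-parallel
    with v_t = 1 km/s."""
    if log_g < 3.5:
        return "spherical", 2.0
    return "plane-parallel", 1.0

# The T_eff grid of Table 5: 3500-4000 K in 100 K steps, 4250-5500 K in 250 K steps.
teff_grid = list(range(3500, 4001, 100)) + list(range(4250, 5501, 250))
logg_grid = [0.5 * i for i in range(11)]      # 0 to 5 in 0.5 dex steps
print(len(teff_grid), len(logg_grid))         # 12 effective temperatures, 11 gravities
```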
Details about the atomic and molecular line lists used by the MARCS code are given by @gustafsson02. In a number of instances, they are different from those used by the ATLAS code, as for H$_{2}$O [@barber01], and TiO [@plez03].
Calculation Method
------------------
Similarly to the ATLAS9 ODF calculations, a new metallicity MARCS subgrid is started with the summation of an OS table of atomic line opacities for the relevant abundance mixture and microturbulence parameter. Since line opacity data is needed for many atoms and first ions, it saves time to add the opacities of the individual species into one file before a set of models is computed. This file contains a table with over 10$^5$ wavelength points with line opacities relevant to the equation of state for 306 combinations of temperature and damping pressure $P_6$. The damping pressure is used as a proxy for the pressure that broadens metal lines by collisions with neutral atoms in cool stars. $P_6$ is the pressure of HI with the addition of the polarizability corrected pressures of neutral helium and H$_2$ (see Eq. 33 of Gustafsson et al. 2008). For each molecule, in contrast, the line opacities are given in one table for the same wavelength set, but as a function of temperature and microturbulence parameter.
The method used for the MARCS model calculations is described in detail in @gustafsson02. They start with the generation of a simplified starting model assuming a grey opacity. Physical parameters and their derivatives are computed and the model structure is iterated in a multidimensional Newton-Raphson scheme until the flux through each depth layer corresponds to the prescribed effective temperature. The models usually converged after 4 to 8 iterations; convergence also requires that the maximum temperature correction in any depth point is below 1.5K during two consecutive iterations. Occasionally convergence takes longer, and some models do not converge at all. A converged model with similar model parameters is then identified as a new starting model. This approach is often successful, but some models do not converge, which leaves vacancies in the model grid.
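The convergence test quoted above (maximum temperature correction below 1.5K in two consecutive iterations) can be expressed as a short sketch; the function and variable names are ours, and the commented driver loop is only schematic:

```python
def marcs_converged(max_dT_history, tol=1.5):
    """Convergence test quoted above: the largest temperature correction at any
    depth point must stay below `tol` kelvin for two consecutive iterations.
    `max_dT_history` holds max|dT| over all layers, one entry per iteration."""
    return (len(max_dT_history) >= 2
            and max_dT_history[-1] < tol
            and max_dT_history[-2] < tol)

# Schematic driver (newton_raphson_correction is a hypothetical solver step):
# history = []
# for _ in range(max_iterations):
#     dT = newton_raphson_correction(model)
#     history.append(max(abs(x) for x in dT))
#     if marcs_converged(history):
#         break
```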
Discussion
==========
Changes in the chemical composition can have a number of effects on the calculated atmospheres. These effects are mainly related to either changes in opacity or changes in the equation of state. The main effect of an increased line opacity in late-type stars is either cooling or warming (depending on the opacity and its wavelength dependence) of the outer layers and also back-warming of the innermost ones [@gustafsson01]. Changes in the equation of state are mainly variations in the mean molecular weight for abundant elements, changes in the number of free electrons for elements that are important electron donors, or other more intricate changes related to chemical equilibrium through molecule formation. In the next two sections, we give examples of MARCS and ATLAS9 models from our grid, and illustrate briefly the changes in the model atmospheres related to large changes in C and $\alpha$ elements.
![Examples of ATLAS9 and MARCS model atmospheres with T$_{\rm eff}$ = 4000K and log $g$ = 1 (red line) and T$_{\rm eff}$ = 5500K and log $g$ = 4 (blue line). The model from the APOGEE grid is denoted by a solid line, the corresponding ATLAS9 model from @castelli01 is denoted by a dotted line, the MARCS model is denoted by dashed line. The panels show the mass column ($m$), temperature ($T$), gas pressure ($P$), electron number density ($N_{\rm e}$) as a function of the logarithm of optical depth (log $\tau$), and the relative difference between the ATLAS9 (A$_{\rm K}$), MARCS (A$_{\rm M}$) calculations presented in this paper and @castelli01 (C$_{\rm K}$). ](figure03.eps){width="3.5in"}
Comparing our MARCS and ATLAS9 models to the Castelli-Kurucz grid
-----------------------------------------------------------------
Figure 3 illustrates examples of the atmospheric structures for two models with solar composition, one for $T_{\rm eff} = 4000$K and log $g = 1$ (red line) and the other for $T_{\rm eff} = 5500$K and log $g = 4$ (blue line). There are four double panels showing the parameter dependence with Rosseland optical depth of the mass column (top-left panel), temperature (top-right), gas pressure (bottom-left) and electron number density (bottom-right). The dashed, solid, and dotted lines correspond to the new APOGEE MARCS, APOGEE Kurucz, and earlier (NEWODF) Castelli-Kurucz models [@castelli01], respectively. While the APOGEE Kurucz and MARCS models share essentially the same chemical composition [@asplund01; @grevesse02], the Castelli-Kurucz models use the solar mixture given by @grevesse01.
For both the cooler (red line) and the warmer model (blue line), the MARCS and ATLAS9 models presented in this paper agree well: the differences are modest, less than 1 percent for the thermal structure, and less than 2$-$3 percent for the gas pressure and the electron density in the layers where weak spectral lines and the continuum form ($\tau >$ 0.01). These small differences are most likely related to the different equations of state implemented in the two codes, and may also be due to the fact that ATLAS9 uses the ODF method, while MARCS uses the OS method. The differences between the two increase at temperatures lower than 4000K due to the different H$_{2}$O and TiO linelists used in the calculations. More importantly, larger differences are present in the gas pressure and electron density of the ATLAS9 models compared to the Castelli-Kurucz ones for the cooler model. These differences must be related to the updated solar chemical composition, as the Castelli-Kurucz models are also computed with a corrected H$_{2}$O linelist. The most significant change in the solar composition is the reduction of the oxygen, nitrogen and carbon abundances, which causes a reduction in the Rosseland opacity at a given temperature and gas pressure. The decreased Rosseland opacity leads to a subsequent increase in the total pressure, which is only partially compensated by a reduction in the electron density.
@gustafsson02 present some further comparisons between MARCS models and ATLAS9 models of @castelli01. Other comparisons between MARCS, ATLAS9, and PHOENIX model atmospheres and spectra were discussed by @plez02.
![Examples of how different \[C/M\] and \[$\alpha$/M\] content changes the temperature profile in the ATLAS9 and MARCS atmospheres for T$_{\rm eff}$ = 4000K, log $g$ = 1. This figure shows the temperature ($T$), gas pressure ($P$), electron number density ($N_{\rm e}$) and the relative difference (where 0.5 corresponds to 50%, 1 corresponds to 100%) of these parameters between \[C/M\]=\[$\alpha$/M\] = 0 and \[C/M\]=\[$\alpha$/M\] = $-$1, and 1 as a function of optical depth (log $\tau$). The symbols $T_{0}$, $P_{0}$, $N_{e, 0}$ correspond to \[C/M\]=\[$\alpha$/M\] = 0. ](figure04.eps){width="3.5in"}
![Examples of how different \[C/M\] and \[$\alpha$/M\] content changes the temperature profile in the ATLAS9 and MARCS atmospheres for T$_{\rm eff}$ = 5500K, log $g$ = 4. For more details see the caption of Figure 4. ](figure05.eps){width="3.5in"}
Changes in model structures related to chemical composition
-----------------------------------------------------------
Figures 4 and 5 illustrate the changes in the model structures in response to changes in chemical composition. The left-hand panels of Figure 4 (T$_{\rm eff} = 4000$K model) and 5 (T$_{\rm eff} = 5500$K) show the relative changes associated with variations of the carbon abundance from solar proportions. The ATLAS9 model for the warmer temperature has v$_{\rm t}$ = 2 km s$^{-1}$, while the MARCS model has v$_{\rm t}$ = 1 km s$^{-1}$, but this difference does not affect the model structures significantly. The relative variations found are very similar for MARCS and ATLAS9, except in the outermost layers of the atmosphere. The changes in the thermal structure are modest and show a behavior symmetric to that found for oxygen, which strongly suggests that CO formation is the driver of the variation. CO is the most tightly bound molecule and consumes almost all of the free atoms of either carbon or oxygen, whichever is less abundant, leaving the majority species (C I or O I) to form other molecules. If oxygen dominates, it produces a “normal star” (if cool, an M star), otherwise carbon is free to form many molecules with high opacities, making carbon stars with very different spectra. At T$_{\rm eff} = 5500$K, CO formation is low, thus this molecule does not affect the chemical equilibrium contrary to what is seen in the cooler atmospheres.
The right-hand panels show changes for the same model parameters and different $\alpha$-element abundances. Large changes are visible for both models in the pressure and electron numbers. The significant differences in the pressure compared to the cooler model are due to the increased gravity. Increasing the abundance of $\alpha$ elements reduces the gas pressure (despite the fact that the electron pressure increases), because the additional electrons contributed by the main electron donors change the continuum opacity. The $\alpha$ elements that contribute most to the total number of electrons in the ATLAS9 models are shown in Figure 6. For the T$_{\rm eff} = 4000$K, log $g = 1$ model, the main electron contributors are Ca, Mg, Na, and Al in the outer layers and Mg, Si and Fe in the deeper layers. For the warmer models this changes significantly: as more Fe and H are ionized, the overall number of electrons increases and the main electron contributors become Mg, Si, and Fe through most of the atmosphere.
Internal tests showed that polyatomic carbon molecules (C$_{2}$H$_{2}$, C$_{3}$) substantially change the structure of the atmosphere for C/O ratios higher than 1.7, if they are included in the linelist. These molecules significantly inflate the atmosphere, changing the temperature in the line-forming photospheric layers. Since these molecules were included neither in the ATLAS9 nor in the MARCS calculations, atmospheric structures with C/O $>$ 1.7 and T$_{\rm eff} < 4000$K are not reliable and are thus not published.
![Examples of the largest electron contributors to the total electron numbers in the ATLAS9 atmospheres for \[M/H\]=\[C/M\]=\[$\alpha$/M\] = 0. The figure shows the element electron numbers relative to the total number of electrons for the two models used in previous figures. The T$_{\rm eff} = 5500$K, log $g = 1$ model is plotted in the middle section to show that the electron contributors do not change significantly compared to a model with the same temperature but higher gravity. ](figure06.eps){width="3.5in"}
Conclusions
===========
Most of the ATLAS9 models are fully converged above T$_{\rm eff}$ = 5000K. Convergence problems are visible only in the outermost layers for stars with T$_{\rm eff} <$ 5000K. The regions affected by convergence issues are limited to log $\tau_{\rm Ross} < -$4. However, these un-converged layers on the top of the atmosphere at low temperatures do not affect most line profiles very significantly, because most lines form deeper in the atmosphere. A little over a million ATLAS9 atmospheres were calculated.
Over 20,000 MARCS fully converged models with ranges in \[C/M\] and \[$\alpha$/M\] have been produced. Models are available in spherical and plane-parallel cases. Convergence issues are also present for red dwarf stars, similarly to the ATLAS9 models.
Examples were presented for both ATLAS9 and MARCS models at T$_{\rm eff} = 4000$K and 5500K, and compared to the Castelli-Kurucz grid. The new ATLAS9 and MARCS models agree well for all temperatures, while the differences between these calculations and the Castelli-Kurucz grid arise from the updated abundance tables. We briefly illustrated the effects of decreased and increased carbon and $\alpha$ content on the structure of the atmospheres, which are very similar in the MARCS and ATLAS9 models. The response to the carbon content differs between the two codes only in the outermost layers, due to increased CO in the atmosphere. The response to the $\alpha$ elements is also almost the same in the MARCS and ATLAS9 models. The increased $\alpha$-element content has profound effects on the electron numbers and pressure for both giant and dwarf stars, which is related to the higher number of Mg, Si, and Ca ions. Carbon-rich models with C/O $>$ 1.7 and T$_{\rm eff} < 4000$K are not published here, because polyatomic carbon molecules not included in the linelists would significantly change the temperature structure in the photosphere.
These model grids will be used as the primary database for the pipeline analysing the spectra from the APOGEE survey. The high-resolution model spectra used in the survey’s atmospheric parameter and abundance determination code will be built on the models presented in this paper. Both the ATLAS9 and MARCS model atmospheres will be continuously updated with new compositions as the APOGEE survey progresses. The calculated ODFs and Rosseland opacities are available for v$_{\rm t}$ = 0, 1, 2, 4, 8 km s$^{-1}$, and the ATLAS9 model atmosphere files are available for v$_{\rm t}$ = 2 km s$^{-1}$, from the ATLAS-APOGEE website[^9]. The MARCS models are available with v$_{\rm t}$ = 2 km s$^{-1}$ for the giant and v$_{\rm t}$ = 1 km s$^{-1}$ for the dwarf stars from the standard MARCS website[^10].
Aihara, H., et al. 2011, , 193, 29
Allende Prieto, C., Majewski, S. R., Schiavon, R., Cunha, K., Frinchaboy, P., Holtzman, J., Johnston, K., Shetrone, M., Skrutskie, M., Smith, V., Wilson, J. 2008, AN, 329, 1018
Anders, E., $\&$ Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197
Asplund, M., Grevesse, N. $\&$ Sauval, A. J. 2005, ASPC, 336, 25
Asplund, M., Grevesse, N., Sauval, A. J. $\&$ Scott, P. 2009, ARA$\&$A, 47, 481
Barber, R. J., Tennyson, J., Harris, G. J., Tolchenov, R. N. 2006, MNRAS, 369, 1087
Castelli, F. 2005, MSAIS, 8, 25
Castelli, F. 2005, MSAIS, 8, 344
Castelli, F., $\&$ Kurucz, R. L. 2003, New Grids of ATLAS9 Model Atmospheres, IAUS, 210, 20P
Eisenstein, D.J., et al. 2011, AJ, 142, 72
Ekberg, U., Eriksson, K., $\&$ Gustafsson, B. 1986, , 167, 304
Grevesse, N., Asplund, M. $\&$ Sauval, A. J. 2007, , 130, 105
Grevesse, N., $\&$ Sauval, A. J. 1998, SSR, 85, 161
Gunn, J. E., et al. 2006, AJ, 131, 2332
Gustafsson, B., Bell, R. A., Eriksson, K., $\&$ Nordlund, [Å]{}. 1975, A$\&$A, 42, 407
Gustafsson, B., Edvardsson, B., Eriksson, K., J[ø]{}rgensen U.G., Nordlund, [Å]{}., $\&$ Plez B. 2008, A$\&$A, 486, 951
Henyey, L., Vardya, M. S., $\&$ Bodenheimer, P. 1965, ApJ, 142, 841
Kirby, E. N. 2011, PASP, 123, 531
Kurucz, R. L. 1979, ApJS, 40, 1
Kurucz, R. L. 1993, ATLAS9 Stellar Atmosphere Programs and 2 km s$^{-1}$ grid. Kurucz CD-ROM No. 13. Cambridge, Mass.: Smithsonian Astrophysical Observatory, 1993, 13
Kurucz, R. L. 1990, in NATO ASI Ser. C 341, Stellar Atmospheres: Beyond Classical Models, ed. L. Crivellari, I. Hubeny, $\&$ D. G. Hummer (Dordrecht: Kluwer), 441
Kurucz, R. L. 2005, MSAIS, 8, 14
Kurucz, R.L., $\&$ Avrett, E. H. 1981. Solar spectrum synthesis. I. A sample atlas from 224 to 300 nm., SAO Spec. Rep. 391
Lester, J. B. $\&$ Neilson, H. R. 2008, A$\&$A, 491, 633
Partridge, H., $\&$ Schwenke, D. W. 1997, J. Chem. Phys., 106, 4618
Plez, B. 1998, A$\&$A, 337, 495
Plez, B. 2011, J. Phys. Conf. Ser. 328, 012005
Plez, B., Brett, J. M., $\&$ Nordlund, [Å]{}. 1992, A$\&$A, 256, 551
Sbordone, L. 2005, MSAIS, 8, 61
Sbordone, L., Bonifacio, P., Castelli, F., $\&$ Kurucz, R. L. 2004, MSAIS, 5, 93
Schwenke, D. W. 1998, Faraday Discussions, 109, 321
Strom, S. E., $\&$ Kurucz, R. L. 1966, AJ, 71, 181
[^1]: http://kurucz.harvard.edu
[^2]: http://marcs.astro.uu.se/
[^3]: http://www.iac.es/proyecto/ATLAS-APOGEE/
[^4]: \[M/H\] refers to all elements with Z $>$ 2, and \[M/H\] = $\log_{10}(N_{\rm M} / N_{\rm H})_{\star} - \log_{10}(N_{\rm M} / N_{\rm H})_{\odot}$, where N$_{\rm M}$ and N$_{\rm H}$ are the numbers of nuclei of the element in question and of hydrogen per unit volume, respectively
[^5]: http://kurucz.harvard.edu/LINELIST/
[^6]: http://kurucz.harvard.edu/MOLECULES/H2O/h2ofastfix.bin
[^7]: http://wwwuser.oat.ts.astro.it/castelli/
[^8]: http://atmos.obspm.fr/index.php/documentation
[^9]: http://www.iac.es/proyecto/ATLAS-APOGEE/
[^10]: http://marcs.astro.uu.se/
|
[**Dark Energy Studies: Challenges to Computational Cosmology**]{}\
The ability to test the nature of dark mass-energy components in the universe through large-scale structure studies hinges on accurate predictions of sky survey expectations within a given world model. Numerical simulations predict key survey signatures with varying degrees of confidence, limited mainly by the complex astrophysics of galaxy formation. As surveys grow in size and scale, systematic uncertainties in theoretical modeling can become dominant. Dark energy studies will challenge the computational cosmology community to critically assess current techniques, develop new approaches to maximize accuracy, and establish new tools and practices to efficiently employ globally networked computing resources.
Introduction
============
Ongoing and planned observational surveys, such as the Dark Energy Survey (DES)$^{\ino}$, are providing increasingly rich information on the spatial distributions and internal properties of large numbers of galaxies, clusters of galaxies, and supernovae. These astrophysical systems reside in a cosmic web of large-scale structure that evolved by gravitational amplification of an initially small-amplitude Gaussian random density field. The DES plans to investigate the dark sector through the evolution of the Hubble parameter $H(z)$ and linear growth factor $D(z)$ from four independent channels: i) the evolution and clustering properties of rich clusters of galaxies; ii) the redshift evolution of baryonic features in the galaxy power spectrum; iii) weak-lensing tomography derived from measurement of galaxy shear patterns and iv) the luminosity distance–redshift relation of type Ia SNe. We focus our attention on theoretical issues related to the first three of these tests.
The power spectrum of fluctuations at recombination is calculated to high accuracy from linear theory$^{\ino}$, so the problem of realizing, through direct simulation, the evolution of a finite patch of a particular world model from this epoch forward is well posed. To support DES-like observations, one would like to evolve multiple regions of Hubble Length dimension with the principal clustered matter components — dark matter and multiple phases of baryons (stars and cold gas in galaxies, warm/hot gas surrounding galaxies and in groups/clusters) — represented by multiple fluids. Mapping observable signatures of the theoretical solution along the past light-cone of synthetic observers in the computational volume allows ‘clean’ mock surveys to be created, which can further be ‘dirtied’ by real-world observational effects in support of survey data analysis.
Two fundamental barriers stand in the way of achieving a complete solution to this problem. One is the wide dynamic range of non-linear structures (sub-parsecs to gigaparsecs in length, for example), and the other is the complexity of astrophysical processes controlling the baryonic phases. Since DES-like surveys probe only the higher mass portion of the full spectrum of cosmic structures, the first issue is not strongly limiting. The DES will, however, require understanding galaxy and cluster observable signatures, so uncertainties in the treatment of baryonic physics will play a central role. In a companion paper$^{\ino}$, we outline theoretical uncertainties associated with the large-scale structure channels DES will use to test dark energy. Here, we offer a critique of the computational methods that provide theoretical support for DES and similar surveys.
Challenges for Computational Cosmology
=======================================
Given the wide scale and scope of large-scale structure, a number of approaches have evolved to address restricted elements of the full problem. Since cold dark matter dominates the matter density, N-body methods that follow collisionless clustering have played an important role in defining the overall evolution of the cosmic web and the structure of collapsed halos formed within it. Combined N-body and gas dynamics techniques explore gravitationally coupled evolution of baryons and dark matter. Knowledge gained from these ‘direct’ approaches led to the development of ‘semi-analytic’ methods that efficiently explore scenarios for baryon-phase evolution. The overall challenge to computational cosmology in the dark energy era is to understand how to harness and push forward these different methods so as to maximize science return from sky survey data.
A ‘halo model’ description of the large-scale density field ties together these approaches. The model posits that all matter in the late universe is contained in a spectrum of gravitationally bound objects (halos), each characterized by a mass $M$ defined (typically) by a critical overdensity condition around a local filtered density peak. The space density, large-scale clustering bias (relative to all matter), and internal structure, such as density and temperature profiles, are basic model ingredients. For galaxy studies, the halo occupation distribution (‘HOD’) defines the likelihood $p(N_{gal}|M,z)$ that $N_{gal}$ galaxies of a certain type are associated with the halo. For some applications, it may be important to consider HOD dependence on local environment.
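As an illustration of how an HOD is used in practice, the following sketch draws galaxy counts for a halo of mass $M$ from one simple and commonly used parameterisation (a central galaxy above a threshold mass plus Poisson-distributed satellites with a power-law mean); the parameter values are placeholders, not a calibrated HOD.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_n_gal(halo_mass, m_min=1e12, m_1=2e13, alpha=1.0):
    """Draw N_gal for a halo of mass M (solar masses) from a simple HOD:
    one central galaxy above a threshold mass, plus Poisson-distributed
    satellites with a power-law mean. Parameter values are illustrative."""
    n_cen = 1 if halo_mass >= m_min else 0
    n_sat = rng.poisson(n_cen * (halo_mass / m_1) ** alpha)
    return n_cen + n_sat

print([sample_n_gal(m) for m in (5e11, 5e12, 1e14, 1e15)])
```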
Collisionless N-body modeling
-----------------------------
Understanding the growth of density perturbations into the mildly and strongly non-linear regimes is a critical component for weak lensing tomography and galaxy cluster studies, respectively. Much progress has been made in this area using N-body simulations, as Moore’s law has enabled progressively larger computations, up to $10^{10}$ particles today$^{\ino}$. Parallel computing has led to production environments where $512^3$ particle runs can be realized on an almost daily basis.
By creating large statistical samples of dark matter halos and by probing the internal structure of some halos in great detail, large-$N$ simulations have validated and characterized important raw ingredients of the halo model: i) the space density is calibrated to $\sim 10\%$ accuracy in terms of a similarity variable $\sigma(M,z)$, with $\sigma$ the [*rms*]{} level of density fluctuations filtered on mass scale $M$; ii) the large-scale clustering bias of halos is calibrated to similar accuracy; iii) except for rare cases of major mergers in progress, the interiors of halos are hydrostatic, with an internal density profile that depends primarily on mass and secondarily on individual accretion history; and iv) the structural similarity of halos is reflected in a tight virial scaling between mass and velocity dispersion. The study of sub-halos, locally bound structures accreted within larger halos but not fully tidally disrupted, is a rapidly developing area with important application to optical studies of galaxy clusters. Since they serve as a foundation for more complex treatments, these ingredients of the halo model deserve more careful study and more precise calibration.
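For reference, the similarity variable $\sigma(M,z)$ is the rms linear density fluctuation in a top-hat filter enclosing mass $M$; a minimal sketch of its evaluation, assuming the user supplies a linear matter power spectrum $P(k)$ and taking $\Omega_m = 0.3$ purely for illustration, is:

```python
import numpy as np
from scipy.integrate import quad

RHO_M = 0.3 * 2.775e11   # mean matter density [M_sun/h per (Mpc/h)^3], Omega_m = 0.3 assumed

def tophat(k, r):
    x = k * r
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_m(mass, pk):
    """rms linear density fluctuation smoothed with a top-hat filter enclosing
    mass M, for a user-supplied linear power spectrum pk(k) in (Mpc/h)^3."""
    r = (3.0 * mass / (4.0 * np.pi * RHO_M)) ** (1.0 / 3.0)
    integrand = lambda k: pk(k) * tophat(k, r) ** 2 * k ** 2
    var, _ = quad(integrand, 1e-4, 1e2, limit=200)
    return np.sqrt(var / (2.0 * np.pi ** 2))
```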
The weak lensing shear signal on arcminute and larger scales is generated by weakly non-linear matter fluctuations on relatively large spatial scales, making it relatively insensitive to small-scale baryon physics. Although the Hamilton-Peacock characterization$^{\ino}$ of the non-linear evolution of the power spectrum has been useful, it will need refinement to achieve the anticipated accuracy of DES power spectrum measurements on dark energy. The evolution of higher density moments, particularly the bi-spectrum, is less well understood than the second moment. A suite of multi-scale N-body simulations covering a modest grid of cosmological models is needed to address these problems. New approaches to generating initial conditions and combining multiple realizations of finite volumes$^{\ino}$ should be employed in an effort to push systematic uncertainties on relevant spatial scales to percent levels and below.
Although a relatively mature enterprise, N-body modeling of dark matter clustering faces fundamental challenges to improve the absolute level of precision in current techniques and to better understand the dynamical mechanisms associated with non-linear structure evolution. Code comparison projects$^{\ino}$ should be more aggressively pursued and the sensitivity of key non-linear statistics to code control parameters deserves more careful systematic study. A return to testing methods on the self-similar clustering problem$^{\ino}$ is likely to provide valuable insights, and deeper connections to analytic approaches, such as extended perturbation theory and equilibrium stellar dynamics, should be encouraged. Highly accurate dark matter evolution is only a first step, however, as it ignores the $17\%$ matter component of the universe that is directly observable.
The baryon phase and galaxy/cluster observables
------------------------------------------------
The dark energy tests planned by DES require modeling the astrophysical behavior of different baryonic phases. Acoustic oscillations in the galaxy power spectrum must be linked to features in the matter power spectrum, requiring accurate tests of the constancy of galaxy bias on large scales. For clusters, selection by Sunyaev-Zel’dovich or X-ray signatures depends on the hot gas phase properties while optical selection is dependent on star formation and interstellar medium phase evolution within galaxies. Several distinct, but related, approaches have emerged to address this complex modeling requirement, all involving tunable model parameters that must, to some degree, be determined empirically.
‘Direct’ computational approaches couple a three-dimensional gas dynamics solver to an N-body algorithm. A dozen, nearly-independent codes, in both Lagrangian and Eulerian flavors, now exist to perform this task. All methods follow entropy generation in gas from shocks, and most allow radiative entropy loss assuming local thermodynamic equilibrium in plasma that may optionally be metal enriched. Methods diverge in their treatment of interstellar medium processes: cold-hot gas phase interactions, star formation rate prescriptions, return of mass and energy from star forming regions, supermassive black hole (SMBH) formation, and attendant SMBH feedback. A valuable comparison study$^{7}$ revealed agreement at the $\sim 10\%$ level among a dozen codes for the solution of the internal structure of a single halo evolved without cooling and star formation.
In massive halos, the hosts to rich clusters where only a small fraction of baryons condense into galaxies, gas dynamic models have had good success in modeling the behavior of the hot intracluster medium (ICM), particularly its structural regularity. The ICM mass, a quantity that is essentially independent of temperature when derived from low-energy X-ray surface brightness maps, is observed to behave as a power-law of X-ray temperature with only $14\%$ intrinsic scatter$^{\ino}$. Simulations have been instrumental in showing that this tight scaling relation results from a combination of factors: i) approach to virial equilibrium is rapid, so temperature and total halo mass are strongly correlated; ii) the ICM mass fraction varies by $\lesssim 10\%$ at a given mass, meaning variations in local baryon fraction and galaxy formation efficiency at fixed mass are small and iii) the ICM plasma outside the core is not strongly multi-phase, making the spectral temperature a good indicator of the host halo’s gravitational potential.
Although the sensitivity of the hot ICM to galaxy formation and galactic nuclear feedback is under active investigation$^{\ino}$, predictions from direct simulations for observable scaling relations are not likely to converge fast enough to be useful for DES cluster analysis. Instead, such simulations can offer immediate support by constraining the expected forms of the likelihoods for how properties like X-ray temperature, intrinsic galaxy richness or total SZ decrement should scale with mass and epoch. Power-law distributions with log-normal scatter are supported by current simulations, but more study is needed to understand potential deviations from power-law behavior, the origins of the variance, and the covariance of independent observable measures. For a sufficiently rich data set, the parameters describing the observable-mass relations (log-mean slope and intercept, dispersion, and drifts in these with scale factor) can be folded into the analysis as nuisance parameters and solved for directly using differential survey counts and clustering properties. Fisher matrix analyses$^{\ino}$ offer reasons to be optimistic about this self-calibrating approach, but studies based on mock sky survey data, where projection effects can lead to more complex scatter in observable scaling relations, remain to be done.
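A minimal sketch of the kind of observable-mass relation described above — a power-law log-mean with log-normal scatter and a simple redshift evolution term — is given below; the slope, normalisation, dispersion and evolution parameters are uncalibrated placeholders standing in for the nuisance parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def mock_observable(mass, z, ln_norm=0.0, slope=1.6, scatter=0.15,
                    evolution=0.0, m_pivot=1e14):
    """Draw a cluster observable (richness, SZ decrement, T_X, ...) from a
    power-law mean relation in mass with log-normal scatter; all parameter
    values are illustrative, not calibrated."""
    ln_mean = (ln_norm + slope * np.log(np.asarray(mass) / m_pivot)
               + evolution * np.log(1.0 + z))
    return np.exp(ln_mean + scatter * rng.standard_normal(np.shape(mass)))
```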
Because baryon oscillation and detectable weak lensing effects are tied to scales significantly larger than individual galaxies, these dark energy probes should be relatively insensitive to uncertainties in galaxy formation physics. Direct simulations are inefficient at testing this assumption, since large volumes need to be modeled at very high spatial resolution, and many realizations covering a range of control parameters need to be done. Instead, hybrid approaches are favored that combine large N-body models, or equivalent halo model realizations, with ‘semi-analytic’ or empirically-derived HOD galaxy assignment schemes.
Semi-analytic methods use simplifying assumptions, calibrated by direct simulation, to reduce the problem of galaxy formation to a set of coupled ordinary differential equations describing the flow of mass and energy among different baryonic components. Their flexibility and computational efficiency allow large regions of the controlling parameter space to be explored, a significant advantage compared to direct methods. Semi-analytic models reproduce much, but not yet all, of the full statistical richness of the galaxy populations in local (SDSS and 2dF) catalogs. There are questions regarding uniqueness of the prescriptions for the various flows and it is not yet clear how degenerate are the existing control parameters (which number in the several tens at the least). In addition, predicting optical and near infrared signatures entails modeling galaxy stellar populations and dust content, introducing further complexity and uncertainty. An important issue for dark energy is to understand the sensitivity of the galaxy power spectrum to different treatments of galaxy formation, so alternatives to semi-analytic and direct modeling deserve consideration. Methods that use empirically-trained HOD or similar statistical schemes to relate galaxies and dark matter offer a potentially powerful and complementary means to explore systematic uncertainties in galaxy/cluster-based dark energy studies.
Mock surveys as theory testbeds
-------------------------------
Correct interpretation of sky survey data requires detailed understanding of survey selection and signal contamination issues. Mock surveys provide a testbed for assessing the purity and completeness of well-defined samples and for exploring the influence of non-trivial contamination from projection. By offering a highly realistic input signal, to which observational effects like survey masks and instrumental noise can be added, mock surveys can also provide the important service of end-to-end testing of data processing and analysis pipelines.
Dark energy tests from weak lensing and baryon features in the galaxy power spectrum require distance-selected galaxy samples. For DES, distance will be based on photometric redshifts derived from broad-band colors. The accuracy of photometric redshifts is under ongoing empirical investigation, but the distribution of errors is likely to be non-Gaussian in at least some regions of color space. Mock surveys can address the impact of complex photo-z errors on derived measurements, and offer insights into selection procedures that maximize signal to noise.
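A toy photometric-redshift error model of the kind that might be attached to a mock catalogue is sketched below: a Gaussian core of width $\sigma_0(1+z)$ plus a small fraction of catastrophic outliers; the numerical values are illustrative and are not DES requirements.

```python
import numpy as np

rng = np.random.default_rng(2)

def mock_photoz(z_true, sigma0=0.05, outlier_frac=0.02, outlier_sigma=0.5):
    """Add a Gaussian core scatter of sigma0*(1+z) and a small fraction of
    catastrophic outliers to true redshifts; clip at zero."""
    z_true = np.asarray(z_true, dtype=float)
    core = rng.normal(0.0, sigma0 * (1.0 + z_true))
    is_outlier = rng.random(z_true.shape) < outlier_frac
    blunder = rng.normal(0.0, outlier_sigma, size=z_true.shape)
    return np.clip(z_true + np.where(is_outlier, blunder, core), 0.0, None)
```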
For cluster detection, projected contamination is a potentially serious concern. At sub-mm wavelengths, simulated sky maps from direct simulations indicate that planned experiments will be highly complete above a halo mass roughly one-fifth that of Coma. Optical cluster detection is more susceptible to contamination, and studies aimed at calibrating purity and completeness of particular methods remain under development. The relation between galaxy color and local density is a crucial element. Since this relation is not particularly well reproduced by direct or semi-analytic methods, parametric approaches that consider the likelihood of a galaxy of a particular color inhabiting some location (defined by dark matter properties like local mass density or constituent halo mass) may be more powerful.
An exercise that has not yet been done is to baseline the current level of uncertainties by comparing sky survey expectations generated by multiple methods — N-body with semi-analytic galaxy assignment, N-body with empirical galaxy assignment, and direct gas dynamic simulation — within a fixed world model. For clusters, the question of how best to combine optical, sub-mm and X-ray observations to provide optimal constraints on dark energy needs more careful study.
Opportunities from Emerging Technologies
========================================
Progress in computational cosmology accelerated when groups of researchers pooled intellectual resources and formed consortia, such as GC$^3$ and the N-body Shop in the US and the Virgo Consortium in Europe. Consortia offer advantages, in the form of shared computational frameworks and systems for its members, within which large-scale efforts can be realized more efficiently. In the US, the ‘grand challenge’ era of consortia building was driven by investments in supercomputer resources by NSF and NASA in the mid-1990’s. The emergence of relatively low-cost parallel clusters coincided with the transformation of US federal support into a partnership model, in which universities were expected to provide substantial investment in computational infrastructure. Given the many distractions caused by the ‘dot-com’ bubble, most universities were slow to respond. As a result, the landscape today is highly fractured. While serious computing power exists at select locations (in the US, primarily national laboratories and a few, highly subscribed supercomputer centers), most research groups are hard pressed to put together and maintain a compute cluster with a few tens of processors.
In recent years, strong support for bandwidth-oriented development has created a new atmosphere of opportunity for computational research. Within the nascent world of Grid computing lie a number of rational elements to improve research productivity, such as intelligent networked resource brokers and schedulers, dedicated “quality of service” bandwidth, and single sign-on authorization. The computational cosmology community should take a more active role in establishing the ‘middle-ware’ elements needed to enhance productivity within collaborations and the sharing of information within and among them.
The building of Theory Virtual Observatory archives, with libraries of open source codes, raw resources (particle/cell-level files), and processed resources (halo catalogs and histories, sky survey catalogs, sky maps) needs to get underway in earnest. Dark energy science demands theory support at the highest level and of the highest quality. The time has come to leverage the networked resources of the information age and build a cyberinfrastructure framework that will bring theory and observation closer together, enhancing dark energy and astrophysical science studies in the process.
[**References**]{}
$^{\ino}$ See http://www.darkenergysurvey.org
$^{\ino}$ Seljak, U., Sugiyama, N., White, M., $\&$ Zaldarriaga, M. 2003, PhRvD, 3507
$^{\ino}$ Annis, J. 2005, Theory White Paper to the Dark Energy Task Force
$^{\ino}$ Springel, V. 2005, Nature, 435, 629
$^{\ino}$ Sirko, E. 2005, ApJ, submitted (astro-ph/0503106)
$^{\ino}$ Hamilton, A.J.S., Kumar, P., Lu, E., $\&$ Matthews, A. 1991, ApJ, 374, L1; Peacock, J.A., $\&$ Dodds, S.J. 1996, MNRAS, 280, L19; Smith, R.E. 2003, MNRAS, 341, 1311
$^{\ino}$ Frenk, C.S. 1999, ApJ, 525, 554
$^{\ino}$ Efstathiou, G., Frenk, C.S., White, S.D.M., $\&$ Davis, M. 1988, MNRAS, 235, 715
$^{\ino}$ Mohr, J.J., Mathiesen, B., $\&$ Evrard, A.E. 1999, ApJ, 517, 627
$^{\ino}$ Kravtsov, A.V., Nagai, D., $\&$ Vikhlinin, A.A. 2005, ApJ, 625, 588; Sijacki, D., $\&$ Springel, V. 2005, astro-ph/0509506; Borgani, S. 2004, MNRAS, 348, 1078
$^{\ino}$ Lima, M., $\&$ Hu, W. 2005, PhRvD, 72, 3006; Lima, M., $\&$ Hu, W. 2004, PhRvD, 70, 3504
|
---
abstract: 'Convergent migration involving multiple planets embedded in a viscous protoplanetary disc is expected to produce a chain of planets in mean motion resonances, but the multiplanet systems observed by the Kepler spacecraft are generally not in resonance. We demonstrate that, under conditions where convergent migration in a viscous disc forms a long-term stable system of planets in a chain of mean motion resonances, the equivalent migration in an inviscid disc often produces a system which is highly dynamically unstable. In particular, if planets are massive enough to significantly perturb the disc surface density and drive vortex formation, the smooth capture of planets into mean motion resonances is disrupted. As planets pile up in close orbits, not protected by resonances, close encounters increase the probability of planet-planet collisions, even while the gas disc is still present. While inviscid discs often produce unstable non-resonant systems, stable, closely packed, non-resonant systems can also be formed. Thus, when examining the expectation for planet migration to produce planetary systems in mean motion resonances, the effective turbulent viscosity of the protoplanetary disc is a key parameter.'
author:
- |
Colin P. McNally,$^{1}$[^1] Richard P. Nelson$^{1}$ and Sijme-Jan Paardekooper $^{1,2}$\
$^{1}$Astronomy Unit, School of Physics and Astronomy, Queen Mary University of London, London E1 4NS, UK\
$^{2}$DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK\
bibliography:
- 'resplanets\_paper.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: Multiplanet systems in inviscid discs can avoid forming resonant chains
---
\[firstpage\]
planets and satellites: dynamical evolution and stability — planet-disc interactions — protoplanetary discs
Introduction
============
Convergent migration in multiplanet systems, driven by disc-planet interactions in protoplanetary discs, has been shown to result in the capture of the planets into mean motion resonances . tested the behaviour of initially tightly-packed systems in viscous discs, and found that after a period of initial adjustment almost all systems formed chains of MMRs. However, the multiplanet systems discovered by the Kepler mission have only a weak preference for period ratios near first order mean motion resonances [@2011ApJS..197....8L]. Multiplanet systems in the Kepler sample tend to have planets with similar masses, with relatively even orbital spacing [@2017ApJ...849L..33M; @2018AJ....155...48W], but are largely non-resonant. Furthermore, the sample contains planets which appear to mainly cluster around the local thermal mass scale in a fiducial protoplanetary disc model [@2018arXiv180604693W], where the thermal mass corresponds to the planet Hill sphere radius being approximately equal to the disc pressure scale height.
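For reference, the thermal mass defined this way follows from equating the Hill radius with the pressure scale height; a minimal sketch (note that some authors omit the factor of 3 in their definition):

```python
def thermal_mass_ratio(h):
    """Planet-to-star mass ratio at which the Hill radius equals the local
    pressure scale height H = h*a:  a*(q/3)**(1/3) = h*a  =>  q = 3*h**3."""
    return 3.0 * h ** 3

# For an aspect ratio h = 0.05 this gives q ~ 3.8e-4, i.e. roughly 125 Earth
# masses around a solar-mass star.
```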
Various mechanisms have been proposed to either allow planets to escape a resonant configuration during the presence of the gas disc, or to disrupt the resonant configuration during the later, nearly dissipationless, n-body phase of dynamics. Overstable librations about resonant configurations can cause planets to escape resonance while the gas disc is present [@2014AJ....147...32G], although the requirements on the form of the eccentricity damping for this to occur may be difficult to meet. Disc-driven resonant repulsion can push a resonant pair of planets away from resonance by a combination of orbital circularization and the interaction between the wakes of the planets [@2013ApJ...778....7B]. Orbital perturbations due to turbulent overdensities in the disc have been argued to prevent protoplanets capturing into resonance [e.g. @2008ApJ...683.1117A]. However, the planet forming regions of protoplanetary discs are thought to be largely dead to the magnetorotational instability (MRI), and lacking an instability capable of driving turbulent motion at such high levels. If protoplanetary discs are characterised in these regions by being largely MRI-dead, possessing nearly-laminar flow with wind-driven accretion in their surface layers, they can possess vanishingly low viscosity while still providing a conduit for mass accreting onto the star [@2013ApJ...769...76B].
A second set of concepts proposes that a resonant chain of planets may be disrupted after dissipation of the gas disc. [@2007ApJ...654.1110T] showed that tidal interaction with the central star could extract short period systems out of resonance. [@2015ApJ...803...33C] proposed the interaction of the planets in resonant orbits with a planetesimal disc left over from planet formation may break planets out of resonance. More ambitiously, the possibility that late dynamical instability of resonant chains formed through convergent migration in a viscous disc is responsible for sculpting the entire period ratio distribution of exoplanet systems was raised by . @2017MNRAS.470.1750I and @2019arXiv190208772I demonstrate that systems with large numbers of protoplanetary cores in an n-body computation with a prescription for disc-planet interactions may result in a planetary system configuration which becomes unstable after the dissipation of the gas disc. This behaviour appears to be due to the increasing tendency for chains with a high mass to undergo dynamical instability after the dissipation of the gas disc, as identified by @2012Icar..221..624M.
In this letter, motivated by our recent work showing that disc-planet interactions involving intermediate mass planets embedded in inviscid protoplanetary discs leads to stochastic, non-deterministic migration behaviour due to the emergence of vortices in the flow [@2019MNRAS.484..728M], we question the basic premise that convergent migration in protoplanetary gas discs should result in chains of planets in mean-motion resonance, and the consequent tendency to form systems of resonant planets which are stable over Gyr time scales. We construct a scenario where a like-for-like comparison between the convergent migration of a multiplanet system in a viscous and an inviscid disc can be made, and demonstrate that, in contrast to the situation in viscous discs, the ability to form resonant chains of planets is impeded by vortex-modified feedback migration in inviscid discs.
Methods and Results
===================
![Planet migration in a viscous disc, compare to Figure \[fig:convsys29a\]. All five planets form a resonant chain, and in long term evolution this configuration tends to be stable.[]{data-label="fig:convsys30a"}](plots/convsys30_a.pdf){width="\columnwidth"}
![Planet nearest neighbour period ratios in a viscous disc, displaying the formation of a resonant chain.[]{data-label="fig:convsys30pratio"}](plots/convsys30_pratio.pdf){width="\columnwidth"}
![Planets in an inviscid disc, compare to Figure \[fig:convsys30a\]. The final configuration consists of three planets that have migrated into the viscous region of the disc in a chain of resonances, and a coorbital pair of planets exterior to them, in the inviscid region and out of resonance. Sampling the long term behaviour of this system shows it tends to undergo dynamical instability.[]{data-label="fig:convsys29a"}](plots/convsys29_a.pdf){width="\columnwidth"}
Gas disc-planet interaction simulations were performed in two-dimensional vertically integrated models of gas discs with a modified version of [FARGO3D]{} 1.2 [@2016ApJS..223...11B], including an implementation of the energy equation in terms of specific entropy (see Appendix \[sec:entropy\]). Indirect terms for the planets and gas disc were included, the planet potential was smoothed with a Plummer-sphere potential with length 0.4 scale heights, and the disc gravity force felt by the planets was modified by removing the azimuthally symmetric component of the potential to compensate for the neglect of disc self-gravity, following @2008ApJ...678..483B. In [FARGO3D]{} the planet orbits are integrated with the built-in 5th order scheme, with an additional planet-planet acceleration time-step limit to increase the accuracy of energy conservation during close encounters. The detailed outcomes of these close encounters and three-body interactions are chaotic and sensitive to small perturbations in the initial conditions and the numerical method of integration. We do not include planet-planet collisions. The grid spacing was radially logarithmic, extending radially from $r=0.6$ to $4$, with resolution corresponding to $\sim 24$ cells per scale height in all directions. Damping boundary zones were applied as in @2019MNRAS.484..728M. The azimuthal velocity field was initialised to produce an exact numerical radial force balance, following the method implemented in [FARGO]{}. The runs presented in this work required a total of 600 kCPUh.
Disc thermodynamics were modelled in the simplest useful form for considering the near-adiabatic thermodynamics of the inner regions of protoplanetary discs in two dimensions. Thus, we apply a thermal relaxation term in the form used by @2012ApJ...750...34H and @2016ApJ...817..102L, with a timescale derived from an effective optical depth estimate from @1990ApJ...351..632H for an irradiated disc as described by @2012ApJ...757...50D. We adopt the simplified Rosseland mean opacity model of @1997ApJ...486..372B and approximate the Planck mean opacity as being equal to it. In viscous (turbulent) disc regions, we include a subgrid turbulent diffusivity for entropy, assuming a turbulent Prandtl number of unity, i.e. equal diffusion of momentum and specific entropy. One consequence of including the energy equation in this manner is that vortices can form both through the Rossby Wave Instability (RWI) and through baroclinic forcing. This is in contrast to our recent work, where a barotropic equation of state was adopted [@2019MNRAS.484..728M]. In terms of realism, the thermodynamic treatment adopted in this paper is an improvement on our previous approach.
To facilitate a comparison on as equal terms as possible between the behaviours of viscous and inviscid discs, we have constructed a model for a disc with a planet migration trap formed at the inner radial edge of the dead zone where the disc transitions from being MRI inactive to becoming turbulent. This radial location is a density maximum, with a rapidly decreasing surface density to the inside where the disc has a large viscosity. Exterior to this edge the disc is inviscid or has a lower viscosity. Thus we must arrange for a disc which has a stationary configuration under the action of viscosity even though it has 1) a non power-law radial surface density profile, and 2) it may have a smooth transition from viscous to inviscid. These two properties can be arranged by modifying the viscosity operator to diffuse the disc towards the initial surface density profile. This is implemented by subtracting a term equal to the initial specific viscous torque from the azimuthal momentum equation, supplementing equation (129) of @2016ApJS..223...11B with $$\begin{aligned}
\partial_t v_{\phi} = \ldots - \left[ - \frac{1}{\rho}\left\{ \frac{1}{r^2}\partial_r (r^2 \tau_{\phi r}) \right\} \right]_{t=0}\ .\end{aligned}$$ In the case of an inviscid outer disc region, $\tau_{\phi r}=0$ and the additional viscous force is zero. With this added term, the initial condition density profile is an equilibrium state for the disc.
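As an illustration only (this is not FARGO3D source code, and `viscous_torque_accel`, `sigma_init` and `v_phi_init` are hypothetical names), the modification amounts to caching the specific viscous acceleration evaluated on the initial condition and subtracting it in every azimuthal momentum update, so that the unperturbed initial profile does not evolve:

```python
import numpy as np

# hypothetical 1D radial grid and initial disc state (code units)
r = np.linspace(0.6, 4.0, 400)
sigma_init = np.where(r < 0.8, (r / 0.8)**2.5, r**-1.5)   # stand-in surface density
v_phi_init = r**-0.5                                      # Keplerian rotation
nu = 2e-5 * np.ones_like(r)                               # stand-in viscosity

def viscous_torque_accel(sigma, v_phi):
    """Stand-in for the specific azimuthal viscous acceleration
    (1/Sigma) (1/r^2) d_r(r^2 tau_{phi r}) of an axisymmetric flow."""
    omega = v_phi / r
    tau = nu * sigma * r * np.gradient(omega, r)          # r-phi viscous stress
    return np.gradient(r**2 * tau, r) / (sigma * r**2)

# cached once, from the t = 0 state
accel_t0 = viscous_torque_accel(sigma_init, v_phi_init)

def v_phi_viscous_update(v_phi, sigma, dt):
    """Viscous source step with the frozen t = 0 term removed, so the initial
    density profile is an exact equilibrium of the modified equations."""
    return v_phi + dt * (viscous_torque_accel(sigma, v_phi) - accel_t0)
```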
The disc surface density profile is $\Sigma = \Sigma_0 (r/0.8)^{5/2}$ inside of $r=0.64$ and $\Sigma = \Sigma_0 r^{-3/2}$ outside of $r=0.84$, and is given by the unique smooth cubic interpolating polynomial in the interval $r=[0.64,0.84]$. The disc has an initial radial temperature distribution $T\propto r^{-3/7}$ and aspect ratio $h= 0.035 r^{2/7}$ corresponding to a passively irradiated disc. When used, the viscosity is scaled radially to produce no net accretion flow in the two power-law regimes of the disc, so $\nu=\nu_0 r^{\alpha-1/2}$ with $\alpha=-5/2$ for $r<0.8$ and $\alpha=3/2$ for $r>0.8$. In the viscous case, the viscosity scaling coefficient is everywhere $\nu_0=2\times10^{-5}$. At $r=1$, this viscosity corresponds to a Shakura-Sunyaev turbulent viscosity of $\alpha_{\rm SS}=8\times10^{-3}$. In the case representing a laminar dead zone the viscosity coefficient $\nu_0$ is tapered linearly to zero in the interval $r=[0.85,0.95]$ and is zero at larger radii. The planetary system is characterised by a spread of a factor of $5/2$ in planet mass ($q=\left[ 1, 1.26, 1.58, 1.99, 2.5 \right] \times 10^{-5}$) and a factor of $5/2$ in initial orbital semimajor axis ($a=1, 1.26, 1.58, 1.99, 2.5$). The planet masses thus range from approximately 3 to 8 Earth masses, making them similar to the Kepler multiplanet sample. As these planets migrate inwards, they are between the local feedback mass and thermal mass, where at the feedback mass spiral density waves excited by a planet undergo quasi-local shock dissipation resulting in partial gap formation [@2002ApJ...572..566R].
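The initial radial profiles described above can be written down compactly; the following sketch (our own construction in code units, with `SIGMA0` and the function names as assumed placeholders) builds the surface density with its C$^1$ cubic bridge, the aspect ratio, and the zero-net-flow viscosity law with the optional dead-zone taper:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

SIGMA0 = 1.0                 # surface density normalisation (assumed code units)
R_IN, R_OUT = 0.64, 0.84     # ends of the cubic bridging interval

def sigma_inner(r):
    return SIGMA0 * (r / 0.8)**2.5

def sigma_outer(r):
    return SIGMA0 * r**-1.5

# unique smooth cubic: match value and first derivative of both power laws
_bridge = CubicHermiteSpline(
    [R_IN, R_OUT],
    [sigma_inner(R_IN), sigma_outer(R_OUT)],
    [2.5 * sigma_inner(R_IN) / R_IN, -1.5 * sigma_outer(R_OUT) / R_OUT])

def sigma(r):
    r = np.asarray(r, dtype=float)
    return np.where(r < R_IN, sigma_inner(r),
                    np.where(r > R_OUT, sigma_outer(r), _bridge(r)))

def aspect_ratio(r):
    """h = H/r for the passively irradiated disc, T ~ r^(-3/7)."""
    return 0.035 * r**(2.0 / 7.0)

def viscosity(r, nu0=2e-5, dead_zone=False):
    """nu = nu0 r^(alpha - 1/2) with alpha = -5/2 (r < 0.8) or 3/2 (r > 0.8);
    optionally tapered linearly to zero across r = [0.85, 0.95]."""
    alpha = np.where(r < 0.8, -2.5, 1.5)
    nu = nu0 * r**(alpha - 0.5)
    if dead_zone:
        nu = nu * np.clip((0.95 - r) / 0.10, 0.0, 1.0)
    return nu
```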
In the globally viscous disc case (left panel of Figure \[fig:logsigma comparison\]), the planets settle into a chain of first order mean motion resonances, and the planets stay well separated in their orbits at all times, as shown in Figure \[fig:convsys30a\] and Figure \[fig:convsys30pratio\]. However, in the inviscid disc (right panel of Figure \[fig:logsigma comparison\]), where the planets drive significant structure in the disc, including vortices, some planets are able to escape mean motion resonances and undergo a series of chaotic close encounters, as shown in Figure \[fig:convsys29a\]. Data for a second viscous disc system and three more inviscid disc systems with perturbed initial planet positions, showing that this behaviour is a generic outcome, are presented in the Supplementary Material (online only).
Once these compact systems have been produced, either in a chain of resonances or not, the further evolution of the system in the pure n-body post-disc phase can be examined. To model this, we restart each run at 11 times, evenly spaced from $45\times10^5$ to $60\times10^5$ orbits, and add an operator to the surface density equation $\partial_t \Sigma = -1/t_e \Sigma$ with the disc evaporation timescale $t_e=800$ orbits. This timescale is chosen to be as short as reasonable while maintaining slow, adiabatic changes to the n-body system. After 5 e-folding times of this disc evaporation operator, the planet and star positions and velocities were saved for use in the n-body phase. This pure n-body evolution was computed with [REBOUND]{} using the WHFast integrator [@2015MNRAS.452..376R] with a time step of $1/120$ of an orbital time at $a=1$, continued until a maximum time of $10^8$ orbits was reached, or until any individual planet attained an eccentricity above $0.95$. Thus close encounters are not directly detected or given special treatment, as no attempt is made to integrate past their occurrence in this post-gas-disc phase. This eccentricity stopping criterion indicates that the system likely undergoes close encounters or dynamical instability. In total, the viscous disc systems survive to $10^8$ orbits in a chain of mean-motion resonances, with 5 exceptions out of 22 cases. However, of the 44 cases of the inviscid discs, only 11 survive for $10^8$ orbits, with 7 of those occurring for n-body extractions from a single gas disc evolution case. It is important to stress that while these results cannot be interpreted as a general statistical statement for the frequency of stable systems, they demonstrate both that the systems resulting from migration in an inviscid disc are in the long term much more likely to undergo dynamical instability as a result of the planets not being entirely locked in a chain of first-order mean-motion resonances, and that the inviscid disc cases can produce systems with planets out of resonance that are nevertheless stable for long periods. The semimajor axis and eccentricity evolutions of the 55 realisations of full system histories are presented in the Supplementary Material (online only), along with the histories of the nearest first-order mean-motion resonant angles.
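The post-disc n-body phase can be reproduced with the public [REBOUND]{} Python interface. The sketch below is a minimal, schematic version of that phase: the planet states shown are placeholders (in the actual calculation they come from the evaporated-disc snapshots), and the chunked eccentricity check is our own way of implementing the stopping criterion.

```python
import numpy as np
import rebound

TWO_PI = 2.0 * np.pi            # one orbit at a = 1 for G = M_star = 1
T_MAX = 1e8 * TWO_PI            # 10^8 inner orbits
E_STOP = 0.95                   # eccentricity threshold flagging instability

sim = rebound.Simulation()
sim.integrator = "whfast"
sim.dt = TWO_PI / 120.0         # 1/120 of an orbital time at a = 1

sim.add(m=1.0)                  # central star
# placeholder planets; the real runs load positions/velocities saved after disc evaporation
for q, a in zip([1.0, 1.26, 1.58, 1.99, 2.5], [1.0, 1.26, 1.58, 1.99, 2.5]):
    sim.add(m=q * 1e-5, a=a, e=0.01, f=np.random.uniform(0.0, TWO_PI))
sim.move_to_com()

t, chunk = 0.0, 1e3 * TWO_PI    # check eccentricities every 10^3 orbits
while t < T_MAX:
    t = min(t + chunk, T_MAX)
    sim.integrate(t)
    ecc = [sim.particles[i].e for i in range(1, sim.N)]
    if max(ecc) > E_STOP:       # proxy for a close encounter / dynamical instability
        print(f"instability flagged at {t / TWO_PI:.3e} orbits")
        break
```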
Discussion
==========
The stable systems resulting from viscous disc migration were always found to be in mean motion resonance, but those resulting from migration in an inviscid disc may not form complete resonant chains, despite being closely packed. This phenomenon may provide a route to forming systems like Kepler-11, which contain a chain of closely-packed planets close to, but not in, mean motion resonances [@2014ApJ...795...32M]. In such a scenario, the formation of the planets at precisely their late-time orbital locations and the suppression of any planet-disc interactions leading to planet migration [in-situ formation, @2012ApJ...751..158H] are not required. Furthermore, the inviscid disc scenario provides an attractive alternative to the idea that Kepler-11 may have previously been in a resonant chain that subsequently broke up due to dynamical instability, as envisioned in the recent calculations of @2019arXiv190208772I. Systems displaying the close-packed nature of Kepler-11, with its proximity to dynamically unstable configurations, do not arise easily from scenarios that involve strong scattering and dynamical relaxation of multiplanet systems [@2014ApJ...795...32M], and hence a gentler formation scenario involving migration in an inviscid disc may explain these types of systems.
While our simulations include a fairly complete physical model, numerous improvements will be required in future work to confirm the results we have presented here. The thermal relaxation model adopted has been sufficient for this study; however, the heating of the disc due to the tidal interaction of feedback-mass planets can be significant over the long timescales considered here. More accurate predictions of the outcome of low-viscosity planet-disc interactions will require a radiative transfer prescription valid far from the disc irradiation-cooling equilibrium.
Our models have constrained the planet orbits to zero inclination. Planet-planet and planet-disc interactions will have an effect on the evolution of this parameter. Although allowing non-zero inclination would not be expected to change the qualitative result found here about the difference between system evolution in viscous and inviscid discs, the possible quantitative differences in the systems produced from viscous and inviscid discs, and the comparison to observations, are an important area for follow-up work.
In inviscid discs, many close encounters between pairs of planets occur while the gas is still present, in strong contrast to the viscous disc models, where efficient capture into resonance prevents close planet-planet encounters. We note that these close encounters often lead to separations at which, under reasonable planet mass-radius models, a collision would be inevitable if this process were included in our model. However, as the planet trajectories were constrained to zero inclination, the collision probability during a close encounter is significantly increased compared to true three-dimensional interactions. Further work should include a three-dimensional treatment of the planet trajectories, and a model for planet-planet collisions, as these may become much more important for the evolution of planetary systems in inviscid discs. Dynamical instability in the post-gas-disc phase should drive the loss of planet atmospheres through heating [@2019MNRAS.tmp..739B], but collisions while the gas disc is still present may allow the planet to re-accrete a gas atmosphere.
Conclusions
===========
Convergent migration has different typical outcomes in viscous and inviscid discs. In viscous disc models, planets are able to migrate into resonant chains, with a preference for first-order mean motion resonances. In inviscid discs, the ability of planets, particularly those above the feedback mass, to spur vortices and modify the disc surface density profile allows planets both to escape resonant configurations and to migrate into sustained non-resonant configurations. These non-resonant configurations have significantly different long-term stability properties from chains of planets in mean motion resonances, being much more likely to be unstable. At the same time, the ability of planets to escape mean-motion resonances while undergoing convergent migration in an inviscid disc allows the system to find tightly packed configurations which appear to possess long term stability without being entirely in mean motion resonances.
Acknowledgements {#acknowledgements .unnumbered}
================
This research was supported by STFC Consolidated grants awarded to the QMUL Astronomy Unit 2015-2018 ST/M001202/1 and 2017-2020 ST/P000592/1. This research utilised Queen Mary’s Apocrita HPC facility, supported by QMUL Research-IT [@apocrita]; and the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). We acknowledge PRACE for awarding access to TGCC Irene at CEA, France. This equipment was funded by a BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grant ST/K00087X/1, DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure. SJP is supported by a Royal Society University Research Fellowship. Simulations in this paper made use of the REBOUND code which can be downloaded freely at [http://github.com/hannorein/rebound]{}.
Specific entropy formulation {#sec:entropy}
============================
FARGO3D by default evolves the energy equation using an internal energy variable (internal energy per unit volume). As in other ZEUS-family codes [@2011IAUS..270....7N], the variable used to evolve the energy can be changed. For example, in ZEUS, the use of a total energy formulation has been shown to have important advantages [@2010ApJS..187..119C]. The specific algorithmic choices made in updating the energy variable are also not canonical, and variations to the use of “consistent advection” [@1980ApJ...239..968N] may be advantageous for the energy variable [@2010ApJS..187..119C].
In this work, we have opted to evolve the specific entropy because, like the internal energy, it requires only the use of cell-centred quantities, and because its evolution equation does not have a compression-work term. Specific entropy $s$ is the entropy per unit mass. With the ideal gas equation of state written as $$\begin{aligned}
P &= (\gamma-1)\Sigma c_v T\ ,\end{aligned}$$ where $P$ is the gas pressure, $\gamma$ the adiabatic index, $\Sigma$ the gas surface density, and $T$ the temperature, the heat capacity at constant volume is defined as: $$\begin{aligned}
c_v \equiv \frac{k_B}{\mu m_H (\gamma-1)}\ ,
\end{aligned}$$ with $k_B$ the Boltzmann constant, and $\mu m_H $ the mean mass per gas particle. The specific entropy itself is thus defined as: $$\begin{aligned}
s\equiv c_v \log\left(\frac{T}{T_0}\left(\frac{\Sigma}{\Sigma_0}\right)^{-(\gamma-1)}\right)\ ,\end{aligned}$$ where $\Sigma_0$ and $T_0$ are a constant reference density and temperature, and play no role in determining the gas physics.
The evolution equation for specific entropy is $$\begin{aligned}
\frac{\partial s}{\partial t} &= - ({\bm{v}}\cdot \nabla) s + \mathcal{L} \label{eq:sevol}\ ,\\
\mathcal{L}&\equiv \frac{1}{T}\left( \Gamma_\nu +\Gamma_{\rm sh}\right) + \frac{1}{T}\left[ -c_v \frac{T-T_{\rm ref}(r)}{t_{\rm cool}}\right]\end{aligned}$$ where $\mathcal{L}$ is the heating/cooling term, containing the entropy production in viscous dissipation (both shock-capturing artificial viscosity and kinematic viscosity) and a thermal relaxation term [@2016ApJ...817..102L]. The viscous heating $\Gamma_\nu$ and shock-capturing artificial viscosity heating $\Gamma_{\rm sh}$ remain the same as in FARGO3D’s implementation of the internal energy equation. The motivation for the choice of this form of energy equation is that it lacks a PdV term that the internal energy density evolution equation has. ZEUS-family schemes such as FARGO3D utilise a simple form of operator splitting to solve the fluid evolution equations, where the fields are updated by a full time step in the source terms, and then the fields are updated by the conservation law component of the evolution equation. Thus, to integrate equation (\[eq:sevol\]) only the source term $\mathcal{L}$ is integrated during the source step. The remainder of equation (\[eq:sevol\]) is applied during the transport step.
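As a brief check, using only the ideal-gas relations above and writing $Q$ for the net heating rate per unit mass (viscous, shock and relaxation contributions together) and $D/Dt \equiv \partial_t + {\bm{v}}\cdot\nabla$, one can verify that no compression-work term appears in equation (\[eq:sevol\]): $$\begin{aligned}
\frac{Ds}{Dt} &= c_v\left(\frac{1}{T}\frac{DT}{Dt}-\frac{\gamma-1}{\Sigma}\frac{D\Sigma}{Dt}\right),\qquad
c_v\frac{DT}{Dt} = -\frac{P}{\Sigma}\nabla\cdot{\bm{v}} + Q,\qquad
\frac{D\Sigma}{Dt} = -\Sigma\,\nabla\cdot{\bm{v}},\end{aligned}$$ so that $$\begin{aligned}
\frac{Ds}{Dt} &= \frac{1}{T}\left(-\frac{P}{\Sigma}\nabla\cdot{\bm{v}}+Q\right)+c_v(\gamma-1)\nabla\cdot{\bm{v}} = \frac{Q}{T}\ ,\end{aligned}$$ where the compression terms cancel because $P=(\gamma-1)\Sigma c_v T$. This is precisely the source term $\mathcal{L}$ given above.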
The transport operator component of equation (\[eq:sevol\]) is $$\begin{aligned}
\frac{\partial s}{\partial t} &= - ({\bm{v}}\cdot \nabla) s\ .\end{aligned}$$ To solve this equation in FARGO3D we multiply $s$ by density $\rho$ before the transport step, and so solve for the transport of the volumetric quantity $S=\rho s$. The quantity $S$ obeys a conservation law as $$\begin{aligned}
\frac{\partial S}{\partial t} + \nabla \cdot (S{\bm{v}}) = 0 \ .\end{aligned}$$ After the update of $S$ to the end of the time step, the result is transformed back to the specific entropy by utilising the updated density field. The series of steps used to apply the transport operator from time $t_i$ to $t_{i+1}$ to the density and entropy fields in the transport step is thus $$\begin{aligned}
S_{i} &= s_{i} \rho_{i}\\
S_{i+1}& = \mathrm{Transport}(S_i)\\
\rho_{i+1} &= \mathrm{Transport}(\rho_i)\\
s_{i+1} &= S_{i+1}/\rho_{i+1}\end{aligned}$$ Lacking a strong reason to adopt “consistent advection” on top of the procedure of transforming to $S$ with the initial density, and back to $s$ with updated density, we follow the advice given in @2010ApJS..187..119C and disable it for the transport of $S$. As a validation of the implementation, a version of the Sod shock tube test is shown in the Supplementary Material.
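Schematically, in Python-like pseudocode (with `transport` a hypothetical callable standing in for FARGO3D's conservative upwind transport sweep), the sequence above reads:

```python
def entropy_transport_step(s, rho, v, dt, transport):
    """Operator-split transport of specific entropy via S = rho * s.
    `transport` is assumed to conservatively advect a cell-centred,
    density-like field by the velocity field v over the time step dt."""
    S = rho * s                       # S_i = s_i * rho_i
    S_new = transport(S, v, dt)       # conservative update of S
    rho_new = transport(rho, v, dt)   # conservative update of rho
    s_new = S_new / rho_new           # s_{i+1} = S_{i+1} / rho_{i+1}
    return s_new, rho_new
```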
\[lastpage\]
[^1]: E-mail: c.mcnally@qmul.ac.uk (CPM)
|
---
abstract: 'We study theoretically the influence of the surface plasmon excitation on the Goos-Hänchen lateral shift of a $p$-polarized Gaussian beam incident obliquely on a dielectric-metal bilayer in the Otto configuration. We find that the lateral shift depends sensitively on the thickness of the metal layer and the width of the incident beam, as well as on the incident angle. Near the incident angle at which surface plasmons are excited, the lateral shift changes from large negative values to large positive values as the thickness of the metal layer increases through a critical value. For wide incident beams, the maximal forward and backward lateral shifts can be as large as several hundred times of the wavelength. As the width of the incident Gaussian beam decreases, the magnitude of the lateral shift decreases rapidly, but the ratio of the width of the reflected beam to that of the incident beam, which measures the degree of the deformation of the reflected beam profile, increases. In all cases considered, we find that the reflected beam is split into two parts. We also find that the lateral shift of the transmitted beam is always positive and very weak.'
address:
- '$^1$ Department of Energy Systems Research and Department of Physics, Ajou University, Suwon 16499, Korea'
- '$^2$ School of Physics, Korea Institute for Advanced Study, Seoul 02455, Korea'
author:
- 'Sangbum Kim$^{1}$ and Kihong Kim$^{1,2}$'
title: 'Direct calculation of the strong Goos-Hänchen effect of a Gaussian light beam due to the excitation of surface plasmon polaritons in the Otto configuration'
---
Introduction
============
A light beam deviates from the path expected from geometrical optics when it is totally reflected at the interface between two different media. The reflected beam is displaced laterally along the interface, which is called the Goos-Hänchen (GH) shift. This phenomenon was predicted a long time ago and was first measured experimentally by Goos and Hänchen [@Goos1; @Goos2; @Goos3]. Artmann derived an analytical formula for the GH shift for incident plane waves [@Artmann]. The GH effect occurs in many diverse areas such as acoustics, optics, plasma physics and condensed matter physics [@Lotsch; @Puri].
Earlier works have treated the GH shift in multilayered structures in the Otto or Kretschmann configuration [@Shah1; @Shah2; @Shah3]. Tamir and Bertoni performed a detailed analysis of the electromagnetic field distribution in a leaky-wave structure upon which a Gaussian beam is incident [@Tamir]. It has been demonstrated that the reflected beam displays either a forward or a backward beam shift. An approximate analytical solution has shown that the initial Gaussian beam profile splits into two. The theory of leaky waves has also been applied to acoustic beams incident on a liquid-solid interface, with the aim of presenting a unified theory of the beam shifting effect near the Rayleigh angle [@Bertoni]. The GH effect of a light beam incident on a dielectric slab from the air has been studied with an emphasis on the transmitted beam [@Li]. Lakhtakia has pointed out that the GH shift reverses its direction when $\epsilon < 0$ and $\mu < 0$ in the optically rarer medium [@Lakhtakia]. The enhancement of the GH shift and the control of the reflected field profile has been achieved by adding a defect or cladding layer to photonic crystals [@He; @Wang]. Recently, De Leo [*et al.*]{} have performed an extended study investigating the asymmetric GH effect and derived an expression for the GH shift valid in the region where the Artmann formula diverges [@Leo1; @Leo2; @Leo3; @Leo4].
Light waves confined to the surface of a medium and surface charges oscillating resonantly with the light waves constitute the surface plasmon polaritons (SPPs). The enhancement of electromagnetic fields near the surface caused by the excitation of SPPs has generated practical applications in sensor technology [@Homola; @Kneipp; @Kurihara]. These applications include thin film probing [@Pockrand], biosensing [@Liedberg] and biological imaging [@Okamoto1]. In the Otto or Kretschmann configuration, the SPPs are excited by attenuated total internal reflection by enhancing the momentum of the incident light [@Yeatman; @Torma].
The excitation of SPPs in the Otto or Kretschmann configuration affects the GH shift profoundly. Early results on the influence of SPPs on the shift of light beams can be found in [@Mazur] and [@Kou]. It has been shown that the interaction of leaky waves with SPPs enhances the GH shift. Results on the excitation of surface waves in the Otto configuration have been reported by Chen [*et al.*]{} [@Chen]. Chuang has conducted an analysis of the behavior of the reflection coefficient for both Otto and Kretschmann configurations [@Chuang]. The zeros and poles of the reflection coefficient move around the complex plane with the change of parameters, such as the beam width, the wavelength, the thickness and the dielectric constants. Zeller [*et al.*]{} have shown that the coupling of an incident wave with the SPP is highly dependent on the thickness of the dielectric sublayer in both the Kretschmann and Otto configuration [@Zeller1; @Zeller2; @Zeller3]. Shadrivov [*et al.*]{} have studied the GH shift in the Otto configuration with the metal sublayer substituted by a left-handed metamaterial [@Shadrivov]. A large GH shift with beam splitting was observed, and the energy transfer between the right- and left-handed materials was demonstrated by numerical simulations. There also exist studies to enhance the GH shift using various hybrid structures containing sublayers of graphene, MoS$_2$ or cytop [@Xiang1; @Xiang2; @Xiang3]. Recently, much progress has been made on obtaining a tunable GH shift in the prism-coupling system, by applying an external voltage to a graphene sublayer and other heterostructures [@Farmani1; @Farmani2; @Farmani3; @Xiang4]. Kim [*et al.*]{} have studied the GH shift of incident $p$ waves in the Otto configuration containing a nonlinear dielectric layer and shown that its magnitude can be as large as several hundred times of the wavelength at the incident angles where the SPPs are excited [@Kim3]. Furthermore, they have shown that the sign and the size of the GH shift can change very sensitively as the nonlinearity parameter varies.
In this paper, we study the strong enhancement of the GH effect for incident Gaussian beams when SPPs are excited at the metal-dielectric interface in the Otto configuration. We examine the influence of varying the thickness of the metal layer and the incident beam width on the GH effect and find out optimal configurations for maximal forward and backward lateral shifts.
Our theoretical method is based on the invariant imbedding method, using which we transform the wave equation into a set of invariant imbedding equations to create an equivalent initial value problem [@Kim5; @Kim1; @Kim2; @Kim6]. For the simplest case of multilayered structures made of uniform linear media, this method is equivalent to those based on the Fresnel coefficients. The invariant imbedding method has been employed to calculate the GH shift for plane waves incident on nonlinear media [@Kim3; @Kim4]. It can also be applied to the case of graded media. Here we consider the interaction of a Gaussian beam with linear media. More details of our model and method will be presented in the next section.
Generalization of the invariant imbedding method to Gaussian beams
==================================================================
We assume the layered structure lies in $0 \le z \le L$. A Gaussian beam with a finite half-width $W$ is incident from the region where $z>L$ at an angle $\theta_i$. For a $p$-polarized beam propagating in the $xz$ plane, the $y$ component of the magnetic field associated with the incident beam at the $z=L$ plane can be written as $${H_y}^{(i)}(x,L) = H_0 \exp \left( -{x^2 \over {W_x}^2} + i k_{x0} x
\right),$$ where $W_x$ $( = W / \cos\theta_i)$ is the half-width in the $x$ direction. The center of the incident beam is at $x = 0$. The parameter $k_{x0}$ ($= k_1 \sin\theta_i$) is the $x$ component of the wave vector corresponding to the incident angle $\theta_i$ and $k_1$ is the wave number in the incident region, which corresponds to the prism. The superscript $(i)$ refers to the incident beam.
We consider the incident Gaussian beam as a linear combination of plane waves and write its field as $${H_y}^{(i)}(x,L) = {1 \over \sqrt{2\pi}} \int_{-\infty}^{\infty}
\tilde{H}\left(k_x\right) \exp \left( i k_x x \right) dk_x,$$ where the Fourier transform $\tilde{H}\left(k_x\right)$ is given by $$\tilde{H}(k_x) = \frac{1}{\sqrt{2}}H_0 W_x\exp\left[-\frac{{W_x}^2}{4}\left(
k_x-k_{x0}\right)^2\right].$$ The variable $k_x$ can be parameterized as $k_x=k_1\sin\theta$. We write the reflection and transmission coefficients corresponding to each Fourier component as $r(k_x)$ and $t(k_x)$ respectively. Then the field profiles for the reflected and transmitted beams ${H_y}^{(r)} (x,z)$ and ${H_y}^{(t)} (x,z)$ are given by $$\begin{aligned}
&&{H_y}^{(r)}(x,z) = {1 \over \sqrt{2\pi}} \int_{-\infty}^{\infty}
r(k_x) \tilde{H}(k_x) \exp \left[ i k_x x + i k_z ( z - L )
\right] dk_x ~~(z>L),\nonumber\\
&&{H_y}^{(t)}(x,z) = {1 \over \sqrt{2\pi}} \int_{-\infty}^{\infty}
t(k_x) \tilde{H}(k_x) \exp \left( i k_x x - i k^\prime_z z
\right) dk_x ~~(z<0),\end{aligned}$$ where $k_z$ ($=k_1\cos\theta$) and ${k_z}^\prime$ are the negative $z$ components of the wave vector in the incident ($z>L$) and transmitted ($z<0$) regions respectively.
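For concreteness, the integral in (4) can be evaluated by straightforward quadrature once $r(k_x)$ is known on a grid. The short sketch below is our own construction (with $H_0=1$ and lengths in units of the wavelength); it builds the Gaussian spectrum $\tilde{H}(k_x)$ given above and synthesises ${H_y}^{(r)}(x,L)$, with the placeholder reflection coefficient to be replaced by the one obtained from the invariant imbedding calculation described below:

```python
import numpy as np

# beam and prism parameters from the text (lengths in units of the wavelength, H_0 = 1)
lam, n1 = 1.0, 1.77
theta_i = np.deg2rad(62.33)
W = 505.53 * lam                          # half-width of the incident beam
Wx = W / np.cos(theta_i)
k1 = 2.0 * np.pi * n1 / lam
kx0 = k1 * np.sin(theta_i)

# Gaussian spectrum, sampled over several spectral widths ~ 2/Wx around kx0
kx = kx0 + np.linspace(-8.0, 8.0, 2049) / Wx
H_tilde = (Wx / np.sqrt(2.0)) * np.exp(-(Wx**2 / 4.0) * (kx - kx0)**2)

def reflected_field(x, kx, H_tilde, r_of_kx):
    """H_y^(r)(x, L) = (2 pi)^(-1/2) * integral r(kx) H~(kx) exp(i kx x) dkx,
    approximated by a Riemann sum on the uniform kx grid."""
    dk = kx[1] - kx[0]
    phase = np.exp(1j * np.outer(x, kx))              # shape (len(x), len(kx))
    return (r_of_kx * H_tilde * phase).sum(axis=1) * dk / np.sqrt(2.0 * np.pi)

# placeholder r(kx) (perfect mirror); replace with the imbedding result for the bilayer
r_of_kx = np.full(kx.shape, -1.0 + 0.0j)
x = np.linspace(-3.0 * Wx, 3.0 * Wx, 1201)
Hr = reflected_field(x, kx, H_tilde, r_of_kx)
```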
The GH shifts for the reflected $(\Delta_r)$ and transmitted $(\Delta_t)$ beams, which are also known as the normalized first moments of the magnetic field [@Shadrivov], are defined by $$\Delta_{r} = \frac{\int_{-\infty}^{\infty} x \left\vert {H_y}^{(r)}(x,L) \right\vert^2 dx} {\int_{-\infty}^{\infty} \left\vert {H_y}^{(r)}(x,L) \right\vert^2 dx},~~~\Delta_{t} = \frac{\int_{-\infty}^{\infty} x \left\vert {H_y}^{(t)}(x,0) \right\vert^2 dx} {\int_{-\infty}^{\infty} \left\vert {H_y}^{(t)}(x,0) \right\vert^2 dx}.$$ The reflected and transmitted beams will be severely deformed when the half-width of the incident beam is small. In order to measure the degree of the deformation of the reflected and transmitted beams, we calculate the normalized second moments of the magnetic field defined by $$\begin{aligned}
&&\beta_{r} = \frac{4 \int_{-\infty}^{\infty} \left( x - \Delta_{r} \right)^2 \left\vert {H_y}^{(r)}(x,L) \right\vert^2 dx }
{\int_{-\infty}^{\infty} \left\vert {H_y}^{(r)}(x,L) \right\vert^2 dx},\nonumber\\
&& \beta_{t} = \frac{4 \int_{-\infty}^{\infty} \left( x - \Delta_{t} \right)^2 \left\vert {H_y}^{(t)}(x,0) \right\vert^2 dx }
{\int_{-\infty}^{\infty} \left\vert {H_y}^{(t)}(x,0) \right\vert^2 dx}.\end{aligned}$$ These expressions are the analogues of the first and second moments of a statistical distribution: $\Delta$ gives the mean, which can also be interpreted as the centroid of the area under the intensity profile, while $\beta$ equals four times the variance about that centroid. The half-widths of the reflected and transmitted beams, $W_r$ and $W_t$, are obtained using $$W_r = \sqrt{\beta_{r}} \cos\theta_i,~~~W_t = \sqrt{\beta_{t}} \cos\theta_i.$$
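Numerically, the moments defined above reduce to ratios of discrete sums (on a uniform $x$ grid the spacing cancels); a minimal sketch:

```python
import numpy as np

def beam_moments(x, Hy, theta_i):
    """Return (Delta, W_beam): the normalised first moment (GH shift) and the
    half-width sqrt(beta) cos(theta_i) of a field profile Hy sampled on a
    uniform x grid."""
    I = np.abs(Hy)**2
    norm = I.sum()
    delta = (x * I).sum() / norm                    # first moment (lateral shift)
    beta = 4.0 * ((x - delta)**2 * I).sum() / norm  # normalised second moment
    return delta, np.sqrt(beta) * np.cos(theta_i)

# example: applied to the reflected profile Hr(x) computed as in the sketch above
# delta_r, W_r = beam_moments(x, Hr, theta_i)
```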
In the expressions given above, the reflection and transmission coefficients play a crucial role. In order to calculate $r(k_x)$ and $t(k_x)$ in the wave vector space, we resort to the invariant imbedding method, which we summarize here briefly. For a $p$-polarized wave propagating in a nonmagnetic ($\mu
= 1$) layered structure along the $xz$ plane, the complex amplitude of the magnetic field, $H = H(z)$, satisfies the wave equation $$\label{p-wave_eq}
{d^2 H \over dz^2} - {1 \over \epsilon(z)} {d \epsilon \over dz} {dH
\over dz} + \left[ {k_0}^2 \epsilon(z) - {k_x}^2 \right] H = 0,$$ where $k_0$ ($ = \omega / c$) is the vacuum wave number and $\epsilon(z)$ is the dielectric permittivity. We consider a plane wave of unit magnitude $\hat{H}(x,z) = H(z)
e^{ik_xx} = e^{ik_z(L-z)+ik_xx}$ incident on the medium lying in $0\le z\le L$ from the region where $z>L$ at an angle $\theta$. The complex reflection and transmission coefficients $r =
r(L)$ and $t = t(L)$, which we consider as functions of $L$, are defined by the wave functions outside the medium by $$\hat{H}(x,z)=\cases{e^{ik_z(L-z)+ik_xx} + r(L) e^{ik_z(z-L)+ik_xx}, &if $z > L$\\
t(L) e^{-i{k_z}^\prime z+ik_xx},&if $z < 0$\\}.$$ When $\epsilon$ is equal to $\epsilon_1$ in $z>L$ and $\epsilon_2$ in $z<0$, the wave vector components $k_x$, $k_z$ and ${k_z}^\prime$ are given by $ k_x= \sqrt{\epsilon_1} k_0 \sin \theta=k_1\sin\theta$, $ k_z=
\sqrt{\epsilon_1} k_0 \cos \theta$ and ${k_z}^\prime = \sqrt{\epsilon_2}
k_0 \cos \theta^\prime$, where $\theta^\prime$ is the angle that outgoing waves make with the negative $z$ axis.
We can transform the wave equation into a set of differential equations for the reflection and transmission coefficients using the invariant imbedding method [@Kim5]: $$\begin{aligned}
{1 \over k_z} {dr(l) \over dl} & = & 2i {\epsilon(l) \over \epsilon_1}
r(l) - {i \over 2} \left[ {\epsilon(l) \over \epsilon_1} - 1 \right]
\left[ 1 - {\epsilon_1 \over \epsilon(l)}
\tan^2 \theta \right] \left[ 1 + r(l) \right]^2, \nonumber \\
%
{1 \over k_z} {dt(l) \over dl} & = & i {\epsilon(l) \over \epsilon_1}
t(l) - {i \over 2} \left[ {\epsilon(l) \over \epsilon_1} - 1 \right]
\left[ 1 - {\epsilon_1 \over \epsilon(l)}
\tan^2 \theta \right] \left[ 1 + r(l) \right] t(l).\end{aligned}$$ These equations are integrated from $l=0$ to $l=L$ using the initial conditions given by the Fresnel formulas $$\begin{aligned}
&&r(0) = {\epsilon_2 \sqrt{\epsilon_1} \cos \theta - \epsilon_1
\sqrt{\epsilon_2 - \epsilon_1 \sin^2 \theta} \over \epsilon_2
\sqrt{\epsilon_1} \cos \theta + \epsilon_1 \sqrt{\epsilon_2 -
\epsilon_1 \sin^2 \theta}}, \nonumber\\
&&t(0) = {2 \epsilon_2 \sqrt{\epsilon_1} \cos \theta \over
\epsilon_2 \sqrt{\epsilon_1} \cos \theta + \epsilon_1
\sqrt{\epsilon_2 -
\epsilon_1 \sin^2 \theta}}.\end{aligned}$$ The reflection and transmission coefficients $r(L)$ and $t(L)$ calculated in this way are inserted into the integrals in (4), multiplying each Fourier component.
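To make the procedure concrete, the sketch below integrates the invariant imbedding equations above, with the Fresnel initial conditions, using a plain fixed-step RK4 scheme (our own minimal implementation, not the code used for the paper). The mapping of $\epsilon(l)$ onto the bilayer, with the metal adjacent to the substrate and the dielectric adjacent to the prism, follows figure 1:

```python
import numpy as np

def imbedding_rt(eps_profile, L, k0, eps1, eps2, theta, n_steps=20000):
    """Integrate the invariant imbedding equations for r(l), t(l) from l = 0 to L.
    eps_profile(l) returns the permittivity of the layered medium at depth l (= z)."""
    kz = np.sqrt(eps1) * k0 * np.cos(theta)
    tan2 = np.tan(theta)**2

    def rhs(l, y):
        r, t = y
        e = eps_profile(l) / eps1
        a = 0.5j * (e - 1.0) * (1.0 - tan2 / e)
        dr = kz * (2j * e * r - a * (1.0 + r)**2)
        dt = kz * (1j * e * t - a * (1.0 + r) * t)
        return np.array([dr, dt])

    # Fresnel initial conditions at zero thickness (prism directly on the substrate)
    root = np.sqrt(eps2 - eps1 * np.sin(theta)**2 + 0j)
    denom = eps2 * np.sqrt(eps1) * np.cos(theta) + eps1 * root
    r0 = (eps2 * np.sqrt(eps1) * np.cos(theta) - eps1 * root) / denom
    t0 = 2.0 * eps2 * np.sqrt(eps1) * np.cos(theta) / denom

    y = np.array([r0, t0], dtype=complex)
    h = L / n_steps
    l = 0.0
    for _ in range(n_steps):                 # classical RK4, complex arithmetic
        k1 = rhs(l, y)
        k2 = rhs(l + 0.5 * h, y + 0.5 * h * k1)
        k3 = rhs(l + 0.5 * h, y + 0.5 * h * k2)
        k4 = rhs(l + h, y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        l += h
    return y[0], y[1]

# parameters of the next section: sapphire prism/substrate, fused-silica gap, silver film
lam = 633e-9
k0 = 2.0 * np.pi / lam
eps_prism, eps_sub = 1.77**2, 1.77**2
eps_d, eps_m = 1.46**2, -16.0 + 1.0j
d_m, d_d = 0.08657 * lam, 0.5055 * lam

def eps_profile(l):                          # metal next to the substrate, dielectric above
    return eps_m if l < d_m else eps_d

r, t = imbedding_rt(eps_profile, d_m + d_d, k0, eps_prism, eps_sub, np.deg2rad(62.33))
```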
The GH shift for reflected plane waves is computed and compared with those of beams, using Artmann’s formula $$\Delta = -{d\phi \over dk_x} = -{\lambda \over 2\pi
\sqrt{\epsilon_1} \cos\theta} {d\phi \over d\theta}$$ where $\lambda$ is the wavelength and $\phi$ is the phase of the reflection coefficient satisfying $r = |r| e^{i \phi}$.
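One simple way to evaluate the Artmann formula numerically is to finite-difference the unwrapped phase of the reflection coefficient over a fine grid of incidence angles; the sketch below is our own (not necessarily the procedure used for the figures), and takes the reflection coefficients computed, for example, with the imbedding routine above:

```python
import numpy as np

def artmann_shift(theta_grid, r_values, lam, eps1):
    """Delta = -(lam / (2 pi sqrt(eps1) cos(theta))) dphi/dtheta, with the phase
    of r unwrapped to avoid spurious 2*pi jumps near the resonance."""
    phi = np.unwrap(np.angle(np.asarray(r_values)))
    dphi_dtheta = np.gradient(phi, theta_grid)
    return -lam * dphi_dtheta / (2.0 * np.pi * np.sqrt(eps1) * np.cos(theta_grid))
```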
Numerical results and discussion
================================
![\[fig1\] Schematic of the Otto configuration considered in this paper. ](fig1){width="15cm"}
We assume that a $p$-polarized beam is incident from a prism onto a dielectric-metal bilayer, which lies on a dielectric substrate, at room temperature. Both the prism and the substrate are assumed to have the same refractive index of 1.77 corresponding to sapphire. The refractive index of the dielectric layer is 1.46 corresponding to fused silica (${\rm SiO_2}$). Then the critical incident angle $\theta_c$ is equal to $55.57^\circ$. The vacuum wavelength of the incident wave $\lambda$ is 633 nm and the dielectric permittivity of the metal layer $\epsilon_m$ is $-16+i$ corresponding to silver at $\lambda =
633$ nm. In figure 1, we show the schematic of our configuration. We note that we have a propagating wave in the substrate in the present geometry.
![\[fig2\] GH shift normalized by the wavelength, $\delta_r = \Delta_r /
\lambda$, of the reflected wave calculated from Artmann’s formula for plane waves when $\theta_i= 62.33^\circ$ plotted versus the thickness of the metal layer normalized by the wavelength, $x_m = d_m / \lambda$.](fig2){width="8.5cm"}
We first consider the case where plane waves are incident. In figure 2, we plot the GH shift normalized by the wavelength, $\delta_r = \Delta_r /
\lambda$, of the reflected wave calculated from Artmann’s formula for plane waves versus the thickness of the metal layer normalized by the wavelength, $x_m = d_m / \lambda$, when $\theta_i$ is fixed to $62.33^\circ$. As $x_m$ is increased through the critical value $x_{m,\rm cr}\approx 0.08648$, $\delta_r$ changes rapidly from large negative values to large positive values. The value of the largest backward normalized GH shift is about $-7957$ at $x_m\approx 0.08642$, while that of the largest forward normalized GH shift is about 4214 at $x_m\approx 0.08654$. We note that the change of $x_m$ from $0.08642$ to $0.08654$ corresponds to the thickness change of just 0.076 nm. From many numerical calculations, we have found that the maximum GH shifts are obtained when $\theta_i\approx 62.33^\circ$. Since the values of the GH shift for plane waves are much larger than those obtained for Gaussian beams and shown in figure 4(a) below, we show them in separate figures.
Shen [*et al.*]{} have presented the results of the calculation and the measurement of the phase shift of the reflected wave in the Kretschmann configuration as a function of the incident angle [@Shen]. It has been shown that the phase shift decreases monotonically with $\theta_i$, which corresponds to the forward GH shift, when the metal (Ag) layer is relatively thin, while it increases monotonically, which corresponds to the backward GH shift, when the metal layer is relatively thick. Shen [*et al.*]{} have shown that the forward shift changes to the backward shift when the metal layer thickness changes by only a few nanometers. We notice that the sign of the GH shift changes from negative to positive as $d_m$ increases in our case of the Otto configuration, whereas it changes from positive to negative in Shen [*et al.*]{}’s case of the Kretschmann configuration.
![\[fig3\] (a) Forward normalized GH shift of the reflected wave, $\delta_r$, calculated from Artmann’s formula for plane waves when $x_m=0.08657$ and (b) the corresponding phase of the reflected wave, $\Phi_r$, versus incident angle $\theta_i$. (c) Backward normalized GH shift of the reflected wave calculated from Artmann’s formula when $x_m=0.08638$ and (d) the corresponding phase of the reflected wave versus incident angle.](fig3){width="9cm"}
In figures 3(a) and 3(b), we show the forward GH shift for the reflected plane wave and the corresponding phase of the reflected wave as a function of the incident angle, when $x_m=0.08657$. We find that $\Phi_r$ changes very rapidly and the GH shift becomes extremely large near $\theta_i=62.33^\circ$. The maximum $\delta_r$ is about 7450 at $\theta_i=62.332^\circ$. Similarly, in figures 3(c) and 3(d), we show the backward GH shift for the reflected plane wave and the corresponding phase of the reflected wave obtained for $x_m=0.08638$. $\delta_r$ is about $-7814$ at $\theta_i=62.3311^\circ$ in this case. We find that the maximal forward and backward GH shifts occur at the same incident angles where the reflectance takes a minimum value.
![\[fig4\] (a) Normalized GH shift of the reflected beam $\delta_r$ plotted versus $x_m$, for several different values of the half-width $W$ of the incident Gaussian beam when $\theta_i=
62.33^\circ$. (b) Half-width of the reflected beam $W_r$ normalized by $W$ plotted versus $x_m$ for the same values of $W$ and $\theta_i$ as in (a).](fig4){width="8.5cm"}
We now consider the case where Gaussian light beams are incident. In figure 4(a), we plot the normalized GH shift of the reflected beam $\delta_r$ versus $x_m$, for several different values of the half-width $W$ of the incident Gaussian beam. The incident angle is fixed to $\theta_i= 62.33^\circ$. For the displayed range of $x_m$ $\in [0.08, 0.09]$, $d_m$ varies from 50.64 nm to 56.97 nm. The thickness of the dielectric layer, $d_d$, is chosen such that $x_d =
d_d / \lambda = 0.5055$, therefore $d_d$ is equal to 319.98 nm. The specific values of $\theta_i$ and $d_d$ were chosen so as to maximize the largest value of the GH shift.
We observe in figure 4(a) that the GH shift of the reflected beam changes its sign from negative to positive as $x_m$ increases through a critical value $x_{m,\rm cr}$. This value and the corresponding critical thickness of the metal layer, $d_{m,\rm cr}$, are found to vary very little as $W$ is decreased from 320 $\mu$m $( \approx 505.53\lambda)$ to 40 $\mu$m $( \approx 63.19\lambda)$. As $W$ is decreased further, $d_{m,\rm cr}$ decreases noticeably. $x_{m,\rm cr}$ is about 0.0862 and $d_{m,\rm cr}$ is about 54.56 nm when $W$ is 40 $\mu$m, while $x_{m,\rm cr}$ is about 0.0825 and $d_{m,\rm cr}$ is about 52.22 nm when $W$ is 10 $\mu$m $(\approx 15.8\lambda)$. The maximal forward and backward lateral shifts are close to the half-width of the incident beam. For example, the beam with $W = 505.53\lambda$ is shifted by $\Delta_r = 494.72\lambda$ in the forward direction and by $\Delta_r =
-527.2\lambda$ in the backward direction. For $\lambda =
633$ nm, these correspond to 313.2 $\mu$m and $-333.7$ $\mu$m respectively. Our numerical results are summarized in Table 1. We notice that for $W/\lambda \approx 15.8$, the forward GH shift has no local maximum.
$W$($\mu$m) $W/\lambda$ $x_{m,{\rm cr}}$ $x_{m,f}$ $\delta_{r,f}$ $x_{m,b}$ $\delta_{r,b}$ $x_{m,{\rm md}}$ $W_{r,{\rm md}}/W$
------------- ------------- ------------------ ----------- ---------------- ----------- ---------------- ------------------ --------------------
$\infty$ $\infty$ 0.08648 0.08654 4214.2 0.08642 $-7956.8$ – –
320 505.53 0.08647 0.08717 494.72 0.08581 $-527.20$ 0.08647 1.72054
160 252.76 0.08646 0.08788 258.86 0.08517 $-255.52$ 0.08647 1.72957
80 126.38 0.08640 0.08939 135.19 0.08396 $-122.74$ 0.08647 1.73259
40 63.19 0.08617 0.09286 72.20 0.08173 $-57.48$ 0.08648 1.73621
20 31.6 0.08530 0.10287 40.69 0.07790 $-25.45$ 0.08652 1.74811
10 15.8 0.08246 – – 0.07196 $-10.10$ 0.08683 1.78945
: \[tab:tab1\] Summary of the numerical results for varying beam widths $W$. $x_{m,{\rm cr}}$ is the normalized critical thickness of the metal layer. $x_{m,f}$ and $x_{m,b}$ are the normalized thicknesses for the maximal forward and backward GH shifts, while $\delta_{r,f}$ and $\delta_{r,b}$ are the corresponding maximal forward and backward GH shifts normalized by $\lambda$. $x_{m,{\rm md}}$ is the normalized thickness corresponding to the maximum distortion, namely, the maximum $W_r$. $W_{r,{\rm md}}/W$ is the normalized value of the maximum distortion.
Our result can be understood from the theory of the GH shift in multilayered structures, where the sign of the GH shift is determined by the competition between intrinsic damping and radiative damping [@Liu; @Okamoto2]. As the thickness of the dielectric or metal layer is varied, the GH shift becomes positive if the intrinsic damping (absorption loss in the lossy layer) is less than the radiative damping (leakage from the dielectric layer to the prism), and negative otherwise. Our result is consistent with this argument, since the intrinsic damping is inversely proportional to the thickness of the metal layer [@Okamoto2]. Thus the reduction of $d_m$ enhances the intrinsic damping, resulting in the negative GH shift in figure 4(a).
Zeller [*et al.*]{} have also performed a theoretical analysis of the reflection coefficient associated with $s$-polarized SPPs in the Otto configuration containing a layer of a negative-index metamaterial instead of a metal layer [@Zeller3]. They have found that the sign of the GH shift changes from negative to positive as the thickness of the dielectric layer, $d_d$, increases.
In figure 4(b), we plot the half-width of the reflected beam $W_r$ normalized by $W$ versus $x_m$. This value takes a maximum at $x_m
\approx 0.08647$ for $W = 320$, 160 and 80 $\mu$m. The thickness for the maximum $W_r/W$ increases slightly to $x_m = 0.08683$ for $W =
10$ $\mu$m. Thus the maximum distortion of the beam occurs near the critical thickness of the metal layer, but not at the exactly same thickness. We observe that $W_r/W$ is larger and the overall curve is broader when the GH shift is smaller and when the incident beam is narrower.
![\[fig5\] (a) Forward normalized GH shift of the reflected beam versus incident angle for several different values of $W$ when $x_m=0.08657$. (b) Normalized half-width of the reflected beam versus incident angle for the same values of $x_m$ and $W$ as in (a).](fig5){width="8.5cm"}
In figure 5(a), we plot $\delta_r$ as a function of the incident angle for several values of $W$ when $x_d$ is 0.5055 and $x_m$ is 0.08657. We find that the maximum value of the [*forward*]{} lateral shift, which is about 140 times the wavelength when $W$ is 320 $\mu$m and about 42 times the wavelength when $W$ is 160 $\mu$m, is an order of magnitude smaller than the value for plane-wave incidence, but is still very large. This value decreases as $W$ decreases. Since each Fourier component of the incident beam experiences a different phase shift, the reflected beam, which is the sum of the reflected Fourier components, acquires a much smaller GH shift than in the case of plane-wave incidence. As the half-width of the Gaussian beam increases, the magnitude of the lateral shift of the reflected beam increases and approaches the value for plane-wave incidence. The reflectance for plane waves takes a dip at $\theta = 62.33^\circ$ and the maximum of $\delta_r$ occurs at the reflectance dip.
In figure 5(b), we show the normalized half-width of the reflected beam, $W_r / W$, as a function of the incident angle for each half-width of the incident Gaussian beam. We find that the half-width of the reflected beam at the reflectance dip is substantially larger than that of the incident Gaussian beam. The relative distortion of the reflected beam profile is larger for narrower beams and does not disappear at $\theta_i\approx 62.33^\circ$ as the beam width $W$ increases.
![\[fig6\] (a) Backward normalized GH shift of the reflected beam versus incident angle for several different values of $W$ when $x_m=0.08638$. (b) Normalized half-width of the reflected beam versus incident angle for the same values of $x_m$ and $W$ as in (a).](fig6){width="8.5cm"}
In figure 6(a), we plot $\delta_r$ as a function of the incident angle for different values of $W$, when $x_d$ is 0.5055 and $x_m$ is 0.08638. We find that the GH shift at the reflectance dip is [*backward*]{} with negative values of $\delta_r$ when $W$ is larger than 40 $\mu$m in the present case. The maximum value of $\vert\delta_r\vert$ is about 140 when $W$ is 320 $\mu$m and about 30 when $W$ is 160 $\mu$m. Interestingly, for $W \leq 40$ $\mu$m, the value of $\delta_r$ at the reflectance dip becomes positive. The critical thicknesses in figure 4(a) for narrow beams with $W = 40$, 20 and 10 $\mu$m are $x_m = 0.08617$, 0.08530 and 0.08246 respectively. Thus in the calculation with $x_m = 0.08638$ presented here, narrow beams have positive GH shifts. In figure 6(b), we show the normalized half-width of the reflected beam $W_r / W$ as a function of the incident angle for each value of $W$. General characteristics are similar to the forward shift case shown in figure 5(b).
![\[fig7\] (a) Magnetic field intensity distribution associated with the reflected beam plotted versus $x$ for several different values of $W$, when $x_m=x_{m,f}$ and $\theta_i= 62.33^\circ$. The GH shift is positive in this case. (b) Magnetic field intensity distribution associated with the reflected beam plotted versus $x$ for several different values of $W$, when $x_m=x_{m,b}$ and $\theta_i= 62.33^\circ$. The GH shift is negative in this case. (c) Comparison of the magnetic field profiles associated with the reflected beams corresponding to the backward ($x_m=0.08581$) and forward ($x_m=0.08717$) GH shifts, when $W = 320$ $\mu$m and $\theta_i= 62.33^\circ$.](fig7){width="8.5cm"}
In figures 7(a) and 7(b), we plot the magnetic field intensity distribution associated with the reflected beam at the reflectance dip ($\theta_i=62.33^\circ$) for several different values of $W$, when $x_m=x_{m,f}$ and $x_m=x_{m,b}$ respectively. In all cases, we observe clearly that the reflected beam is split into two parts. At this incident angle the plane-wave reflectance is very low, and the same applies to the neighbouring Fourier components, so the intensity of the reflected beam is substantially weaker than that of the incident beam.
In figure 7(c), we compare the magnetic field profiles associated with the reflected beams corresponding to the backward ($x_m=x_{m,b}$) and forward ($x_m=x_{m,f}$) GH shifts, when $W = 320$ $\mu$m and $\theta_i= 62.33^\circ$. We observe that the left peak is larger than the right peak in the case of the backward GH shift, while the opposite holds in the case of the forward GH shift. The maximum forward GH shift for the beam with $W = 320$ $\mu$m is about 495 times the wavelength ($\sim 313$ $\mu$m), which is sufficiently large to be observable in figure 7.
We comment briefly on the GH shift of the transmitted beam, $\delta_t$ ($ = \Delta_t / \lambda$), as a function of the incident angle for several values of $W$. We find that $\delta_t$ is positive in all cases and depends very weakly on $W$. The maximum value of $\Delta_t$ is about seven times the wavelength ($\sim 4.4$ $\mu$m). For the normalized half-width of the transmitted beam $W_t/ W$, we find that the value at the reflectance dip is slightly larger than 1. We also find that there is no splitting in the magnetic field intensity distribution of the transmitted beam. The spatial profiles of the transmitted beam look similar to those of the incident Gaussian beam. If we employ a substrate with a refractive index less than 1.5676, we obtain an evanescent transmitted beam. Then there will be another SPP excited at the metal-substrate interface, enhancing the GH shift even further.
There have been several experiments showing both positive and negative GH shifts. In particular, there exist two experiments that have used experimental parameters similar to ours [@Liu; @Yin]. In [@Shen], the totally reflected $p$-polarized wave is heterodyned with the $s$-polarized wave from the same laser source. In [@Yin], a position-sensitive detector produces a signal that is proportional to the difference of the lateral displacements of $s$- and $p$-polarized waves. Since an $s$ wave does not excite SPPs, the obtained signal gives the GH shift of the $p$-polarized wave in both cases. Following similar schemes, it will not be difficult to set up an experiment in the Otto configuration to test our results.
Conclusion
==========
In this paper, we have studied the influence of the surface plasmon excitation on the GH effect theoretically, when a Gaussian beam of a finite width is incident on a dielectric-metal bilayer in the Otto configuration. Using the invariant imbedding method, we have calculated the GH lateral shifts of the reflected and transmitted beams and the widths of these beams relative to that of the incident beam. We have found that the lateral shift of the reflected beam depends very sensitively on the thickness of the metal layer and the width of the incident beam. Close to the incident angle at which surface plasmons are excited, the lateral shift of the reflected beam has been found to change from large negative values to large positive values as the thickness of the metal layer increases through a critical value. The maximal forward and backward lateral shifts can be as large as the half-width of the incident beam. As the width of the incident Gaussian beam decreases, we have found that the size of the lateral shift decreases rapidly, but the relative deformation of the reflected beam increases. In all cases studied, we have found that the reflected beam is split into two parts. Finally, we have found that the lateral shift of the transmitted beam is always positive and very weak. The lateral shift of the reflected beam studied here is large enough to be considered for application to optical devices.
This work has been supported by the National Research Foundation of Korea Grant (NRF-2018R1D1A1B07042629) funded by the Korean Government.
References {#references .unnumbered}
==========
[10]{}
Goos F and Hänchen H 1943 Über das Eindringen des totalreflektierten Lichtes in das dünnere Medium [*Ann. Phys.*]{} [**435**]{} 383
Goos F and Hänchen H 1947 Ein neuer und fundamentaler Versuch zur Totalreflexion [*Ann. Phys.*]{} [**436**]{} 333
Goos F and Lindberg-Hänchen H 1949 Neumessung des Strahlversetzungseffektes bei Totalreflexion [*Ann. Phys.*]{} [**440**]{} 251
Artmann K 1948 Berechnung der Seitenversetzung des totalreflektierten Strahles [*Ann. Phys.*]{} [**437**]{} 87
Lotsch H K V 1968 Reflection and refraction of a beam of light at a plane interface [*J. Opt. Soc. Am.*]{} [**58**]{} 551
Puri A and Birman J L 1986 Goos-Hänchen beam shift at total internal reflection with application to spatially dispersive media [*J. Opt. Soc. Am. A*]{} [**3**]{} 543
Shah V and Tamir T 1977 Brewster phenomena in lossy structures [*Opt. Commun.*]{} [**23**]{} 113
Shah V and Tamir T 1981 Anomalous absorption by multi-layered media [*Opt. Commun.*]{} [**37**]{} 383
Shah V and Tamir T 1983 Absorption and lateral shift of beams incident upon lossy multilayered media [*J. Opt. Soc. Am.*]{} [**73**]{} 37
Tamir T and Bertoni H L 1971 Lateral displacement of optical beams at multilayered and periodic structures [*J. Opt. Soc. Am.*]{} [**61**]{} 1397
Bertoni H L and Tamir T 1973 Unified theory of Rayleigh-angle phenomena for acoustic beams at liquid-solid interfaces [*Appl. Phys.*]{} [**2**]{} 157
---
abstract: 'In this paper we give an upper bound on the number of extensions of a triple to a quadruple for Diophantine $m$-tuples with the property $D(4)$ and confirm the conjecture on the uniqueness of such an extension in some special cases.'
author:
- Marija Bliznac Trebješanin
title: 'Extension of a Diophantine triple with the property $D(4)$'
---
2010 [*Mathematics Subject Classification:*]{} 11D09, 11D45, 11J86\
Keywords: Diophantine tuples, Pell equations, reduction method, linear forms in logarithms.
Introduction
============
Let $n\neq0$ be an integer. We call a set of $m$ distinct positive integers a $D(n)$-$m$-tuple, or an $m$-tuple with the property $D(n)$, if the product of any two of its distinct elements increased by $n$ is a perfect square.
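For illustration, the defining property is easy to check computationally; a minimal sketch (the function name and the sample sets below are ours, not part of the paper) could look as follows.

```python
from math import isqrt

def is_Dn_tuple(elements, n):
    """Check whether the product of any two distinct elements, increased by n, is a perfect square."""
    elems = list(elements)
    for i in range(len(elems)):
        for j in range(i + 1, len(elems)):
            v = elems[i] * elems[j] + n
            if v < 0 or isqrt(v) ** 2 != v:
                return False
    return True

print(is_Dn_tuple({1, 5, 12, 96}, 4))   # True: a D(4)-quadruple
print(is_Dn_tuple({1, 3, 8, 120}, 1))   # True: Fermat's classical D(1)-quadruple
```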
One of the most interesting and most studied questions is how large those sets can be. In the classical case $n=1$, first studied by Diophantus, Dujella has proven in [@duje_kon] that a $D(1)$-sextuple does not exist and that there are at most finitely many quintuples. Over the years many authors improved the upper bound on the number of $D(1)$-quintuples, and finally He, Togbé and Ziegler in [@petorke] gave a proof of the nonexistence of $D(1)$-quintuples. For details on the history of the problem, with all references, one can visit the webpage [@duje_web].
Variants of the problem with $n=4$ or $n=-1$ are also studied frequently. In the case $n=4$, conjectures and observations similar to those in the $D(1)$ case can be made. In the light of that, Filipin and the author have proven that a $D(4)$-quintuple does not exist either.
In both cases $n=1$ and $n=4$, the conjecture on the uniqueness of the extension of a triple to a quadruple with a larger element is still open. In the case $n=-1$, the conjecture on the nonexistence of a quadruple is studied; for a survey of that problem one can see [@cipu].
A $D(4)$-pair $\{a,b\}$ can be extended with a larger element $c$ to form a $D(4)$-triple. The smallest such $c$ is $c=a+b+2r$, where $r=\sqrt{ab+4}$, and such a triple is often called a regular triple (in the $D(1)$ case it is also called an Euler triple). There are infinitely many extensions of a pair to a triple and they can be studied by finding solutions of the Pellian equation $$\label{par_trojka}
bs^2-at^2=4(b-a),$$ where $s$ and $t$ are positive integers defined by $ac+4=s^2$ and $bc+4=t^2.$
For a $D(4)$-triple $\{a,b,c\}$, $a<b<c$, we define $$d_{\pm}=d_{\pm}(a,b,c)=a+b+c+\frac{1}{2}\left(abc\pm \sqrt{(ab+4)(ac+4)(bc+4)}\right),$$ and it is easy to check that $\{a,b,c,d_{+}\}$ is a $D(4)$-quadruple, which we will call a regular quadruple, and if $d_{-}\neq 0$ then $\{a,b,c,d_{-}\}$ is also a regular $D(4)$-quadruple with $d_{-}<c$.
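As a concrete check of these formulas (our own example, not taken from the paper), consider the regular $D(4)$-triple $\{1,5,12\}$, for which $ab+4=9$, $ac+4=16$ and $bc+4=64$: $$d_{\pm}=1+5+12+\frac{1}{2}\left(60\pm\sqrt{9\cdot 16\cdot 64}\right)=18+\frac{1}{2}(60\pm 96),$$ so $d_{+}=96$ and $d_{-}=0$; the quadruple $\{1,5,12,96\}$ is regular, while here $d_{-}=0$ gives no extension.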
Any $D(4)$-quadruple is regular.
Results which support this conjecture in some special cases can be found, for example, in [@dujram], [@bf], [@fil\_par] and [@fht]; some of those results are stated in the next section and will be used as known results.
In [@fm] Fujita and Miyazaki approached this conjecture in the $D(1)$ case differently: they examined how many possibilities there are to extend a fixed Diophantine triple with a larger integer. They further improved their result from [@fm] in the joint work [@cfm] with Cipu, where they have shown that any triple can be extended to a quadruple in at most $8$ ways.
In this paper we will follow the approach and ideas from [@fm] and [@cfm] to prove similar results for extensions of a $D(4)$-triple. Usually, the numerical bounds and coefficients are slightly better in the $D(1)$ case, as can be seen by comparing Theorem \[teorem1.5\] with [@fm Theorem 1.5]. To overcome this problem we have made preparations similar to those in [@nas2], proving a better numerical lower bound on the element $b$ in an irregular $D(4)$-quadruple, and many of the results still required considering and proving additional special cases.
Let $\{a,b,c\}$ be a $D(4)$-triple which can be extended to a quadruple with an element $d$. Then there exist positive integers $x,y,z$ such that $$ad+4=x^2,\quad bd+4=y^2, \quad cd+4=z^2.$$ By expressing $d$ from these equations we get the following system of generalized Pellian equations $$\begin{aligned}
cx^2-az^2&=4(c-a),\label{prva_pelova_s_a}\\
cy^2-bz^2&=4(c-b).\label{druga_pelova_s_b}\end{aligned}$$
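Note that the first of these equations follows directly by eliminating $d$: $$cx^2-az^2=c(ad+4)-a(cd+4)=4(c-a),$$ and the second one is obtained in the same way from $bd+4=y^2$ and $cd+4=z^2$.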
There exist only finitely many fundamental solutions $(z_0,x_0)$ and $(z_1,y_1)$ to these Pellian equations, and any solution to the system can be expressed as $z=v_m=w_n$, where $m$ and $n$ are non-negative integers and ${v_m}$ and ${w_n}$ are recurrence sequences defined by $$\begin{aligned}
&v_0=z_0,\ v_1=\frac{1}{2}\left(sz_0+cx_0\right),\ v_{m+2}=sv_{m+1}-v_{m},\\
&w_0=z_1,\ w_1=\frac{1}{2}\left(tz_1+cy_1 \right),\ w_{n+2}=tw_{n+1}-w_n.\\\end{aligned}$$
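The following minimal sketch (our own code; function and variable names are not from the paper) shows how these recurrences can be intersected in practice. For the regular triple $\{1,5,12\}$ with fundamental solutions $(z_0,x_0)=(t,r)=(8,3)$ and $(z_1,y_1)=(s,r)=(4,3)$ it recovers $z=34$ and hence $d=(z^2-4)/c=96=d_+$.

```python
def v_sequence(s, c, z0, x0, count):
    """v_0 = z0, v_1 = (s*z0 + c*x0)/2, v_{m+2} = s*v_{m+1} - v_m."""
    v = [z0, (s * z0 + c * x0) // 2]
    while len(v) < count:
        v.append(s * v[-1] - v[-2])
    return v

def w_sequence(t, c, z1, y1, count):
    """w_0 = z1, w_1 = (t*z1 + c*y1)/2, w_{n+2} = t*w_{n+1} - w_n."""
    w = [z1, (t * z1 + c * y1) // 2]
    while len(w) < count:
        w.append(t * w[-1] - w[-2])
    return w

a, b, c, s, t, r = 1, 5, 12, 4, 8, 3     # regular D(4)-triple {1, 5, 12}
common = set(v_sequence(s, c, t, r, 30)) & set(w_sequence(t, c, s, r, 30))
for z in sorted(common):
    if z > 0 and (z * z - 4) % c == 0:
        print(z, (z * z - 4) // c)        # 34 96: the regular extension d_+
```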
The initial terms of these sequences were determined by Filipin in [@fil\_xy4 Lemma 9], and one of the results of this paper improves that Lemma by eliminating the case where $m$ and $n$ are even and $|z_0|$ is not explicitly determined.
\[teorem\_izbacen\_slucaj\] Suppose that $\{a,b,c,d\}$ is a $D(4)$-quadruple with $a<b<c<d$ and that $v_m$ and $w_n$ are defined as before.
1. If equation $v_{2m}=w_{2n}$ has a solution, then $z_0=z_1$ and $|z_0|=2$ or $|z_0|=\frac{1}{2}(cr-st)$.
2. If equation $v_{2m+1}=w_{2n}$ has a solution, then $|z_0|=t$, $|z_1|=\frac{1}{2}(cr-st)$ and $z_0z_1<0$.
3. If equation $v_{2m}=w_{2n+1}$ has a solution, then $|z_1|=s$, $|z_0|=\frac{1}{2}(cr-st)$ and $z_0z_1<0$.
4. If equation $v_{2m+1}=w_{2n+1}$ has a solution, then $|z_0|=t$, $|z_1|=s$ and $z_0z_1>0$.
Moreover, if $d>d_+$, case $ii)$ cannot occur.
We have also improved the bound on $c$, in terms of $b$, below which an irregular extension might exist.
\[teorem1.5\] Let $\{a,b,c,d\}$ be a $D(4)$-quadruple and $a<b<c<d$. Then
i) if $b<2a$ and $c\geq 890b^4$ or
ii) if $2a\leq b\leq 12 a$ and $c\geq 1613b^4$ or
iii) if $b>12a$ and $c\geq 39247 b^4$
we must have $d=d_+$.
\[teorem1.6\] Let $\{a,b,c,d\}$ be a $D(4)$-quadruple and $a<b<c<d_+<d$. Then any $D(4)$-quadruple $\{e,a,b,c\}$ with $e<c$ must be regular.
For a fixed $D(4)$-triple $\{a,b,c\}$, denote by $N$ the number of positive integers $d>d_+$ such that $\{a,b,c,d\}$ is a $D(4)$-quadruple. The next theorem is proven as in [@cfm], where similar methods yield an analogous result.
\[teorem\_prebrojavanje\] Let $\{a,b,c\}$ be a $D(4)$-triple with $a<b<c$.
i) If $c=a+b+2r$, then $N\leq 3$.
ii) If $a+b+2r\neq c<b^2$, then $N\leq 7$.
iii) If $b^2<c<39247b^4$, then $N\leq 6$.
iv) If $c\geq 39247b^4$, then $N=0$.
This implies the next corollary.
Any $D(4)$-triple can be extended to a $D(4)$-quadruple with $d>\max\{a,b,c\}$ in at most $8$ ways. A regular $D(4)$-triple $\{a,b,c\}$ can be extended to a $D(4)$-quadruple with $d>\max\{a,b,c\}$ in at most $4$ ways.
We can apply the previous results to triples in which $c$ is given explicitly in terms of $a$ and $b$, which gives us a slightly better estimate on the number of extensions when $b<6.85a$.
\[prop\_covi\] Let $\{a,b\}$ be a $D(4)$-pair with $a<b$. Let $c=c^{\tau}_{\nu}$ given by $$\resizebox{\textwidth}{!}{$c=c_{\nu}^{\tau}=\frac{4}{ab}\left\{\left(\frac{\sqrt{b}+\tau\sqrt{a}}{2}\right)^2\left(\frac{r+\sqrt{ab}}{2}\right)^{2\nu}+\left(\frac{\sqrt{b}-\tau\sqrt{a}}{2}\right)^2\left(\frac{r-\sqrt{ab}}{2}\right)^{2\nu}-\frac{a+b}{2}\right\}$}$$ where $\tau\in\{1,-1\}$ and $\nu\in\mathbb{N}$.
i) If $c=c_1^{\tau}$ for some $\tau$, then $N\leq 3$.
ii) If $c_2^+\leq c\leq c_4^+$ then $N\leq 6$.
iii) If $c=c_2^-$ and $a\geq 2$ then $N\leq 6$ and if $a=1$ then $N\leq 7$.
iv) If $c\geq c_5^-$ or $c\geq c_4^-$ and $a\geq 35$ then $N=0$.
\[kor2\] Let $\{a,b,c\}$ be a $D(4)$-triple. If $a<b\leq6.85a$ then $N\leq 6$.
Preliminary results about elements of a $D(4)$-$m$-tuple
========================================================
First we will list some known results.
\[c\_granice\] Let $\{a,b,c\}$ be a $D(4)$-triple and $a<b<c$. Then $c=a+b+2r$ or $c>\max\{ab+a+b,4b\}$.
This follows from [@fil_xy4 Lemma 3] and [@duj Lemma 1].
The next lemma can be proven similarly to [@petorke Lemma 2].
\[b10na5\] Let $\{a,b,c,d\}$ be a $D(4)$-quadruple such that $a<b<c<d_+<d$. Then $b>10^5$.
This result extends the results from [@nas Lemma 2.2] and [@bf\_parovi Lemma 3] and is proven similarly, by using the Baker-Davenport reduction as described in [@dujpet]. For the computation we have used the Mathematica 11.1 software package on a computer with an Intel(R) Core(TM) i7-4510U CPU @ 2.00-3.10 GHz processor, and it took approximately 170 hours to check all the possibilities.
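The authors' Mathematica code is not reproduced here; the following is only a generic sketch, in Python and with hypothetical function names, of the standard reduction step from [@dujpet] on which such computations rely: if $0<n\kappa-m+\mu<AB^{-n}$ and $n\leq M$, a convergent $p/q$ of $\kappa$ with $q>6M$ and $\varepsilon=\|\mu q\|-M\|\kappa q\|>0$ (where $\|\cdot\|$ denotes the distance to the nearest integer) reduces the bound to $n<\log(Aq/\varepsilon)/\log B$.

```python
from fractions import Fraction
import mpmath as mp

mp.mp.dps = 100                      # high working precision for the logarithms

def convergents(x, terms=60):
    """Continued-fraction convergents p/q of the real number x."""
    y, quotients = mp.mpf(x), []
    for _ in range(terms):
        a = int(mp.floor(y))
        quotients.append(a)
        frac = y - a
        if frac == 0:
            break
        y = 1 / frac
    p, p_prev, q, q_prev = quotients[0], 1, 1, 0
    convs = [Fraction(p, q)]
    for a in quotients[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        convs.append(Fraction(p, q))
    return convs

def dist_to_int(x):
    return abs(x - mp.nint(x))

def reduce_bound(kappa, mu, A, B, M):
    """One Baker-Davenport (Dujella-Petho) reduction step; returns the new bound on n, or None."""
    for pq in convergents(kappa):
        q = pq.denominator
        if q <= 6 * M:
            continue
        eps = dist_to_int(mu * q) - M * dist_to_int(kappa * q)
        if eps > 0:
            return mp.log(A * q / eps) / mp.log(B)
    return None
```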
From [@fil_par Theorem 1.1] we have a lower bound on $b$ in the terms of element $a$.
\[b\_a\_57sqrt\] If $\{a,b,c,d\}$ is a $D(4)$-quadruple such that $a<b<c<d_+<d$ then $b\geq a+57\sqrt{a}$.
The next Lemma gives us possibilities for the initial terms of sequences $v_m$ and $w_n$, and will be improved by Theorem \[teorem\_izbacen\_slucaj\].
[@fil_xy4 Lemma 9]\[lemma\_fil\_pocetni\] Suppose that $\{a,b,c,d\}$ is a $D(4)$-quadruple with $a<b<c<d$ and that $w_m$ and $v_n$ are defined as before.
1. If equation $v_{2m}=w_{2n}$ has a solution, then $z_0=z_1$ and $|z_0|=2$ or $|z_0|=\frac{1}{2}(cr-st)$ or $z_0<1.608a^{-5/14}c^{9/14}$.
2. If equation $v_{2m+1}=w_{2n}$ has a solution, then $|z_0|=t$, $|z_1|=\frac{1}{2}(cr-st)$ and $z_0z_1<0$.
3. If equation $v_{2m}=w_{2n+1}$ has a solution, then $|z_1|=s$, $|z_0|=\frac{1}{2}(cr-st)$ and $z_0z_1<0$.
4. If equation $v_{2m+1}=w_{2n+1}$ has a solution, then $|z_0|=t$, $|z_1|=s$ and $z_0z_1>0$.
From the proof of [@fil_xy4 Lemma 9] we see that case $v_{2m}=w_{2n}$ and $z_0<1.608a^{-5/14}c^{9/14}$ holds only when $d_0=\frac{z_0^2-4}{c}$, $0<d_0<c$ is such that $\{a,b,d_0,c\}$ is an irregular $D(4)$-quadruple. As we can see from the statement of Theorem \[teorem\_izbacen\_slucaj\], this case will be proven impossible and only cases where we $z_0$ is given in the terms of a triple $\{a,b,c\}$ will remain.
Using the lower bound on $b$ in an irregular quadruple from Lemma \[b10na5\] we can slightly improve [@fil_xy4 Lemma 1].
\[granice\_fundamentalnih\] Let $(z,x)$ and $(z,y)$ be positive solutions of (\[prva\_pelova\_s\_a\]) and (\[druga\_pelova\_s\_b\]). Then there exist solutions $(z_0,x_0)$ of (\[prva\_pelova\_s\_a\]) and $(z_1,y_1)$ of (\[druga\_pelova\_s\_b\]) in the ranges $$\begin{aligned}
1&\leq x_0<\sqrt{s+2}<1.00317\sqrt[4]{ac},\\
1&\leq |z_0|<\sqrt{\frac{c\sqrt{c}}{\sqrt{a}}}<0.05624c,\\
1&\leq y_1<\sqrt{t+2}<1.000011\sqrt[4]{bc},\\
1&\leq |z_1|<\sqrt{\frac{c\sqrt{c}}{\sqrt{b}}}<0.003163c,\end{aligned}$$ such that $$\begin{aligned}
z\sqrt{a}+x\sqrt{c}&=(z_0\sqrt{a}+x_0\sqrt{c})\left(\frac{s+\sqrt{ac}}{2}\right)^m,\\
z\sqrt{b}+y\sqrt{c}&=(z_1\sqrt{b}+y_1\sqrt{c})\left(\frac{t+\sqrt{bc}}{2}\right)^n.\end{aligned}$$
This Lemma can now be used to get a lower bound on $d$ in terms of the elements of the triple $\{a,b,c\}$.
\[lemaw4w5w6\] Suppose that $\{a,b,c,d\}$ is a $D(4)$-quadruple with $a<b<c<d_+<d$ and that $v_m$ and $w_n$ are defined as before. If $z\geq w_n$, $n=4,5,6,7$, then $$d>k\cdot b^{n-1.5}c^{n-0.5}$$ where $k\in\{0.249979,0.249974,0.249969,0.249965\}$ respectively.
If $z\geq v_m$, $m=4,5,6$, then $$d>l\cdot a^{m-1.5}c^{m-0.5}$$ where $l\in\{0.243775,0.242245,0.240725\}$ respectively.
In the proof [@fil_xy4 Lemma 5] it has been shown that $$\begin{aligned}
\frac{c}{2x_0}(s-1)^{m-1}&<v_m&<cx_0s^{m-1},\\
\frac{c}{2y_1}(t-1)^{n-1}&<w_n&<cy_1t^{n-1}.\end{aligned}$$ We use that $bc>10^{10}$ for $d>d_+$ and $d= \frac{z^2-4}{c}$ to obtain desired inequalities.
We know a relation between $m$ and $n$ if $v_m=w_n$.
[@fil_xy4 Lemma 5]\[mnfilipin\] Let $\{a,b,c,d\}$ be $D(4)$-quadruple. If $v_m=w_n$ then $n-1\leq m\leq 2n+1$.
But we can also prove that a better upper bound holds using more precise argumentation and the fact from [@sestorka Lemma 8] that $c<7b^{11}$.
\[lema\_epsilon\_m\_n\] If $c>b^{\varepsilon}$, $1\leq \varepsilon< 12$, then $$m<\frac{\varepsilon+1}{0.999\varepsilon}n+1.5-0.4\frac{\varepsilon+1}{0.999\varepsilon}.$$
If $v_m=w_n$ then $$2.00634^{-1}(ac)^{-1/4}(s-1)^{m-1}<1.000011(bc)^{1/4}t^{n-1}.$$ Since $c>b>10^5$ we can easily check that $s-1>0.9968a^{1/2}c^{1/2}$ and $b^{1/2}c^{1/2}<t<1.0001b^{1/2}c^{1/2}$, so the previous inequality implies $$(s-1)^{m-3/2}<2.00956t^{n-0.5}<t^{n-0.4}.$$ On the other hand, the assumption $b<c^{1/\varepsilon}$ yields $t<1.0001c^{\frac{\varepsilon +1}{2\varepsilon}}$ and $$s-1>0.9968c^{1/2}>\left(0.99t \right)^{\frac{\varepsilon}{\varepsilon+1}}>t^{0.999\frac{\varepsilon}{\varepsilon+1}}.$$ So we observe that an inequality $$0.999(m-1.5)\frac{\varepsilon}{\varepsilon +1}<n-0.4$$ must hold, which proves our statement.
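For example, with $\varepsilon=3.377$ this bound reads $m<1.2975n+0.9811$, and with $\varepsilon=5.2$ it reads $m<1.1936n+1.0226$; both of these instances are used in the proofs below.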
We have the next Lemma from the results of [@ireg_pro Lemma 5] and [@sestorka Lemma 5].
\[donje na m i n\] If $v_m=w_n$ has a solution, then $6\leq m\leq 2n+1$ and $n\geq 7$, or the case $|z_0|<1.608a^{-5/14}c^{9/14}$ from Lemma \[lemma\_fil\_pocetni\] holds and either $6\leq m\leq 2n+1$ or $m=n=4$.
\[lema\_d\_vece\] Assume that $c\leq 0.243775a^{2.5}b^{3.5}$. If $z=v_m=w_n$ for some even $m$ and $n$ then $d>0.240725a^{4.5}c^{5.5}$.
Let us assume that $z_0\notin\left\{2,\frac{1}{2}(cr-st)\right\}$, i.e. there exists an irregular $D(4)$-quadruple $\{a,b,d_0,c\}$, $c>d_0$ but then Lemma \[lemaw4w5w6\] implies $c>0.243775a^{2.5}b^{3.5}$, a contradiction. So, we must have $z_0\in\left\{2,\frac{1}{2}(cr-st)\right\}$ and by Lemma \[donje na m i n\] we know that $\max\{m,n\}\geq 6$. Statement now follows from Lemma \[lemaw4w5w6\].
By using the improved lower bound on $d$ in an irregular quadruple from Lemma \[lemaw4w5w6\] we can prove the next result in the same way as [@fil\_xy4 Lemma 9].
If $v_{m}=w_{n}$ for some even $m$ and $n$ and $|z_0|\notin \left\{2,\frac{1}{2}(cr-st)\right\}$ then $|z_0|<1.2197b^{-5/14}c^{9/14}$.
Similarly to [@fm], by using Lemma \[granice\_fundamentalnih\] we can prove some upper and lower bounds on $c$ in terms of the smaller elements, depending on the value of $z_0$.
\[lema\_tau\] Set $$\tau=\frac{\sqrt{ab}}{r}\left(1-\frac{a+b+4/c}{c} \right), \quad (<1).$$ We have that
i) $|z_0|=\frac{1}{2}(cr-st)$ implies $c<ab^2\tau^{-4}$,
ii) $|z_1|=\frac{1}{2}(cr-st)$ implies $c<a^2b\tau^{-4}$,\[2.8drugi\]
iii) $|z_0|=t$ implies $c>ab^2$, \[2.8treci\]
iv) $|z_1|=s$ implies $c>a^2b$,
and \[2.8drugi\]) and \[2.8treci\]) cannot occur simultaneously when $d>d_+$.
The next lemma can easily be proved by induction.
\[lema\_nizovi\] Let $\{v_{z_0,m}\}$ denote a sequence $\{v_m\}$ with an initial value $z_0$ and $\{w_{z_1,n}\}$ denote a sequence $\{w_n\}$ with an initial value $z_1$. It holds that $v_{\frac{1}{2}(cr-st),m}=v_{-t,m+1}$, $v_{-\frac{1}{2}(cr-st),m+1}=v_{t,m}$ for each $m\geq 0$ and $w_{\frac{1}{2}(cr-st),n}=w_{-s,n+1}$, $w_{-\frac{1}{2}(cr-st),n+1}=w_{s,n}$ for each $n \geq 0$.
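A quick numerical illustration (our own sketch; the triple and helper below are not from the paper): for the $D(4)$-triple $\{a,b,c\}=\{1,5,96\}$ we have $r=3$, $s=10$, $t=22$ and $\frac{1}{2}(cr-st)=34$, and the sequence started at $z_0=34$ is the sequence started at $z_0=-t=-22$ shifted by one index, as the Lemma predicts.

```python
def v_sequence(s, c, z0, x0, count):
    # v_0 = z0, v_1 = (s*z0 + c*x0)/2, v_{m+2} = s*v_{m+1} - v_m
    v = [z0, (s * z0 + c * x0) // 2]
    while len(v) < count:
        v.append(s * v[-1] - v[-2])
    return v

a, b, c, s, t = 1, 5, 96, 10, 22
half_cr_st = (c * 3 - s * t) // 2                    # (cr - st)/2 = 34
seq_half = v_sequence(s, c, half_cr_st, 4, 10)       # x0 = 4 solves 96*x0^2 - 34^2 = 4*(96 - 1)
seq_minus_t = v_sequence(s, c, -t, 3, 11)            # x0 = 3 solves 96*x0^2 - 22^2 = 4*(96 - 1)
assert all(seq_half[m] == seq_minus_t[m + 1] for m in range(10))
```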
For the proof of Theorem \[teorem\_prebrojavanje\] we will use the previous Lemma. It is obvious that if we “shift” a sequence as in Lemma \[lema\_nizovi\], the initial term of the new sequence would not satisfy the bounds from Lemma \[granice\_fundamentalnih\], since the original sequence did. In the next Lemma we will prove some new lower bounds on $m$ and $n$ when $|z_0|=t$ and $|z_1|=s$, but without assuming the bounds on $z_0$ and $z_1$ from Lemma \[granice\_fundamentalnih\]. Since Filipin has proven that $n\geq 7$ in any case $(z_0,z_1)$ which appears when “shifting” sequences as in Lemma \[lema\_nizovi\], we can consider that bound already proven. Even though the following proof is analogous to the proof of [@cfm Lemma 2.6], there is more to consider in the $D(4)$ case, which is why we present the proof in detail.
\[prop\_m\_n\_9\] Let $\{a,b,c\}$ be a $D(4)$-triple, $a<b<c$, $b>10^5$ and $c>ab+a+b$. Let us assume that the equation $v_m=w_n$ has a solution for $m>2$ such that $m\equiv n\equiv 1 \ (\bmod \ 2)$, $|z_0|=t$, $|z_1|=s$, $z_0z_1>0$. Then $\min\{m,n\}\geq 9$.
It is easy to see that $v_1=w_1<v_3<w_3$. If we show that
i) $w_3<v_5<w_5<v_9$ and $v_7\neq w_5$,
ii) $v_7<w_7<v_{13}$ and $v_9\neq w_7\neq v_{11}$,
from Lemma \[mnfilipin\] we see that it leads to a conclusion that $\min\{m,n\}\geq 9$.
Let $(z_0,z_1)=(\pm t,\pm s)$. We derive that $$\begin{aligned}
w_3&=&\frac{1}{2}(cr\pm st)(bc+1)+cr\\
w_5&=&\frac{1}{2}(cr\pm st)(b^2c^2+3bc+1)+cr(bc+2)\\
w_7&=&\frac{1}{2}(cr\pm st)(b^3c^3+5b^2c^2+6bc+1)+cr(b^2c^2+4bc+3)\end{aligned}$$ and $$\begin{aligned}
v_5&=&\frac{1}{2}(cr\pm st)(a^2c^2+3ac+1)+cr(ac+2)\\
v_7&=&\frac{1}{2}(cr\pm st)(a^3c^3+5a^2c^2+6ac+1)+cr(a^2c^2+4ac+3)\\
v_9&=&\frac{1}{2}(cr\pm st)(a^4c^4+7a^3c^3+15a^2c^2+10ac+1)+\\&&cr(a^3c^3+6a^2c^2+10ac+4)\end{aligned}$$ $$\begin{aligned}
v_{11}&=&\frac{1}{2}(cr\pm st)(a^5c^5+9a^4c^4+28a^3c^3+35a^2c^2+15ac+1)+\\&&cr(a^4c^4+8a^3c^3+21a^2c^2+20ac+5)\\
v_{13}&=&\frac{1}{2}(cr\pm st)(a^6c^6+11a^5c^5+45a^4c^4+84a^3c^3+70a^2c^2+21ac+1)+\\&&cr(a^5c^5+10a^4c^4+36a^3c^3+56a^2c^2+35ac+6)\end{aligned}$$ Since $a<b$, $c>ab$, and sequences $v_m$ and $w_n$ are increasing, it is easy to see that $w_3<v_5<v_7$, $v_5<w_5<v_7<v_9<v_{11}$, $v_7<w_7$, $w_5<v_9$ and $w_7<v_{13}$. It remains to prove that $v_7\neq w_5$ and $v_9\neq w_7\neq v_{11}$.
From Lemma \[donje na m i n\] and the explanation preceding it, we consider $v_7\neq w_5$ already proven, but it is not hard to adapt the following argument to cover this case as well.
First, we want to prove that $v_9\neq w_7$. Let us assume contrary, that $v_9= w_7$. We have for $(z_0,z_1)=(\pm t,\pm s)$ that $$\begin{aligned}
&cr(a^4c^4+9a^3c^3+27a^2c^2+30ac-b^3c^3-7b^2c^2-14bc+2)=\\
&\mp st (a^4c^4+7a^3c^3+15a^2c^2+10ac-b^3c^3-5b^2c^2-6bc).\end{aligned}$$
a) **Case $z_0=-t$**\
Since $cr>st$ we have $$\begin{aligned}
&a^4c^4+9a^3c^3+27a^2c^2+30ac-b^3c^3-7b^2c^2-14bc+2<\\
&<a^4c^4+7a^3c^3+15a^2c^2+10ac-b^3c^3-5b^2c^2-6bc\end{aligned}$$ which leads to $$\label{v9w7_prva_njdk}a^3c<b^2.$$ Since $c>10^5$ we have $b>316a$. On the other hand, we easily see that $$cr-st>\frac{4c^2-4ac-4bc-16}{2cr}>\frac{2ab}{r}>1.99r,$$ which can be used to prove $$\begin{aligned}
v_9&<\frac{1}{2}(cr- st)(a^4c^4\left(2.0051+\frac{7}{ac} \right)\\&+15a^2c^2+10ac+1)+cr(6a^2c^2+10ac+4),\\
w_7&>\frac{1}{2}(cr- st)(b^3c^3+15a^2c^2+10ac+1)+cr(6a^2c^2+10ac+4).\end{aligned}$$ So $$\label{v9w7_druga} b^3<2.006a^4c.$$ Combining equations (\[v9w7\_prva\_njdk\]) and (\[v9w7\_druga\]) we see that $b<2.006a$, which is in a contradiction with $b>316a$.
b) **Case $z_0=t$**\
Since $cr>st$ we have $$\begin{aligned}
&a^4c^4+9a^3c^3+27a^2c^2+30ac-b^3c^3-7b^2c^2-14bc+2<\\
&<-a^4c^4-7a^3c^3-15a^2c^2-10ac+b^3c^3+5b^2c^2+6bc\end{aligned}$$ which leads to $$2a^4c^4+16a^3c^3+42a^2c^2+40ac+2<2b^3c^3+12b^2c^2+20bc.$$ If $16a^3c^3<12b^2c^2$ then $c<0.75\frac{b^2}{a^3}$. On the other hand, if $16a^3c^3\geq 12b^2c^2$ then $2a^4c^4<2b^3c^3$. In each case, inequality $$\label{v9w7_tplus_prva_njdk}a^4c<b^3$$ holds. Since $c>10^5$, $b>46a$ and since $c>ab$ we have $b>a^{5/2}$ and $c>a^{7/2}$. It is easily shown that $cr+st>632r^2$ and we can use it to see that $$\begin{aligned}
\frac{1}{2}(cr+ st)(a^4c^4+7a^3c^3)+cr\cdot a^3c^3&<\frac{1}{2}(cr+ st)a^4c^4\left(1+\frac{2}{632ar}+\frac{7}{ac} \right),\\\end{aligned}$$ and get an upper bound on $v_9$. Now, from $v_9=w_7$ we have $$\label{v9w7_tplus_druga_njdk}b^3<1.001a^4c.$$ Notice that $c>b^3 1.001^{-1}a^{-4}>0.999b^{7/5}>9.99\cdot 10^6$.
If we consider $v_9=w_7$ modulo $c^2$ and use the fact that $st(cr-st)\equiv 16 \ (\bmod \ c)$ it yields a congruence $$2r(cr-st)\equiv 16(6b-10a)\ (\bmod \ c).$$ Since $2r(cr-st)<4.01c$ and $16(6b-10a)<96b<96\cdot (1.001a^4c)^{1/3}<97c^{5/7}<c$, we have that one of the equations $$kc=2r(cr-st)-16(6b-10a),\quad k\in\{0,1,2,3,4\}$$ must hold.
If $k=0$, we have $2r(cr-st)=16(6b-10a)$. The inequality $2r(cr-st)>3.8r^2>3.8ab$ implies $96b>16(6b-10a)>3.8ab$ so $a\leq 25 $. Now, we have $$\begin{aligned}
cr-st&>\frac{2c^2-2ac-2bc-8}{cr}>\frac{2\frac{b^3}{25^4\cdot 1.001}-(2b+50+1)}{\sqrt{25b+4}}\\
&>\frac{b(0.00000255b^2-2)-51}{b^{1/2}\sqrt{26}}>0.0000005b^{5/2},\end{aligned}$$ so $16(6b-10a)>2\sqrt{ab+4}\cdot 0.0000005b^{5/2}$. For each $a\in[1,25]$, we get from this inequality a numerical upper bound on $b$ which is in a contradiction with $b>10^5$.
If $k\neq 0$, we have a quadratic equation in $c$ with possible solutions $$c_{\pm}=\frac{-B\pm\sqrt{B^2-4AE}}{2A}$$ where $A=r^2(16-4k)+k^2>0$, $B=-(32(2r^2-6)(6b-10a)+16r^2(a+b))<0$ and $E=64(4(6b-10a)^2-r^2)>0$.
If $k\leq 3$, $A> 4r^2$ and $c_{\pm}<\frac{-B}{A}<\frac{32\cdot2r^2+6b+16r\cdot32b}{4r^2}=100b$. Since $c>ab+a+b$, we have $a\leq 98$ and from $b^3<1.001a^4c<100.1a^4b$ we have $b<100.1^{1/2}a^2<96089$ which is in a contradiction with $b>10^5$.
It only remains to check the case $k=4$. In this case we express $b$ in terms of $a$ and $c$ and get $$b_{\pm}=\frac{-B\pm\sqrt{B^2+4AE}}{2A}$$ where $A=400ac-9216>0$, $B=c(16a^2-640a+832)+64a+30720$ and $E=16c^2+1216ac+25600a^2-256>0$.
We have $B^2+4AE>4AE>25536ac^3$, so it is not hard to see that $b_{-}<0$. Also, $B<0$ when $a\in[2,38]$, and $B>0$ otherwise. When $B>0$, $b_+<\frac{\sqrt{B^2+4AE}}{2A}<\frac{\sqrt{256a^4c^2+400ac^2(16c+2000a)}}{798ac}$ and since $a<c^{2/7}$, we get $b_+<\frac{84a^{1/2}c^{3/2}}{798ac}<0.106c^{1/2}a^{-1/2}$. On the other hand, $b_+>\frac{\sqrt{25536ac^3}-16a^2c}{800ac}>0.18c^{1/2}a^{-1/2}$, which is a contradiction with the previous inequality. In the last case, when $B<0$, we have $|B|<7232c$ and get similar contradiction.
Now it only remains to show that $v_{11}\neq w_7$. Let us assume contrary, that $v_{11}=w_7$. We have for $(z_0,z_1)=(\pm t,\pm s)$ that $$\begin{aligned}
&cr(a^5c^5+11a^4c^4+44a^3c^3+77a^2c^2+55ac-b^3c^3-7b^2c^2-14bc+4)=\\
&\mp st (a^5c^5+9a^4c^4+28a^3c^3+35a^2c^2+15ac-b^3c^3-5b^2c^2-6bc).\end{aligned}$$
a) **Case $z_0=-t$**\
Since $cr>st$ we have $$\begin{aligned}
&a^5c^5+11a^4c^4+44a^3c^3+77a^2c^2+55ac-b^3c^3-7b^2c^2-14bc+4<\\
&<a^5c^5+9a^4c^4+28a^3c^3+35a^2c^2+15ac-b^3c^3-5b^2c^2-6bc\end{aligned}$$ which leads to $$a^4c^2<b^2.$$ But, this is a contradiction with $c>ab$ and $a\geq 1$.
b) **Case $z_0=t$**\
Here we have $$\begin{aligned}
&a^5c^5+11a^4c^4+44a^3c^3+77a^2c^2+55ac-b^3c^3-7b^2c^2-14bc+4<\\
&<-a^5c^5-9a^4c^4-28a^3c^3-35a^2c^2-15ac+b^3c^3+5b^2c^2+6bc\end{aligned}$$ which gives us an inequality $a^5c^2<b^3$. Together with $ab<c$, we get $$b>a^{7/3},\quad b>2154a,\quad c>a^{10/3}.$$ If we consider the equality $v_{11}=w_7$ modulo $c^2$, and use the fact that $st(cr-st)\equiv 16\ (\bmod \ c)$ we have that a congruence $$4r(cr-st)\equiv 16(6b-15a) \ (\bmod \ c)$$ must hold. On the other hand, from $v_{11}=w_7$, as in [@cfm], we can derive that $$b^3<1.001a^5c^2.$$ Also, since we can show that $4r(cr-st)<8.02c$ and $16(6b-15a)<2.1c$, we have $$kc=4r(cr-st)-16(6b-15a),\quad k\in\{-2,-1,0,1,2,\dots,8\}.$$
In the case $k=0$, we have $7.6ab<4r(cr-st)=16(6b-15a)<96b$ which means that $a\leq 12$. We can express $c$ as a solution of quadratic equation and get $$c_{\pm}=\frac{-B\pm \sqrt{B^2-4AD}}{2A}$$ where $A=r^2$, $B=-(13b-29a)r^2$ and $D=4((6b-15a)^2-r^2)$. It is easy to see that $$c_{\pm}<\frac{|B|}{A}<\frac{13br^2}{r^2}=13b$$ so $b<1.001a^5\cdot 13^2=169.169a^5$ and we get $a\geq 4$ since $b>10^5$. For $D(4)$-pairs in the range $a\in[4,12]$, $b\in[10^5,169.169\cdot a^5]$, we check by computer explicitly that there is no integer $c_{\pm}$ given by this equation.
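The explicit computer check mentioned here for the case $k=0$ can be reproduced along the following lines (a brute-force sketch of our own, not the authors' code; it runs in well under a second and is expected to print an empty list, in agreement with the claim in the text).

```python
from math import isqrt

def integer_roots(A, B, C):
    """Integer solutions c of A*c^2 + B*c + C = 0, if any."""
    disc = B * B - 4 * A * C
    if disc < 0:
        return []
    s = isqrt(disc)
    if s * s != disc:
        return []
    return sorted({(-B + sg) // (2 * A) for sg in (s, -s) if (-B + sg) % (2 * A) == 0})

found = []
for a in range(4, 13):
    b_max = int(169.169 * a**5)
    r = isqrt(a * 10**5)                       # run over D(4)-pairs {a, b} via r^2 = ab + 4
    while (r * r - 4) // a <= b_max:
        if (r * r - 4) % a == 0:
            b = (r * r - 4) // a
            if b >= 10**5:
                A = r * r                      # quadratic A*c^2 + B*c + D from the text
                B = -(13 * b - 29 * a) * A
                D = 4 * ((6 * b - 15 * a) ** 2 - A)
                found += [(a, b, c) for c in integer_roots(A, B, D) if c > 0]
        r += 1
print(found)
```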
In the case where $k\neq 0$, we observe equality $kc=4r(cr-st)-16(6b-15a)$ and express $c$ as $$c_{\pm}=\frac{-B\pm\sqrt{B^2-4AD}}{2A},$$ where $A=(4r^2-k)^2-4r^2ab>12a^2b^2$, $B=-(32(4r^2-k)(6b-15a)-16r^2(a+b))>-800r^2b$ and $D=64(4(6b-15a)^2-r^2)>0$. We see that $$c_{\pm}<\frac{|B|}{A}<\frac{800r^2b}{12a^2b^2}<\frac{400}{3a}<134$$ which is in contradiction with $c>b>10^5.$
\[korolar\] Let $c>ab+a+b$, $v_m=w_n$, $n>2$, and let one of the cases $(ii)-(iv)$ or case $(i)$ where $|z_0|=\frac{1}{2}(cr-st)$ from Lemma \[lemma\_fil\_pocetni\] hold. Then $\min\{m,n\}\geq 8$.
It follows from Proposition \[prop\_m\_n\_9\] and Lemma \[lema\_nizovi\].
Proof of Theorem \[teorem\_izbacen\_slucaj\]
============================================
From now on, when we assume that $z=v_m=w_n$ has a solution for some $D(4)$-triple $\{a,b,c\}$, we are usually considering only solutions where $d=\frac{z^2-4}{c}>d_+$.
\[lema\_uvodna\_teorem\_izbacen\] Let us assume that $c>0.243775a^2b^{3.5}$ and $z=v_m=w_n$ has a solution for $m$ and $n$, $n\geq 4$. Then $m\equiv n \ (\bmod \ 2)$ and
i) if $m$ and $n$ are even and $b\geq 2.21a$, then $$n>0.45273b^{-9/28}c^{5/28},$$ and if $b<2.21a$ then $$n>\min \{0.35496a^{-1/2}b^{-1/8}c^{1/8},0.177063b^{-11/28}c^{3/28}\};$$
ii) if $m$ and $n$ are odd, then $$n>0.30921b^{-3/4}c^{1/4}.$$
Since from $b>10^5$ and $c>0.243775a^2b^{3.5}$ we have $c>ab^2$ and $\tau^{-4}<b/a$, cases $i)$ and $ii)$ from Lemma \[lema\_tau\] cannot hold. So, we see that only options from Lemma \[lemma\_fil\_pocetni\] are $i)$, when $|z_0|=2$ or $|z_0|<1.219b^{-5/14}c^{9/14}$, and $iv)$. In each case we have $m\equiv n \ (\bmod \ 2)$.
First, let us observe the case $|z_0|<1.219b^{-5/14}c^{9/14}$, $z_0=z_1$ and $m$ and $n$ even. Since $(z_0,x_0)$ satisfies an equality $$cx_0^2-az_0^2=4(c-a)$$ we have $$x_0^2\leq \frac{a}{c}\cdot 1.2197^2b^{-5/7}c^{9/7}+4\left(1-\frac{a}{c} \right)<1.7109ac^{2/7}b^{-5/7},$$ where we have used the estimate $c^2b^{-5}>0.243775^2b^2>5.9426\cdot 10^8$. So, $$\label{gornja_xo}
x_0<1.30802a^{1/2}b^{-5/14}c^{1/7}.$$ Similarly, we get $$\label{gornja_y1}
y_1<1.21972b^{1/7}c^{1/7}.$$ From [@fil_xy4 Lemma 12] we have that the next congruence holds $$\label{kongruencija} az_0m^2-bz_1n^2\equiv ty_1n-sx_0m\ (\bmod \ c).$$
From $b>10^5$ and $c>0.243775b^{3.5}$ we have $c>b^{3.377}$, so we can use $\varepsilon=3.377$ in the inequality from Lemma \[lema\_epsilon\_m\_n\] and get $$m<1.2975n+0.9811.$$ This implies that the inequality $m<1.34n$ holds for every possibility $m$ and $n$ even except for $(m,n)=(6,4)$, which we will observe separately.
Now we observe the case where $b\geq 2.21a$ and let us assume contrary, that $n\geq 0.45273b^{-9/28}c^{5/28}$. Then from $c>b>10^5$ we have $$\begin{aligned}
am^2|z_0|&<a\cdot 1.34^2\cdot 0.45273^2 b^{-9/14}c^{5/14}\cdot 1.2197b^{-5/14}c^{9/14}\\
&<a\cdot 0.4489b^{-1}c<\frac{c}{4},\end{aligned}$$ $$\begin{aligned}
bn^2|z_0|&<b\cdot 0.45279^2\cdot b^{-9/14}c^{5/14}\cdot 1.2197b^{-5/14}c^{9/14}<\frac{c}{4},\end{aligned}$$ and from inequalities (\[gornja\_xo\]) and (\[gornja\_y1\]) we also have $$\begin{aligned}
ty_1n&<(bc+4)^{1/2}1.2197b^{1/7}c^{1/7}\cdot 0.45273b^{-9/28}c^{5/28}\\
&<\frac{2.21b^{1/2}c^{1/2}}{c^{19/28}b^{5/28}}\cdot \frac{c}{4}=\frac{2.21}{c^{5/28}b^{-9/28}}\cdot \frac{c}{4}<\frac{2.85}{a^{5/14}b^{17/56}}\frac{c}{4}<\frac{c}{4}, \end{aligned}$$ and similarly $$\begin{aligned}
sx_0m&<(ac+4)^{1/2}\cdot 1.34n\cdot x_0<0.79361ab^{-19/28}c^{23/28}<\frac{c}{4}.\end{aligned}$$ In the case $(m,n)=(6,4)$ we can prove the same final inequalities. So, from congruence (\[kongruencija\]) we see that an equality $$\label{eq1}
az_0m^2+sx_0m=bz_0n^2+ty_1n$$ must hold.
On the other hand, from equation $az_0^2-cx_0^2=4(a-c)$, since $|z_0|\neq 2$ in this case, and $c\mid (z_0^2-4)$ we have $z_0^2\geq c+4$. Let us assume that $z_0^2<\frac{5c}{a}$, then we would have $c(x_0^2-9)+4a<0$ and since $c>4a$ we must have $x_0=2$ and $|z_0|=2$, which is not our case. So, here we have $z_0^2\geq \max\{c+4,\frac{5c}{a}\}$.
Now, $$\begin{aligned}
0\leq \frac{sx_0}{a|z_0|}-1&=\frac{4x_0^2+4ac-4a^2}{a|z_0|(sx_0+a|z_0|)}\leq \frac{2x_0^2}{a^2z_0^2}+\frac{2ac-2a^2}{a^2z_0^2}\\
&<2\left(\frac{1}{ac}+4\left(1-\frac{a}{c} \right)\frac{1}{a^2z_0^2} \right)+\left( \frac{1}{z_0^2}\left(2\frac{c}{a}-2 \right)\right)\\
&<\frac{18}{5ac}+\left(\frac{2}{5}-\frac{2a}{5c}\right)\leq 0.000036+0.4=0.400036, \end{aligned}$$ and similarly $$0\leq \frac{ty_1}{b|z_0|}-1<0.00004.$$
When $z_0>0$, i.e. $z_0>2$, we have $$bz_0n(n+1)<bz_0n(n+\frac{ty_1}{bz_0})=az_0m(m+\frac{sx_0}{az_0})< az_0m(m+1.400036).$$
Since $m\geq 4$ we have from the previous inequality that $2.21n(n+1)<1.35001m^2<1.35001(1.2975n+0.9811)$ must hold, but then we get $n<1$, an obvious contradiction.
On the other hand, if $z_0<0$, i.e. $z_0<-2$, we similarly get $$\label{njdk:1}
am(m-1)>bn(n-1.00004).$$ Since, $m<1.2975n+0.9811$ and $b>2.21a$ we have $n\leq 6$. If $b\geq 2.67a$, we would have $n<4$. So, it only remains to observe the case $2.21a\leq b<2.67a$ and $n\leq6$. In that case we have $c>0.243775a^2c^{3.5}>0.03419b^{5.5}$, so we can put $\varepsilon=5.2$ in Lemma \[lema\_epsilon\_m\_n\] and get $m<1.1936n+1.0226$. Inserting in the inequality (\[njdk:1\]) yields that only the case $(m,n)=(4,4)$ remains, but it doesn’t satisfy equation (\[njdk:1\]).
Now, let us observe the case where $b<2.21a$. We again consider the congruence (\[kongruencija\]), which after squaring and using $z_0^2\equiv t^2 \equiv s^2 \equiv 4 \ (\bmod \ c)$ yields a congruence $$\label{kongr2}
4((am^2-bn^2)^2-y_1^2n^2-x_0^2m^2)\equiv -2tsx_0y_1mn\ (\bmod \ c).$$ Let us denote $C=4((am^2-bn^2)^2-y_1^2n^2-x_0^2m^2)$, and (\[kongr2\]) multiplied by $s$ and by $t$ respectively shows that $$\begin{aligned}
Cs&\equiv& -8tx_0y_1mn \ (\bmod \ c)\label{cs}\\
Ct&\equiv& -8sx_0y_1mn \ (\bmod \ c).\label{ct}\end{aligned}$$
Now, assume that $n\leq \min \{0.35496a^{-1/2}b^{-1/8}c^{1/8}, 0.177063b^{-11/28}c^{3/28}\}.$ Then also $n\leq 0.45273b^{-9/28}c^{5/28}$ holds, so we again have an equality in the congruence (\[kongruencija\]), i.e. $$az_0m^2+sx_0m=bz_0n^2+ty_1n.$$ It also holds $x_0^2<y_1^2<1.4877b^{2/7}c^{2/7}.$ Since $b<2.21a$ we have $c>0.04991b^{5.5}$so we can take $\varepsilon=5.239$ in Lemma \[lema\_epsilon\_m\_n\] and we get $m<1.1921n+1.0232$, and since we know $m,n\geq 4$ and $m$ and $n$ even, we also have $m<1.34n$ from this inequality. This yields $$\begin{aligned}
|Cs|<|Ct|&=|4t((am^2-bn^2)^2-(y_1^2n^2+x_0^2m^2))|<\max\{4tb^2m^4,8ty_1^2m^2\}\\
&<\max\{12.896718b^2n^2\sqrt{bc+4},21.370513b^{2/7}c^{2/7}n^2\sqrt{bc+4}\}.\end{aligned}$$ On the other hand, we have from our assumption on $n$ that $$12.896718b^2n^2\sqrt{bc+4}<12.896718\left(\frac{b}{a}\right)^2 0.35496^4\cdot 1.000001c^{1/2}<c,$$ and $$21.370513b^{2/7}c^{2/7}n^2\sqrt{bc+4}<21.370513\cdot0.177063^2\cdot 1.000001c<c,$$ so, $|Cs|<|Ct|<c$. On the other hand, $8sx_0y_1mn<8tx_0y_1mn$ and $$\begin{aligned}
8tx_0y_1mn&<8ty_1^2\cdot 1.34n^2\\
&<8\cdot 1.000001b^{1/2}c^{1/2} 1.4877b^{2/7}c^{2/7}\cdot 1.34\cdot 0.177063^2b^{-11/14}c^{3/14}\\
&<0.5c<c,\end{aligned}$$ So, in congruences (\[cs\]) and (\[ct\]) we can only have $$Cs=kc-8tx_0y_1mn,\quad Ct=lc-8sx_0y_1mn,\quad k,l\in\{0,1\}.$$ If $k=l=0$, we would have $s=t$, which is not possible. When $k=l=1$, we get $c=8(t+s)x_0y_1mn<0.5c+0.5c<c$, a contradiction. In the case $k=0$ and $l=1$ we get $cs=8(s^2-t^2)x_0y_1mn<0$, and in the case $k=1$ and $l=0$, we have $ct=8(t^2-s^2)x_0y_1mn$, so $c(t-s)<8(t^2-s^2)x_0y_1mn$, which leads to a contradiction as in the case $k=l=1$.
So, $n> \min \{0.35496a^{-1/2}b^{-1/8}c^{1/8},0.177063b^{-11/28}c^{3/28}\}$ must hold.
It remains to consider the case when $m$ and $n$ are odd. In this case, the congruences from [@fil\_xy4 Lemma 3] hold: $$\begin{aligned}
\pm 2t(am(m+1)-bn(n+1))&\equiv& 2rs(n-m)\ (\bmod \ c),\label{kong3}\\
\pm 2s(am(m+1)-bn(n+1))&\equiv& 2rt(n-m)\ (\bmod \ c).\label{kong4}\end{aligned}$$ Let us assume that $n\leq 0.30921b^{-3/4}c^{1/4}$. In this case we have $m<1.2975n+0.9811$ and since $m\geq n\geq 5$ both odd we also have $m<1.47n$. Notice that $2t(am(m+1)-bn(n+1))<2tbm(m+1)$ holds. Also, from $c>0.243775b^{3.5}>7.7\cdot 10^{16}$ we have $b<0.243775^{-2/7}c^{2/7}$. So it suffice to observe that $$\begin{aligned}
2tbm(m+1)&<2\sqrt{bc}\cdot 1.000001b\cdot 1.21m^2\\
&<5.22944b^{3/2}c^{1/2}<5.22944(0.243775^{-2/7}c^{2/7})^{3/2}c^{1/2}\\
&<7.82711c^{13/14}<0.5c\\
2rt(m-n)&<2\sqrt{ab}\left(1+\frac{1}{\sqrt{ab}}\right)\sqrt{bc}\left(1+\frac{1}{\sqrt{bc}}\right)(0.2962n+0.85194)^2\\
&<2a^{1/2}bc^{1/2}\cdot 1.00318\cdot 0.51^2 \cdot 0.30921^2 b^{-3/2}c^{1/2}\\
&<0.0499(a/b)^{1/2}c\end{aligned}$$ which means that we have equalities in the congruences (\[kong3\]) and (\[kong4\]), and this implies $$\pm \frac{2rs}{t}(n-m)=\pm \frac{2rt}{s}(n-m).$$ Since $s\neq t$, the only possibility is $n=m$, but then $2t(am(m+1)-bn(n+1))=0$ implies $a=b$, a contradiction. So, our assumption on $n$ was wrong and $n> 0.30921b^{-3/4}c^{1/4}$ when $n$ and $m$ are odd.
Various altered versions of Rickert’s theorem from [@rickert], and results derived from it, are often used when considering problems on $D(1)$-$m$-tuples and $D(4)$-$m$-tuples. In this paper we will use one of the results from [@bf\_parovi] and we will also give, without proof since it is similar to the proofs of the other versions, a new version which improves this result in some cases.
[@bf_parovi Lemmas 6 and 7]\[lema\_rickert\] Let $\{a,b,c,d\}$ be a $D(4)$-quadruple, $a<b<c<d$ and $c>308.07a'b(b-a)^2a^{-1}$, where $a'=\max\{4a,4(b-a)\}$. Then $$n<\frac{2\log(32.02aa'b^4c^2)\log (0.026ab(b-a)^{-2}c^2)}{\log(0.00325a(a')^{-1}b^{-1}(b-a)^{-2}c)\log(bc)}.$$
Combining results from [@bf_parovi Lemma 7] and [@cf Lemma 3.3] we can prove a generalization of [@nas2 Lemma 7].
\[lema\_rickert2\] Let $\{a,b,c,d\}$ be a $D(4)$-quadruple, $a<b<c<d$, $b>10^5$ and $c>59.488a'b(b-a)^2a^{-1}$, where $a'=\max\{4a,4(b-a)\}$. If $z=v_m=w_n$ for some $m$ and $n$ then $$n<\frac{8\log(8.40335\cdot 10^{13} a^{1/2}(a')^{1/2}b^2c)\log (0.20533a^{1/2}b^{1/2}(b-a)^{-1}c)}{\log(bc)\log(0.016858a(a')^{-1}b^{-1}(b-a)^{-2}c)}.$$
\[propozicija\_d+\] Let $\{a,b,c,d\}$ be a $D(4)$-quadruple such that $a<b<c<d$ and that equation $z=v_m=w_n$ has a solution for some $m$ and $n$. If $b\geq 2.21a$ and $c>\max\{0.24377a^2b^{3.5},2.3b^5\}$ or $b<2.21a$ and $c>1.1b^{7.5}$ then $d=d_+$.
Let us assume that $d>d_+$. Since $c>0.24377a^2b^{3.5}$ in both cases we can use Lemma \[lema\_uvodna\_teorem\_izbacen\], and as in the proof of that Lemma, we have that $m$ and $n$ have the same parity. Also, since in any case $c>2.3b^5>308.07a'b(b-a)^2a^{-1}$ we can use Lemmas \[lema\_rickert\] and \[lema\_rickert2\], but Lemma \[lema\_rickert\] will give better results in these cases.
Let us assume that $m$ and $n$ are even and $b\geq 2.21a$. Then we have $n>0.45273b^{-9/28}c^{5/28}$, $a'=\max\{4a,4(b-a)\}=4(b-a)$, $a\leq \frac{b}{2.21}$ and $b>b-a>\frac{1.21}{2.21}b$. We estimate $$\begin{aligned}
32.02aa'b^4c^2&=32.02a\cdot4(b-a)b^4c^2<57.955b^6c^2,\\
0.026ab(b-a)^{-2}c^2&<\frac{0.026}{2.21}b^2\frac{2.21^2}{1.21^2}b^{-2}c^2<0.0393c^2,\\
0.00325a(a')^{-1}b^{-1}(b-a)^{-2}c&>0.00325\cdot4^{-1}(b-a)^{-3}b^{-1}c>0.0008125b^{-4}c.\end{aligned}$$ Now, from Lemma \[lema\_rickert\] we have an inequality $$0.45273b^{-9/28}c^{5/28}<\frac{2\log(57.955b^6c^2)\log(0.0393c^2)}{\log(bc)\log(0.0008125b^{-4}c)},$$ where the right hand side is decreasing in $c$ for $b>0$, $c>2.3b^5$ so we can observe the inequality in which we have replaced $c$ with $2.3b^5$ which gives us $b<19289$, a contradiction with $b>10^5.$
Let us observe the case when $m$ and $n$ are even and $2a<b<2.21a$. Then $a'=4(b-a)$, $\frac{b}{2}<b-a<\frac{1.21}{2.21}b$. Denote $$\begin{aligned}
F\in&\{0.35496a^{-1/2}b^{-1/8}c^{1/8},0.177063b^{-11/28}c^{3/28}\}\\&>\{0.50799b^{9/16},0.17888b^{23/56}\}.\end{aligned}$$ Then by Lemma \[lema\_rickert\] $$F<\frac{2\log(35.0627b^6c^2)\log(0.052c^2)}{\log(bc)\log(0.0022397b^{-3}c)},$$ and the right hand side of this inequality is decreasing in $c$ for $b>0$, $c>1.1b^{7.5}$, and for each possibility for $F$ we get $b<722$ and $b<81874$ respectively, a contradiction in either case.
In the case when $m$ and $n$ are even and $b<2a$, we have $a'=4a$ and $57<b-a<\frac{b}{2}$ and with $F$ defined as before, we have $F>\{0.35921b^{9/16},0.17888b^{23/56}\}$ and $$F<\frac{2\log(128.08b^6c^2)\log(0.0000081b^2c^2)}{\log(bc)\log(0.00325b^{-3}c)}.$$ Again, right hand side is decreasing in $c$ for $c>1.1b^{7.5}$. We get $b<1396$ for the first choice for $F$, and $b<98413$ for the second, again, a contradiction.
It remains to consider the case when $m$ and $n$ are odd. If $b\geq 2a$ we have $a'=4(b-a)$ and $c>2.3b^5$, so similarly as before $$\begin{aligned}
32.02aa'b^4c^2&<64.04b^6c^2,\\
0.026ab(b-a)^{-2}c^2&<0.052c^2,\\
0.00325a(a')^{-1}b^{-1}(b-a)^{-2}c&>0.0008125b^{-4}c.\end{aligned}$$ In this case we have $n>0.30921b^{-3/4}c^{1/4}$ so by Lemma \[lema\_rickert\] we observe an inequality $$0.30921b^{-3/4}c^{1/4}<\frac{2\log(64.04b^6c^2)\log(0.052c^2)}{\log(bc)\log(0.0008125b^{-4}c)},$$ and since the right hand side is decreasing in $c$ for $c>2.3b^5$ we get $b<97144$, a contradiction.
If $b< 2a$ and $c>1.1b^{7.5}$ we observe $$0.30921b^{-3/4}c^{1/4}<\frac{2\log(128.08b^6c^2)\log(0.0000081b^2c^2)}{\log(bc)\log(0.00325b^{-3}c)}$$ and since the right hand side is decreasing in $c$ for $c>1.1b^{7.5}$ we get $b<48$, a contradiction.
Let us assume that $d=d_+$. If $n\geq 3$ then $$\begin{aligned}
z&\geq w_3> \frac{c}{2y_1}(t-1)^2>\frac{c}{2.000022\sqrt[4]{bc}}\cdot 0.999bc\\
&>0.499b^{3/4}c^{7/4}>0.499bc^{6/4}>157bc\end{aligned}$$ where we have used Lemma \[granice\_fundamentalnih\] and $bc>10^{10}$ from Lemma \[b10na5\]. On the other hand, when $d=d_+$ we have $z=\frac{1}{2}(cr+st)<cr<cb$, which is an obvious contradiction. So, when $d=d_+$, we must have $n\leq 2$. Also, since $a<b<c<d$, from the proof of [@fil_xy4 Lemma 6] we see that when $n\leq 2$, only possibility is $d=d_+$ when $(m,n;z_0,z_1)\in\{ (1,1;t,s),(1,2;t,\frac{1}{2}(st-cr)),(2,1;\frac{1}{2}(st-cr),s),(2,2;\frac{1}{2}(st-cr),\frac{1}{2}(st-cr))\}$.
Let us assume that $d>d_+$. Then from Lemma \[donje na m i n\] we have $n\geq 4$. Here we only observe a possibility when $m$ and $n$ are even and $z_0=z_1\notin \left\{2,\frac{1}{2}(cr-st) \right\}$. Then from [@fil_xy4] we know that for $d_0=\frac{z_0^2-4}{c}<c$, $D(4)$-quadruple $\{a,b,d_0,c\}$ is irregular.
Denote $\{a_1,a_2,a_3\}=\{a,b,d_0\}$ such that $a_1<a_2<a_3$.
If $a_3\leq 0.243775a_1^{2.5}a_2^{3.5}$ holds, then by Lemma \[lema\_d\_vece\] we also have the inequality $c>0.240725a_1^{4.5}a_3^{5.5}\geq 0.240725a^{4.5}b^{5.5}$. If $b>2.21a$, since $b>10^5$, this inequality implies $c>76a^{4.5}b^{5.5}$. On the other hand, if $b\leq 2.21a$, we have $a\geq 45249$ and $c>0.240725\cdot 2.21^{-2}a^{2.5}b^{7.5}>10a^{2}b^{7.5}$. We see that in either case we can use Proposition \[propozicija\_d+\] and conclude $d=d_+$, i.e. we have a contradiction with the assumption $d>d_+$.
It remains to observe the case $a_3>0.243775a_1^{2.5}a_2^{3.5}$. If $b=a_2$ then from Lemma \[lemaw4w5w6\] we get $$c>0.249979b^{2.5}(0.243775b^{3.5})^{3.5}>0.00178799b^{14.75}$$ and it is easy to see that we again have the conditions of Proposition \[propozicija\_d+\] satisfied and can conclude that $d=d_+$ is the only possibility. On the other hand, if $b=a_3$, then $b>0.243775a_1^{2.5}a_2^{3.5}>0.243775\cdot 10^{17.5}$ since $a_2>10^5$, so by Lemma \[lema\_uvodna\_teorem\_izbacen\] we need to consider two cases. First, when $a_2\geq 2.21a_1$, then $n'>0.45273a_2^{-9/28}b^{5/28}>0.45273(0.243775^{-2/7}b^{2/7})^{-9/28}b^{5/28}>0.3976b^{17/196}>11$, so by Lemma \[lemaw4w5w6\] we have $z'>w_6$ and $c>0.249969a_1^{4.5}b^{5.5}$ which, as before, yields a contradiction when Proposition \[propozicija\_d+\] is applied. The second case is when $a_2<2.21a_1$. If $n'\geq 6$, we have $c>0.249969a_1^{4.5}b^{5.5}>b^{7.5}$, since $a_1>\frac{a_2}{2.21}>\frac{10^5}{2.21}$, and get the same conclusion as before. If $n'<6$, by Lemma \[donje na m i n\] we see that we have $m'$ and $n'$ even, so $n'=4$, and since $b>0.0335a_2^6>a_2^{5.7}$ from Lemma \[lema\_epsilon\_m\_n\] we have $m'<1.1766n'+1.0294$, i.e. $m'=4$. From the proof of [@sestorka Lemma 5] we know that $m'=n'=4$ can hold only when $|z_0'|<1.2197(b')^{-5/14}(c')^{9/14}$, i.e. we have a $0<d_0'<b$ such that $\{a,d_0',d_0,b\}$ is an irregular $D(4)$-quadruple, and we can use the same arguments to prove that such a quadruple cannot exist by Proposition \[propozicija\_d+\], or we have a new quadruple with $0<d_0''<d_0'<b$. Since this process cannot be repeated infinitely, for some of those quadruples in the finite process we must have $n>6$ and a contradiction with Proposition \[propozicija\_d+\].
The last assertion of Theorem \[teorem\_izbacen\_slucaj\] follows from Lemma \[lema\_tau\].
Proofs of Theorems \[teorem1.5\] and \[teorem1.6\]
==================================================
Analogously to [@nas Proposition 3.1] and [@nas2 Lemma 16], a more general result for the lower bound on $m$ can be proven.
\[lema\_donja\_m\_staro\] Let $\{a,b,c,d\}$ be a $D(4)$-quadruple with $a<b<c<d$ for which $v_{2m}=w_{2n}$ has a solution such that $z_0=z_1=\pm 2$ and $Ln\geq m \geq n \geq 2$ and $m\geq 3$, for some real number $L>1$.
Suppose that $a\geq a_0$, $b\geq b_0$, $c\geq c_0$, $b\geq \rho a$ for some positive integers $a_0$, $b_0$, $c_0$ and a real number $\rho>1$. Then $$m>\alpha b^{-1/2} c^{1/2},$$ where $\alpha$ is any real number satisfying both inequalities $$\label{njdba_1}
\alpha^2+(1+2b_0^{-1}c_0^{-1})\alpha\leq 1$$ $$\label{njdba_2}
4\left(1-\frac{1}{L^2} \right)\alpha^2+\alpha(b_0(\lambda + \rho^{-1/2})+2c_0^{-1}(\lambda + \rho^{1/2}))\leq b_0$$ with $\lambda=(a_0+4)^{1/2}(\rho a_0+4)^{-1/2}$.\
Moreover, if $c^\tau \geq \beta b$ for some positive real numbers $\beta$ and $\tau$ then $$\label{njdba_3}
m>\alpha \beta^{1/2}c^{(1-\tau)/2}.$$
\[lema\_donja\_m\] Let us assume that $c>b^4$ and $z=v_m=w_n$ has a solution for some positive integers $m$ and $n$. Then $m\equiv n \ (\bmod \ 2)$ and $n>0.5348b^{-3/4}c^{1/4}$.
As in the proof of Lemma \[lema\_uvodna\_teorem\_izbacen\] we can see that $m\equiv n \ (\bmod \ 2)$ when $c>b^4$, and when $m$ and $n$ are even, the only possibility is $|z_0|=2$. By Theorem \[teorem\_izbacen\_slucaj\] we need to consider two cases.
1) Case $m$ and $n$ are even, $|z_0|=|z_1|=2$.\
From Lemma \[lema\_epsilon\_m\_n\] we have $m<1.252n+0.9995$, and since from Lemma \[donje na m i n\] we have $m\geq 6$ we also have $n\geq 6$ and $m<1.4n$. Using Lemma \[lema\_donja\_m\_staro\] yields $m>0.4999998b^{-1/2}c^{1/2}$, i.e. $$n>0.35714b^{-1/2}c^{1/2}.$$
2) Case $m$ and $n$ odd, $|z_0|=t$, $|z_1|=s$.\
Congruences (\[kong3\]) and (\[kong4\]) from the proof of Lemma \[lema\_uvodna\_teorem\_izbacen\] hold. Let us assume contrary, that $n<0.5348b^{-3/4}c^{1/4}$. Then $$\begin{aligned}
2t|am(m+1)-bn(n+1)|&<2tbn^2<2.00004b^{3/2}c^{1/2}n^2<0.57204c,\\
2rt(m-n)&<0.8rtn<0.800032bc^{1/2}n<0.42786c,\end{aligned}$$ which means that we have equality in those congruences, so a contradiction can be shown as in the proof of Lemma \[lema\_uvodna\_teorem\_izbacen\].
Let us assume that $d>d_+$. We prove this as in Proposition \[propozicija\_d+\].
i) Case $b<2a$ and $c\geq 890b^4$.\
Since $a'=4a$ and $b>10^5$, inequality $$308.07a'b(b-a)^2a^{-1}<1233b^3<b^4<c$$ holds, so we can use Lemma \[lema\_rickert\] and Lemma \[lema\_donja\_m\] and observe inequality $$0.5348b^{-3/4}c^{1/4}<\frac{2\log(128.08b^6c^2)\log(0.0000081b^2c^2)}{\log(bc)\log(0.00325b^{-3}c)}$$ and since the right hand side is decreasing in $c$ for $c>890b^4$ we get $b<99887$ a contradiction with $b>10^5$.
ii) Case $2a\leq b\leq 12a$ and $c\geq 1613b^4$.\
Here we have $a'=4(b-a)$ and $\frac{b}{2}\leq b-a\leq \frac{11}{12}b$, so $$308.07a'b(b-a)^2a^{-1}<11400b^3<b^4<c.$$ We observe an inequality $$0.5348b^{-3/4}c^{1/4}<\frac{2\log(58.71b^6c^2)\log(0.052c^2)}{\log(bc)\log(0.000087903b^{-3}c)}$$ which, after using that the right hand side is decreasing in $c$ for $c>1613b^4$, yields $b<99949$, a contradiction.
iii) Case $b>12a$ and $c\geq 39247 b^4$.\
Let us first assume that $c\geq 52761 b^4$. Here we have $a'=4(b-a)$ and $\frac{b}{12}< b-a<b$, so $$308.07a'b(b-a)^2a^{-1}<1233b^4<c.$$ Here we use Lemma \[lema\_rickert2\] to obtain inequality $$0.5348b^{-3/4}c^{1/4}<\frac{8\log(8.40335\cdot 10^{13}b^3c)\log(0.002579c^2)}{\log(bc)\log(0.0042145b^{-4}c)}$$ which, after using that the right hand side is decreasing in $c$ for $c>52761b^4$, yields $b<99998$, a contradiction.
Now, assume that $39247b^4<c<52761 b^4$. We can modify the method in the following way. For $a=1$ we only have to notice that the right hand side of the inequality in Lemma \[lema\_rickert2\] is decreasing in $c$, insert the lower bound on $c$, and calculate an upper bound on $b$ from the inequality. We get $b<99994$, a contradiction. For $a\geq 2$, we modify the estimate $0.016858a(a')^{-1}b^{-1}(b-a)^{-2}c>0.0042145b^{-4}ca_0$, where $a_0=42$, and get $b<73454$, a contradiction.
\[lema\_prethodi\_tm16\] If $\{a,b,c,d\}$ is a $D(4)$-quadruple with $a<b<c<d_+<d$ then $$d>\min\{0.249965b^{5.5}c^{6.5},0.240725a^{4.5}c^{5.5}\}.$$
From [@ireg_pro Lemma 5] and Theorem \[teorem\_izbacen\_slucaj\] we have that $m\geq 6$ or $n\geq 7$ so Lemma \[lemaw4w5w6\] implies $d>0.249965b^{5.5}c^{6.5}$ or $d>0.240725a^{4.5}c^{5.5}$.
Let $\{a,b,c,d\}$ be a $D(4)$-quadruple such that $a<b<c<d_+<d$ and let us assume, on the contrary, that there exists $e<c$ such that $\{e,a,b,c\}$ is an irregular $D(4)$-quadruple. Then by Lemma \[lema\_prethodi\_tm16\] $c>0.240725a'^{4.5}b^{5.5}>0.240725\cdot (10^5)^{1.5}b^4>52761 b^4$, where $a'=\min\{a,e\}\geq 1$, or $c>0.249965a^{5.5}b^{6.5}>52761 b^4$. So, by Theorem \[teorem1.5\], $\{a,b,c,d\}$ must be a regular quadruple, which is a contradiction.
Proof of Theorem \[teorem\_prebrojavanje\]
==========================================
In this section we aim to split our problem into several parts. We will consider separately the case when the triple $\{a,b,c\}$ is regular, i.e. $c=a+b+2r$, and the case when it is not regular, so that $c>ab+a+b$. In the latter case, we will consider solutions of the equation $z=v_m=w_n$ without assuming that the inequalities from Lemma \[granice\_fundamentalnih\] hold when $|z_0|\neq2$. The Lemmas of this section will therefore usually also address separately the case when $c>ab+a+b$, more specifically, only the subcase when $(|z_0|,|z_1|)=(t,s)$ and $z_0z_1>0$, which can then be used to transfer the results to all the other cases, except the case $|z_0|=2$, by using Lemma \[lema\_nizovi\].
In the following pages, we will first introduce results concerning linear forms in three logarithms, which will give us that there are at most $3$ possible extensions of a triple to a quadruple for a fixed fundamental solution. Then we will use Laurent's theorems to give further technical tools and finally prove Theorem \[teorem\_prebrojavanje\].
This result has already been explained in the proof of Theorem \[teorem\_izbacen\_slucaj\]. We state it again explicitly since we find it important to emphasize when $d=d_+$ is achieved. The statement of this Lemma follows the notation and cases of Theorem \[teorem\_izbacen\_slucaj\].
\[regularna\_rjesenja\] Let $\{a,b,c\}$ be a $D(4)$-triple such that $z=v_m=w_n$ has a solution $(m,n)$ for which $d=d_+=\frac{z^2-4}{c}$. Then only one of the following cases can occur:
i) $(m,n)=(2,2)$ and $z_0=z_1=\frac{1}{2}(st-cr)$,
ii) $(m,n)=(1,2)$ and $z_0=t$, $z_1=\frac{1}{2}(st-cr)$,
iii) $(m,n)=(2,1)$ and $z_0=\frac{1}{2}(st-cr)$, $z_1=s$,
iv) $(m,n)=(1,1)$ and $z_0=t$, $z_1=s$.
Let $\{a,b,c\}$ be a $D(4)$ triple. We define and observe $$\label{lambda_linearna}
\Lambda=m\log \xi-n\log\eta+\log\mu,$$ a linear form in three logarithms, where $\xi=\frac{s+\sqrt{ac}}{2}$, $\eta=\frac{t+\sqrt{bc}}{2}$ and $\mu=\frac{\sqrt{b}(x_0\sqrt{c}+z_0\sqrt{a})}{\sqrt{a}(y_1\sqrt{c}+z_1\sqrt{b})}$. This linear form and its variations were already studied before, for example in [@fil_xy4]. By Lemma 10 from that paper we know that $$\label{nejednakost_za_lambda}
0<\Lambda<\kappa_0\left(\frac{s+\sqrt{ac}}{2}\right)^{-2m},$$ where $\kappa_0$ is a coefficient which is defined in the proof of the Lemma with $$\label{kapa0}
\kappa_0=\frac{(z_0\sqrt{a}-x_0\sqrt{c})^2}{2(c-a)}.$$
\[lemma\_kappa\_linearna\] Let $(m,n)$ be a solution of the equation $z=v_m=w_n$ and assume that $m>0$ and $n>0$. Then $$0<\Lambda<\kappa\xi^{-2(m-\delta)},$$ where
i) $(\kappa,\delta)=(2.7\sqrt{ac},0)$ if the inequalities from Lemma \[granice\_fundamentalnih\] hold,
ii) $(\kappa,\delta)=(6,0)$ if $|z_0|=2$,
iii) $(\kappa,\delta)=\left(1/(2ab),0\right)$ if $z_0=t$,
iv) $(\kappa,\delta)=\left(2.0001b/c,1\right)$ if $z_0=-t$, $b>10^5$ and $c>ab+a+b$.
From Lemma \[granice\_fundamentalnih\] we get $$0<x_0\sqrt{c}-z_0\sqrt{a}<2x_0\sqrt{c}<2.00634\sqrt[4]{ac}\sqrt{c}.$$ Inserting in the expression (\[kapa0\]) yields $\kappa_0<2.7\sqrt{ac}$.\
If $|z_0|=2$, equation (\[prva\_pelova\_s\_a\]) yields $x_0=2$. Using $c>4a$ gives us the desired estimate.\
If $c>a+b+2r$ and $z_0=t$ then $x_0=r$ and $\kappa_0=\frac{8(c-a)}{(t\sqrt{a}+r\sqrt{c})^2}<\frac{1}{2ab}$.\
In the last case we observe that $$\kappa_0\left(\frac{s+\sqrt{ac}}{2}\right)^{-2}<\frac{2r^2c}{c-a}\cdot \frac{1}{ac}=\frac{2b\left(1+\frac{4}{ab}\right)}{c\left(1-\frac{a}{c}\right)}<2.0001\frac{b}{c}$$ where we have used that $c>ab>10^5a$.
The next result is due to Matveev [@matveev] and provides a lower bound on the linear form (\[lambda\_linearna\]) which can be combined with the upper bound (\[nejednakost\_za\_lambda\]).
\[Matveev\] Let $\alpha_1, \alpha_2,\alpha_3$ be positive, totally real algebraic numbers which are multiplicatively independent. Let $b_1,b_2,b_3$ be rational integers with $b_3\neq 0$. Consider the following linear form $\Lambda$ in the three logarithms: $$\Lambda=b_1\log\alpha_1+b_2\log\alpha_2+b_3\log\alpha_3.$$ Define real numbers $A_1,A_2,A_3$ by $$A_j=\max\{D\cdot h(\alpha_j),|\log \alpha_j|\}\quad (j=1,2,3),$$ where $D=[\mathbb{Q}(\alpha_1,\alpha_2,\alpha_3):\mathbb{Q}]$. Put $$B=\max\left\{1,\max\{(A_j/A_3)|b_j|:\ j=1,2,3\}\right\}.$$ Then we have $$\log|\Lambda|>-C(D)A_1A_2A_3\log(1.5eD\log(eD)\cdot B)$$ with $$C(D)=11796480e^4D^2\log(3^{5.5}e^{20.2}D^2\log(eD)).$$
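For orientation, a direct numerical evaluation of this constant (our own computation) for degree $D=4$, the value relevant in the application below, gives:

```python
import math

D = 4
C = 11796480 * math.e**4 * D**2 * math.log(3**5.5 * math.e**20.2 * D**2 * math.log(math.e * D))
print(f"C({D}) = {C:.4e}")   # approximately 3.08e11
```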
Put $\alpha_1=\xi=\frac{s+\sqrt{ac}}{2}$, $\alpha_2=\mu=\frac{\sqrt{b}(x_0\sqrt{c}+z_0\sqrt{a})}{\sqrt{a}(y_1\sqrt{c}+z_1\sqrt{b})}$ and $\alpha_3=\eta=\frac{t+\sqrt{bc}}{2}$. We can easily show, similarly as in [@sestorka] or [@nas2], that $$\frac{A_1}{A_3}<\frac{\log(1.001\sqrt{ac})}{\log(\sqrt{bc})}<1.0001,$$ and using similar arguments as in [@fm] yields $h(\alpha_2)<\frac{1}{4}\log(P_2)$ where $$\begin{aligned}
P_2=\max\{b^2(c-a)^2,x_0^2abc^2,y_1^2abc^2,x_0y_1a^{1/2}b^{3/2}c^2\}.\end{aligned}$$ First, we observe the case $c=a+b+2r<4b$, so the case $iii)$ in Lemma \[lema\_tau\] cannot hold. Also, case $|z_1|=s$ and $|z_0|=\frac{1}{2}(cr-st)$ cannot occur since the same Lemma implies $c=a+b+2r>a^2b$, i.e. $a=1$, and this case can be eliminated with the same arguments as in [@fht]. Also, since $s=a+r$ and $t=b+r$, we have $\frac{1}{2}(cr-st)=2$, so the only case is $|z_0|=|z_1|=x_0=y_1=2$. As in [@fm] we easily get $$A_2=\max\{4h(\alpha_3),|\log(\alpha_3)|\}<4\log c.$$ Now we will observe the case when $c>ab+a+b$ and $(|z_0|,x_0,|z_1|,y_1)=(t,r;s,r)$ without assuming the inequalities from the Lemma \[granice\_fundamentalnih\]. Here we have $$x_0^2abc^2=y_1^2abc^2=(ab+4)abc^2<c^4$$ and $$x_0y_1a^{1/2}b^{3/2}c^2=r^2(ab)^{1/2}bc^2<c^{9/2},$$ so in this case $$A_2<\frac{9}{2}\log c.$$
From Theorem \[teorem1.5\] we know that $c<39247b^4$ so $A_3>2\log(0.071c^{5/8})>\frac{5}{4}\log(0.014c)$ together with Lemma \[mnfilipin\] implies $$\begin{aligned}
B&=\max\{\frac{A_1}{A_3}m,\frac{A_2}{A_3},n\}\\
&<\max\{m,5.724,m+1\}=\max\{5.724,m+1\}.\end{aligned}$$ Since by Lemma \[donje na m i n\] and Theorem \[teorem\_izbacen\_slucaj\] we know that $m\geq 6$, we can take $B<m+1$. Now we apply Theorem \[Matveev\] which proves the next result.
\[prop\_matveev\] For $m\geq 6$ we have $$\frac{m}{\log (38.92(m+1))}<2.7717\cdot 10^{12}\log \eta\log c.$$
If we have a $D(4)$-quadruple $\{a,b,c,d\}$ then $z=\sqrt{cd+4}$ is a solution of the equation $z=v_m=w_n$ for some $m$, $n$ and fundamental solutions $(z_0,z_1)$.
Let $\{a,b,c\}$ be a $D(4)$-triple and let us assume that there are $3$ solutions to the equation $z=v_m=w_n$ which belong to the same fundamental solution. Let’s denote them with $(m_i,n_i)$, $i=0,1,2$ and let us assume that $m_0<m_1<m_2$ and $m_1\geq 4$. Denote $$\Lambda_i=m_i\log \xi-n_i\log\eta+\log\mu.$$
As in [@fm], we use an idea of Okazaki from [@okazaki] to find a lower bound on $m_2-m_1$ in terms of $m_0$. We omit the proof since it is analogous to [@fm Lemma 7.1].
\[okazaki\] Assume that $v_{m_0}$ is positive. Then $$m_2-m_1>\Lambda_0^{-1}\Delta \log \eta,$$ where $$\Delta=\left|\begin{array}{cc} n_1-n_0 & n_2-n_1 \\ m_1-m_0 & m_2-m_1 \end{array} \right|>0.$$ In particular, if $m_0>0$ and $n_0>0$ then $$m_2-m_1>\kappa^{-1}(ac)^{m_0-\delta}\Delta \log \eta.$$
\[prop\_okazaki\_rjesenja\] Suppose that there exist $3$ positive solutions $(x_{(i)},y_{(i)},z_{(i)})$, $i=0,1,2$, of the system of Pellian equations (\[prva\_pelova\_s\_a\]) and (\[druga\_pelova\_s\_b\]) with $ z_{(0)} < z_{(1)} < z_{(2)}$ belonging to the same class of solutions and $c>b>10^5$. Put $z_{(i)} = v_{m_i} = w_{n_i}$. Then $m_0\leq 2$.
Let us assume, on the contrary, that $m_0>2$. From Lemmas \[donje na m i n\] and \[regularna\_rjesenja\] we know that $m_0\geq 6$, and $m_2>\kappa^{-1}(ac)^{6-\delta}\log\eta>(2.7\sqrt{ac})^{-1}(ac)^6\log\sqrt{bc}>c^5$ by Lemmas \[okazaki\] and \[lemma\_kappa\_linearna\]. After observing that the left hand side of the inequality in Proposition \[prop\_matveev\] is increasing in $m$, we get $$\frac{c^5}{\log(38.92(c^5+1))}<2.7717\cdot 10^{12}\log^2 c.$$ This inequality cannot hold for $c>10^5$, so we conclude that $m_0\leq 2$.
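Indeed, a quick numerical check (our own; natural logarithms assumed) shows how far apart the two sides of this inequality already are at $c=10^5$:

```python
import math

c = 10**5
lhs = c**5 / math.log(38.92 * (c**5 + 1))
rhs = 2.7717e12 * math.log(c)**2
print(f"{lhs:.3e} vs {rhs:.3e}")   # about 1.6e23 vs 3.7e14, so the inequality indeed fails
```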
The next result is proven as in [@nas2] with only some technical details changed, so we omit the proof.
\[padeova\_tm8.1\] Let $a,b,c$ be integers with $0<a<b<c$ and let $a_1=4a(c-b),$ $a_2=4b(c-a)$, $N=abz^2$, where $z$ is a solution of the system of Pellian equations (\[prva\_pelova\_s\_a\]) and (\[druga\_pelova\_s\_b\]). Put $u=c-b$, $v=c-a$ and $w=b-a$. Assume that $N\geq 10^5a_2$. Then the numbers $$\theta_1=\sqrt{1+a_1/N},\quad \theta_2=\sqrt{1+a_2/N}$$ satisfy $$\max\left\{\left|\theta_1-\frac{p_1}{q} \right|,\left|\theta_2-\frac{p_2}{q}\right|\right\}>\left(\frac{512.01a_1'a_2uN}{a_1}\right)^{-1}q^{-\lambda}$$ for all integers $p_1,p_2,q$ with $q>0$, where $a_1'=\max\{a_1,a_2-a_1\}$ and $$\lambda=1+\frac{\log\left(\frac{256a_1'a_2uN}{a_1}\right)}{\log\left(\frac{0.02636N^2}{a_1a_2(a_2-a_1)uvw}\right)}.$$
\[lema8.3\] Let $(x_{(i)},y_{(i)},z_{(i)})$ be positive solutions to the system of Pellian equations (\[prva\_pelova\_s\_a\]) and (\[druga\_pelova\_s\_b\]) for $i\in\{1,2\}$, and let $\theta_1$, $\theta_2$ be as in Proposition \[padeova\_tm8.1\] with $z=z_{(1)}$. Then we have $$\max\left\{\left|\theta_1-\frac{acy_{(1)}y_{(2)}}{abz_{(1)}z_{(2)}}\right|,\left|\theta_2-\frac{bcx_{(1)}x_{(2)}}{abz_{(1)}z_{(2)}}\right|\right\}<\frac{2c^{3/2}}{a^{3/2}}z_{(2)}^{-2}.$$
It is not hard to see, using Theorem \[padeova\], that we have $$\begin{aligned}
\left|\sqrt{1+\frac{a_1}{N}}-\frac{p_1}{q}\right|&=\frac{y_{(1)}\sqrt{c}}{bz_{(1)}z_{(2)}}|z_{(2)}\sqrt{b}-y_{(2)}\sqrt{c}|<\frac{4(c-b)\sqrt{c}y_{(1)}}{2b\sqrt{b}z_{(1)}z_{(2)}^2}<\frac{2c^{3/2}}{b^{3/2}}z_{(2)}^{-2}\end{aligned}$$ and similarly $$\left|\sqrt{1+\frac{a_2}{N}}-\frac{p_2}{q}\right|<\frac{2c^{3/2}}{a^{3/2}}z_{(2)}^{-2}.$$
\[prop\_odnos\_n1\_n2\] Suppose that $\{a,b,c,d_i\}$ are $D(4)$-quadruples with $a<b<c<d_1<d_2$ and $x_{(i)},y_{(i)},z_{(i)}$ are positive integers such that $ad_i+4=x_{(i)}^2$, $bd_i+4=y_{(i)}^2$ and $cd_i+4=z_{(i)}^2$ for $i\in\{1,2\}$.
i) If $n_1\geq 8$, then $$n_2<\frac{(n_1+1.1)(3.5205n_1+4.75675)}{0.4795n_1-3.82175}-1.1.$$ More specifically, if $n_1=8$ then $n_2<2628n_1$, and if $n_1\geq9$ then $n_2<83n_1$.
ii) If $c>ab+a+b$ and $(z_0,z_1)=(t,s)$, $z_0z_1>0$ and $n_1\geq 9$ then $$n_2<\frac{(n_1+1)(2.5147n_1+5.11467)}{0.4853n_1-3.85292}-1<60n_1.$$
Put $N=abz^2$, $p_1=acy_{(1)}y_{(2)}$, $p_2=bcx_{(1)}x_{(2)}$ and $q=abz_{(1)}z_{(2)}$ in Proposition \[padeova\_tm8.1\] and Lemma \[lema8.3\]. We get $$z_{(2)}^{2-\lambda}<4096a^{\lambda-3/2}b^{\lambda+3}c^{7/2}z_{(1)}^{\lambda+2}.$$ We use estimates for fundamental solutions from Lemma \[granice\_fundamentalnih\] and the inequality from the proof of Lemma \[lemaw4w5w6\] which gives us $$0.49999\cdot0.99999^{n_1-1}(bc)^{\frac{n_1-1}{2}-\frac{1}{4}}c<w_{n_1}<1.000011\cdot 1.00001^{n_1-1}(bc)^{\frac{n_1-1}{2}+\frac{1}{4}}c.$$ Since $z_{(1)}=w_{n_1}$, we use this inequality to show that $$\frac{256a_1'a_2uN}{a_1}<(1.00002bc)^{n_1+3.5}$$ and $$\frac{0.02636N^2}{a_1a_2(a_2-a_1)uvw}>(0.41bc)^{2n_1-4},$$ where we have used the assumption $n_1\geq 8$. So $$2-\lambda>\frac{0.4795n_1-3.82175}{n_1-2}.$$ Now we can show that $$z_{(2)}^{0.4795n_1-3.82175}<4096^{n_1-2}(bc)^{4.0205n_1-4}z_{(1)}^{3.5205n_1-4.17825}.$$ On the other hand $$z_{(1)}>1.2589^{n_1-1}(bc)^{0.49n_1-0.24}.$$ By combining these inequalities we get $$z_{(2)}<z_{(1)}^{\sigma},$$ where $\sigma=\frac{3.5205n_1+4.75675}{0.4795n_1-3.82175}$. If $n_2\geq n_1\sigma+1.1(\sigma-1)$, similarly as in [@fm], we would get a contradiction from $$\frac{z_{(2)}}{z_{(1)}^{\sigma}}=\left(\frac{2\sqrt{b}}{y_{(1)}\sqrt{c}+z_{(1)}\sqrt{b}}\right)^{\sigma-1}\xi^{n_2-n_1\sigma}\frac{1-A\xi^{-2n_1\sigma}}{\left(1-A\xi^{-2n_1}\right)^{\sigma}}>1,$$ where $A=\frac{y_{(1)}\sqrt{c}-z_{(1)}\sqrt{b}}{y_{(1)}\sqrt{c}+z_{(1)}\sqrt{b}}$ and $\xi=\frac{t+\sqrt{bc}}{2}$ as before. So, $n_2< n_1\sigma+1.1(\sigma-1)$ must hold, which proves the first statement.
The second statement is proven analogously, using the inequality $$0.7435\cdot(ab)^{-1/2}c(t-1)^{n_1-1}<w_{n_1}<1.0001(ab)^{1/2}t^{n_1-1}.$$ Notice that in this case we did not use Lemma \[granice\_fundamentalnih\], since we have the explicit values $(z_0,z_1)=(t,s)$.
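The explicit constants in the statement of the proposition can be recovered numerically; the following short sketch (an illustration only) evaluates the bound of part $i)$ at $n_1=8$ and $n_1=9$, and the bound of part $ii)$ at $n_1=9$:

```python
# Evaluate the upper bounds on n_2 from Proposition [prop_odnos_n1_n2] and
# express them as multiples of n_1, recovering the stated constants.
def bound_i(n1):
    sigma = (3.5205 * n1 + 4.75675) / (0.4795 * n1 - 3.82175)
    return (n1 + 1.1) * sigma - 1.1

def bound_ii(n1):
    sigma = (2.5147 * n1 + 5.11467) / (0.4853 * n1 - 3.85292)
    return (n1 + 1) * sigma - 1

print(bound_i(8) / 8)    # about 2627.7, consistent with n_2 < 2628 n_1 for n_1 = 8
print(bound_i(9) / 9)    # about 82.7,   consistent with n_2 < 83 n_1 for n_1 = 9
print(bound_ii(9) / 9)   # about 59.8,   consistent with n_2 < 60 n_1 in part ii)
```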
We now consider a linear form in two logarithms $$\begin{aligned}
\Gamma=\Lambda_2-\Lambda_1&=j\log\frac{s+\sqrt{ac}}{2}-k\log\frac{t+\sqrt{bc}}{2}\\
&= (m_2-m_1)\log\frac{s+\sqrt{ac}}{2}-(n_2-n_1)\log\frac{t+\sqrt{bc}}{2}\\
&=: (m_2-m_1)\log\xi-(n_2-n_1)\log\eta\end{aligned}$$ for which we know that $\Gamma\neq 0$ since it is not hard to show that $\xi$ and $\eta$ are multiplicatively independent.
From Lemma \[lemma\_kappa\_linearna\] we have $\Lambda_1,\ \Lambda_2\in \left\langle 0,\kappa\left(\xi \right)^{-2m_1} \right \rangle$, so $$0<|\Gamma|<\kappa \xi^{-2m_1}.$$ We can now use Laurent’s theorem from [@laurent] to find a lower bound on $|\Gamma|$, as in [@nas2] and [@fm].
\[laurent\] Let $\gamma_1$ and $\gamma_2$ be multiplicatively independent algebraic numbers with $|\gamma_1|\geq 1$ and $|\gamma_2|\geq 1$. Let $b_1$ and $b_2$ be positive integers. Consider the linear form in two logarithms
$$\Gamma=b_2 \log \gamma_2-b_1 \log \gamma_1,$$ where $\log \gamma_1$ and $\log \gamma_2$ are any determinations of the logarithms of $\gamma_1$, $\gamma_2$ respectively. Let $\rho$ and $\mu$ be real numbers with $\rho>1$ and $1/3\leq \mu \leq 1$. Set $$\sigma=\frac{1+2\mu-\mu^2}{2},\quad \lambda=\sigma \log \rho.$$ Let $a_1$ and $a_2$ be real numbers such that $$\begin{aligned}
&a_i\geq \max\{1,\rho|\log\gamma_i|-\log|\gamma_i|+2Dh(\gamma_i)\},\quad i=1,2,\\
&a_1a_2\geq \lambda^2,\end{aligned}$$ where $D=[\mathbb{Q}(\gamma_1,\gamma_2):\mathbb{Q}]/[\mathbb{R}(\gamma_1,\gamma_2):\mathbb{R}]$. Let $h$ be a real number such that $$h \geq \max \left\{ D\left(\log\left( \frac{b_1}{a_2'}+\frac{b_2}{a_1'}\right)+\log \lambda'+1.75\right)+0.06,\lambda',\frac{D\log 2}{2}\right\}+\log\rho.$$ Put $$H=\frac{h}{\lambda},\quad
\omega=2\left( 1+\sqrt{1+\frac{1}{4H^2}}\right),\quad \theta=\sqrt{1+\frac{1}{4H^2}}+\frac{1}{2H}.$$
Then $$\log|\Gamma|\geq -C\left(h'+\frac{\lambda'}{\sigma} \right)^2a_1'a_2'-\sqrt{\omega \theta}\left(h'+\frac{\lambda'}{\sigma} \right)-\log\left(C' \left(h'+\frac{\lambda'}{\sigma} \right)^2a_1'a_2' \right)$$ with $C=C_0\mu/(\lambda^3\sigma)$ and $C'=\sqrt{C_0\omega\theta/\lambda^6}$, where $$\begin{aligned}
&C_0=\left(\frac{\omega}{6}+\frac{1}{2}\sqrt{\frac{\omega^2}{9}+\frac{8\lambda\omega^{5/4}\theta^{1/4}}{3\sqrt{a_1a_2}H^{1/2}}+\frac{4}{3}\left( \frac{1}{a_1}+\frac{1}{a_2}\right)\frac{\lambda\omega}{H} } \right)^2.\end{aligned}$$
\[prop\_primjena\_laurenta\] If $z=v_{m_i}=w_{n_i}$ $(i\in \{1,2\})$ has a solution, then $$\frac{2m_1}{\log \eta}<\frac{C_0'\mu}{\lambda^3\sigma}(\rho+3)^2h^2+\frac{2\sqrt{\omega\theta}h+2\log\left(\sqrt{C_0'\omega\theta}\lambda^{-3}(\rho+3)^2\right)+4\log h}{(\log(10^5))^2}+1,$$ where $\rho=8.2$, $\mu=0.48$ $$C_0'=\left(\frac{\omega}{6}+\frac{1}{2}\sqrt{\frac{\omega^2}{9}+\frac{16\lambda\omega^{5/4}\theta^{1/4}}{3(\rho+3)H^{1/2}\log(10^5)}+\frac{16\lambda \omega}{3(\rho+3)H\log(10^5)}}\right)^2,$$ $h=4\log\left( \frac{2j}{\log\eta}+1 \right)+4\log\left(\frac{\lambda}{\rho+3}\right)+7.06+\log\rho$ and $\sigma,\lambda,H,\omega,\theta$ are as in Theorem \[laurent\].
As in [@nas2], we can take $a_i=(\rho+3)\log\gamma_i$, $i=1,2$, and $h=4\log\left(\frac{2j}{\log\gamma_1}\right)+4\log\left(\frac{\lambda}{\rho+3}\right)+7.06+\log\rho$, which yields $C_0<C_0'$ as defined in the statement of the proposition. Combining Theorem \[laurent\] and Lemma \[lemma\_kappa\_linearna\] now finishes the proof.
Let $\{a,b,c\}$ be a $D(4)$-triple. We will denote by $N(z_0,z_1)$ the number of nonregular solutions of the system (\[prva\_pelova\_s\_a\]) and (\[druga\_pelova\_s\_b\]), i.e. the number of integers $d>d_+$ which extend that triple to a quadruple and which correspond to the same fundamental solution $(z_0,z_1)$. From Proposition \[prop\_okazaki\_rjesenja\] we know that if there are $3$ possible solutions $m_0$, $m_1$, $m_2$ with the same fundamental solution $(z_0,z_1)$, then $m_0\leq 2$, so from Lemma \[donje na m i n\] we know that $N(z_0,z_1)\leq 2$ for each possible pair $(z_0,z_1)$ in Theorem \[teorem\_izbacen\_slucaj\]. Also, from the same Theorem we know that the case when $m$ is odd and $n$ is even cannot occur when $d>d_+$. So, if we denote by $N_{eo}$ the number of solutions $d>d_+$ when $m$ is even and $n$ is odd, and similarly for the other cases, the number of extensions of a $D(4)$-triple to a $D(4)$-quadruple with $d>d_+$ equals $$N=N_{ee}+N_{eo}+N_{oo}.$$
**Case $iv)$**
This follows from Theorem \[teorem1.5\].
**Case $i)$**
Since $c=a+b+2r<4b$, only the case $|z_0|=|z_1|=2$ can hold, as explained before Proposition \[prop\_matveev\]. This implies $$N=N_{ee}=N\left(2,2\right) + N( -2,-2).$$ Let us prove that $N(2,2)\leq 1$. Assume the contrary, that $N(2,2)=2$. Since $\frac{1}{2}(st-cr)=2$, $(m_0,n_0)=(2,2)$ is a solution in this case. Besides that solution, there exist two more solutions $z=v_{m_i}=w_{n_i}$, $(m_i,n_i)$, $i=1,2$, such that $2=m_0<m_1<m_2$, and $m_1\geq 8$ and $n_1\geq 8$ by Lemma \[donje na m i n\]. From Lemmas \[lemma\_kappa\_linearna\] and \[okazaki\] we have $\kappa=6$, $\Delta\geq 4$ and $m_2-m_1>\kappa^{-1}(ac)^{m_0}\Delta\log \eta$, i.e. $$\frac{j}{\log \eta}=\frac{m_2-m_1}{\log\eta}>\frac{4}{6}(ac)^2>6.66\cdot 10^9.$$ On the other hand, since $n_1\geq 8$, Proposition \[prop\_odnos\_n1\_n2\] gives us $$m_2\leq 2n_2+1\leq 2\cdot 2628n_1\leq 5256m_1+5256\leq5913m_1,$$ so $j=m_2-m_1\leq 5912m_1$. Using Proposition \[prop\_primjena\_laurenta\] yields $\frac{j}{\log\eta}<5.71\cdot 10^8$, which is a contradiction.
This proves that $N=N\left(2,2\right) + N \left( -2,-2\right)\leq 3$.
**Cases $ii)$ and $iii)$**
Since $c\neq a+b+2r$, we have $c>ab+a+b$. Let $N'(z_0,z_1)$ denote the number of solutions of the equation $v_m=w_n$ with $m>2$ and $b>10^5$, but without assuming that the inequalities from Lemma \[granice\_fundamentalnih\] hold. Then, by Lemma \[lema\_nizovi\] and Proposition \[prop\_m\_n\_9\] we have $$N\leq N(-2,-2)+N(2,2)+N'(z_0^{-},z_1^{-})+N'(z_0^+,z_1^+)$$ where $(z_0^+,z_1^+)\in \left\{ \left(\frac{1}{2}(st-cr),\frac{1}{2}(st-cr)\right), \left(t,\frac{1}{2}(st-cr)\right),\right.$ $\left. \left(\frac{1}{2}(st-cr),s\right),\right.$ $\left.\left(t,s\right) \right\}$ and $(z_0^{-},z_1^{-})=(-z_0^{+},-z_1^{+})$.
It is not hard to see that the previous proof of $N\left(2,2\right)\leq 1$ did not depend on the element $c$, so it holds in this case too. We will now show that $N'(-t,-s)\leq 2$, which by Lemma \[lema\_nizovi\] implies $N'(z_0^{-},z_1^{-})\leq 2$ and $N'(t,s)\leq 2$ for $c<b^2$ and $N'(t,s)\leq 1$ for $c>b^2$, which will prove our statements in these last two cases.
Let us assume the contrary, that $N'(-t,-s)\geq 3$, i.e. that there exist $m_0,m_1,m_2$ with $2<m_0<m_1<m_2$. By Proposition \[prop\_m\_n\_9\] we know that $m_0\geq 9$. From Lemma \[lemma\_kappa\_linearna\] we have $\Lambda<2.0001\frac{b}{c}\xi^{-2(m-1)}$, which can be used in Lemma \[okazaki\] to get $$m_2>m_2-m_1>1.999c^8\log\eta>1.999c^8\log \sqrt{c}.$$ Now we can use Proposition \[prop\_matveev\] with $B(m_2)=m_2+1$ and $\log\eta<\log(c/2)$, which gives us the inequality $$\frac{m_2}{\log (38.92(m_2+1) )}<2.81\cdot 10^{12}\log c\log (c/2).$$ The left-hand side is increasing in $m_2$, so we can solve the inequality in $c$, which yields $c<56$, a contradiction with $c>b>10^5$.
Now, let us prove that $N'(t,s)\leq 1$. Assume the contrary, that $N'(t,s)\geq 2$; then there are $3$ solutions $1\leq m_0<m_1<m_2$ with $2<m_1$ ($m_0$ is associated with a regular solution). Then by Lemma \[lemma\_kappa\_linearna\], since $\Delta\geq 1$ and $c>4b$, $b>10^5$, we get $$\label{jbdazadnja}\frac{m_2-m_1}{\log\eta}>2ab(\sqrt{ac})^2>8b^2>8\cdot 10^{10}.$$ On the other hand, as in the proof of [@fm Lemma 7.1] it can be shown that $\frac{n_2-n_1}{m_2-m_1}>\frac{\log \xi}{\log \eta}$, which together with Proposition \[prop\_odnos\_n1\_n2\] implies $$\frac{m_2-m_1}{\log \eta}<\frac{n_2-n_1}{\log\xi}<\frac{f(n_1)n_1}{\log\xi},$$ where $f(n_1)=\frac{2.0294n_1+8.96759}{0.4853n_1-3.85292}\left(1+\frac{1}{n_1}\right)$. These two inequalities yield $n_1>9.34047\cdot 10^{11}$ and $f(n_1)\leq 4.1818$.
From $0<\Lambda_1=m_1\log\xi-n_1\log\eta+\log\mu$ we have
$$\frac{m_1}{\log\eta}>\frac{m_2-m_1}{f(n_1)\log\eta}-\frac{\log\mu}{\log\eta\log\xi}>\frac{m_2-m_1}{f(n_1)\log\eta}-1.$$ So, we can use Proposition \[prop\_primjena\_laurenta\] and an inequality $$\resizebox{\textwidth}{!}{$\frac{m_2-m_1}{f(n_1)\log\eta}<\frac{C_0'\mu}{\lambda^2\sigma}h^2(\rho+3)^2+\frac{2\sqrt{\omega\theta}h+2\log\left(\sqrt{C_0'\omega\theta}\lambda^{-3}(\rho+3)^2\right)+4\log h}{(\log(10^5))^2}+2$}$$
yields $\frac{m_2-m_1}{\log\eta}<152184$ which is in a contradiction with (\[jbdazadnja\]). So we must have $N'(t,s)\leq 1$.
It remains to prove that $N'(-t,-s)\leq 1$ for $c>b^2$. Again, let us assume the contrary, that there are at least $2$ solutions $m_1<m_2$ besides the solution $(m_0,n_0)=(1,1)$ (which gives $d=d_{-}(a,b,c)$). Then $$\frac{m_2-m_1}{\log\eta}>2.0001^{-1}\frac{c}{b}(\xi)^{2(m_0-1)}\Delta>1.99\cdot 10^5.$$ After repeating the steps of the previous case, we get $f(n_1)<4.1819$, and Proposition \[prop\_primjena\_laurenta\] yields $\frac{m_2-m_1}{\log\eta}<152184$, a contradiction.
So, when $c>b^2$ we have $N\leq 2+1+2+2=7$ and when $a+b+2r\neq c<b^2$ we have $N\leq 2+1+2+1=6$.
Extension of a pair
===================
For completeness, to give all possible results similar to the ones in [@fm] and [@cfm], we have also considered extensions of a pair to a triple and estimated the number of extensions to a quadruple in such cases. Extensions of a pair to a triple were considered in [@bf_parovi] for the $D(4)$ case. Baćić and Filipin have shown that a pair $\{a,b\}$ can be extended to a triple with $c$ given by $$\resizebox{\textwidth}{!}{$c=c_{\nu}^{\pm}=\frac{4}{ab}\left\{\left(\frac{\sqrt{b}\pm\sqrt{a}}{2}\right)^2\left(\frac{r+\sqrt{ab}}{2}\right)^{2\nu}+\left(\frac{\sqrt{b}\mp\sqrt{a}}{2}\right)^2\left(\frac{r-\sqrt{ab}}{2}\right)^{2\nu}-\frac{a+b}{2}\right\}$}$$ where $\nu\in\mathbb{N}$. These extensions are derived from the fundamental solutions $(t_0,s_0)=(\pm2,2)$ of the Pell equation $$at^2-bs^2=4(a-b),$$ associated with the problem of extending a pair to a triple. Under some conditions on the pair $\{a,b\}$ we can prove that these fundamental solutions are the only ones. The next result is an improvement of [@bf_parovi Lemma 1].
\[lema\_b685a\] Let $\{a,b,c\}$ be a $D(4)$-triple. If $a<b<6.85a$ then $c=c_{\nu}^{\pm}$ for some $\nu$.
We follow the proof of [@bf_parovi Lemma 1]. Define $s'=\frac{rs-at}{2}$, $t'=\frac{rt-bs}{2}$ and $c'=\frac{(s')^2-4}{a}$. The cases $c'>b$, $c'=b$ and $c'=0$ are the same as in [@bf_parovi Lemma 1] and yield $c=c_{\nu}^{\pm}$. It only remains to consider the case $0<c'<b$. Here we define $r'=\frac{s'r-at'}{2}$ and $b'=((r')^2-4)/a$. If $b'=0$ then it can be shown that $c'=c_{1}^-$ and $c=c_{\nu}^-$ for some $\nu$. We observe that $b'=d_{-}(a,b,c')$ so $$b'<\frac{b}{ac'}<\frac{6.85}{c'}.$$ This implies $b'c'\leq6$. If $c'=1$, since $b'>0$ and $b'c'+4$ is a square, the only possibility is $b'=5$. In that case, $a$ and $b$ extend the pair $\{1,5\}$. Then $$\resizebox{\textwidth}{!}{$a=a_{\nu}^{\pm}=\frac{4}{5}\left\{\left(\frac{\sqrt{5}\pm1}{2}\right)^2\left(\frac{3+\sqrt{5}}{2}\right)^{2\nu}+\left(\frac{\sqrt{5}\mp1}{2}\right)^2\left(\frac{3-\sqrt{5}}{2}\right)^{2\nu}-3\right\}$}$$ and $b=d_+(1,5,a)=a_{\nu+1}^{\pm}$ for the same choice of $\pm$. Define $k:=\frac{b}{a}=\frac{a_{\nu+1}^{\pm}}{a_{\nu}^{\pm}}$. It can easily be shown that $k\leq 8$ and that $k$ decreases as $\nu$ increases. Also, $$\lim_{\nu\to\infty}\frac{a_{\nu+1}^{\pm}}{a_{\nu}^{\pm}}=\left(\frac{3+\sqrt{5}}{2}\right)^2>6.85,$$ which gives us a contradiction with the assumption $b<6.85a$.
If $c'\in\{2,3,4,6\}$, there is no $b'$ satisfying the needed conditions. The case $c'=5$ gives $b'=1$, which is analogous to the previous case.
For the pair $(a,b)=(4620,31680)$, where $b>6.85a$, we have a solution $(s_0,t_0)=(68,178)$ of the Pellian equation (\[par\_trojka\]), so the pair can be extended by a greater element to a triple with $c\neq c_{\nu}^{\pm}$, for example $c=146\,434\,197$. Hence this result cannot be improved any further.
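This example is easy to check directly; the following short computation (a sanity check only, not part of the argument) confirms that $\{4620,31680\}$ is a $D(4)$-pair, that $(s_0,t_0)=(68,178)$ satisfies $at^2-bs^2=4(a-b)$, and that $c=146\,434\,197$ yields a $D(4)$-triple:

```python
# Verify the example above: {4620, 31680} is a D(4)-pair, (s_0, t_0) = (68, 178)
# solves a*t^2 - b*s^2 = 4*(a - b), and c = 146434197 extends the pair,
# i.e. a*c + 4 and b*c + 4 are perfect squares as well.
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

a, b, c = 4620, 31680, 146434197
s0, t0 = 68, 178

print(is_square(a * b + 4))                          # True
print(a * t0**2 - b * s0**2 == 4 * (a - b))          # True
print(is_square(a * c + 4), is_square(b * c + 4))    # True True
```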
We have $$\begin{aligned}
c_1^{\pm}&=a+b\pm 2r,\\
c_2^{\pm}&=(ab+4)(a+b\pm2r)\mp4r,\\
c_3^{\pm}&=(a^2b^2+6ab+9)(a+b\pm2r)\mp4r(ab+3),\\
c_4^{\pm}&=(a^3b^3+8a^2b^2+20ab+16)(a+b\pm2r)\mp4r(a^2b^2+5ab+6),\\
c_5^{\pm}&=(a^4b^4+10a^3b^3+35a^2b^2+50ab+25)(a+b\pm2r)\mp4r(a^3b^3+7a^2b^2+15ab+10).
\end{aligned}$$ The aim is to use Theorem \[teorem\_prebrojavanje\], and since $N=0$ for $b<10^5$, we can use the lower bound on $b$ (more precisely, on $b$ if $b<c$ and on $c$ otherwise).
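As a quick sanity check of the displayed expressions (an illustration only; the $D(4)$-pair $\{1,5\}$, for which $r=3$, is an arbitrary choice), the following sketch verifies that every positive value of $c_{\nu}^{\pm}$ with $\nu\leq 5$ computed from these formulas does extend $\{1,5\}$ to a $D(4)$-triple:

```python
# Compute c_1^{+-}, ..., c_5^{+-} for the D(4)-pair {1, 5} (so r = 3) from the
# explicit expressions above and check that each positive value c satisfies
# that a*c + 4 and b*c + 4 are perfect squares.
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

a, b = 1, 5
r = isqrt(a * b + 4)                      # r = 3

def c_values(sign):                       # sign = +1 for c^+, -1 for c^-
    e = a + b + 2 * sign * r
    return [
        e,
        (a*b + 4) * e - sign * 4*r,
        (a**2*b**2 + 6*a*b + 9) * e - sign * 4*r * (a*b + 3),
        (a**3*b**3 + 8*a**2*b**2 + 20*a*b + 16) * e
            - sign * 4*r * (a**2*b**2 + 5*a*b + 6),
        (a**4*b**4 + 10*a**3*b**3 + 35*a**2*b**2 + 50*a*b + 25) * e
            - sign * 4*r * (a**3*b**3 + 7*a**2*b**2 + 15*a*b + 10),
    ]

for sign in (+1, -1):
    for c in c_values(sign):
        if c > 0:                         # c_1^- = 0 is degenerate for this pair
            assert is_square(a * c + 4) and is_square(b * c + 4)
print(c_values(+1))                       # [12, 96, 672, 4620, 31680]
```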
Case $c_1^{\pm}$ implies that $\{a,b,c\}$ is a regular triple so $N\leq 3$.
If $a=1$ then $c_2^-<b^2$, so the best conclusion is $N\leq 7$. On the other hand, if $c=c_2^-$ and $a^2\geq b$, we have $c>b^2$ since $a+b-2r\geq 1$, so $N=0$. It remains to consider the case $b>a^2$. Then it can be shown that $r^2<0.004b^2$ and $c_2^->0.872ab^2$, so if $a\geq2$ we can again conclude $N\leq 6$. Also, if $c\geq c_2^+=(a+b)(ab+4)+2r(ab+2)>b^2$, we have $N\leq 6$. Observe that $c\geq c_5^{-}>a^4b^4$. If $b\leq7104a$ then $c>\frac{10^{20}}{7104^4}b^4>39263b^4$, so $N=0$ by Theorem \[teorem\_prebrojavanje\]. If $b>7104a$ then $a+b-2r>0.97b$, so $c_5^{-}>0.97a^4b^5>97000b^4$ and again $N=0$.
Similarly, we observe that $c_4^->a^3b^3$, and if $b\leq63a$ we get $N=0$. If $b>63a$ we have $a+b-2r>0.76b$, which leads to $N=0$ when $a\geq 38$. The cases $a\leq 37$ can be checked separately; only $a\geq 35$ leads to $N=0$, while the others give $N\leq 6$.
From Proposition \[prop\_covi\] and Lemma \[lema\_b685a\] we can conclude the result of Corollary \[kor2\] after observing that $10^5<b<6.85a$ implies $a\geq 14599$.
**Acknowledgement:** The author was supported by the Croatian Science Foundation under the project no. IP-2018-01-1313.\
[99]{}
Lj. Baćić, A. Filipin, [*On the extensibility of $D(4)$-pair $\{k-2,k+2\}$*]{}, J. Comb. Number Theory **5** (2013), 181–197.
Lj. Baćić, A. Filipin, [*The extensibility of $D(4)$-pairs*]{}, Math. Commun. **18** (2013), no. 2, 447–456.
M. A. Bennett, M. Cipu, M. Mignotte, R. Okazaki, [*On the number of solutions of simultaneous Pell equations. II*]{}, Acta Arith. [**122**]{} (2006), no. 4, 407–417.
M. Bliznac, A. Filipin, [*An upper bound for the number of Diophantine quintuples*]{}, Bull. Aust. Math. Soc., **94(3)** (2016), 384–394.
M. Bliznac Trebješanin, A. Filipin, [*Nonexistence of $D(4)$-quintuples*]{}, J. Number Theory, **194** (2019), 170–217.
M. Cipu, [*A new approach to the study of D(-1)-quadruples*]{}, RIMS Kokyuroku **2092** (2018), 122–129.
M. Cipu, Y. Fujita, T. Miyazaki, [*On the number of extensions of a Diophantine triple*]{}, Int. J. Number Theory **14** (2018), 899–917.
M. Cipu, Y. Fujita, [*Bounds for Diophantine quintuples* ]{}, Glas. Mat. Ser. III **50** (2015), 25–34.
A. Dujella, [*Diophantine $m$-tuples*]{}, <http://web.math.pmf.unizg.hr/~duje/dtuples.html>.
A. Dujella, [*There are only finitely many Diophantine quintuples*]{}, J. Reine Angew. Math. **566** (2004), 183–214.
A. Dujella, M. Mikić, [*On the torsion group of elliptic curves induced by $D(4)$-triples*]{}, An. Ştiinţ. Univ. “Ovidius” Constanţa Ser. Mat. **22** (2014), no. 2, 79–90
A. Dujella, A. Pethő, [*A generalization of a theorem of Baker and Davenport*]{}, Quart. J. Math. Oxford Ser. (2), **49** (1998), 291–306.
A. Dujella, A. M. S. Ramasamy, [*Fibonacci numbers and sets with the property $D(4)$*]{}, Bull. Belg. Math. Soc. Simon Stevin, **12(3)** (2005), 401–412.
A. Filipin, [*There does not exist a $D(4)$-sextuple*]{}, J. Number Theory **128** (2008), 1555–1565.
A. Filipin, [*On the size of sets in which $xy + 4$ is always a square*]{}, Rocky Mountain J. Math. **39** (2009), no. 4, 1195–1224.
A. Filipin, [*An irregular $D(4)$-quadruple cannot be extended to a quintuple*]{}, Acta Arith. **136** (2009), no. 2, 167–176.
A. Filipin, [*The extension of some $D(4)$-pairs*]{}, Notes Number Theory Discrete Math. **23** (2017), 126–135.
A. Filipin, Bo He, A. Togbé, [*On a family of two-parametric D(4)-triples*]{}, Glas. Mat. Ser. III **47** (2012), 31–51.
Y. Fujita, T. Miyazaki,[*The regularity of Diophantine quadruples*]{}, Trans. Amer. Math. Soc. **370** (2018), 3803–3831.
B. He, A. Togbé, V. Ziegler, [*There is no Diophantine quintuple*]{}, Trans. Amer. Math. Soc. **371** (2019), 6665–6709.
M. Laurent, [*Linear forms in two logarithms and interpolation determinants II*]{}, Acta Arith. **133 (4)** (2008), 325–348.
E. M. Matveev, [*An explicit lower bound for a homogeneous rational linear form in logarithms of algebraic numbers II* ]{}, Izv. Math **64** (2000), 1217–1269.
J. H. Rickert, [*Simultaneous rational approximations and related Diophantine equations*]{}, Proc. Cambridge Philos. Soc. **113** (1993) 461–472.
Marija Bliznac Trebješanin\
University of Split, Faculty of Science\
Ruđera Boškovića 33, 21000 Split, Croatia\
Email: marbli@pmfst.hr\
|
---
abstract: |
By considering a certain univalent function in the open unit disk $\mathbb{U}
$, that maps $\mathbb{U}$ onto a strip domain, we introduce a new class of analytic and close-to-convex functions by means of a certain non-homogeneous Cauchy-Euler-type differential equation. We determine the coefficient bounds for functions in this new class. Relevant connections of some of the results obtained with those in earlier works are also provided.
address: 'Kocaeli University, Faculty of Aviation and Space Sciences, Arslanbey Campus, 41285 Kartepe-Kocaeli, TURKEY'
author:
- Serap BULUT
title: 'Coefficient bounds for close-to-convex functions associated with vertical strip domain'
---
Introduction
============
Let $\mathcal{A}$ denote the class of functions of the form$$f(z)=z+\sum_{n=2}^{\infty }a_{n}z^{n} \label{1.1}$$which are analytic in the open unit disk $\mathbb{U}=\left\{ z:z\in \mathbb{C}\;\text{and}\;\left\vert z\right\vert <1\right\} $. We also denote by $\mathcal{S}$ the class of all functions in the normalized analytic function class $\mathcal{A}$ which are univalent in $\mathbb{U}$.
For two functions $f$ and $g$, analytic in $\mathbb{U}$, we say that the function $f$ is subordinate to $g$ in $\mathbb{U}$, and write$$f\left( z\right) \prec g\left( z\right) \qquad \left( z\in \mathbb{U}\right)
,$$if there exists a Schwarz function $\omega $, analytic in $\mathbb{U}$, with$$\omega \left( 0\right) =0\qquad \text{and\qquad }\left\vert \omega \left(
z\right) \right\vert <1\text{\qquad }\left( z\in \mathbb{U}\right)$$such that$$f\left( z\right) =g\left( \omega \left( z\right) \right) \text{\qquad }\left( z\in \mathbb{U}\right) .$$Indeed, it is known that$$f\left( z\right) \prec g\left( z\right) \quad \left( z\in \mathbb{U}\right)
\Rightarrow f\left( 0\right) =g\left( 0\right) \text{\quad and\quad }f\left(
\mathbb{U}\right) \subset g\left( \mathbb{U}\right) .$$Furthermore, if the function $g$ is univalent in $\mathbb{U}$, then we have the following equivalence$$f\left( z\right) \prec g\left( z\right) \quad \left( z\in \mathbb{U}\right)
\Leftrightarrow f\left( 0\right) =g\left( 0\right) \text{\quad and\quad }f\left( \mathbb{U}\right) \subset g\left( \mathbb{U}\right) .$$
A function $f\in \mathcal{A}$ is said to be starlike of order $\alpha $$\left( 0\leq \alpha <1\right) $, if it satisfies the inequality$$\Re \left( \frac{zf^{\prime }(z)}{f(z)}\right) >\alpha \qquad \left( z\in
\mathbb{U}\right) .$$We denote the class which consists of all functions $f\in \mathcal{A}$ that are starlike of order $\alpha $ by $\mathcal{S}^{\ast }(\alpha )$. It is well-known that $\mathcal{S}^{\ast }(\alpha )\subset \mathcal{S}^{\ast }(0)=\mathcal{S}^{\ast }\subset \mathcal{S}.$
Let $0\leq \alpha ,\delta <1.$ A function $f\in \mathcal{A}$ is said to be close-to-convex of order $\alpha $ and type $\delta $ if there exists a function $g\in \mathcal{S}^{\ast }\left( \delta \right) $ such that the inequality$$\Re \left( \frac{zf^{\prime }(z)}{g(z)}\right) >\alpha \qquad \left( z\in
\mathbb{U}\right)$$holds. We denote the class which consists of all functions $f\in \mathcal{A}$ that are close-to-convex of order $\alpha $ and type $\delta $ by $\mathcal{C}(\alpha ,\delta )$. This class was introduced by Libera [@L].
In particular, when $\delta =0$ we obtain the class $\mathcal{C}(\alpha ,0)=\mathcal{C}(\alpha )$ of close-to-convex functions of order $\alpha $, and when $\alpha =\delta =0$ we get the class $\mathcal{C}(0,0)=\mathcal{C}$ of close-to-convex functions introduced by Kaplan [@K]. It is well-known that $\mathcal{S}^{\ast }\subset \mathcal{C}\subset \mathcal{S}$.
Furthermore, a function $f\in \mathcal{A}$ is said to be in the class $\mathcal{M}\left( \beta \right) $ $\left( \beta >1\right) $ if it satisfies the inequality$$\Re \left( \frac{zf^{\prime }(z)}{f(z)}\right) <\beta \qquad \left( z\in
\mathbb{U}\right) .$$This class was introduced by Uralegaddi et al. [@UGS].
Motivated by the classes $\mathcal{S}^{\ast }(\alpha )$ and $\mathcal{M}\left( \beta \right) $, Kuroki and Owa [@KO] introduced the subclass $\mathcal{S}\left( \alpha ,\beta \right) $ of analytic functions $f\in
\mathcal{A}$ which is given by Definition $\ref{dfn1}$ below.
\[dfn1\](see [@KO]) Let $\mathcal{S}\left( \alpha ,\beta \right) $ be a class of functions $f\in \mathcal{A}$ which satisfy the inequality$$\alpha <\Re \left( \frac{zf^{\prime }\left( z\right) }{f\left( z\right) }\right) \mathbb{<\beta }\qquad \left( z\in \mathbb{U}\right)$$for some real number $\alpha \;\left( \alpha <1\right) $ and some real number $\beta \;\left( \beta >1\right) .$
The class $\mathcal{S}\left( \alpha ,\beta \right) $ is non-empty. For example, the function $f\in \mathcal{A}$ given by$$f(z)=z\exp \left\{ \frac{\beta -\alpha }{\pi }i\int_{0}^{z}\frac{1}{t}\log
\left( \frac{1-e^{2\pi i\frac{1-\alpha }{\beta -\alpha }}t}{1-t}\right)
dt\right\}$$is in the class $\mathcal{S}\left( \alpha ,\beta \right) $.
Also for $f\in \mathcal{S}\left( \alpha ,\beta \right) $, if $\alpha \geq 0$ then $f\in \mathcal{S}^{\ast }(\alpha )$ in $\mathbb{U}$, which implies that $f\in \mathcal{S}.$
\[lm1\][@KO] Let $f\in \mathcal{A}$ and $\alpha <1<\beta $. Then $f\in \mathcal{S}\left( \alpha ,\beta \right) $ if and only if$$\frac{zf^{\prime }\left( z\right) }{f\left( z\right) }\prec 1+\frac{\beta
-\alpha }{\pi }i\log \left( \frac{1-e^{2\pi i\frac{1-\alpha }{\beta -\alpha }}z}{1-z}\right) \qquad \left( z\in \mathbb{U}\right) .$$
Lemma $\ref{lm1}$ means that the function $f_{\alpha ,\beta }:\mathbb{U\rightarrow C}$ defined by$$f_{\alpha ,\beta }(z)=1+\frac{\beta -\alpha }{\pi }i\log \left( \frac{1-e^{2\pi i\frac{1-\alpha }{\beta -\alpha }}z}{1-z}\right) \label{1.2}$$is analytic in $\mathbb{U}$ with $f_{\alpha ,\beta }(0)=1$ and maps the unit disk $\mathbb{U}$ onto the vertical strip domain$$\Omega _{\alpha ,\beta }=\left\{ w\in \mathbb{C}:\alpha <\Re \left( w\right)
\mathbb{<\beta }\right\} \label{1.x}$$conformally.
We note that the function $f_{\alpha ,\beta }$ defined by $\left( \ref{1.2}\right) $ is a convex univalent function in $\mathbb{U}$ and has the form$$f_{\alpha ,\beta }(z)=1+\sum_{n=1}^{\infty }B_{n}z^{n},$$where$$B_{n}=\frac{\beta -\alpha }{n\pi }i\left( 1-e^{2n\pi i\frac{1-\alpha }{\beta
-\alpha }}\right) \qquad \left( n=1,2,\ldots \right) . \label{1.3}$$
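Note that, since $\left\vert 1-e^{i\theta }\right\vert =2\left\vert \sin \left( \theta /2\right) \right\vert $ for real $\theta $, the coefficients in $\left( \ref{1.3}\right) $ satisfy$$\left\vert B_{n}\right\vert =\frac{2\left( \beta -\alpha \right) }{n\pi }\left\vert \sin \frac{n\pi \left( 1-\alpha \right) }{\beta -\alpha }\right\vert \qquad \left( n=1,2,\ldots \right) ;$$in particular, since $0<\frac{\pi \left( 1-\alpha \right) }{\beta -\alpha }<\pi $ when $\alpha <1<\beta $, we have$$\left\vert B_{1}\right\vert =\frac{2\left( \beta -\alpha \right) }{\pi }\sin \frac{\pi \left( 1-\alpha \right) }{\beta -\alpha }.$$This quantity appears repeatedly in the coefficient bounds below.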
Making use of Definition $\ref{dfn1}$, Kuroki and Owa [@KO] proved the following bounds for the Taylor-Maclaurin coefficients of functions in the subclass $\mathcal{S}\left( \alpha ,\beta \right) $ of analytic functions $f\in \mathcal{A}$.
\[th.ko\][@KO Theorem 2.1] Let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. If $f\in \mathcal{S}\left( \alpha ,\beta \right) $, then$$\left\vert a_{n}\right\vert \leq \frac{\prod\limits_{k=2}^{n}\left[ k-2+\frac{2\left( \beta -\alpha \right) }{\pi }\sin \frac{\pi \left( 1-\alpha
\right) }{\beta -\alpha }\right] }{\left( n-1\right) !}\qquad \left(
n=2,3,\ldots \right) .$$
Here, in our present sequel to some of the aforecited works (especially [@KO]), we first introduce the following subclasses of analytic functions.
\[dfn3\]Let $\alpha $ and $\beta $ be real such that $0\leq \alpha
<1<\beta $. We denote by $\mathcal{S}_{g}\left( \alpha ,\beta \right) $ the class of functions $f\in \mathcal{A}$ satisfying$$\alpha <\Re \left( \frac{zf^{\prime }\left( z\right) }{g\left( z\right) }\right) \mathbb{<\beta }\qquad \left( z\in \mathbb{U}\right) ,$$where $g\in \mathcal{S}\left( \delta ,\beta \right) $ with $0\leq \delta
<1<\beta .$
Note that for given $0\leq \alpha ,\delta <1<\beta $, $f\in \mathcal{S}_{g}\left( \alpha ,\beta \right) $ if and only if the following two subordination equations are satisfied:$$\frac{zf^{\prime }\left( z\right) }{g\left( z\right) }\prec \frac{1+\left(
1-2\alpha \right) z}{1-z}\qquad \text{and\qquad }\frac{zf^{\prime }\left(
z\right) }{g\left( z\right) }\prec \frac{1-\left( 1-2\beta \right) z}{1+z}.$$
$(i)$ If we let $\beta \rightarrow \infty $ in Definition $\ref{dfn3}$, then the class $\mathcal{S}_{g}\left( \alpha ,\beta \right) $ reduces to the class $\mathcal{C}(\alpha ,\delta )$ of close-to-convex functions of order $\alpha $ and type $\delta $.
$(ii)$ If we let $\delta =0,\;\beta \rightarrow \infty $ in Definition $\ref{dfn3}$, then the class $\mathcal{S}_{g}\left( \alpha ,\beta
\right) $ reduces to the class $\mathcal{C}(\alpha )$ of close-to-convex functions of order $\alpha $.
$(iii)$ If we let $\alpha =\delta =0,\;\beta \rightarrow \infty $ in Definition $\ref{dfn3}$, then the class $\mathcal{S}_{g}\left( \alpha
,\beta \right) $ reduces to the close-to-convex functions class $\mathcal{C}$.
Using $\left( \ref{1.x}\right) $ and the principle of subordination, we immediately obtain Lemma $\ref{lm}$.
\[lm\]Let $\alpha ,\beta $ and $\delta $ be real numbers such that $0\leq \alpha ,\delta <1<\beta $ and let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. Then $f\in \mathcal{S}_{g}\left( \alpha ,\beta
\right) $ if and only if$$\frac{zf^{\prime }\left( z\right) }{g\left( z\right) }\prec f_{\alpha ,\beta
}(z)$$where $f_{\alpha ,\beta }(z)$ is defined by $\left( \ref{1.2}\right) $.
\[dfn4\]A function $f\in \mathcal{A}$ is said to be in the class $\mathcal{B}_{g}\left( \alpha ,\beta ;\rho \right) $ if it satisfies the following non-homogenous Cauchy-Euler differential equation:$$z^{2}\frac{d^{2}w}{dz^{2}}+2\left( 1+\rho \right) z\frac{dw}{dz}+\rho \left(
1+\rho \right) w=\left( 1+\rho \right) \left( 2+\rho \right) \varphi (z)$$$$\left( w=f(z)\in \mathcal{A},\;\varphi \in \mathcal{S}_{g}\left( \alpha
,\beta \right) ,\;g\in \mathcal{S}\left( \delta ,\beta \right) ,\;0\leq
\alpha ,\delta <1<\beta ,\;\rho \in \mathbb{R}\backslash \left( -\infty ,-1\right] \right) .$$
$(i)$ If we let $\beta \rightarrow \infty $ in Definition $\ref{dfn4}$, then we get the class $\mathcal{B}_{g}\left( \alpha ;\rho \right) $ which consists of functions $f\in \mathcal{A}$ satisfying$$z^{2}\frac{d^{2}w}{dz^{2}}+2\left( 1+\rho \right) z\frac{dw}{dz}+\rho \left(
1+\rho \right) w=\left( 1+\rho \right) \left( 2+\rho \right) \varphi (z)$$$$\left( \varphi \in \mathcal{C}(\alpha ,\delta ),\;0\leq \alpha ,\delta
<1,\;\rho \in \mathbb{R}\backslash \left( -\infty ,-1\right] \right) .$$
$(ii)$ If we let $\delta =0,\;\beta \rightarrow \infty $ in Definition $\ref{dfn4}$, then we get the class $\mathcal{H}_{g}\left( \alpha
;\rho \right) $ which consists of functions $f\in \mathcal{A}$ satisfying$$z^{2}\frac{d^{2}w}{dz^{2}}+2\left( 1+\rho \right) z\frac{dw}{dz}+\rho \left(
1+\rho \right) w=\left( 1+\rho \right) \left( 2+\rho \right) \varphi (z)$$$$\left( \varphi \in \mathcal{C}(\alpha ),\;0\leq \alpha <1,\;\rho \in \mathbb{R}\backslash \left( -\infty ,-1\right] \right) .$$
$(iii)$ If we let $\alpha =\delta =0,\;\beta \rightarrow \infty $ in Definition $\ref{dfn4}$, then we get the class $\mathcal{M}_{g}\left(
\rho \right) $ which consists of functions $f\in \mathcal{A}$ satisfying$$z^{2}\frac{d^{2}w}{dz^{2}}+2\left( 1+\rho \right) z\frac{dw}{dz}+\rho \left(
1+\rho \right) w=\left( 1+\rho \right) \left( 2+\rho \right) \varphi (z)$$$$\left( \varphi \in \mathcal{C},\;\rho \in \mathbb{R}\backslash \left(
-\infty ,-1\right] \right) .$$
The coefficient problem for close-to-convex functions has been studied by many authors in recent years (see, for example, [@B; @BHG; @R; @SAS; @SXW; @UNR; @UNAR]). Inspired by the recent work of Kuroki and Owa [@KO], the aim of this paper is to obtain bounds for the Taylor-Maclaurin coefficients of functions in the function classes $\mathcal{S}_{g}\left( \alpha ,\beta \right) $ and $\mathcal{B}_{g}\left( \alpha ,\beta ;\rho \right) $ of analytic functions which we have introduced here. We also investigate the Fekete-Szegö problem for functions belonging to the function class $\mathcal{S}_{g}\left( \alpha ,\beta \right) $.
Coefficient bounds
==================
In order to prove our main results (Theorems $\ref{thm1}$ and $\ref{thm2}$ below), we first recall the following lemma due to Rogosinski [@R2].
\[lm2\]Let the function $\mathfrak{g}$ given by$$\mathfrak{g}\left( z\right) =\sum_{k=1}^{\infty }\mathfrak{b}_{k}z^{k}\qquad
\left( z\in \mathbb{U}\right)$$be convex in $\mathbb{U}.$ Also let the function $\mathfrak{f}$ given by$$\mathfrak{f}(z)=\sum_{k=1}^{\infty }\mathfrak{a}_{k}z^{k}\qquad \left( z\in
\mathbb{U}\right)$$be holomorphic in $\mathbb{U}.$ If$$\mathfrak{f}\left( z\right) \prec \mathfrak{g}\left( z\right) \qquad \left(
z\in \mathbb{U}\right) ,$$then$$\left\vert \mathfrak{a}_{k}\right\vert \leq \left\vert \mathfrak{b}_{1}\right\vert \qquad \left( k=1,2,\ldots \right) .$$
We now state and prove each of our main results given by Theorems $\ref{thm1}
$ and $\ref{thm2}$ below.
\[thm1\]Let $\alpha ,\beta $ and $\delta $ be real numbers such that $0\leq \alpha ,\delta <1<\beta $ and let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. If $f\in \mathcal{S}_{g}\left( \alpha ,\beta
\right) $, then$$\begin{aligned}
\left\vert a_{n}\right\vert &\leq &\frac{\prod\limits_{k=2}^{n}\left[ k-2+\frac{2\left( \beta -\delta \right) }{\pi }\sin \frac{\pi \left( 1-\delta
\right) }{\beta -\delta }\right] }{n!} \\
&&+\frac{2\left( \beta -\alpha \right) }{n\pi }\sin \frac{\pi \left(
1-\alpha \right) }{\beta -\alpha }\left( 1+\sum_{j=1}^{n-2}\frac{\prod\limits_{k=2}^{n-j}\left[ k-2+\frac{2\left( \beta -\delta \right) }{\pi
}\sin \frac{\pi \left( 1-\delta \right) }{\beta -\delta }\right] }{\left(
n-j-1\right) !}\right) \qquad \left( n=2,3,\ldots \right) ,\end{aligned}$$where $g\in \mathcal{S}\left( \delta ,\beta \right) .$
Let the function $f\in \mathcal{S}_{g}\left( \alpha ,\beta \right) $ be of the form $\left( \ref{1.1}\right) $. Therefore, there exists a function$$g(z)=z+\sum_{n=2}^{\infty }b_{n}z^{n}\in \mathcal{S}\left( \delta ,\beta
\right) \label{2.2}$$so that$$\alpha <\Re \left( \frac{zf^{\prime }\left( z\right) }{g\left( z\right) }\right) \mathbb{<\beta }. \label{2.3}$$Note that by Theorem $\ref{th.ko}$, we have$$\left\vert b_{n}\right\vert \leq \frac{\prod\limits_{k=2}^{n}\left[ k-2+\frac{2\left( \beta -\delta \right) }{\pi }\sin \frac{\pi \left( 1-\delta
\right) }{\beta -\delta }\right] }{\left( n-1\right) !}\qquad \left(
n=2,3,\ldots \right) . \label{2.a}$$Let us define the function $p(z)$ by$$p(z)=\frac{zf^{\prime }\left( z\right) }{g\left( z\right) }\qquad (z\in
\mathbb{U}). \label{2.7}$$Then according to the assertion of Lemma $\ref{lm}$, we get$$p(z)\prec f_{\alpha ,\beta }(z)\qquad (z\in \mathbb{U}), \label{2.8}$$where $f_{\alpha ,\beta }(z)$ is defined by $\left( \ref{1.2}\right) $. Hence, using Lemma $\ref{lm2}$, we obtain$$\left\vert \frac{p^{\left( m\right) }\left( 0\right) }{m!}\right\vert
=\left\vert c_{m}\right\vert \leq \left\vert B_{1}\right\vert \qquad \left(
m=1,2,\ldots \right) , \label{2.9}$$where$$p(z)=1+c_{1}z+c_{2}z^{2}+\cdots \qquad (z\in \mathbb{U}) \label{2.10}$$and (by $\left( \ref{1.3}\right) $)$$\left\vert B_{1}\right\vert =\left\vert \frac{\beta -\alpha }{\pi }i\left(
1-e^{2\pi i\frac{1-\alpha }{\beta -\alpha }}\right) \right\vert =\frac{2\left( \beta -\alpha \right) }{\pi }\sin \frac{\pi \left( 1-\alpha \right)
}{\beta -\alpha }. \label{2.c}$$Also from $\left( \ref{2.7}\right) $, we find$$zf^{\prime }(z)=p(z)g(z). \label{2.11}$$Since $a_{1}=b_{1}=1$, in view of $\left( \ref{2.11}\right) $, we obtain$$na_{n}-b_{n}=c_{n-1}+c_{n-2}b_{2}+\cdots
+c_{1}b_{n-1}=c_{n-1}+\sum_{j=1}^{n-2}c_{j}b_{n-j}\qquad \left( n=2,3,\ldots
\right) . \label{2.12}$$Now we get from $\left( \ref{2.a}\right) ,\left( \ref{2.9}\right) $ and $\left( \ref{2.12}\right) ,$$$\begin{aligned}
\left\vert a_{n}\right\vert &\leq &\frac{\prod\limits_{k=2}^{n}\left[ k-2+\frac{2\left( \beta -\delta \right) }{\pi }\sin \frac{\pi \left( 1-\delta
\right) }{\beta -\delta }\right] }{n!} \\
&&+\frac{\left\vert B_{1}\right\vert }{n}\left( 1+\sum_{j=1}^{n-2}\frac{\prod\limits_{k=2}^{n-j}\left[ k-2+\frac{2\left( \beta -\delta \right) }{\pi
}\sin \frac{\pi \left( 1-\delta \right) }{\beta -\delta }\right] }{\left(
n-j-1\right) !}\right) \quad \left( n=2,3,\ldots \right) .\end{aligned}$$This evidently completes the proof of Theorem $\ref{thm1}.$
It is worth noting that the inequality obtained for $\left\vert a_{n}\right\vert $ in Theorem $\ref{thm1}$ is also valid when $\alpha ,\delta <1<\beta $, by Theorem $\ref{th.ko}$.
Letting $\beta \rightarrow \infty $ in Theorem $\ref{thm1}$, we have the coefficient bounds for close-to-convex functions of order $\alpha $ and type $\delta $.
[@L] Let $\alpha $ and $\delta $ be real numbers such that $0\leq
\alpha ,\delta <1$ and let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. If $f\in \mathcal{C}(\alpha ,\delta )$, then$$\left\vert a_{n}\right\vert \leq \frac{2\left( 3-2\delta \right) \left(
4-2\delta \right) \cdots \left( n-2\delta \right) }{n!}\left[ n\left(
1-\alpha \right) +\left( \alpha -\delta \right) \right] \qquad \left(
n=2,3,\ldots \right) .$$
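This limiting behaviour can also be observed numerically; the following short sketch (an illustration only, with arbitrary sample values of $n$, $\alpha $ and $\delta $) evaluates the bound of Theorem $\ref{thm1}$ for increasing $\beta $ and compares it with the closed-form bound above.

```python
# Evaluate the coefficient bound of Theorem [thm1] for growing beta and
# compare it with the bound of the corollary above (the beta -> infinity case).
import math

def thm1_bound(n, alpha, delta, beta):
    s_d = 2 * (beta - delta) / math.pi * math.sin(math.pi * (1 - delta) / (beta - delta))
    s_a = 2 * (beta - alpha) / math.pi * math.sin(math.pi * (1 - alpha) / (beta - alpha))
    P = lambda m: math.prod(k - 2 + s_d for k in range(2, m + 1))
    tail = 1 + sum(P(n - j) / math.factorial(n - j - 1) for j in range(1, n - 1))
    return P(n) / math.factorial(n) + s_a / n * tail

def libera_bound(n, alpha, delta):
    prod = 2 * math.prod(k - 2 * delta for k in range(3, n + 1))
    return prod / math.factorial(n) * (n * (1 - alpha) + (alpha - delta))

n, alpha, delta = 5, 0.25, 0.4
for beta in (2.0, 10.0, 1e3, 1e6):
    print(beta, thm1_bound(n, alpha, delta, beta))
print("beta -> infinity:", libera_bound(n, alpha, delta))   # about 1.77408
```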
Letting $\delta =0,\;\beta \rightarrow \infty $ in Theorem $\ref{thm1}$, we have the following coefficient bounds for close-to-convex functions of order $\alpha $.
Let $\alpha $ be a real number such that $0\leq \alpha <1$ and let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. If $f\in \mathcal{C}(\alpha )$, then$$\left\vert a_{n}\right\vert \leq n\left( 1-\alpha \right) +\alpha \qquad
\left( n=2,3,\ldots \right) .$$
Letting $\alpha =\delta =0,\;\beta \rightarrow \infty $ in Theorem $\ref{thm1}$, we have the well-known coefficient bounds for close-to-convex functions.
[@Re] Let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. If $f\in \mathcal{C}$, then$$\left\vert a_{n}\right\vert \leq n\qquad \left( n=2,3,\ldots \right) .$$
\[thm2\]Let $\alpha ,\beta $ and $\delta $ be real numbers such that $0\leq \alpha ,\delta <1<\beta $ and let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. If $f\in \mathcal{B}_{g}\left( \alpha ,\beta ;\rho
\right) $, then$$\begin{aligned}
\left\vert a_{n}\right\vert &\leq &\left\{ \frac{\prod\limits_{k=2}^{n}\left[ k-2+\frac{2\left( \beta -\delta \right) }{\pi }\sin \frac{\pi \left(
1-\delta \right) }{\beta -\delta }\right] }{n!}\right. \notag \\
&&\left. +\frac{2\left( \beta -\alpha \right) }{n\pi }\sin \frac{\pi \left(
1-\alpha \right) }{\beta -\alpha }\left( 1+\sum_{j=1}^{n-2}\frac{\prod\limits_{k=2}^{n-j}\left[ k-2+\frac{2\left( \beta -\delta \right) }{\pi
}\sin \frac{\pi \left( 1-\delta \right) }{\beta -\delta }\right] }{\left(
n-j-1\right) !}\right) \right\} \notag \\
&&\times \frac{\left( 1+\rho \right) \left( 2+\rho \right) }{\left( n+\rho
\right) \left( n+1+\rho \right) }\qquad \left( n=2,3,\ldots \right) ,
\label{3.1}\end{aligned}$$where $\rho \in \mathbb{R}\backslash \left( -\infty ,-1\right] .$
Let the function $f\in \mathcal{A}$ be given by $(\ref{1.1})$. Also let$$\varphi (z)=z+\sum_{n=2}^{\infty }\varphi _{n}z^{n}\in \mathcal{S}_{g}\left(
\alpha ,\beta \right) .$$We then deduce from Definition $\ref{dfn4}$ that$$a_{n}=\frac{\left( 1+\rho \right) \left( 2+\rho \right) }{\left( n+\rho
\right) \left( n+1+\rho \right) }\varphi _{n}\qquad \left( n=2,3,\ldots
;\;\rho \in \mathbb{R}\backslash \left( -\infty ,-1\right] \right) .$$Thus, by using Theorem $\ref{thm1}$ in conjunction with the above equality, we have assertion $\left( \ref{3.1}\right) $ of Theorem $\ref{thm2}$.
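We remark that the coefficient relation used in the above proof is obtained by comparing the coefficients of $z^{n}$ on both sides of the differential equation in Definition $\ref{dfn4}$: writing $w=f(z)=z+\sum_{n=2}^{\infty }a_{n}z^{n}$ and $\varphi (z)=z+\sum_{n=2}^{\infty }\varphi _{n}z^{n}$, we find$$\left[ n\left( n-1\right) +2\left( 1+\rho \right) n+\rho \left( 1+\rho \right) \right] a_{n}=\left( n+\rho \right) \left( n+1+\rho \right) a_{n}=\left( 1+\rho \right) \left( 2+\rho \right) \varphi _{n}\qquad \left( n=2,3,\ldots \right) .$$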
Letting $\beta \rightarrow \infty $ in Theorem $\ref{thm2}$, we have the following consequence.
Let $\alpha $ and $\delta $ be real numbers such that $0\leq \alpha
,\delta <1$ and let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. If $f\in \mathcal{B}_{g}\left( \alpha ;\rho \right) $, then$$\left\vert a_{n}\right\vert \leq \left\{ \frac{\prod\limits_{k=2}^{n}\left(
k-2\delta \right) }{n!}+\frac{2\left( 1-\alpha \right) }{n}\left(
1+\sum_{j=1}^{n-2}\frac{\prod\limits_{k=2}^{n-j}\left( k-2\delta \right) }{\left( n-j-1\right) !}\right) \right\} \frac{\left( 1+\rho \right) \left(
2+\rho \right) }{\left( n+\rho \right) \left( n+1+\rho \right) }\qquad
\left( n=2,3,\ldots \right) ,$$where $\rho \in \mathbb{R}\backslash \left( -\infty ,-1\right] .$
Letting $\delta =0,\;\beta \rightarrow \infty $ in Theorem $\ref{thm2}$, we have the following consequence.
Let $\alpha $ be a real number such that $0\leq \alpha <1$ and let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. If $f\in \mathcal{H}_{g}\left( \alpha ;\rho \right) $, then$$\left\vert a_{n}\right\vert \leq \left[ n\left( 1-\alpha \right) +\alpha \right] \frac{\left( 1+\rho \right) \left( 2+\rho \right) }{\left( n+\rho
\right) \left( n+1+\rho \right) }\qquad \left( n=2,3,\ldots \right) ,$$where $\rho \in \mathbb{R}\backslash \left( -\infty ,-1\right] .$
Letting $\alpha =\delta =0,\;\beta \rightarrow \infty $ in Theorem $\ref{thm2}$, we have the following consequence.
Let the function $f\in \mathcal{A}$ be defined by $(\ref{1.1})$. If $f\in
\mathcal{M}_{g}\left( \rho \right) $, then$$\left\vert a_{n}\right\vert \leq n\frac{\left( 1+\rho \right) \left( 2+\rho
\right) }{\left( n+\rho \right) \left( n+1+\rho \right) }\qquad \left(
n=2,3,\ldots \right) ,$$where $\rho \in \mathbb{R}\backslash \left( -\infty ,-1\right] .$
[99]{} S. Bulut, *Coefficient bounds for certain subclasses of close-to-convex functions of complex order*, Filomat **31** (20) (2017), 6401–6408.
S. Bulut, M. Hussain and A. Ghafoor, *On coefficient bounds of some new subfamilies of close-to-convex functions of complex order related to generalized differential operator*, Asian-Eur. J. Math. **13** (1) (2020), in press.
W. Kaplan, *Close-to-convex schlicht functions*, Michigan Math. J. **1** (1952), 169–185 (1953).
K. Kuroki and S. Owa, *Notes on new class for certain analytic functions*, RIMS Kokyuroku **1772** (2011), 21–25.
R.J. Libera, *Some radius of convexity problems*, Duke Math. J. **31** (1964), 143–158.
M.O. Reade, *On close-to-convex univalent functions*, Michigan Math. J. **3** (1) (1955), 59–62.
M.S. Robertson, *On the theory of univalent functions*, Ann. of Math. (2) **37** (2) (1936), 374–408.
W. Rogosinski, *On the coefficients of subordinate functions*, Proc. London Math. Soc. (Ser. 2) **48** (1943), 48–82.
H.M. Srivastava, O. Altintaş and S. Kirci Serenbay, *Coefficient bounds for certain subclasses of starlike functions of complex order*, Appl. Math. Lett. **24** (2011), 1359–1363.
H.M. Srivastava, Q.-H. Xu and G.-P. Wu, *Coefficient estimates for certain subclasses of spiral-like functions of complex order*, Appl. Math. Lett. **23** (2010), 763–768.
B.A. Uralegaddi, M.D. Ganigi and S.M. Sarangi, *Univalent functions with positive coefficients*, Tamkang J. Math. **25** (1994), 225–230.
W. Ul-Haq, A. Nazneen and N. Rehman, *Coefficient estimates for certain subfamilies of close-to-convex functions of complex order*, Filomat **28** (6) (2014), 1139–1142.
W. Ul-Haq, A. Nazneen, M. Arif and N. Rehman, *Coefficient bounds for certain subclasses of close-to-convex functions of Janowski type*, J. Comput. Anal. Appl. **16** (1) (2014), 133–138.
|
---
abstract: 'In contemporary applied and computational mathematics, a frequent challenge is to bound the expectation of the spectral norm of a sum of independent random matrices. This quantity is controlled by the norm of the expected square of the random matrix and the expectation of the maximum squared norm achieved by one of the summands; there is also a weak dependence on the dimension of the random matrix. The purpose of this paper is to give a complete, elementary proof of this important, but underappreciated, inequality.'
author:
- 'Joel A. Tropp'
date: '15 June 2015.'
title: |
The Expected Norm of a Sum of Independent Random Matrices:\
An Elementary Approach
---
Motivation
==========
Over the last decade, random matrices have become ubiquitous in applied and computational mathematics. As this trend accelerates, more and more researchers must confront random matrices as part of their work. Classical random matrix theory can be difficult to use, and it is often silent about the questions that come up in modern applications. As a consequence, it has become imperative to develop and disseminate new tools that are easy to use and that apply to a wide range of random matrices.
Matrix Concentration Inequalities
---------------------------------
Matrix concentration inequalities are among the most popular of these new methods. For a random matrix ${{\bm{Z}}}$ with appropriate structure, these results use simple parameters associated with the random matrix to provide bounds of the form $${\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{Z}}} - {\operatorname{\mathbb{E}}}{{\bm{Z}}} } \right\Vert} \quad \leq \quad \dots
\quad\text{and}\quad
{{\mathbb}{P}\left\{{ {\left\Vert { {{\bm{Z}}} - {\operatorname{\mathbb{E}}}{{\bm{Z}}} } \right\Vert} \geq t }\right\}}
\quad\leq\quad \dots$$ where ${\left\Vert {\cdot} \right\Vert}$ denotes the spectral norm, also known as the $\ell_2$ operator norm. These tools have already found a place in a huge number of mathematical research fields, including
- numerical linear algebra [@Tro11:Improved-Analysis]
- numerical analysis [@MB14:Far-Field-Compression]
- uncertainty quantification [@CG14:Computing-Active]
- statistics [@Kol11:Oracle-Inequalities]
- econometrics [@CC13:Optimal-Uniform]
- approximation theory [@CDL13:Stability-Accuracy]
- sampling theory [@BG13:Relevant-Sampling]
- machine learning [@DKC13:High-Dimensional-Gaussian; @LSS+14:Randomized-Nonlinear]
- learning theory [@FSV12:Learning-Functions; @MKR12:PAC-Bayesian]
- mathematical signal processing [@CBSW14:Coherent-Matrix]
- optimization [@CSW12:Linear-Matrix]
- computer graphics and vision [@CGH14:Near-Optimal-Joint]
- quantum information theory [@Hol12:Quantum-Systems]
- theory of algorithms [@HO14:Pipage-Rounding; @CKMP14:Solving-SDD] and
- combinatorics [@Oli10:Spectrum-Random].
These references are chosen more or less at random from a long menu of possibilities. See the monograph [@Tro15:Introduction-Matrix] for an overview of the main results on matrix concentration, many detailed applications, and additional background references.
The Expected Norm
-----------------
The purpose of this paper is to provide a complete proof of the following important, but underappreciated, theorem. This result is adapted from [@CGT12:Masked-Sample Thm. A.1].
\[thm:main\] Consider an independent family $\{ {{\bm{S}}}_1, \dots, {{\bm{S}}}_n \}$ of random $d_1 \times d_2$ complex-valued matrices with ${\operatorname{\mathbb{E}}}{{\bm{S}}}_i = {{\bm{0}}}$ for each index $i$, and define $$\label{eqn:indep-sum}
{{\bm{Z}}} := \sum_{i=1}^n {{\bm{S}}}_i.$$ Introduce the matrix variance parameter $$\label{eqn:variance-param}
\begin{aligned}
v({{\bm{Z}}}) :=& \max\left\{ {\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{ZZ}}}^{*}\big] } \right\Vert}, \
{\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{Z}}}^{*}{{\bm{Z}}} \big] } \right\Vert} \right\} \\
=& \max\left\{ {\left\Vert { \sum_i {\operatorname{\mathbb{E}}}\big[ {{\bm{S}}}_i {{\bm{S}}}_i^{*}\big] } \right\Vert}, \
{\left\Vert { \sum_i {\operatorname{\mathbb{E}}}\big[ {{\bm{S}}}_i^{*}{{\bm{S}}}_i \big] } \right\Vert} \right\}
\end{aligned}$$ and the large deviation parameter $$\label{eqn:large-dev-param}
L := \left( {\operatorname{\mathbb{E}}}\max\nolimits_i {{\left\Vert {{{\bm{S}}}_i} \right\Vert}^2} \right)^{1/2}.$$ Define the dimensional constant $$\label{eqn:dimensional}
C({{\bm{d}}}) := C(d_1, d_2) := 4 \cdot \big(1 + 2\lceil \log (d_1 + d_2) \rceil \big).$$ Then we have the matching estimates $$\label{eqn:main-ineqs}
\sqrt{c \cdot v({{\bm{Z}}})} \ +\ c \cdot L
\quad\leq\quad \left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2}
\quad\leq\quad \sqrt{C({{\bm{d}}}) \cdot v({{\bm{Z}}})}\ +\ C({{\bm{d}}}) \cdot L.$$ In the lower inequality, we can take $c := 1/4$. The symbol ${\left\Vert {\cdot} \right\Vert}$ denotes the $\ell_2$ operator norm, also known as the spectral norm, and ${}^*$ refers to the conjugate transpose operation. The map $\lceil \cdot \rceil$ returns the smallest integer that exceeds its argument.
The proof of this result occupies the bulk of this paper. Most of the page count is attributed to a detailed presentation of the required background material from linear algebra and probability. We have based the argument on the most elementary considerations possible, and we have tried to make the work self-contained. Once the reader has digested these ideas, the related—but more sophisticated—approach in the paper [@MJCFT14:Matrix-Concentration] should be accessible.
Discussion
----------
Before we continue, some remarks about Theorem \[thm:main\] are in order. First, although it may seem restrictive to focus on independent sums, as in , this model captures an enormous number of useful examples. See the monograph [@Tro15:Introduction-Matrix] for justification.
We have chosen the term *variance parameter* because the quantity is a direct generalization of the variance of a scalar random variable. The passage from the first formula to the second formula in is an immediate consequence of the assumption that the summands ${{\bm{S}}}_i$ are independent and have zero mean (see Section \[sec:upper\]). We use the term *large-deviation parameter* because the quantity reflects the part of the expected norm of the random matrix that is attributable to one of the summands taking an unusually large value. In practice, both parameters are easy to compute using matrix arithmetic and some basic probabilistic considerations.
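As an illustration (a numerical sketch only, not part of the analysis; the model is a matrix Rademacher series, and the dimensions, the number of summands, and the number of Monte Carlo trials below are arbitrary choices), the following Python code computes the variance parameter and the large-deviation parameter for a sum of fixed matrices modulated by independent random signs, estimates the expected squared norm by simulation, and confirms that the estimate lands between the two bounds of Theorem \[thm:main\]:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, n = 20, 30, 50
A = rng.standard_normal((n, d1, d2)) / np.sqrt(n)   # fixed summands A_i

# Variance parameter: for S_i = eps_i * A_i with independent signs eps_i,
# E[S_i S_i^*] = A_i A_i^* and E[S_i^* S_i] = A_i^* A_i.
row_sq = sum(Ai @ Ai.T for Ai in A)
col_sq = sum(Ai.T @ Ai for Ai in A)
v = max(np.linalg.norm(row_sq, 2), np.linalg.norm(col_sq, 2))

# Large-deviation parameter: L^2 = E max_i ||S_i||^2 = max_i ||A_i||^2 here,
# because the random signs do not affect the spectral norm.
L = max(np.linalg.norm(Ai, 2) for Ai in A)

# Monte Carlo estimate of E ||Z||^2 for Z = sum_i eps_i A_i.
trials = 2000
norms_sq = np.empty(trials)
for t in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)
    Z = np.tensordot(eps, A, axes=1)
    norms_sq[t] = np.linalg.norm(Z, 2) ** 2
root_mean = np.sqrt(norms_sq.mean())

C_d = 4 * (1 + 2 * np.ceil(np.log(d1 + d2)))
lower = np.sqrt(v / 4) + L / 4
upper = np.sqrt(C_d * v) + C_d * L
print(lower, root_mean, upper)   # expect lower <= root_mean <= upper
```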
In applications, it is common that we need high-probability bounds on the norm of a random matrix. Typically, the bigger challenge is to estimate the expectation of the norm, which is what Theorem \[thm:main\] achieves. Once we have a bound for the expectation, we can use scalar concentration inequalities, such as [@BLM13:Concentration-Inequalities Thm. 6.10], to obtain high-probability bounds on the deviation between the norm and its mean value.
We have stated Theorem \[thm:main\] as a bound on the second moment of ${\left\Vert {{{\bm{Z}}}} \right\Vert}$ because this is the most natural form of the result. Equivalent bounds hold for the first moment: $$\sqrt{c' \cdot v({{\bm{Z}}})} \ +\ c' \cdot L
\quad\leq\quad {\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{Z}}} } \right\Vert}
\quad\leq\quad \sqrt{C({{\bm{d}}}) \cdot v({{\bm{Z}}})}\ +\ C({{\bm{d}}}) \cdot L.$$ We can take $c' = 1/8$. The upper bound follows easily from and Jensen’s inequality. The lower bound requires the Khintchine–Kahane inequality [@LO94:Best-Constant].
Observe that the lower and upper estimates in differ only by the factor $C({{\bm{d}}})$. As a consequence, the lower bound has no explicit dimensional dependence, while the upper bound has only a weak dependence on the dimension. Under the assumptions of the theorem, it is not possible to make substantial improvements to either the lower bound or the upper bound. Section \[sec:examples\] provides examples that support this claim.
In the theory of matrix concentration, one of the major challenges is to understand what properties of the random matrix ${{\bm{Z}}}$ allow us to remove the dimensional factor $C({{\bm{d}}})$ from the estimate . This question is largely open, but the recent papers [@Oli13:Lower-Tail; @BV14:Sharp-Nonasymptotic; @Tro15:Second-Order-Matrix] make some progress.
The Uncentered Case
-------------------
Although Theorem \[thm:main\] concerns a centered random matrix, it can also be used to study a general random matrix. The following result is an immediate corollary of Theorem \[thm:main\].
\[thm:uncentered\] Consider an independent family $\{ {{\bm{S}}}_1, \dots, {{\bm{S}}}_n \}$ of random $d_1 \times d_2$ complex-valued matrices, not necessarily centered. Define $${{\bm{R}}} := \sum_{i=1}^n {{\bm{S}}}_i.$$ Introduce the variance parameter $$\begin{aligned}
v({{\bm{R}}}) :=& \max\left\{ {\left\Vert { {\operatorname{\mathbb{E}}}\big[ ({{\bm{R}}} - {\operatorname{\mathbb{E}}}{{\bm{R}}})({{\bm{R}}} - {\operatorname{\mathbb{E}}}{{\bm{R}}})^{*}\big] } \right\Vert}, \
{\left\Vert { {\operatorname{\mathbb{E}}}\big[ ({{\bm{R}}} - {\operatorname{\mathbb{E}}}{{\bm{R}}})^{*}({{\bm{R}}} - {\operatorname{\mathbb{E}}}{{\bm{R}}}) \big] } \right\Vert} \right\} \\
=& \max\left\{ {\left\Vert { \sum_{i=1}^n {\operatorname{\mathbb{E}}}\big[ ({{\bm{S}}}_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i)({{\bm{S}}}_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i)^{*}\big] } \right\Vert}, \
{\left\Vert { \sum_{i=1}^n {\operatorname{\mathbb{E}}}\big[ ({{\bm{S}}}_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i)^{*}({{\bm{S}}}_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i) \big] } \right\Vert} \right\}
\end{aligned}$$ and the large-deviation parameter $$L^2 := {\operatorname{\mathbb{E}}}\max\nolimits_i {{\left\Vert { {{\bm{S}}}_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i } \right\Vert}^2}.$$ Then we have the matching estimates $$\sqrt{c \cdot v({{\bm{R}}})}\ +\ c \cdot L
\quad\leq\quad \left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{R}}} - {\operatorname{\mathbb{E}}}{{\bm{R}}}} \right\Vert}^2} \right)^{1/2}
\quad\leq\quad \sqrt{C({{\bm{d}}}) \cdot v({{\bm{R}}})} \ + \ C({{\bm{d}}}) \cdot L.$$ We can take $c = 1/4$, and the dimensional constant $C({{\bm{d}}})$ is defined in .
Theorem \[thm:uncentered\] can also be used to study ${\left\Vert {{{\bm{R}}}} \right\Vert}$ by combining it with the estimates $${\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{R}}} } \right\Vert} \ -\ \left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{R}}} - {\operatorname{\mathbb{E}}}{{\bm{R}}} } \right\Vert}^2} \right)^{1/2}
\quad\leq\quad \left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{R}}}} \right\Vert}^2} \right)^{1/2}
\quad\leq\quad {\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{R}}} } \right\Vert} \ + \ \left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{R}}} - {\operatorname{\mathbb{E}}}{{\bm{R}}} } \right\Vert}^2} \right)^{1/2}.$$ These bounds follow from the triangle inequality for the spectral norm.
It is productive to interpret Theorem \[thm:uncentered\] as a perturbation result because it describes how far the random matrix ${{\bm{R}}}$ deviates from its mean ${\operatorname{\mathbb{E}}}{{\bm{R}}}$. We can derive many useful consequences from a bound of the form $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{R}}} - {\operatorname{\mathbb{E}}}{{\bm{R}}} } \right\Vert}^2} \right)^{1/2}
\quad\leq\quad \dots$$ This estimate shows that, on average, all of the singular values of ${{\bm{R}}}$ are close to the corresponding singular values of ${\operatorname{\mathbb{E}}}{{\bm{R}}}$. It also implies that, on average, the singular vectors of ${{\bm{R}}}$ are close to the corresponding singular vectors of ${\operatorname{\mathbb{E}}}{{\bm{R}}}$, provided that the associated singular values are isolated. Furthermore, we discover that, on average, each linear functional ${\operatorname{tr}}[ {{\bm{CR}}} ]$ is uniformly close to ${\operatorname{\mathbb{E}}}{\operatorname{tr}}[ {{\bm{CR}}} ]$ for each fixed matrix ${{\bm{C}}} \in \mathbb{M}^{d_2 \times d_1}$ with bounded Schatten $1$-norm ${{\left\Vert {{{\bm{C}}}} \right\Vert}_{S_1}} \leq 1$.
History
-------
Theorem \[thm:main\] is not new. A somewhat weaker version of the upper bound appeared in Rudelson’s work [@Rud99:Random-Vectors Thm. 1]; see also [@RV07:Sampling-Large Thm. 3.1] and [@Tro08:Conditioning-Random Sec. 9]. The first explicit statement of the upper bound appeared in [@CGT12:Masked-Sample Thm. A.1]. All of these results depend on the noncommutative Khintchine inequality [@LP86:Inegalites-Khintchine; @Pis98:Noncommutative-Vector; @Buc01:Operator-Khintchine]. In our approach, the main innovation is a particularly easy proof of a Khintchine-type inequality for matrices, patterned after [@MJCFT14:Matrix-Concentration Cor 7.3] and [@Tro15:Second-Order-Matrix Thm. 8.1].
The ideas behind the proof of the lower bound in Theorem \[thm:main\] are older. This estimate depends on generic considerations about the behavior of a sum of independent random variables in a Banach space. These techniques are explained in detail in [@LT91:Probability-Banach Ch. 6]. Our presentation expands on a proof sketch that appears in the monograph [@Tro15:Introduction-Matrix Secs. 5.1.2 and 6.1.2].
Target Audience
---------------
This paper is intended for students and researchers who want to develop a detailed understanding of the foundations of matrix concentration. The preparation required is modest.
- **Basic Convexity.** Some simple ideas from convexity play a role, notably the concept of a convex function and Jensen’s inequality.
- **Intermediate Linear Algebra.** The requirements from linear algebra are more substantial. The reader should be familiar with the spectral theorem for Hermitian (or symmetric) matrices, Rayleigh’s variational principle, the trace of a matrix, and the spectral norm. The paper includes reminders about this material. The paper elaborates on some less familiar ideas, including inequalities for the trace and the spectral norm.
- **Intermediate Probability.** The paper demands some comfort with probability. The most important concepts are expectation and the elementary theory of conditional expectation. We develop the other key ideas, including the notion of symmetrization.
Although many readers will find the background material unnecessary, it is hard to locate these ideas in one place and we prefer to make the paper self-contained. In any case, we provide detailed cross-references so that the reader may dive into the proofs of the main results without wading through the shallower part of the paper.
Roadmap
-------
Section \[sec:linear-algebra\] and Section \[sec:probability\] contain the background material from linear algebra and probability. To prove the upper bound in Theorem \[thm:main\], the key step is to establish the result for the special case of a sum of fixed matrices, each modulated by a random sign. This result appears in Section \[sec:khintchine\]. In Section \[sec:upper\], we exploit this result to obtain the upper bound in . In Section \[sec:lower\], we present the easier proof of the lower bound in . Finally, Section \[sec:examples\] shows that it is not possible to improve substantially.
Linear Algebra Background {#sec:linear-algebra}
=========================
Our aim is to make this paper as accessible as possible. To that end, this section presents some background material from linear algebra. Good references include [@Hal74:Finite-Dimensional-Vector; @Bha97:Matrix-Analysis; @HJ13:Matrix-Analysis]. We also assume some familiarity with basic ideas from the theory of convexity, which may be found in the books [@Lue69:Optimization-Vector; @Roc70:Convex-Analysis; @Bar02:Course-Convexity; @BV04:Convex-Optimization].
Convexity
---------
Let $V$ be a finite-dimensional linear space. A subset $E \subset V$ is [convex]{} when $${{\bm{x}}}, {{\bm{y}}} \in E
\quad\text{implies}\quad
\tau \cdot {{\bm{x}}} + (1- \tau) \cdot {{\bm{y}}} \in E
\quad\text{for each $\tau \in [0, 1]$.}$$ Let $E$ be a convex subset of a linear space $V$. A function $f : E \to {{\mathbb}{R}}$ is [convex]{} if $$\label{eqn:convexity}
f\big( \tau {{\bm{x}}} + (1-\tau) {{\bm{y}}} \big) \leq \tau \cdot f({{\bm{x}}}) + (1-\tau) \cdot f({{\bm{y}}})
\quad\text{for all $\tau \in [0,1]$ and all ${{\bm{x}}}, {{\bm{y}}} \in E$.}$$ We say that $f$ is [concave]{} when $-f$ is convex.
Vector Basics
-------------
Let ${{\mathbb}{C}}^d$ be the complex linear space of $d$-dimensional complex vectors, equipped with the usual componentwise addition and scalar multiplication. The $\ell_2$ norm ${\left\Vert {\cdot} \right\Vert}$ is defined on ${{\mathbb}{C}}^d$ via the expression $$\label{eqn:l2-norm}
{{\left\Vert { {{\bm{x}}} } \right\Vert}^2} := {{\bm{x}}}^{*}{{\bm{x}}}
\quad\text{for each ${{\bm{x}}} \in {{\mathbb}{C}}^d$.}$$ The symbol ${}^{*}$ denotes the conjugate transpose of a vector. Recall that the $\ell_2$ norm is a convex function.
A family $\{ {{\bm{u}}}_1, \dots, {{\bm{u}}}_d \} \subset {{\mathbb}{C}}^d$ is called an [orthonormal basis]{} if it satisfies the relations $${{\bm{u}}}_i^{*}{{\bm{u}}}_j = \begin{cases} 1, & i = j \\ 0, & i \neq j. \end{cases}$$ The orthonormal basis also has the property $$\sum_{i=1}^d {{\bm{u}}}_i {{\bm{u}}}_i^{*}= {\mathbf{I}}_d$$ where ${\mathbf{I}}_d$ is the $d \times d$ identity matrix.
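As a quick sanity check (illustrative only, not part of the paper), both properties of an orthonormal basis can be verified numerically by taking the columns of a random unitary matrix.

```python
# Numerical check that an orthonormal basis {u_1, ..., u_d} of C^d satisfies
# u_i^* u_j = delta_{ij} and sum_i u_i u_i^* = I_d.
import numpy as np

rng = np.random.default_rng(1)
d = 6
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Q, _ = np.linalg.qr(A)            # Q is unitary, so its columns are orthonormal

resolution = sum(np.outer(Q[:, i], Q[:, i].conj()) for i in range(d))
assert np.allclose(Q.conj().T @ Q, np.eye(d))   # orthonormality relations
assert np.allclose(resolution, np.eye(d))       # resolution of the identity
```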
Matrix Basics
-------------
A [matrix]{} is a rectangular array of complex numbers. Addition and multiplication by a complex scalar are defined componentwise, and we can multiply two matrices with compatible dimensions. We write $\mathbb{M}^{d_1 \times d_2}$ for the complex linear space of $d_1 \times d_2$ matrices. The symbol ${}^{*}$ also refers to the conjugate transpose operation on matrices.
A square matrix ${{\bm{H}}}$ is [Hermitian]{} when ${{\bm{H}}} = {{\bm{H}}}^{*}$. Hermitian matrices are sometimes called [conjugate symmetric]{}. We introduce the set of $d \times d$ Hermitian matrices: $${\mathbb{H}}_d := \big\{ {{\bm{H}}} \in \mathbb{M}^{d \times d} : {{\bm{H}}} = {{\bm{H}}}^{{*}} \big\}.$$ Note that the set ${\mathbb{H}}_d$ is a linear space over the real field.
An Hermitian matrix ${{\bm{A}}} \in {\mathbb{H}}_d$ is [positive semidefinite]{} when $$\label{eqn:psd}
{{\bm{u}}}^{*}{{\bm{A}}} {{\bm{u}}} \geq 0
\quad\text{for each ${{\bm{u}}} \in {{\mathbb}{C}}^d$.}$$ It is convenient to use the notation ${{\bm{A}}} {\preccurlyeq}{{\bm{H}}}$ to mean that ${{\bm{H}}} - {{\bm{A}}}$ is positive semidefinite. In particular, the relation ${{\bm{0}}} {\preccurlyeq}{{\bm{H}}}$ is equivalent to ${{\bm{H}}}$ being positive semidefinite. Observe that $${{\bm{0}}} {\preccurlyeq}{{\bm{A}}}
\quad\text{and}\quad
{{\bm{0}}} {\preccurlyeq}{{\bm{H}}}
\quad\text{implies}\quad
{{\bm{0}}} {\preccurlyeq}\alpha \cdot ({{\bm{A}}} + {{\bm{H}}})
\quad\text{for each $\alpha \geq 0$.}$$ In other words, addition and nonnegative scaling preserve the positive-semidefinite property.
For every matrix ${{\bm{B}}}$, both of its squares ${{\bm{BB}}}^{*}$ and ${{\bm{B}}}^{*}{{\bm{B}}}$ are Hermitian and positive semidefinite.
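The following sketch (an illustration, not part of the text) confirms this for a random rectangular matrix: both squares are Hermitian, and their eigenvalues are nonnegative up to numerical error.

```python
# Check that B B^* and B^* B are Hermitian and positive semidefinite for a random B.
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))

for S in (B @ B.conj().T, B.conj().T @ B):
    assert np.allclose(S, S.conj().T)             # Hermitian
    assert np.linalg.eigvalsh(S).min() >= -1e-12  # nonnegative spectrum
```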
Basic Spectral Theory
---------------------
Each Hermitian matrix ${{\bm{H}}} \in {\mathbb{H}}_d$ can be expressed in the form $$\label{eqn:eig-decomp}
{{\bm{H}}} = \sum_{i=1}^d \lambda_i {{\bm{u}}}_i {{\bm{u}}}_i^{*}$$ where the $\lambda_i$ are uniquely determined real numbers, called [eigenvalues]{}, and $\{ {{\bm{u}}}_i \}$ is an orthonormal basis for ${{\mathbb}{C}}^d$. The representation is called an [eigenvalue decomposition]{}.
An Hermitian matrix ${{\bm{H}}}$ is positive semidefinite if and only if its eigenvalues $\lambda_i$ are all nonnegative. Indeed, using the eigenvalue decomposition , we see that $${{\bm{u}}}^{*}{{\bm{H}}} {{\bm{u}}}
= \sum_{i=1}^d \lambda_i \cdot {{\bm{u}}}^{*}{{\bm{u}}}_i {{\bm{u}}}_i^{*}{{\bm{u}}}
= \sum_{i=1}^d \lambda_i \cdot {{{\left\vert { \smash{{{\bm{u}}}^{*}{{\bm{u}}}_i} } \right\vert}}^2}.$$ To verify the forward direction, select ${{\bm{u}}} = {{\bm{u}}}_j$ for each index $j$. The reverse direction follows because the last expression is a sum of nonnegative terms whenever the eigenvalues $\lambda_i$ are all nonnegative.
We define a [monomial]{} function of an Hermitian matrix ${{\bm{H}}} \in {\mathbb{H}}_d$ by repeated multiplication: $${{\bm{H}}}^0 = {\mathbf{I}}_d,
\quad {{\bm{H}}}^1 = {{\bm{H}}},
\quad {{\bm{H}}}^2 = {{\bm{H}}} \cdot {{\bm{H}}},
\quad {{\bm{H}}}^3 = {{\bm{H}}} \cdot {{\bm{H}}}^2,
\quad\text{etc.}$$ For each nonnegative integer $r$, it is not hard to check that $$\label{eqn:monomial}
{{\bm{H}}} = \sum_{i=1}^d \lambda_i {{\bm{u}}}_i {{\bm{u}}}_i^{*}\quad\text{implies}\quad
{{\bm{H}}}^r = \sum_{i=1}^d \lambda_i^r {{\bm{u}}}_i {{\bm{u}}}_i^{*}.$$ In particular, ${{\bm{H}}}^{2p}$ is positive semidefinite for each nonnegative integer $p$.
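Here is a small numerical confirmation (illustrative only, assuming NumPy) that a monomial of an Hermitian matrix keeps the same eigenvectors, with the eigenvalues raised to the power $r$.

```python
# Check that H^r = sum_i lambda_i^r u_i u_i^* for a random Hermitian H.
import numpy as np

rng = np.random.default_rng(3)
d, r = 5, 3
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (A + A.conj().T) / 2                  # a random Hermitian matrix
lam, U = np.linalg.eigh(H)                # H = U diag(lam) U^*

Hr_direct = np.linalg.matrix_power(H, r)
Hr_spectral = (U * lam**r) @ U.conj().T   # sum_i lam_i^r u_i u_i^*
assert np.allclose(Hr_direct, Hr_spectral)
```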
Rayleigh’s Variational Principle
--------------------------------
The [Rayleigh principle]{} is an attractive expression for the maximum eigenvalue $\lambda_{\max}({{\bm{H}}})$ of an Hermitian matrix ${{\bm{H}}} \in {\mathbb{H}}_d$. This result states that $$\label{eqn:rayleigh}
\lambda_{\max}({{\bm{H}}}) = \max_{{\left\Vert { {{\bm{u}}}} \right\Vert} =1 } \ {{\bm{u}}}^{*}{{\bm{H}}} {{\bm{u}}}.$$ The maximum takes place over all unit-norm vectors ${{\bm{u}}} \in {{\mathbb}{C}}^d$. The identity follows from the Lagrange multiplier theorem and the existence of the eigenvalue decomposition . Similarly, the minimum eigenvalue $\lambda_{\min}({{\bm{H}}})$ satisfies $$\label{eqn:rayleigh-min}
\lambda_{\min}({{\bm{H}}}) = \min_{{\left\Vert { {{\bm{u}}}} \right\Vert} = 1 } \ {{\bm{u}}}^{*}{{\bm{H}}} {{\bm{u}}}.$$ We can obtain by applying to $-{{\bm{H}}}$.
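The variational formulas are easy to test numerically. The sketch below (not part of the paper) checks that the quadratic form at a random unit vector is sandwiched between the extreme eigenvalues and that the top eigenvector attains the maximum.

```python
# Numerical illustration of Rayleigh's variational principle.
import numpy as np

rng = np.random.default_rng(4)
d = 6
A = rng.standard_normal((d, d))
H = (A + A.T) / 2
lam, U = np.linalg.eigh(H)     # eigenvalues in ascending order

u = rng.standard_normal(d)
u /= np.linalg.norm(u)         # a random unit vector
quad = u @ H @ u
assert lam[0] - 1e-12 <= quad <= lam[-1] + 1e-12
assert np.isclose(U[:, -1] @ H @ U[:, -1], lam[-1])   # the top eigenvector is a maximizer
```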
Rayleigh’s principle implies that order relations for positive-semidefinite matrices lead to order relations for their eigenvalues.
\[fact:weyl\] Let ${{\bm{A}}}, {{\bm{H}}} \in {\mathbb{H}}_d$ be Hermitian matrices. Then $$\label{eqn:weyl}
{{\bm{A}}} {\preccurlyeq}{{\bm{H}}}
\quad\text{implies}\quad
\lambda_{\max}({{\bm{A}}}) \leq \lambda_{\max}({{\bm{H}}}).$$
The condition ${{\bm{A}}} {\preccurlyeq}{{\bm{H}}}$ implies that the eigenvalues of ${{\bm{H}}} - {{\bm{A}}}$ are nonnegative. Therefore, Rayleigh’s principle yields $$0 \leq \lambda_{\min}({{\bm{H}}} - {{\bm{A}}})
= \min_{{\left\Vert {{{\bm{u}}}} \right\Vert} = 1} \left( {{\bm{u}}}^{*}{{\bm{H}}} {{\bm{u}}} - {{\bm{u}}}^{*}{{\bm{A}}} {{\bm{u}}} \right)
\leq {{\bm{v}}}^{*}{{\bm{H}}} {{\bm{v}}} - {{\bm{v}}}^{*}{{\bm{A}}} {{\bm{v}}}$$ for any unit-norm vector ${{\bm{v}}}$. Select a unit-norm vector ${{\bm{v}}}$ for which $\lambda_{\max}({{\bm{A}}}) = {{\bm{v}}}^{*}{{\bm{A}}} {{\bm{v}}}$, and then rearrange: $$\lambda_{\max}( {{\bm{A}}} ) = {{\bm{v}}}^{*}{{\bm{A}}} {{\bm{v}}} \leq {{\bm{v}}}^{*}{{\bm{H}}} {{\bm{v}}}
\leq \lambda_{\max}({{\bm{H}}}).$$ The last relation is Rayleigh’s principle .
The Trace
---------
The [trace]{} of a square matrix ${{\bm{B}}} \in \mathbb{M}^{d \times d}$ is defined as $$\label{eqn:trace}
{\operatorname{tr}}{{\bm{B}}} := \sum_{i=1}^d b_{ii}.$$ It is clear that the trace is a linear functional on $\mathbb{M}^{d \times d}$. By direct calculation, one may verify that $${\operatorname{tr}}[ {{\bm{BC}}} ] = {\operatorname{tr}}[ {{\bm{CB}}} ]
\quad\text{for all ${{\bm{B}}} \in \mathbb{M}^{d \times r}$ and ${{\bm{C}}} \in \mathbb{M}^{r \times d}$.}$$ This property is called the [cyclicity]{} of the trace.
The trace of an Hermitian matrix ${{\bm{H}}} \in {\mathbb{H}}_d$ can also be expressed in terms of its eigenvalues: $$\label{eqn:trace-eig}
{\operatorname{tr}}{{\bm{H}}} = \sum_{i=1}^d \lambda_i.$$ This formula follows when we introduce the eigenvalue decomposition into . Then we invoke the linearity and the cyclicity properties of the trace, as well as the properties of an orthonormal basis. We also instate the convention that monomials bind before the trace: ${\operatorname{tr}}{{\bm{H}}}^r := {\operatorname{tr}}[ {{\bm{H}}}^r ]$ for each nonnegative integer $r$.
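Both trace properties are easy to verify numerically; the following check (illustrative only) uses random matrices.

```python
# Check the cyclicity of the trace and the eigenvalue formula for the trace.
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 7))
C = rng.standard_normal((7, 4))
assert np.isclose(np.trace(B @ C), np.trace(C @ B))            # cyclicity

A = rng.standard_normal((5, 5))
H = (A + A.T) / 2
assert np.isclose(np.trace(H), np.linalg.eigvalsh(H).sum())    # trace = sum of eigenvalues
```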
The Spectral Norm
-----------------
The [spectral norm]{} of a matrix ${{\bm{B}}} \in \mathbb{M}^{d_1 \times d_2}$ is defined as $$\label{eqn:spectral-norm}
{\left\Vert { {{\bm{B}}} } \right\Vert} := \max_{{\left\Vert {{{\bm{u}}}} \right\Vert} = 1}\ {\left\Vert { {{\bm{B}}} {{\bm{u}}} } \right\Vert}.$$ The maximum takes place over unit-norm vectors ${{\bm{u}}} \in {{\mathbb}{C}}^{d_2}$. We have the important identity $$\label{eqn:B*}
{{\left\Vert { {{\bm{B}}} } \right\Vert}^2} = {\left\Vert { \smash{{{\bm{B}}}^{*}{{\bm{B}}}} } \right\Vert} = {\left\Vert { \smash{{{\bm{BB}}}^{*}} } \right\Vert}
\quad\text{for every matrix ${{\bm{B}}}$.}$$ Furthermore, the spectral norm is a convex function, and it satisfies the triangle inequality.
For an Hermitian matrix, the spectral norm can be written in terms of the eigenvalues: $$\label{eqn:norm-herm}
{\left\Vert { {{\bm{H}}} } \right\Vert} = \max\big\{ \lambda_{\max}({{\bm{H}}}), \ - \lambda_{\min}({{\bm{H}}}) \big\}
\quad\text{for each Hermitian matrix ${{\bm{H}}}$.}$$ As a consequence, $$\label{eqn:norm-psd}
{\left\Vert { {{\bm{A}}} } \right\Vert} = \lambda_{\max}({{\bm{A}}})
\quad\text{for each positive-semidefinite matrix ${{\bm{A}}}$.}$$ This discussion implies that $$\label{eqn:norm-power}
{\left\Vert { {{\bm{H}}} } \right\Vert}^{2p} = {\left\Vert { \smash{{{\bm{H}}}^{2p}} } \right\Vert}
\quad\text{for each Hermitian ${{\bm{H}}}$ and each nonnegative integer $p$.}$$ Use the relations and to verify this fact.
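The identities just described can be confirmed numerically; here is a brief sketch (not from the paper) using NumPy's spectral norm.

```python
# Check ||B||^2 = ||B^* B|| = ||B B^*|| and ||H||^{2p} = ||H^{2p}||.
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((3, 6))
assert np.isclose(np.linalg.norm(B, 2)**2, np.linalg.norm(B.T @ B, 2))
assert np.isclose(np.linalg.norm(B, 2)**2, np.linalg.norm(B @ B.T, 2))

A = rng.standard_normal((5, 5))
H = (A + A.T) / 2
p = 3
assert np.isclose(np.linalg.norm(H, 2)**(2 * p),
                  np.linalg.norm(np.linalg.matrix_power(H, 2 * p), 2))
```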
Some Spectral Norm Inequalities
-------------------------------
We need some basic inequalities for the spectral norm. First, note that $$\label{eqn:norm-trace-bd}
{\left\Vert { {{\bm{A}}} } \right\Vert} \leq {\operatorname{tr}}{{\bm{A}}}
\quad\text{when ${{\bm{A}}}$ is positive semidefinite.}$$ This point follows from and because the eigenvalues of a positive-semidefinite matrix are nonnegative.
The next result uses the spectral norm to bound the trace of a product.
\[fact:trace-dual\] Consider Hermitian matrices ${{\bm{A}}}, {{\bm{H}}} \in {\mathbb{H}}_d$, and assume that ${{\bm{A}}}$ is positive semidefinite. Then $$\label{eqn:trace-dual}
{\operatorname{tr}}[ {{\bm{HA}}} ] \leq {\left\Vert {{{\bm{H}}}} \right\Vert} \cdot {\operatorname{tr}}{{\bm{A}}}.$$
Introducing the eigenvalue decomposition ${{\bm{A}}} = \sum_i \lambda_i {{\bm{u}}}_i {{\bm{u}}}_i^{*}$, we see that $${\operatorname{tr}}[ {{\bm{HA}}} ] = \sum_i \lambda_i {\operatorname{tr}}[ {{\bm{H}}} {{\bm{u}}}_i {{\bm{u}}}_i^{*}]
= \sum_i \lambda_i {{\bm{u}}}_i^{*}{{\bm{H}}} {{\bm{u}}}_i
\leq \lambda_{\max}({{\bm{H}}}) \cdot \sum_i \lambda_i
\leq {\left\Vert { {{\bm{H}}} } \right\Vert} \cdot {\operatorname{tr}}{{\bm{A}}}.$$ The first two relations follow from linearity and cyclicity of the trace. The first inequality depends on Rayleigh’s principle and the nonnegativity of the eigenvalues $\lambda_i$. The last bound follows from .
We also need a bound for the norm of a sum of squared positive-semidefinite matrices.
\[fact:sum-squares\] Consider positive-semidefinite matrices ${{\bm{A}}}_1, \dots, {{\bm{A}}}_n \in {\mathbb{H}}_d$. Then $${\left\Vert { \sum_{i=1}^n {{\bm{A}}}_i^2 } \right\Vert} \leq
\max\nolimits_i {\left\Vert { {{\bm{A}}}_i } \right\Vert} \cdot {\left\Vert { \sum_{i=1}^n {{\bm{A}}}_i } \right\Vert}.$$
Let ${{\bm{A}}}$ be positive semidefinite. We claim that $$\label{eqn:square-claim}
{{\bm{A}}}^2 {\preccurlyeq}M \cdot {{\bm{A}}}
\quad\text{whenever $\lambda_{\max}({{\bm{A}}}) \leq M$.}$$ Indeed, introducing the eigenvalue decomposition ${{\bm{A}}} = \sum_i \lambda_i {{\bm{u}}}_i {{\bm{u}}}_i^{*}$, we find that $$M \cdot {{\bm{A}}} - {{\bm{A}}}^2
= M \cdot \sum_i \lambda_i {{\bm{u}}}_i {{\bm{u}}}_i^{*}- \sum_i \lambda_i^2 {{\bm{u}}}_i{{\bm{u}}}_i^{*}= \sum_i \big(M - \lambda_i \big) \lambda_i \cdot {{\bm{u}}}_i {{\bm{u}}}_i^{*}.$$ The first relation uses . Since $0 \leq \lambda_i \leq M$, the scalar coefficients in the sum are nonnegative. Therefore, the matrix $M \cdot {{\bm{A}}} - {{\bm{A}}}^2$ is positive semidefinite, which is what we needed to show.
Select $M := \max_i \lambda_{\max}({{\bm{A}}}_i)$. The inequality ensures that $${{\bm{A}}}_i^2 {\preccurlyeq}M \cdot {{\bm{A}}}_i
\quad\text{for each index $i$.}$$ Summing these relations, we see that $$\sum_{i=1}^n {{\bm{A}}}_i^2 {\preccurlyeq}M \cdot \sum_{i=1}^n {{\bm{A}}}_i.$$ The monotonicity principle, Fact \[fact:weyl\], yields the inequality $$\lambda_{\max}\left( \sum_{i=1}^n {{\bm{A}}}_i^2 \right)
\leq \lambda_{\max}\left( M \cdot \sum_{i=1}^n {{\bm{A}}}_i \right)
= M \cdot \lambda_{\max}\left( \sum_{i=1}^n {{\bm{A}}}_i \right).$$ We have used the fact that the maximum eigenvalue of an Hermitian matrix is positive homogeneous. Finally, recall that, per , the spectral norm of a positive-semidefinite matrix equals its maximum eigenvalue.
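For reassurance, Fact \[fact:sum-squares\] can be tested numerically on random positive-semidefinite matrices; the following sketch is purely illustrative.

```python
# Numerical check of ||sum_i A_i^2|| <= max_i ||A_i|| * ||sum_i A_i|| for random PSD A_i.
import numpy as np

rng = np.random.default_rng(7)
d, n = 5, 8
A = [G @ G.T for G in rng.standard_normal((n, d, d))]   # random PSD matrices

lhs = np.linalg.norm(sum(Ai @ Ai for Ai in A), 2)
rhs = max(np.linalg.norm(Ai, 2) for Ai in A) * np.linalg.norm(sum(A), 2)
assert lhs <= rhs + 1e-9
```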
GM–AM Inequality for the Trace
------------------------------
We require another substantial matrix inequality, which is one (of several) matrix analogs of the inequality between the geometric mean and the arithmetic mean.
\[fact:trace-gm-am\] Consider Hermitian matrices ${{\bm{H}}}, {{\bm{W}}}, {{\bm{Y}}} \in {\mathbb{H}}_d$. For each nonnegative integer $r$ and each integer $q$ in the range $0 \leq q \leq 2r$, $$\label{eqn:trace-gm-am}
{\operatorname{tr}}\big[ {{\bm{H}}}{{\bm{W}}}^q {{\bm{H}}} {{\bm{Y}}}^{2r-q} \big]
+ {\operatorname{tr}}\big[ {{\bm{H}}}{{\bm{W}}}^{2r-q} {{\bm{H}}} {{\bm{Y}}}^{q} \big]
\leq {\operatorname{tr}}\big[ {{\bm{H}}}^2 \cdot \big( {{\bm{W}}}^{2r} + {{\bm{Y}}}^{2r} \big) \big].$$ In particular, $$\sum_{q=0}^{2r} {\operatorname{tr}}\big[ {{\bm{H}}}{{\bm{W}}}^q {{\bm{H}}} {{\bm{Y}}}^{2r-q} \big]
\leq \frac{2r+1}{2} {\operatorname{tr}}\big[ {{\bm{H}}}^2 \cdot \big( {{\bm{W}}}^{2r} + {{\bm{Y}}}^{2r} \big) \big].$$
The result is a matrix version of the following numerical inequality. For $\lambda, \mu \geq 0$, $$\label{eqn:heinz}
\lambda^{\theta} \mu^{1 - \theta} + \lambda^{1-\theta} \mu^{\theta}
\leq \lambda + \mu
\quad\text{for each $\theta \in [0,1]$.}$$ To verify this bound, we may assume that $\lambda, \mu > 0$ because it is trivial to check when either $\lambda$ or $\mu$ equals zero. Notice that the left-hand side of the bound is a convex function of $\theta$ on the interval $[0,1]$. This point follows easily from the representation $$f(\theta) := \lambda^{\theta} \mu^{1 - \theta} + \lambda^{1-\theta} \mu^{\theta}
= {\mathrm{e}}^{\theta \log \lambda + (1-\theta) \log \mu}
+ {\mathrm{e}}^{(1-\theta) \log \lambda + \theta \log \mu}.$$ The value of the convex function $f$ on the interval $[0,1]$ is controlled by the maximum value it achieves at one of the endpoints: $$f(\theta) \leq \max\big\{ f(0), f(1) \big\}
= \lambda + \mu.$$ This inequality coincides with .
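A quick numerical check of the scalar inequality (again, just an illustration):

```python
# Check lambda^theta * mu^(1-theta) + lambda^(1-theta) * mu^theta <= lambda + mu.
import numpy as np

rng = np.random.default_rng(8)
lam, mu = rng.uniform(0.0, 10.0, size=2)
for theta in np.linspace(0.0, 1.0, 11):
    lhs = lam**theta * mu**(1 - theta) + lam**(1 - theta) * mu**theta
    assert lhs <= lam + mu + 1e-12
```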
To prove from , we use eigenvalue decompositions. The case $r = 0$ is immediate, so we may assume that $r \geq 1$. Let $q$ be an integer in the range $0 \leq q \leq 2r$. Introduce eigenvalue decompositions: $${{\bm{W}}} = \sum_{i=1}^d \lambda_i {{\bm{u}}}_i {{\bm{u}}}_i^{*}\quad\text{and}\quad
{{\bm{Y}}} = \sum_{j=1}^d \mu_j {{\bm{v}}}_j {{\bm{v}}}_j^{*}.$$ Calculate that $$\label{eqn:gm-am-temp}
\begin{aligned}
{\operatorname{tr}}\big[ {{\bm{H}}}{{\bm{W}}}^q {{\bm{H}}} {{\bm{Y}}}^{2r-q} \big]
&= {\operatorname{tr}}\left[ {{\bm{H}}} \big(\sum_{i=1}^d \lambda_i^q {{\bm{u}}}_i{{\bm{u}}}_i^{*}\big)
{{\bm{H}}} \big( \sum_{j=1}^d \mu_j^{2r-q} {{\bm{v}}}_j {{\bm{v}}}_j^{*}\big) \right] \\
&=\sum_{i,j=1}^d \lambda_i^q \mu_j^{2r-q} \cdot
{\operatorname{tr}}\big[ {{\bm{H}}} {{\bm{u}}}_i {{\bm{u}}}_i^{*}{{\bm{H}}} {{\bm{v}}}_j {{\bm{v}}}_j^{*}\big] \\
&\leq \sum_{i,j=1}^d {\left\vert {\lambda_i} \right\vert}^q {\left\vert {\smash{\mu_j}} \right\vert}^{2r-q} \cdot{{{\left\vert { {{\bm{u}}}_i^{*}{{\bm{H}}} {{\bm{v}}}_j } \right\vert}}^2}.
\end{aligned}$$ The first identity relies on the formula for the eigenvalue decomposition of a monomial. The second step depends on the linearity of the trace. In the last line, we rewrite the trace using cyclicity, and the inequality emerges when we apply absolute values. The representation ${{{\left\vert { \smash{{{\bm{u}}}_i^{*}{{\bm{H}}} {{\bm{v}}}_j} } \right\vert}}^2}$ emphasizes that this quantity is nonnegative, which we use to justify several inequalities.
Invoking the inequality twice, we arrive at the bound $$\label{eqn:gm-am-temp2}
\begin{aligned}
{\operatorname{tr}}\big[ {{\bm{H}}}{{\bm{W}}}^q {{\bm{H}}} {{\bm{Y}}}^{2r-q} \big]
+ {\operatorname{tr}}\big[ {{\bm{H}}}{{\bm{W}}}^{2r-q} {{\bm{H}}} {{\bm{Y}}}^{q} \big]
&\leq \sum_{i,j=1}^d \big( {\left\vert {\lambda_i} \right\vert}^q {\left\vert {\smash{\mu_j}} \right\vert}^{2r-q}
+ {\left\vert {\lambda_i} \right\vert}^{2r-q} {\left\vert {\smash{\mu_j}} \right\vert}^{q}\big)
\cdot {{{\left\vert { {{\bm{u}}}_i^{*}{{\bm{H}}} {{\bm{v}}}_j } \right\vert}}^2} \\
&\leq \sum_{i,j=1}^d \big( \lambda_i^{2r} + \mu_j^{2r} \big)
\cdot {{{\left\vert { {{\bm{u}}}_i^{*}{{\bm{H}}} {{\bm{v}}}_j } \right\vert}}^2}.
\end{aligned}$$ The second inequality is , with $\theta = q/(2r)$ and $\lambda = \lambda_i^{2r}$ and $\mu = \mu_j^{2r}$.
It remains to rewrite the right-hand side in a more recognizable form. To that end, observe that $$\begin{aligned}
{\operatorname{tr}}\big[ {{\bm{H}}}{{\bm{W}}}^q {{\bm{H}}} {{\bm{Y}}}^{2r-q} \big]
&+ {\operatorname{tr}}\big[ {{\bm{H}}}{{\bm{W}}}^{2r-q} {{\bm{H}}} {{\bm{Y}}}^{q} \big] \\
&\leq \sum_{i,j=1}^d \big( \lambda_i^{2r} + \mu_j^{2r} \big)
\cdot {\operatorname{tr}}\big[ {{\bm{H}}} {{\bm{u}}}_i {{\bm{u}}}_i^{*}{{\bm{H}}} {{\bm{v}}}_j {{\bm{v}}}_j^{*}\big] \\
&= {\operatorname{tr}}\big[ {{\bm{H}}} \big( \sum_{i=1}^d \lambda_i^{2r} {{\bm{u}}}_i {{\bm{u}}}_i^{*}\big) {{\bm{H}}}
\big( \sum_{j=1}^d {{\bm{v}}}_j {{\bm{v}}}_j^{*}\big) \big]
+ {\operatorname{tr}}\big[ {{\bm{H}}} \big( \sum_{i=1}^d {{\bm{u}}}_i {{\bm{u}}}_i^{*}\big)
{{\bm{H}}} \big( \sum_{j=1}^d \mu_j^{2r} {{\bm{v}}}_j {{\bm{v}}}_j^{*}\big) \big] \\
&= {\operatorname{tr}}\big[ {{\bm{H}}}^2 \cdot {{\bm{W}}}^{2r} \big] + {\operatorname{tr}}\big[ {{\bm{H}}}^2 \cdot {{\bm{Y}}}^{2r} \big].
\end{aligned}$$ In the first step, we return the squared magnitude to its representation as a trace. Then we use linearity to draw the sums back inside the trace. Next, invoke to identify the powers ${{\bm{W}}}^{2r}$ and ${{\bm{Y}}}^{0} = {\mathbf{I}}_d$ and ${{\bm{W}}}^0 = {\mathbf{I}}_d$ and ${{\bm{Y}}}^{2r}$. Last, use the cyclicity of the trace to combine the factors of ${{\bm{H}}}$. The result follows from the linearity of the trace.
There is a cleaner, but more abstract, proof of the inequality . Consider the left- and right-multiplication operators $$\mathsf{W} : {{\bm{H}}} \mapsto {{\bm{WH}}}
\quad\text{and}\quad
\mathsf{Y} : {{\bm{H}}} \mapsto {{\bm{HY}}}.$$ Observe that powers of $\mathsf{W}$ and $\mathsf{Y}$ correspond to left- and right-multiplication by powers of ${{\bm{W}}}$ and ${{\bm{Y}}}$. Now, the operators $\mathsf{W}$ and $\mathsf{Y}$ commute, so there is a basis (orthonormal with respect to the trace inner product) for ${\mathbb{H}}_d$ in which they are simultaneously diagonalizable. Representing the operators in this basis, we can use to check that $$\mathsf{W}^q \mathsf{Y}^{2r-q}
+ \mathsf{W}^{2r-q} \mathsf{Y}^{q}
{\preccurlyeq}\mathsf{W}^{2r} + \mathsf{Y}^{2r}.$$ Now, calculate that $$\begin{aligned}
{\operatorname{tr}}\big[ {{\bm{HW}}}^q {{\bm{HY}}}^{2r-q} \big] + {\operatorname{tr}}\big[ {{\bm{HW}}}^{2r-q} {{\bm{HY}}}^{q} \big]
&= {\operatorname{tr}}\big[ {{\bm{H}}} \cdot \big(\mathsf{W}^q \mathsf{Y}^{2r-q} + \mathsf{W}^{2r-q} \mathsf{Y}^q \big)({{\bm{H}}}) \big] \\
&\leq {\operatorname{tr}}\big[ {{\bm{H}}} \cdot \big(\mathsf{W}^{2r} + \mathsf{Y}^{2r} \big)({{\bm{H}}}) \big] \\
&= {\operatorname{tr}}\big[ {{\bm{H}}}^2 \cdot \big( {{\bm{W}}}^{2r} + {{\bm{Y}}}^{2r} \big) \big].
\end{aligned}$$ We omit the details.
The Hermitian Dilation
----------------------
Last, we introduce the [Hermitian dilation]{} ${\mathscr{H}}({{\bm{B}}})$ of a rectangular matrix ${{\bm{B}}} \in \mathbb{M}^{d_1 \times d_2}$. This is the Hermitian matrix $$\label{eqn:dilation}
{\mathscr{H}}({{\bm{B}}}) := \begin{bmatrix} {{\bm{0}}} & {{\bm{B}}} \\ {{\bm{B}}}^{*}& {{\bm{0}}} \end{bmatrix}
\in {\mathbb{H}}_{d_1 + d_2}.$$ Note that the map ${\mathscr{H}}$ is real linear. By direct calculation, $$\label{eqn:dilation-square}
{\mathscr{H}}({{\bm{B}}})^2 = \begin{bmatrix} {{\bm{BB}}}^{*}& {{\bm{0}}} \\ {{\bm{0}}} & {{\bm{B}}}^{*}{{\bm{B}}} \end{bmatrix}.$$ We also have the spectral-norm identity $$\label{eqn:dilation-norm}
{\left\Vert { {\mathscr{H}}({{\bm{B}}}) } \right\Vert} = {\left\Vert { {{\bm{B}}} } \right\Vert}.$$ To verify , calculate that $${{\left\Vert {{\mathscr{H}}({{\bm{B}}})} \right\Vert}^2}
= {\left\Vert { {\mathscr{H}}({{\bm{B}}})^2 } \right\Vert}
= {\left\Vert { \begin{bmatrix} {{\bm{BB}}}^{*}& {{\bm{0}}} \\ {{\bm{0}}} & {{\bm{B}}}^{*}{{\bm{B}}} \end{bmatrix} } \right\Vert}
= \max\big\{ {\left\Vert { \smash{{{\bm{BB}}}^{*}} } \right\Vert}, \ {\left\Vert { \smash{{{\bm{B}}}^{*}{{\bm{B}}}} } \right\Vert} \big\}
= {{\left\Vert { {{\bm{B}}} } \right\Vert}^2}.$$ The first identity is ; the second is . The norm of a block-diagonal Hermitian matrix is the maximum spectral norm of a block, which follows from the Rayleigh principle with a bit of work. Finally, invoke the property .
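The identity is easy to confirm numerically; the sketch below (not part of the text) builds the dilation explicitly with `np.block`.

```python
# Check that the Hermitian dilation preserves the spectral norm: ||H(B)|| = ||B||.
import numpy as np

rng = np.random.default_rng(9)
d1, d2 = 3, 5
B = rng.standard_normal((d1, d2)) + 1j * rng.standard_normal((d1, d2))

dilation = np.block([[np.zeros((d1, d1)), B],
                     [B.conj().T, np.zeros((d2, d2))]])
assert np.allclose(dilation, dilation.conj().T)                       # H(B) is Hermitian
assert np.isclose(np.linalg.norm(dilation, 2), np.linalg.norm(B, 2))  # norms agree
```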
Probability Background {#sec:probability}
======================
This section contains some background material from the field of probability. Good references include the books [@LT91:Probability-Banach; @GS01:Probability-Random; @Tao12:Topics-Random].
Expectation
-----------
The symbol ${\operatorname{\mathbb{E}}}$ denotes the expectation operator. We will not define expectation formally or spend any energy on technical details. No issues arise if we assume, for example, that all random variables are bounded.
We use brackets to enclose the argument of the expectation when it is important for clarity, and we instate the convention that nonlinear functions bind before expectation. For instance, ${\operatorname{\mathbb{E}}}X^p := {\operatorname{\mathbb{E}}}[ X^p ]$ and ${\operatorname{\mathbb{E}}}\max\nolimits_i X_i := {\operatorname{\mathbb{E}}}[ \max\nolimits_i X_i ]$.
Sometimes, we add a subscript to indicate a partial expectation. For example, if $J$ is a random variable, ${\operatorname{\mathbb{E}}}_{J}$ refers to the average over $J$, with all other random variables fixed. We only use this notation when $J$ is independent from the other random variables, so there are no complications. In particular, we can compute iterated expectations: ${\operatorname{\mathbb{E}}}[ {\operatorname{\mathbb{E}}}_J [ \cdot ] ] = {\operatorname{\mathbb{E}}}[ \cdot ] $ whenever all the expectations are finite.
Random Matrices
---------------
A [random matrix]{} is a matrix whose entries are complex random variables, not necessarily independent. We compute the expectation of a random matrix ${{\bm{Z}}}$ componentwise: $$\label{eqn:expect-matrix}
({\operatorname{\mathbb{E}}}[ {{\bm{Z}}} ] )_{ij} = {\operatorname{\mathbb{E}}}[ z_{ij} ]
\quad\text{for each pair $(i, j)$ of indices.}$$ As in the scalar case, if ${{\bm{W}}}$ and ${{\bm{Z}}}$ are independent, $${\operatorname{\mathbb{E}}}[ {{\bm{WZ}}} ] = ({\operatorname{\mathbb{E}}}{{\bm{W}}} )({\operatorname{\mathbb{E}}}{{\bm{Z}}}).$$ Since the expectation is linear, it also commutes with all of the simple linear operations we perform on matrices.
It suffices to take a na[ï]{}ve view of independence, expectation, and so forth. For the technically inclined, let $(\Omega, {\mathscr{F}}, \mathbb{P})$ be a probability space. A $d_1 \times d_2$ random matrix ${{\bm{Z}}}$ is simply a measurable function $${{\bm{Z}}} : \Omega \to \mathbb{M}^{d_1 \times d_2}.$$ A family $\{ {{\bm{Z}}}_i : i = 1, \dots, n \}$ of random matrices is independent when $${{\mathbb}{P}\left\{{ {{\bm{Z}}}_i \in E_i \text{ for $i = 1, \dots, n$} }\right\}} = \prod_{i=1}^n {{\mathbb}{P}\left\{{ {{\bm{Z}}}_i \in E_i }\right\}}$$ for any collection of Borel[^1] subsets $E_i \subset \mathbb{M}^{d_1 \times d_2}$.
Inequalities for Expectation
----------------------------
We need several basic inequalities for expectation. We set these out for future reference. Let $X, Y$ be (arbitrary) real random variables. The Cauchy–Schwarz inequality states that $$\label{eqn:expect-cs}
{\left\vert { {\operatorname{\mathbb{E}}}[ XY ] } \right\vert} \leq \big({\operatorname{\mathbb{E}}}X^2 \big)^{1/2} \cdot \big({\operatorname{\mathbb{E}}}Y^2 \big)^{1/2}.$$ For $r \geq 1$, the triangle inequality states that $$\label{eqn:Lr-triangle}
\big( {\operatorname{\mathbb{E}}}{\left\vert { X + Y } \right\vert}^r \big)^{1/r}
\leq \big( {\operatorname{\mathbb{E}}}{\left\vert {X} \right\vert}^r \big)^{1/r} + \big( {\operatorname{\mathbb{E}}}{\left\vert {Y} \right\vert}^r \big)^{1/r}.$$ Each of these inequalities is vacuous precisely when its right-hand side is infinite.
Jensen’s inequality describes how expectation interacts with a convex or concave function; cf. . Let $X$ be a random variable taking values in a finite-dimensional linear space $V$, and let $f : V \to {{\mathbb}{R}}$ be a function. Then $$\label{eqn:jensen}
\begin{aligned}
f({\operatorname{\mathbb{E}}}X) &\leq {\operatorname{\mathbb{E}}}f(X) \quad\text{when $f$ is convex, and} \\
{\operatorname{\mathbb{E}}}f(X) &\leq f({\operatorname{\mathbb{E}}}X) \quad\text{when $f$ is concave.} \\
\end{aligned}$$ The inequalities also hold when we replace ${\operatorname{\mathbb{E}}}$ with a partial expectation. Let us emphasize that these bounds do require that all of the expectations exist.
Symmetrization
--------------
Symmetrization is an important technique for studying the expectation of a function of independent random variables. The idea is to inject auxiliary randomness into the function. Then we condition on the original random variables and average with respect to the extra randomness. When the auxiliary random variables are more pliable, this approach can lead to significant simplifications.
A [Rademacher]{} random variable ${\varepsilon}$ takes the two values $\pm 1$ with equal probability. The following result shows how we can use Rademacher random variables to study a sum of independent random matrices.
\[fact:symmetrization\] Let ${{\bm{S}}}_1, \dots, {{\bm{S}}}_n \in \mathbb{M}^{d_1 \times d_2}$ be independent random matrices. Let ${\varepsilon}_1, \dots, {\varepsilon}_n$ be independent Rademacher random variables that are also independent from the random matrices. For each $r \geq 1$, $$\frac{1}{2} \cdot \left( {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{S}}}_i } \right\Vert}^r \right)^{1/r}
\leq \left( {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n ({{\bm{S}}}_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i) } \right\Vert}^r \right)^{1/r}
\leq 2 \cdot \left( {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{S}}}_i } \right\Vert}^r \right)^{1/r}.$$ This result holds whenever ${\operatorname{\mathbb{E}}}{\left\Vert {{{\bm{S}}}_i} \right\Vert}^r < \infty$ for each index $i$.
For notational simplicity, assume that $r = 1$. We discuss the general case at the end of the argument.
Let $\{{{\bm{S}}}_i' : i = 1, \dots, n\}$ be an independent copy of the sequence $\{ {{\bm{S}}}_i : i = 1, \dots, n \}$, and let ${\operatorname{\mathbb{E}}}'$ denote partial expectation with respect to the independent copy. Then $$\begin{aligned}
{\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n ( {{\bm{S}}}_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i) } \right\Vert}
&= {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n \big[ ( {{\bm{S}}}_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i)
- {\operatorname{\mathbb{E}}}' ( {{\bm{S}}}'_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i) \big] } \right\Vert} \\
&\leq {\operatorname{\mathbb{E}}}\left[ {\operatorname{\mathbb{E}}}' {\left\Vert { \sum_{i=1}^n \big[ ( {{\bm{S}}}_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i)
- ( {{\bm{S}}}'_i - {\operatorname{\mathbb{E}}}{{\bm{S}}}_i) \big] } \right\Vert} \right] \\
&= {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n ({{\bm{S}}}_i - {{\bm{S}}}_i') } \right\Vert}.
\end{aligned}$$ The first identity holds because ${\operatorname{\mathbb{E}}}' {{\bm{S}}}'_i = {\operatorname{\mathbb{E}}}{{\bm{S}}}_i$ by identical distribution. Since the spectral norm is convex, we can apply Jensen’s inequality conditionally to draw out the partial expectation ${\operatorname{\mathbb{E}}}'$. Last, we combine the iterated expectation into a single expectation.
Observe that ${{\bm{S}}}_i - {{\bm{S}}}'_i$ has the same distribution as its negation ${{\bm{S}}}_i' - {{\bm{S}}}_i$. It follows that the independent sequence $\{ {\varepsilon}_i ({{\bm{S}}}_i - {{\bm{S}}}_i') : i = 1, \dots, n \}$ has the same distribution as $\{ {{\bm{S}}}_i - {{\bm{S}}}_i' : i = 1, \dots, n \}$. Therefore, the expectation of any nonnegative function takes the same value for both sequences. In particular, $$\begin{aligned}
{\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n ({{\bm{S}}}_i - {{\bm{S}}}_i') } \right\Vert}
&= {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {\varepsilon}_i ({{\bm{S}}}_i - {{\bm{S}}}_i') } \right\Vert} \\
&\leq {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{S}}}_i } \right\Vert} + {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n (-{\varepsilon}_i) {{\bm{S}}}'_i } \right\Vert} \\
&= 2 {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{S}}}_i } \right\Vert}.
\end{aligned}$$ The second step is the triangle inequality, and the last line follows from the identical distribution of $\{ {\varepsilon}_i {{\bm{S}}}_i \}$ and $\{ - {\varepsilon}_i {{\bm{S}}}_i' \}$. Combine the last two displays to obtain the upper bound.
To obtain results for $r > 1$, we pursue the same approach. We require the additional observation that ${\left\Vert { \cdot } \right\Vert}^r$ is a convex function, and we also need to invoke the triangle inequality . Finally, we remark that the lower bound follows from a similar procedure, so we omit the demonstration.
The Expected Norm of a Matrix Rademacher Series {#sec:khintchine}
===============================================
To prove Theorem \[thm:main\], our overall strategy is to use symmetrization. This approach allows us to reduce the study of an independent sum of random matrices to the study of a sum of fixed matrices modulated by independent Rademacher random variables. This type of random matrix is called a [matrix Rademacher series]{}. In this section, we establish a bound on the spectral norm of a matrix Rademacher series. This is the key technical step in the proof of Theorem \[thm:main\].
\[thm:matrix-rademacher\] Let ${{\bm{H}}}_1, \dots, {{\bm{H}}}_n$ be fixed Hermitian matrices with dimension $d$. Let ${\varepsilon}_1, \dots, {\varepsilon}_n$ be independent Rademacher random variables. Then $$\label{eqn:rad-result}
\left( {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{H}}}_i } \right\Vert}^2} \right)^{1/2}
\leq \sqrt{1 + 2\lceil\log d\rceil} \cdot {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert}^{1/2}.$$
The proof of Theorem \[thm:matrix-rademacher\] occupies the bulk of this section, beginning with Section \[sec:rad-pf\]. The argument is really just a fancy version of the familiar calculation of the moments of a centered normal random variable; see Section \[sec:ibp\] for details.
Discussion
----------
Before we establish Theorem \[thm:matrix-rademacher\], let us make a few comments. First, it is helpful to interpret the result in the same language we have used to state Theorem \[thm:main\]. Introduce the matrix Rademacher series $${{\bm{X}}} := \sum_{i=1}^n {\varepsilon}_i {{\bm{H}}}_i.$$ Compute the matrix variance, defined in : $$v({{\bm{X}}}) := {\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{X}}}^2 } \right\Vert}
= {\left\Vert { \sum_{i,j=1}^n {\operatorname{\mathbb{E}}}[ {\varepsilon}_i {\varepsilon}_j ] \cdot {{\bm{H}}}_i {{\bm{H}}}_j } \right\Vert}
= {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert}.$$ We may rewrite Theorem \[thm:matrix-rademacher\] as the statement that $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{X}}} } \right\Vert}^2} \right)^{1/2}
\leq \sqrt{(1 + 2 \lceil\log d\rceil) \cdot v({{\bm{X}}})}.$$ In other words, Theorem \[thm:matrix-rademacher\] is a sharper version of Theorem \[thm:main\] for the special case of a matrix Rademacher series.
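A Monte Carlo experiment makes the comparison concrete. The sketch below is illustrative only and is not part of the argument; the Hermitian matrices are drawn at random merely to have something to test, and NumPy is assumed.

```python
# Monte Carlo comparison of (E ||sum_i eps_i H_i||^2)^(1/2) with the bound
# sqrt(1 + 2*ceil(log d)) * ||sum_i H_i^2||^(1/2) from Theorem [thm:matrix-rademacher].
import numpy as np

rng = np.random.default_rng(11)
d, n, trials = 8, 10, 2000
G = rng.standard_normal((n, d, d))
H = (G + G.transpose(0, 2, 1)) / 2           # fixed Hermitian matrices H_1, ..., H_n

eps = rng.choice([-1.0, 1.0], size=(trials, n))
X = np.einsum('ti,ijk->tjk', eps, H)         # one Rademacher series per trial
lhs = np.sqrt(np.mean(np.linalg.norm(X, 2, axis=(1, 2))**2))

variance = np.linalg.norm(np.einsum('ijk,ikl->jl', H, H), 2)   # ||sum_i H_i^2||
rhs = np.sqrt(1 + 2 * np.ceil(np.log(d))) * np.sqrt(variance)
print(f"estimated (E||X||^2)^(1/2) = {lhs:.2f}, bound = {rhs:.2f}")
assert lhs <= rhs
```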
Next, we have focused on bounding the second moment of ${\left\Vert {{{\bm{X}}}} \right\Vert}$ because this is the most natural form of the result. Note that we also control the first moment because of Jensen’s inequality : $$\label{eqn:rad-first-moment}
{\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{H}}}_i } \right\Vert}
\leq \left( {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{H}}}_i } \right\Vert}^2} \right)^{1/2}
\leq \sqrt{1 + 2\lceil\log d\rceil} \cdot {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert}^{1/2}.$$ A simple variant on the proof of Theorem \[thm:matrix-rademacher\] provides bounds for higher moments.
Third, the dimensional factor on the right-hand side of is asymptotically sharp. Indeed, let us write $K(d)$ for the minimum possible constant in the inequality $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{H}}}_i } \right\Vert}^2} \right)^{1/2}
\leq K(d) \cdot {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert}^{1/2}
\quad\text{for ${{\bm{H}}}_i \in {\mathbb{H}}_d$ and $n \in \mathbb{N}$.}$$ The example in Section \[sec:upper-var\] shows that $$K(d) \geq \sqrt{2 \log d}.$$ In other words, cannot be improved without making further assumptions.
Theorem \[thm:matrix-rademacher\] is a variant on the noncommutative Khintchine inequality, first established by Lust-Piquard [@LP86:Inegalites-Khintchine] and later improved by Pisier [@Pis98:Noncommutative-Vector] and by Buchholz [@Buc01:Operator-Khintchine]. The noncommutative Khintchine inequality gives bounds for the Schatten norm of a matrix Rademacher series, rather than for the spectral norm. Rudelson [@Rud99:Random-Vectors] pointed out that the noncommutative Khintchine inequality also implies bounds for the spectral norm of a matrix Rademacher series. In our presentation, we choose to control the spectral norm directly.
The Spectral Norm and the Trace Moments {#sec:rad-pf}
---------------------------------------
To begin the proof of Theorem \[thm:matrix-rademacher\], we introduce the random Hermitian matrix $$\label{eqn:rad-series}
{{\bm{X}}} := \sum_{i=1}^n {\varepsilon}_i {{\bm{H}}}_i.$$ Our goal is to bound the expected spectral norm of ${{\bm{X}}}$. We may proceed by estimating the expected trace of a power of the random matrix, which is known as a [trace moment]{}. Fix a positive integer $p$. Observe that $$\label{eqn:norm-trace-moments}
\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{X}}}} \right\Vert}^2} \right)^{1/2}
\leq \big( {\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{X}}} } \right\Vert}^{2p} \big)^{1/(2p)}
= \big( {\operatorname{\mathbb{E}}}{\left\Vert { \smash{{{\bm{X}}}^{2p}} } \right\Vert} \big)^{1/(2p)}
\leq \left( {\operatorname{\mathbb{E}}}{\operatorname{tr}}{{\bm{X}}}^{2p} \right)^{1/(2p)}.$$ The first inequality is Jensen’s inequality , applied to the concave function $t \mapsto t^{1/p}$. The second relation is . The final inequality is the bound on the norm of the positive-semidefinite matrix ${{\bm{X}}}^{2p}$ by its trace.
It should be clear that we can also bound expected powers of the spectral norm using the same technique. For simplicity, we omit this development.
Summation by Parts
------------------
To study the trace moments of the random matrix ${{\bm{X}}}$, we rely on a discrete analog of integration by parts. This approach is clearer if we introduce some more notation. For each index $i$, define the random matrices $${{\bm{X}}}_{+i} := {{\bm{H}}}_i + \sum_{j\neq i} {\varepsilon}_j {{\bm{H}}}_j
\quad\text{and}\quad
{{\bm{X}}}_{-i} := - {{\bm{H}}}_i + \sum_{j\neq i} {\varepsilon}_j {{\bm{H}}}_j.$$ In other words, the distribution of ${{\bm{X}}}_{{\varepsilon}_i i}$ is the conditional distribution of the random matrix ${{\bm{X}}}$ given the value ${\varepsilon}_i$ of the $i$th Rademacher variable. This interpretation depends on the assumption that the Rademacher variables are independent.
Beginning with the trace moment, observe that $$\label{eqn:sum-parts}
\begin{aligned}
{\operatorname{\mathbb{E}}}{\operatorname{tr}}{{\bm{X}}}^{2p}
&= {\operatorname{\mathbb{E}}}{\operatorname{tr}}\big[ {{\bm{X}}} \cdot {{\bm{X}}}^{2p-1} \big] \\
&= \sum_{i=1}^n {\operatorname{\mathbb{E}}}\big[ {\operatorname{\mathbb{E}}}_{{\varepsilon}_i} {\operatorname{tr}}\big[ {\varepsilon}_i {{\bm{H}}}_i \cdot {{\bm{X}}}^{2p-1} \big] \big] \\
&= \frac{1}{2} \sum_{i=1}^n {\operatorname{\mathbb{E}}}{\operatorname{tr}}\Big[ {{\bm{H}}}_i \cdot \Big({{\bm{X}}}_{+i}^{2p-1} - {{\bm{X}}}_{-i}^{2p-1} \Big) \Big]
\end{aligned}$$ In the second step, we simply write out the definition of the random matrix ${{\bm{X}}}$ and use the linearity of the trace to draw out the sum. Then we write the expectation as an iterated expectation. To reach the next line, write out the partial expectation using the notation ${{\bm{X}}}_{\pm i}$ and the linearity of the trace.
A Difference of Powers
----------------------
Next, let us apply an algebraic identity to reduce the difference of powers in . For matrices ${{\bm{W}}}, {{\bm{Y}}} \in {\mathbb{H}}_d$, it holds that $$\label{eqn:diff-powers}
{{\bm{W}}}^{2p-1} - {{\bm{Y}}}^{2p-1} = \sum_{q=0}^{2p-2} {{\bm{W}}}^{q} ({{\bm{W}}} - {{\bm{Y}}}) {{\bm{Y}}}^{2p-2-q}.$$ To check this expression, just expand the matrix products and notice that the sum telescopes.
Introducing the relation with ${{\bm{W}}} = {{\bm{X}}}_{+i}$ and ${{\bm{Y}}} = {{\bm{X}}}_{-i}$ into the formula , we find that $$\label{eqn:alt-prod}
\begin{aligned}
{\operatorname{\mathbb{E}}}{\operatorname{tr}}{{\bm{X}}}^{2p}
&= \frac{1}{2} \sum_{i=1}^n {\operatorname{\mathbb{E}}}{\operatorname{tr}}\left[ {{\bm{H}}}_i
\cdot \sum_{q=0}^{2p-2} {{\bm{X}}}_{+i}^{q} \big({{\bm{X}}}_{+i} - {{\bm{X}}}_{-i} \big) {{\bm{X}}}_{-i}^{2p-2-q} \right] \\
&= \sum_{i=1}^n \sum_{q=0}^{2p-2} {\operatorname{\mathbb{E}}}{\operatorname{tr}}\left[ {{\bm{H}}}_i {{\bm{X}}}_{+i}^{q} {{\bm{H}}}_i {{\bm{X}}}_{-i}^{2p-2-q} \right].
\end{aligned}$$ Linearity of the trace allows us to draw out the sum over $q$, and we have used the observation that ${{\bm{X}}}_{+i} - {{\bm{X}}}_{-i} = 2 {{\bm{H}}}_i$.
A Bound for the Trace Moments
-----------------------------
We are now in a position to obtain a bound for the trace moments of ${{\bm{X}}}$. Beginning with , we compute that $$\label{eqn:post-gm-am}
\begin{aligned}
{\operatorname{\mathbb{E}}}{\operatorname{tr}}{{\bm{X}}}^{2p}
&= \sum_{i=1}^n \sum_{q=0}^{2p-2} {\operatorname{\mathbb{E}}}{\operatorname{tr}}\left[ {{\bm{H}}}_i {{\bm{X}}}_{+i}^{q} {{\bm{H}}}_i {{\bm{X}}}_{-i}^{2p-2-q} \right] \\
&\leq \sum_{i=1}^n \frac{2p-1}{2} {\operatorname{\mathbb{E}}}{\operatorname{tr}}\left[ {{\bm{H}}}_i^2 \cdot
\Big( {{\bm{X}}}_{+i}^{2p-2} + {{\bm{X}}}_{-i}^{2p-2} \Big) \right] \\
&= (2p-1) \cdot \sum_{i=1}^n {\operatorname{\mathbb{E}}}{\operatorname{tr}}\left[ {{\bm{H}}}_i^2 \cdot \big( {\operatorname{\mathbb{E}}}_{{\varepsilon}_i} {{\bm{X}}}^{2p-2} \big) \right] \\
&= (2p-1) \cdot {\operatorname{\mathbb{E}}}{\operatorname{tr}}\left[ \left( \sum_{i=1}^n {{\bm{H}}}_i^2 \right) \cdot {{\bm{X}}}^{2p-2} \right] \\
&\leq (2p-1) \cdot {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert} \cdot {\operatorname{\mathbb{E}}}{\operatorname{tr}}{{\bm{X}}}^{2p-2}.
\end{aligned}$$ The bound in the second line is the trace GM–AM inequality, Fact \[fact:trace-gm-am\], with $r = p - 1$ and ${{\bm{W}}} = {{\bm{X}}}_{+i}$ and ${{\bm{Y}}} = {{\bm{X}}}_{-i}$. To reach the third line, observe that the parenthesis in the second line is twice the partial expectation of ${{\bm{X}}}^{2p-2}$ with respect to ${\varepsilon}_i$. Afterward, we use linearity of the expectation and the trace to draw in the sum over $i$, and then we combine the expectations. Last, invoke the trace inequality from Fact \[fact:trace-dual\].
Iteration and the Spectral Norm Bound
-------------------------------------
The expression shows that the trace moment is controlled by a trace moment with a smaller power: $$\label{eqn:iteration}
{\operatorname{\mathbb{E}}}{\operatorname{tr}}{{\bm{X}}}^{2p}
\leq (2p-1) \cdot {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert} \cdot {\operatorname{\mathbb{E}}}{\operatorname{tr}}{{\bm{X}}}^{2p-2}.$$ Iterating this bound $p$ times, we arrive at the result $$\label{eqn:iterated}
\begin{aligned}
{\operatorname{\mathbb{E}}}{\operatorname{tr}}{{\bm{X}}}^{2p}
&\leq (2p-1)!! \cdot {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert}^p \cdot {\operatorname{tr}}{{\bm{X}}}^0 \\
&= d \cdot (2p-1)!! \cdot {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert}^p.
\end{aligned}$$ The double factorial is defined as $(2p-1)!! := (2p-1) (2p-3)(2p-5) \cdots (5)(3)(1)$.
The expression shows that we can control the expected spectral norm of ${{\bm{X}}}$ by means of a trace moment. Therefore, for any positive integer $p$, it holds that $$\label{eqn:rad-norm-bd}
\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{X}}}} \right\Vert}^2} \right)^{1/2}
\leq \big( {\operatorname{\mathbb{E}}}{\operatorname{tr}}{{\bm{X}}}^{2p} \big)^{1/(2p)}
\leq \big( d \cdot (2p-1)!! \big)^{1/(2p)} \cdot {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert}^{1/2}.$$ The second inequality is simply our bound . All that remains is to choose the value of $p$ to minimize the factor on the right-hand side.
Calculating the Constant
------------------------
Finally, let us develop an accurate bound for the leading factor on the right-hand side of . We claim that $$\label{eqn:doublefactorial-claim}
(2p-1)!! \leq \left( \frac{2p+1}{{\mathrm{e}}} \right)^{p}.$$ Given this estimate, select $p = \lceil \log d \rceil$ to reach $$\label{eqn:rad-const-bd}
\big( d \cdot (2p-1)!! \big)^{1/(2p)}
\leq d^{1/(2p)} \sqrt{\frac{2p+1}{{\mathrm{e}}}}
\leq \sqrt{2p + 1}
= \sqrt{1 + 2 \lceil \log d \rceil}.$$ Introduce the inequality into to complete the proof of Theorem \[thm:matrix-rademacher\].
To check that is valid, we use some tools from integral calculus: $$\begin{aligned}
\log\big( (2p-1)!! \big) &= \sum_{i=1}^{p-1} \log(2i + 1) \\
&= \left[ \frac{1}{2} \log(2 \cdot 0 + 1) + \sum_{i=1}^{p-1} \log(2i + 1) + \frac{1}{2} \log(2p + 1) \right]
- \frac{1}{2} \log(2p+1) \\
&\leq \int_0^p \log(2x + 1) {\, {\mathrm{d}{x}}} - \frac{1}{2} \log(2p+1) \\
&= p \log(2p+1) - p.
\end{aligned}$$ The bracket in the second line is the trapezoid rule approximation of the integral in the third line. Since the integrand is concave, the trapezoid rule underestimates the integral. Exponentiating this formula, we arrive at .
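The double-factorial estimate and the resulting constant are also easy to check numerically; the following sketch is only a sanity check, not part of the argument.

```python
# Check (2p-1)!! <= ((2p+1)/e)^p, and the constant obtained with p = ceil(log d).
import numpy as np

def double_factorial(p):
    # (2p-1)!! = 1 * 3 * 5 * ... * (2p-1)
    return np.prod(np.arange(1, 2 * p, 2, dtype=float))

for p in range(1, 30):
    assert double_factorial(p) <= ((2 * p + 1) / np.e)**p

d = 1000
p = int(np.ceil(np.log(d)))
constant = (d * double_factorial(p))**(1 / (2 * p))
assert constant <= np.sqrt(1 + 2 * p)    # i.e. sqrt(1 + 2*ceil(log d))
```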
Context {#sec:ibp}
-------
The proof of Theorem \[thm:matrix-rademacher\] is really just a discrete, matrix version of the familiar calculation of the $(2p)$th moment of a centered normal random variable. Let us elaborate. Recall the Gaussian integration by parts formula: $$\label{eqn:gauss-ibp}
{\operatorname{\mathbb{E}}}[ \gamma \cdot f(\gamma) ] = \sigma^2 \cdot {\operatorname{\mathbb{E}}}[ f'(\gamma) ]$$ where $\gamma \sim {\textsc{normal}}(0, \sigma^2)$ and $f : {{\mathbb}{R}}\to {{\mathbb}{R}}$ is any function for which the integrals are finite. This result follows when we write the expectations as integrals with respect to the normal density and invoke the usual integration by parts rule. Now, suppose that we wish to compute the $(2p)$th moment of $\gamma$. We have $$\label{eqn:gauss-2p}
{\operatorname{\mathbb{E}}}\gamma^{2p} = {\operatorname{\mathbb{E}}}\big[ \gamma \cdot \gamma^{2p-1} \big]
= (2p-1) \cdot \sigma^2 \cdot {\operatorname{\mathbb{E}}}\gamma^{2p-2}.$$ The second identity is just with the choice $f(t) = t^{2p-1}$. Iterating , we discover that $${\operatorname{\mathbb{E}}}\gamma^{2p} = (2p-1)!! \cdot \sigma^{2p}.$$ In Theorem \[thm:matrix-rademacher\], the matrix variance parameter $v({{\bm{X}}})$ plays the role of the scalar variance $\sigma^2$.
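As a side check (illustrative, not part of the argument), the even Gaussian moments can be estimated by simulation and compared with the double-factorial formula.

```python
# Monte Carlo check of E[gamma^(2p)] = (2p-1)!! * sigma^(2p) for gamma ~ normal(0, sigma^2).
import numpy as np

rng = np.random.default_rng(12)
sigma, p, samples = 1.5, 3, 2_000_000
gamma = sigma * rng.standard_normal(samples)

empirical = np.mean(gamma**(2 * p))
exact = np.prod(np.arange(1, 2 * p, 2)) * sigma**(2 * p)   # (2p-1)!! * sigma^(2p)
assert abs(empirical - exact) / exact < 0.05               # agrees to a few percent
```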
In fact, the link with Gaussian integration by parts is even stronger. Consider a matrix Gaussian series $${{\bm{Y}}} := \sum_{i=1}^n \gamma_i {{\bm{H}}}_i$$ where $\{ \gamma_i \}$ is an independent family of standard normal variables. If we replace the discrete integration by parts in the proof of Theorem \[thm:matrix-rademacher\] with Gaussian integration by parts, the argument leads to the bound $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^n \gamma_i {{\bm{H}}}_i } \right\Vert}^2} \right)^{1/2}
\leq \sqrt{1+ 2\lceil \log d \rceil} \cdot {\left\Vert { \sum_{i=1}^n {{\bm{H}}}_i^2 } \right\Vert}^{1/2}.$$ This approach requires matrix calculus, but it is slightly simpler than the argument for matrix Rademacher series in other respects. See [@Tro15:Second-Order-Matrix Thm. 8.1] for a proof of the noncommutative Khintchine inequality for Gaussian series along these lines.
Upper Bounds for the Expected Norm {#sec:upper}
==================================
We are now prepared to establish the upper bound for an arbitrary sum of independent random matrices. The argument is based on the specialized result, Theorem \[thm:matrix-rademacher\], for matrix Rademacher series. It proceeds by steps through more and more general classes of random matrices: first positive semidefinite, then Hermitian, and finally rectangular. Here is what we will show.
\[thm:upper\] Define the dimensional constant $C(d) := 4(1 + 2\lceil \log d \rceil)$. The expected spectral norm of a sum of independent random matrices satisfies the following upper bounds.
1. **The Positive-Semidefinite Case.** Consider an independent family $\{ {{\bm{T}}}_1, \dots, {{\bm{T}}}_n \}$ of random $d \times d$ positive-semidefinite matrices, and define the sum $${{\bm{W}}} := \sum_{i=1}^n {{\bm{T}}}_i.$$ Then $$\label{eqn:psd-case}
{\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{W}}} } \right\Vert}
\leq \left[ {\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{W}}} } \right\Vert}^{1/2} +
\sqrt{C(d)} \cdot \big( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert { {{\bm{T}}}_i } \right\Vert} \big)^{1/2} \right]^2.$$
2. **The Centered Hermitian Case.** Consider an independent family $\{ {{\bm{Y}}}_1, \dots, {{\bm{Y}}}_n \}$ of random $d \times d$ Hermitian matrices with ${\operatorname{\mathbb{E}}}{{\bm{Y}}}_i = {{\bm{0}}}$ for each index $i$, and define the sum $${{\bm{X}}} := \sum_{i=1}^n {{\bm{Y}}}_i.$$ Then $$\label{eqn:herm-case}
\left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{X}}} } \right\Vert}^2} \right)^{1/2}
\leq \sqrt{C(d)} \cdot {\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{X}}}^2 } \right\Vert}^{1/2}
+ C(d) \cdot \left( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert { {{\bm{Y}}}_i } \right\Vert}^2 \right)^{1/2}.$$
3. **The Centered Rectangular Case.** Consider an independent family $\{ {{\bm{S}}}_1, \dots, {{\bm{S}}}_n \}$ of random $d_1 \times d_2$ matrices with ${\operatorname{\mathbb{E}}}{{\bm{S}}}_i = {{\bm{0}}}$ for each index $i$, and define the sum $${{\bm{Z}}} := \sum_{i=1}^n {{\bm{S}}}_i.$$ Then $$\label{eqn:rect-case}
{\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{Z}}} } \right\Vert}
\leq \sqrt{C(d)} \cdot \max\left\{ {\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{ZZ}}}^{*}\big] } \right\Vert}^{1/2}, \
{\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{Z}}}^{*}{{\bm{Z}}} \big] } \right\Vert}^{1/2} \right\}
+ C(d) \cdot \left( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert { {{\bm{S}}}_i } \right\Vert}^2 \right)^{1/2}$$ where $d := d_1 + d_2$.
The proof of Theorem \[thm:upper\] takes up the rest of this section. The presentation includes notes about the provenance of various parts of the argument.
The upper bound in Theorem \[thm:main\] follows instantly from Case (3) of Theorem \[thm:upper\]. We just introduce the notation $v({{\bm{Z}}})$ for the variance parameter, and we calculate that $${\operatorname{\mathbb{E}}}\big[ {{\bm{ZZ}}}^{*}\big]
= \sum_{i,j=1}^n {\operatorname{\mathbb{E}}}\big[ {{\bm{S}}}_i {{\bm{S}}}_j^{*}\big]
= \sum_{i=1}^n {\operatorname{\mathbb{E}}}\big[ {{\bm{S}}}_i {{\bm{S}}}_i^{*}\big].$$ The first expression follows immediately from the definition of ${{\bm{Z}}}$ and the linearity of the expectation; the second identity holds because the random matrices ${{\bm{S}}}_i$ are independent and have mean zero. The formula for ${\operatorname{\mathbb{E}}}\big[ {{\bm{Z}}}^{*}{{\bm{Z}}} \big]$ is valid for precisely the same reasons.
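The same cross-term cancellation can be observed numerically. In the sketch below (a toy model chosen only for illustration; it is not from the paper), each summand is a fixed matrix modulated by an independent random sign, so it is centered, and the Monte Carlo estimate of ${\operatorname{\mathbb{E}}}\big[ {{\bm{ZZ}}}^{*}\big]$ matches $\sum_i {\operatorname{\mathbb{E}}}\big[ {{\bm{S}}}_i {{\bm{S}}}_i^{*}\big]$ up to sampling error.

```python
# Monte Carlo check that E[Z Z^*] = sum_i E[S_i S_i^*] for independent, centered S_i.
# Toy model: S_i = eps_i * A_i with fixed matrices A_i and independent random signs.
import numpy as np

rng = np.random.default_rng(13)
d1, d2, n, trials = 4, 3, 5, 200_000
A = rng.standard_normal((n, d1, d2))                 # fixed matrices A_1, ..., A_n

eps = rng.choice([-1.0, 1.0], size=(trials, n))
Z = np.einsum('ti,ijk->tjk', eps, A)                 # one realization of Z per trial
estimate = np.einsum('tjk,tlk->jl', Z, Z) / trials   # Monte Carlo estimate of E[Z Z^*]
exact = np.einsum('ijk,ilk->jl', A, A)               # sum_i A_i A_i^* = sum_i E[S_i S_i^*]
assert np.allclose(estimate, exact, atol=0.05 * np.abs(exact).max())
```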
Proof of the Positive-Semidefinite Case
---------------------------------------
Recall that ${{\bm{W}}}$ is a random $d \times d$ positive-semidefinite matrix of the form $${{\bm{W}}} := \sum_{i=1}^n {{\bm{T}}}_i
\quad\text{where the ${{\bm{T}}}_i$ are positive semidefinite.}$$ Let us introduce notation for the quantity of interest: $$E := {\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{W}}} } \right\Vert} = {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {{\bm{T}}}_i } \right\Vert}$$ By the triangle inequality for the spectral norm, $$\begin{aligned}
E \leq {\left\Vert { \sum_{i=1}^n {\operatorname{\mathbb{E}}}{{\bm{T}}}_i } \right\Vert} + {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n ({{\bm{T}}}_i - {\operatorname{\mathbb{E}}}{{\bm{T}}}_i) } \right\Vert}
\leq {\left\Vert { \sum_{i=1}^n {\operatorname{\mathbb{E}}}{{\bm{T}}}_i } \right\Vert} + 2 {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{T}}}_i } \right\Vert}.
\end{aligned}$$ The second inequality follows from symmetrization, Fact \[fact:symmetrization\]. In this expression, $\{{\varepsilon}_i\}$ is an independent family of Rademacher random variables, independent from $\{ {{\bm{T}}}_i \}$. Conditioning on the choice of the random matrices ${{\bm{T}}}_i$, we apply Theorem \[thm:matrix-rademacher\] via the bound : $$\begin{aligned}
{\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{T}}}_i } \right\Vert}
= {\operatorname{\mathbb{E}}}\left[ {\operatorname{\mathbb{E}}}_{{{\bm{{\varepsilon}}}}} {\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{T}}}_i } \right\Vert} \right]
\leq \sqrt{1 + 2 \lceil\log d\rceil} \cdot {\operatorname{\mathbb{E}}}\left[ {\left\Vert { \sum_{i=1}^n {{\bm{T}}}_i^2 } \right\Vert}^{1/2} \right].
\end{aligned}$$ The operator ${\operatorname{\mathbb{E}}}_{{{\bm{{\varepsilon}}}}}$ averages over the choice of the Rademacher random variables, with the matrices ${{\bm{T}}}_i$ fixed. Now, since the matrices ${{\bm{T}}}_i$ are positive-semidefinite, $$\begin{aligned}
{\operatorname{\mathbb{E}}}\left[ {\left\Vert { \sum_{i=1}^n {{\bm{T}}}_i^2 } \right\Vert}^{1/2} \right]
&\leq {\operatorname{\mathbb{E}}}\left[ \big(\max\nolimits_i {\left\Vert { {{\bm{T}}}_i } \right\Vert} \big)^{1/2} \cdot {\left\Vert { \sum_{i=1}^n {{\bm{T}}}_i } \right\Vert}^{1/2} \right] \\
&\leq \big( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert {{{\bm{T}}}_i} \right\Vert} \big)^{1/2} \cdot
\Big( {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {{\bm{T}}}_i } \right\Vert} \Big)^{1/2} \\
&= \big( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert {{{\bm{T}}}_i} \right\Vert} \big)^{1/2} \cdot E^{1/2}.
\end{aligned}$$ The first inequality is Fact \[fact:sum-squares\], and the second is the Cauchy–Schwarz inequality for expectation. In the last step, we identified a copy of the quantity $E$.
Combine the last three displays to see that $$\label{eqn:E-bd}
E \leq {\left\Vert { \sum_{i=1}^n {\operatorname{\mathbb{E}}}{{\bm{T}}}_i } \right\Vert}
+ \sqrt{4(1 + 2\lceil\log d\rceil)} \cdot \big( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert {{{\bm{T}}}_i} \right\Vert} \big)^{1/2} \cdot E^{1/2}.$$ For any $\alpha, \beta \geq 0$, the quadratic inequality $t^2 \leq \alpha + \beta t$ implies that $$t \leq \frac{1}{2} \left[ \beta + \sqrt{ \beta^2 + 4 \alpha } \right]
\leq \frac{1}{2} \left[ \beta + \beta + 2 \sqrt{\alpha} \right]
= \sqrt{\alpha} + \beta$$ because the square root is subadditive. Applying this fact to the quadratic relation for $E^{1/2}$, we obtain $$E^{1/2} \leq {\left\Vert { \sum_{i=1}^n {\operatorname{\mathbb{E}}}{{\bm{T}}}_i } \right\Vert}^{1/2}
+ \sqrt{4(1+ 2\lceil \log d\rceil)} \cdot \big( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert {{{\bm{T}}}_i} \right\Vert} \big)^{1/2}.$$ Square both sides to reach the conclusion .
This argument is adapted from Rudelson’s paper [@Rud99:Random-Vectors], which develops a version of this result for the case where the matrices ${{\bm{T}}}_i$ have rank one; see also [@RV07:Sampling-Large]. The paper [@Tro08:Conditioning-Random] contains the first estimates for the constants. Magen & Zouzias [@MZ11:Low-Rank-Matrix-Valued] observed that similar considerations apply when the matrices ${{\bm{T}}}_i$ have higher rank. The complete result first appeared in [@CGT12:Masked-Sample App.]. The constants in this paper are marginally better. Related bounds for Schatten norms appear in [@MJCFT14:Matrix-Concentration Sec. 7] and in [@JZ13:Noncommutative-Bennett].
The results described in the last paragraph are all matrix versions of the classical inequalities due to Rosenthal [@Ros70:Subspaces-Lp Lem. 1]. These bounds can be interpreted as polynomial moment versions of the Chernoff inequality.
Proof of the Hermitian Case
---------------------------
The result for Hermitian matrices is a corollary of Theorem \[thm:matrix-rademacher\] and the positive-semidefinite result . Recall that ${{\bm{X}}}$ is a $d \times d$ random Hermitian matrix of the form $${{\bm{X}}} := \sum_{i=1}^n {{\bm{Y}}}_i
\quad\text{where ${\operatorname{\mathbb{E}}}{{\bm{Y}}}_i = {{\bm{0}}}$.}$$ We may calculate that $$\begin{aligned}
\left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{X}}} } \right\Vert}^2} \right)^{1/2}
&= \left( {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^n {{\bm{Y}}}_i } \right\Vert}^2} \right)^{1/2} \\
&\leq 2 \left( {\operatorname{\mathbb{E}}}\left[ {\operatorname{\mathbb{E}}}_{{{\bm{{\varepsilon}}}}} {{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{Y}}}_i } \right\Vert}^2} \right] \right)^{1/2} \\
&\leq \sqrt{4(1 + 2\lceil\log d \rceil)} \cdot \Big( {\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {{\bm{Y}}}_i^2 } \right\Vert} \Big)^{1/2}.
\end{aligned}$$ The first inequality follows from the symmetrization procedure, Fact \[fact:symmetrization\]. The second inequality applies Theorem \[thm:matrix-rademacher\], conditional on the choice of ${{\bm{Y}}}_i$. The remaining expectation contains a sum of independent positive-semidefinite matrices. Therefore, we may invoke with ${{\bm{T}}}_i = {{\bm{Y}}}_i^2$. We obtain $${\operatorname{\mathbb{E}}}{\left\Vert { \sum_{i=1}^n {{\bm{Y}}}_i^2 } \right\Vert}
\leq \left[ {\left\Vert { \sum_{i=1}^n {\operatorname{\mathbb{E}}}{{\bm{Y}}}_i^2 } \right\Vert}^{1/2}
+ \sqrt{4(1 + 2\lceil \log d \rceil)} \cdot \big( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert { \smash{{{\bm{Y}}}_i^2} } \right\Vert} \big)^{1/2} \right]^2.$$ Combine the last two displays to reach $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{X}}} } \right\Vert}^2} \right)^{1/2}
\leq \sqrt{4(1 + 2\lceil\log d \rceil)} \cdot \left[
{\left\Vert { \sum_{i=1}^n {\operatorname{\mathbb{E}}}{{\bm{Y}}}_i^2 } \right\Vert}^{1/2}
+ \sqrt{4(1 + 2\lceil \log d \rceil)} \cdot
\big( {\operatorname{\mathbb{E}}}\max\nolimits_i {{\left\Vert {{{\bm{Y}}}_i} \right\Vert}^2} \big)^{1/2} \right].$$ Rewrite this expression to reach .
A version of the result first appeared in [@CGT12:Masked-Sample App.]; the constants here are marginally better. Related results for the Schatten norm appear in the papers [@JX03:Noncommutative-Burkholder-I; @JX08:Noncommutative-Burkholder-II; @MJCFT14:Matrix-Concentration; @JZ13:Noncommutative-Bennett]. These bounds are matrix extensions of the scalar inequalities due to Rosenthal [@Ros70:Subspaces-Lp Thm. 3] and to Ros[é]{}n [@Ros70:Bounds-Central Thm. 1]; see also Nagaev–Pinelis [@NP77:Some-Inequalities Thm. 2]. They can be interpreted as the polynomial moment inequalities that sharpen the Bernstein inequality.
Proof of the Rectangular Case {#sec:rect-case-upper}
-----------------------------
Finally, we establish the rectangular result . Recall that ${{\bm{Z}}}$ is a $d_1 \times d_2$ random rectangular matrix of the form $${{\bm{Z}}} := \sum_{i=1}^n {{\bm{S}}}_i
\quad\text{where ${\operatorname{\mathbb{E}}}{{\bm{S}}}_i = {{\bm{0}}}$.}$$ Set $d := d_1 + d_2$, and form a random $d \times d$ Hermitian matrix ${{\bm{X}}}$ by dilating ${{\bm{Z}}}$: $${{\bm{X}}} := {\mathscr{H}}({{\bm{Z}}}) = \sum_{i=1}^n {\mathscr{H}}({{\bm{S}}}_i).$$ The Hermitian dilation ${\mathscr{H}}$ is defined in ; the second relation holds because the dilation is a real-linear map.
Evidently, the random matrix ${{\bm{X}}}$ is a sum of independent, centered, random Hermitian matrices ${\mathscr{H}}({{\bm{S}}}_i)$. Therefore, we may apply to ${{\bm{X}}}$ to see that $$\label{eqn:rect-dilated}
\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{\mathscr{H}}({{\bm{Z}}})} \right\Vert}^2} \right)^{1/2}
\leq \sqrt{4(1+2\lceil\log d\rceil)} \cdot {\left\Vert { {\operatorname{\mathbb{E}}}\big[ {\mathscr{H}}({{\bm{Z}}})^2 \big] } \right\Vert}^{1/2}
+ 4(1 + 2\lceil \log d \rceil) \cdot \big( {\operatorname{\mathbb{E}}}\max\nolimits_i {{\left\Vert { {\mathscr{H}}({{\bm{S}}}_i) } \right\Vert}^2} \big)^{1/2}.$$ Since the dilation preserves norms , the left-hand side of is exactly what we want: $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{\mathscr{H}}({{\bm{Z}}})} \right\Vert}^2} \right)^{1/2}
= \left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2}.$$ To simplify the first term on the right-hand side of , invoke the formula for the square of the dilation: $$\label{eqn:dilation-Z-square}
{\left\Vert { {\operatorname{\mathbb{E}}}\big[ {\mathscr{H}}({{\bm{Z}}})^2 \big] } \right\Vert}
= {\left\Vert { \begin{bmatrix} {\operatorname{\mathbb{E}}}\big[ {{\bm{ZZ}}}^{*}\big] & {{\bm{0}}} \\
{{\bm{0}}} & {\operatorname{\mathbb{E}}}\big[ {{\bm{Z}}}^{*}{{\bm{Z}}} \big] \end{bmatrix} } \right\Vert}
= \max\left\{ {\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{ZZ}}}^{*}\big] } \right\Vert}, \
{\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{Z}}}^{*}{{\bm{Z}}} \big] } \right\Vert} \right\}.$$ The second identity relies on the fact that the norm of a block-diagonal matrix is the maximum norm of a diagonal block. To simplify the second term on the right-hand side of , we use again: $${\left\Vert { {\mathscr{H}}({{\bm{S}}}_i) } \right\Vert} = {\left\Vert { {{\bm{S}}}_i } \right\Vert}.$$ Introduce the last three displays into to arrive at the result .
The result first appeared in the monograph [@Tro15:Introduction-Matrix Eqn. (6.16)] with (possibly) incorrect constants. The current paper contains the first complete presentation of the bound.
Lower Bounds for the Expected Norm {#sec:lower}
==================================
Finally, let us demonstrate that each of the upper bounds in Theorem \[thm:upper\] is sharp up to the dimensional constant $C(d)$. The following result gives matching lower bounds in each of the three cases.
\[thm:lower\] The expected spectral norm of a sum of independent random matrices satisfies the following lower bounds.
1. **The Positive-Semidefinite Case.** Consider an independent family $\{ {{\bm{T}}}_1, \dots, {{\bm{T}}}_n \}$ of random $d \times d$ positive-semidefinite matrices, and define the sum $${{\bm{W}}} := \sum_{i=1}^n {{\bm{T}}}_i.$$ Then $$\label{eqn:psd-case-lower}
{\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{W}}} } \right\Vert}
\geq \frac{1}{4} \left[ {\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{W}}} } \right\Vert}^{1/2} + \big({\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert { {{\bm{T}}}_i } \right\Vert} \big)^{1/2} \right]^2.$$
2. **The Centered Hermitian Case.** Consider an independent family $\{ {{\bm{Y}}}_1, \dots, {{\bm{Y}}}_n \}$ of random $d \times d$ Hermitian matrices with ${\operatorname{\mathbb{E}}}{{\bm{Y}}}_i = {{\bm{0}}}$ for each index $i$, and define the sum $${{\bm{X}}} := \sum_{i=1}^n {{\bm{Y}}}_i.$$
Then $$\label{eqn:herm-case-lower}
\left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{X}}} } \right\Vert}^2} \right)^{1/2}
\geq \frac{1}{2} {\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{X}}}^2 } \right\Vert}^{1/2}
+ \frac{1}{4} \left( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert { {{\bm{Y}}}_i } \right\Vert}^2 \right)^{1/2}.$$
3. **The Centered Rectangular Case.** Consider an independent family $\{ {{\bm{S}}}_1, \dots, {{\bm{S}}}_n \}$ of random $d_1 \times d_2$ matrices with ${\operatorname{\mathbb{E}}}{{\bm{S}}}_i = {{\bm{0}}}$ for each index $i$, and define the sum $${{\bm{Z}}} := \sum_{i=1}^n {{\bm{S}}}_i.$$ Then $$\label{eqn:rect-case-lower}
{\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{Z}}} } \right\Vert}
\geq \frac{1}{2} \max\left\{ {\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{ZZ}}}^{*}\big] } \right\Vert}^{1/2}, \
{\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{Z}}}^{*}{{\bm{Z}}} \big] } \right\Vert} ^{1/2}\right\}
+ \frac{1}{4} \left( {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert { {{\bm{S}}}_i } \right\Vert}^2 \right)^{1/2}.$$
The rest of the section describes the proof of Theorem \[thm:lower\].
The lower bound in Theorem \[thm:main\] is an immediate consequence of Case (3) of Theorem \[thm:lower\]. We simply introduce the notation $v({{\bm{Z}}})$ for the variance parameter.
The Positive-Semidefinite Case
------------------------------
The lower bound in the positive-semidefinite case is relatively easy. Recall that $${{\bm{W}}} := \sum_{i=1}^n {{\bm{T}}}_i
\quad\text{where the ${{\bm{T}}}_i$ are positive semidefinite.}$$ First, by Jensen’s inequality and the convexity of the spectral norm, $$\label{eqn:psd-lower-mean}
{\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{W}}} } \right\Vert} \geq {\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{W}}} } \right\Vert}.$$ Second, let $I$ be the minimum value of the index $i$ where $\max_i {\left\Vert {{{\bm{T}}}_i} \right\Vert}$ is achieved; note that $I$ is a random variable. Since the summands ${{\bm{T}}}_i$ are positive semidefinite, it is easy to see that $${{\bm{T}}}_I {\preccurlyeq}\sum_{i=1}^n {{\bm{T}}}_i.$$ Therefore, by the norm identity for a positive-semidefinite matrix and the monotonicity of the maximum eigenvalue, Fact \[fact:weyl\], we have $$\max\nolimits_i {\left\Vert {{{\bm{T}}}_i} \right\Vert}
= {\left\Vert {{{\bm{T}}}_I} \right\Vert}
= \lambda_{\max}({{\bm{T}}}_I)
\leq \lambda_{\max}\left( \sum_{i=1}^n {{\bm{T}}}_i \right)
= {\left\Vert { \sum_{i=1}^n {{\bm{T}}}_i } \right\Vert}
= {\left\Vert { {{\bm{W}}} } \right\Vert}.$$ Take the expectation to arrive at $$\label{eqn:psd-lower-max}
{\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert { {{\bm{T}}}_i } \right\Vert} \leq
{\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{W}}} } \right\Vert}.$$ Average the two bounds and to obtain $${\operatorname{\mathbb{E}}}{\left\Vert {{{\bm{W}}}} \right\Vert}
\geq \frac{1}{2} \big[ {\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{W}}} } \right\Vert} + {\operatorname{\mathbb{E}}}\max\nolimits_i {\left\Vert {{{\bm{T}}}_i} \right\Vert} \big].$$ To reach , apply the numerical fact that $2(a + b) \geq \big(\sqrt{a} + \sqrt{b}\big)^{2}$, valid for all $a, b \geq 0$.
Hermitian Case
--------------
The Hermitian case is similar in spirit, but the details are a little more involved. Recall that $${{\bm{X}}} := \sum_{i=1}^n {{\bm{Y}}}_i
\quad\text{where ${\operatorname{\mathbb{E}}}{{\bm{Y}}}_i = {{\bm{0}}}$.}$$ First, using the identity , we have $$\label{eqn:herm-lower-var}
\left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{X}}} } \right\Vert}^2} \right)^{1/2}
= \big( {\operatorname{\mathbb{E}}}{\left\Vert { \smash{{{\bm{X}}}^2} } \right\Vert} \big)^{1/2}
\geq {\left\Vert { {\operatorname{\mathbb{E}}}{{\bm{X}}}^2 } \right\Vert}^{1/2}.$$ The second relation is Jensen’s inequality .
To obtain the other part of our lower bound, we use the lower bound from the symmetrization result, Fact \[fact:symmetrization\]: $${\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{X}}} } \right\Vert}^2}
= {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^n {{\bm{Y}}}_i } \right\Vert}^2}
\geq \frac{1}{4} {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{Y}}}_i } \right\Vert}^2}$$ where $\{{\varepsilon}_i\}$ is an independent family of Rademacher random variables, independent from $\{{{\bm{Y}}}_i\}$. Now, we condition on the choice of $\{{{\bm{Y}}}_i\}$, and we compute the partial expectation with respect to the ${\varepsilon}_i$. Let $I$ be the minimum value of the index $i$ where $\max\nolimits_i {{\left\Vert {{{\bm{Y}}}_i} \right\Vert}^2}$ is achieved. By Jensen’s inequality , applied conditionally, $${\operatorname{\mathbb{E}}}_{{{\bm{{\varepsilon}}}}} {{\left\Vert { \sum_{i=1}^n {\varepsilon}_i {{\bm{Y}}}_i } \right\Vert}^2}
\geq {\operatorname{\mathbb{E}}}_{{\varepsilon}_I} {{\left\Vert { {\operatorname{\mathbb{E}}}\left[ \sum_{i=1}^n {\varepsilon}_i {{\bm{Y}}}_i \, \big\vert\, {\varepsilon}_I \right] } \right\Vert}^2}
= {\operatorname{\mathbb{E}}}_{{\varepsilon}_I} {{\left\Vert { {\varepsilon}_I {{\bm{Y}}}_I } \right\Vert}^2}
= \max\nolimits_i {{\left\Vert {{{\bm{Y}}}_i} \right\Vert}^2}.$$ Combining the last two displays and taking a square root, we discover that $$\label{eqn:herm-lower-max}
\left( {\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{X}}} } \right\Vert}^2} \right)^{1/2}
\geq \frac{1}{2} \big( {\operatorname{\mathbb{E}}}\max\nolimits_i {{\left\Vert {{{\bm{Y}}}_i} \right\Vert}^2} \big)^{1/2}.$$ Average the two bounds and to conclude that is valid.
The Rectangular Case
--------------------
The rectangular case follows instantly from the Hermitian case when we apply to the Hermitian dilation. Recall that $${{\bm{Z}}} := \sum_{i=1}^n {{\bm{S}}}_i
\quad\text{where ${\operatorname{\mathbb{E}}}{{\bm{S}}}_i = {{\bm{0}}}$.}$$ Define a random matrix ${{\bm{X}}}$ by applying the Hermitian dilation to ${{\bm{Z}}}$: $${{\bm{X}}} := {\mathscr{H}}({{\bm{Z}}}) = \sum_{i=1}^n {\mathscr{H}}({{\bm{S}}}_i).$$ Since ${{\bm{X}}}$ is a sum of independent, centered, random Hermitian matrices, the bound yields $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert { {\mathscr{H}}({{\bm{Z}}}) } \right\Vert}^2} \right)^{1/2}
\geq \frac{1}{2} {\left\Vert { {\operatorname{\mathbb{E}}}\big[ {\mathscr{H}}({{\bm{Z}}})^2 \big] } \right\Vert}^{1/2}
+ \frac{1}{4} \big( {\operatorname{\mathbb{E}}}\max\nolimits_i {{\left\Vert {{\mathscr{H}}({{\bm{S}}}_i)} \right\Vert}^2} \big)^{1/2}.$$ Repeating the calculations in Section \[sec:rect-case-upper\], we arrive at the advertised result .
Optimality of Theorem \[thm:main\] {#sec:examples}
==================================
The lower bounds and upper bounds in Theorem \[thm:main\] match, except for the dimensional factor $C({{\bm{d}}})$. In this section, we show by example that neither the lower bounds nor the upper bounds can be sharpened substantially. More precisely, the logarithms cannot appear in the lower bound, and they must appear in the upper bound. As a consequence, unless we make further assumptions, Theorem \[thm:main\] cannot be improved except by constant factors and, in one place, by an iterated logarithm.
Upper Bound: Variance Term {#sec:upper-var}
--------------------------
First, let us show that the variance term in the upper bound in must contain a logarithm. This example is drawn from [@Tro15:Introduction-Matrix Sec. 6.1.2].
For a large parameter $n$, consider the $d \times d$ random matrix $${{\bm{Z}}} := \sum_{i=1}^d \sum_{j=1}^n \frac{1}{\sqrt{n}} {\varepsilon}_{ij} \mathbf{E}_{ii}$$ As before, $\{ {\varepsilon}_{ij} \}$ is an independent family of Rademacher random variables, and $\mathbf{E}_{ii}$ is a $d \times d$ matrix with a one in the $(i, i)$ position and zeroes elsewhere. The variance parameter satisfies $$v({{\bm{Z}}}) = {\left\Vert { \sum_{i=1}^d \sum_{j=1}^n \frac{1}{n} \mathbf{E}_{ii} } \right\Vert}
= {\left\Vert { {\mathbf{I}}_d } \right\Vert} = 1.$$ The large deviation parameter satisfies $$L^2 = {\operatorname{\mathbb{E}}}\max\nolimits_{i,j} {{\left\Vert { \frac{1}{\sqrt{n}} {\varepsilon}_{ij} \mathbf{E}_{ii} } \right\Vert}^2}
= \frac{1}{n}.$$ Therefore, the variance term drives the upper bound . For this example, it is easy to estimate the norm directly. Indeed, $${\operatorname{\mathbb{E}}}{{\left\Vert { {{\bm{Z}}} } \right\Vert}^2}
\approx {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^d \gamma_i \mathbf{E}_{ii} } \right\Vert}^2}
= {\operatorname{\mathbb{E}}}\max_{i=1,\dots, d} {{{\left\vert {\smash{\gamma_i}} \right\vert}}^2}
\approx 2 \log d.$$ Here, $\{\gamma_i\}$ is an independent family of standard normal variables, and the first approximation follows from the central limit theorem. The norm of a diagonal matrix is the maximum absolute value of one of the diagonal entries. Last, we use the well-known fact that the expected maximum among $d$ squared standard normal variables is asymptotic to $2\log d$. In summary, $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2}
\approx \sqrt{2 \log d \cdot v({{\bm{Z}}})}.$$ We conclude that the variance term in the upper bound must carry a logarithm. Furthermore, it follows that Theorem \[thm:matrix-rademacher\] is numerically sharp.
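A short Monte Carlo experiment (an editorial illustration assuming `numpy`; the values of $d$, $n$ and the sample size are arbitrary) reproduces this behaviour: the squared norm concentrates near $2\log d$, up to lower-order corrections at moderate $d$.

```python
# Monte Carlo sketch of the diagonal Rademacher example (illustration only;
# d, n and the number of samples are arbitrary).
import numpy as np

rng = np.random.default_rng(2)
d, n, samples = 100, 500, 300
norms_sq = []
for _ in range(samples):
    eps = rng.choice([-1.0, 1.0], size=(d, n))
    diag = eps.sum(axis=1) / np.sqrt(n)          # diagonal entries of Z
    norms_sq.append(np.max(np.abs(diag)) ** 2)   # ||Z|| = max |diagonal entry|
print(f"E||Z||^2 ~ {np.mean(norms_sq):.2f}  vs  2 log d = {2 * np.log(d):.2f}")
```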
Upper Bound: Large-Deviation Term
---------------------------------
Next, we verify that the large-deviation term in the upper bound in must also contain a logarithm, although the bound is slightly suboptimal. This example is drawn from [@Tro15:Introduction-Matrix Sec. 6.1.2].
For a large parameter $n$, consider the $d \times d$ random matrix $${{\bm{Z}}} := \sum_{i=1}^d \sum_{j=1}^n \big( \delta_{ij} - n^{-1} \big) \cdot \mathbf{E}_{ii}$$ where $\{ \delta_{ij} \}$ is an independent family of $\textsc{bernoulli}\big(n^{-1}\big)$ random variables. That is, $\delta_{ij}$ takes only the values zero and one, and its expectation is $n^{-1}$. The variance parameter for the random matrix is $$v({{\bm{Z}}}) = {\left\Vert { \sum_{i=1}^d \sum_{j=1}^n {\operatorname{\mathbb{E}}}\big( \delta_{ij} - n^{-1} \big)^2 \cdot \mathbf{E}_{ii} } \right\Vert}
= {\left\Vert {\sum_{i=1}^d \sum_{j=1}^n n^{-1}\big(1 - n^{-1}\big) \cdot \mathbf{E}_{ii} } \right\Vert}
\approx 1.$$ The large deviation parameter is $$L^2 = {\operatorname{\mathbb{E}}}\max\nolimits_{i,j} {{\left\Vert { \big(\delta_{ij} - n^{-1} \big)\cdot \mathbf{E}_{ii} } \right\Vert}^2}
\approx 1.$$ Therefore, the large-deviation term drives the upper bound in : $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2} \leq \sqrt{4 (1 + 2\lceil \log d \rceil)} + 4(1 + 2 \lceil \log d \rceil).$$ On the other hand, by direct calculation $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2}
\approx \left( {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^d (Q_i - 1) \cdot \mathbf{E}_{ii} } \right\Vert}^2} \right)^{1/2}
= \left( {\operatorname{\mathbb{E}}}\max_{i=1,\dots,d} {{{\left\vert {Q_i - 1} \right\vert}}^2} \right)^{1/2}
\approx \mathrm{const} \cdot \frac{\log d}{\log \log d}.$$ Here, $\{Q_i\}$ is an independent family of $\textsc{poisson}(1)$ random variables, and the first approximation follows from the Poisson limit of a binomial. The second approximation depends on a (messy) calculation for the expected squared maximum of a family of independent Poisson variables. We see that the large deviation term in the upper bound cannot be improved, except by an iterated logarithm factor.
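The Poisson approximation can also be illustrated by simulation (again an editorial sketch assuming `numpy`, with arbitrary $d$ and sample size); the observed quantity grows on the scale $\log d/\log\log d$, up to an unspecified constant factor.

```python
# Monte Carlo sketch of the Poisson approximation (illustration only;
# d and the number of samples are arbitrary).
import numpy as np

rng = np.random.default_rng(5)
d, samples = 100_000, 200
max_sq = [np.max(np.abs(rng.poisson(1.0, size=d) - 1)) ** 2 for _ in range(samples)]
ref = np.log(d) / np.log(np.log(d))
print(f"(E max_i |Q_i - 1|^2)^(1/2) ~ {np.mean(max_sq) ** 0.5:.2f}"
      f"  vs  log d / log log d = {ref:.2f}  (same order, constant not specified)")
```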
Lower Bound: Variance Term
--------------------------
Next, we argue that there are examples where the variance term in the lower bound from cannot have a logarithmic factor.
Consider a $d \times d$ random matrix of the form $${{\bm{Z}}} := \sum_{i,j=1}^d {\varepsilon}_{ij} \mathbf{E}_{ij}.$$ Here, $\{ {\varepsilon}_{ij} \}$ is an independent family of Rademacher random variables. The variance parameter satisfies $$v({{\bm{Z}}}) = \max\left\{ {\left\Vert { \sum_{i,j=1}^d \big( {\operatorname{\mathbb{E}}}{\varepsilon}_{ij}^2 \big)\cdot \mathbf{E}_{ij} \mathbf{E}_{ij}^{*}} \right\Vert},
{\left\Vert { \sum_{i,j=1}^d \big( {\operatorname{\mathbb{E}}}{\varepsilon}_{ij}^2 \big)\cdot \mathbf{E}_{ij}^{*}\mathbf{E}_{ij} } \right\Vert} \right\}
= \max\left\{ {\left\Vert { d \cdot {\mathbf{I}}_d } \right\Vert}, {\left\Vert { d \cdot {\mathbf{I}}_d} \right\Vert} \right\}
= d.$$ The large-deviation parameter is $$L^2 = {\operatorname{\mathbb{E}}}\max\nolimits_{i,j} {{\left\Vert { {\varepsilon}_{ij} \mathbf{E}_{ij} } \right\Vert}^2} = 1.$$ Therefore, the variance term controls the lower bound in : $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2} \geq \sqrt{c d} + c.$$ Meanwhile, it can be shown that the norm of the random matrix ${{\bm{Z}}}$ satisfies $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2} \approx 2\sqrt{d}.$$ See the paper [@BV14:Sharp-Nonasymptotic] for an elegant proof of this nontrivial result. We see that the variance term in the lower bound in cannot have a logarithmic factor.
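For completeness, here is a small simulation (an editorial illustration assuming `numpy`; $d$ and the sample size are arbitrary) showing that the norm of a matrix of independent signs grows like $2\sqrt{d}$, with no logarithmic factor.

```python
# Monte Carlo sketch of the full sign-matrix example (illustration only;
# d and the number of samples are arbitrary).
import numpy as np

rng = np.random.default_rng(3)
d, samples = 200, 50
norms_sq = [np.linalg.norm(rng.choice([-1.0, 1.0], size=(d, d)), 2) ** 2
            for _ in range(samples)]
print(f"(E||Z||^2)^(1/2) ~ {np.mean(norms_sq) ** 0.5:.1f}"
      f"  vs  2 sqrt(d) = {2 * np.sqrt(d):.1f}")
```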
Lower Bound: Large-Deviation Term
---------------------------------
Finally, we produce an example where the large-deviation term in the lower bound from cannot have a logarithmic factor.
Consider a $d \times d$ random matrix of the form $${{\bm{Z}}} := \sum_{i=1}^d P_i \mathbf{E}_{ii}.$$ Here, $\{P_i\}$ is an independent family of symmetric random variables whose tails satisfy $${{\mathbb}{P}\left\{{ {\left\vert {P_i} \right\vert} \geq t }\right\}} = \begin{cases} t^{-4}, & t \geq 1 \\ 1, & t \leq 1. \end{cases}$$ The key properties of these variables are that $${\operatorname{\mathbb{E}}}P_i^2 = 2
\quad\text{and}\quad
{\operatorname{\mathbb{E}}}\max_{i=1,\dots,d} P_i^2 \approx {\rm const} \cdot d^{1/2}.$$ The second expression just describes the asymptotic order of the expected maximum. We quickly compute that the variance term satisfies $$v({{\bm{Z}}}) = {\left\Vert { \sum_{i=1}^d \big( {\operatorname{\mathbb{E}}}P_i^2 \big) \mathbf{E}_{ii} } \right\Vert}
= 2.$$ Meanwhile, the large-deviation factor satisfies $$L^2 = {\operatorname{\mathbb{E}}}\max_{i=1,\dots, d} {{\left\Vert { P_i \mathbf{E}_{ii} } \right\Vert}^2}
= {\operatorname{\mathbb{E}}}\max_{i=1, \dots, d} {{{\left\vert {P_i} \right\vert}}^2}
\approx \mathrm{const} \cdot d^{1/2}.$$ Therefore, the large-deviation term drives the lower bound : $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2}
\gtrapprox \mathrm{const} \cdot d^{1/4}.$$ On the other hand, by direct calculation, $$\left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2}
= \left( {\operatorname{\mathbb{E}}}{{\left\Vert { \sum_{i=1}^d P_i \mathbf{E}_{ii} } \right\Vert}^2} \right)^{1/2}
= \left( {\operatorname{\mathbb{E}}}\max_{i=1,\dots,d} {{{\left\vert { P_i } \right\vert}}^2} \right)^{1/2}
\approx \mathrm{const} \cdot d^{1/4}.$$ We conclude that the large-deviation term in the lower bound cannot carry a logarithmic factor.
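The scaling in this example can be checked by simulation (editorial sketch, assuming `numpy`; $d$ and the sample size are arbitrary). Since ${\mathbb{P}}\{|P_i|\geq t\}=t^{-4}$ for $t\geq 1$, one may sample $|P_i|$ as $U^{-1/4}$ with $U$ uniform on $(0,1]$.

```python
# Monte Carlo sketch for the heavy-tailed example (illustration only;
# d and the number of samples are arbitrary).
import numpy as np

rng = np.random.default_rng(4)
d, samples = 10_000, 400
max_sq = []
for _ in range(samples):
    U = 1.0 - rng.uniform(size=d)                       # uniform on (0, 1]
    P = U ** (-0.25) * rng.choice([-1.0, 1.0], size=d)  # tail P(|P| >= t) = t^-4
    max_sq.append(np.max(P ** 2))
print(f"E P_i^2 ~ {np.mean(P ** 2):.2f} (exact value: 2)")   # from the last sample
print(f"E max_i P_i^2 ~ {np.mean(max_sq):.0f}  vs  sqrt(d) = {np.sqrt(d):.0f}"
      "  (the ratio settles near a constant)")
```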
Acknowledgments {#acknowledgments .unnumbered}
===============
The author wishes to thank Ryan Lee for a careful reading of the manuscript. The author gratefully acknowledges support from ONR award N00014-11-1002 and the Gordon & Betty Moore Foundation.
[LPSS[[$^{+}$]{}]{}14]{}
A. Barvinok. , volume 54 of [*Graduate Studies in Mathematics*]{}. American Mathematical Society, Providence, RI, 2002.
R. F. Bass and K. Gr[ö]{}chenig. Relevant sampling of band-limited functions. , 57(1):43–58, 2013.
A. Bandeira and R. V. Handel. Sharp nonasymptotic bounds on the norm of random matrices with independent entries. Available at <http://arXiv.org/abs/1408.6185>, Aug. 2014.
R. Bhatia. , volume 169 of [*Graduate Texts in Mathematics*]{}. Springer-Verlag, New York, 1997.
S. Boucheron, G. Lugosi, and P. Massart. . Oxford University Press, Oxford, 2013. A nonasymptotic theory of independence, With a foreword by Michel Ledoux.
A. Buchholz. Operator [K]{}hintchine inequality in non-commutative probability. , 319(1):1–16, 2001.
S. Boyd and L. Vandenberghe. . Cambridge University Press, Cambridge, 2004.
Y. Chen, S. Bhojanapalli, S. Sanghavi, and R. Ward. Coherent matrix completion. In [*Proc. 31st Intl. Conf. Machine Learning*]{}, Beijing, 2014.
X. Chen and T. M. Christensen. Optimal uniform convergence rates for sieve nonparametric instrumental variables regression. Available at <http://arXiv.org/abs/1311.0412>, Nov. 2013.
A. Cohen, M. A. Davenport, and D. Leviatan. On the stability and accuracy of least squares approximations. , 13(5):819–834, 2013.
P. Constantine and D. Gleich. Computing active subspaces. Available at <http://arXiv.org/abs/1408.0545>, Aug. 2014.
Y. Chen, L. Guibas, and Q. Huang. Near-optimal joint object matching via convex relaxation. In [*Proc. 31st Intl. Conf. Machine Learning*]{}, Beijing, 2014.
R. Y. Chen, A. Gittens, and J. A. Tropp. The masked sample covariance estimator: an analysis using matrix concentration inequalities. , 1(1):2–20, 2012.
M. B. Cohen, R. Kyng, G. L. Miller, J. W. Pachocki, R. Peng, A. B. Rao, and S. C. Xu. Solving [SDD]{} linear systems in nearly $m \log^{1/2}(n)$ time. In [*Proceedings of the 46th Annual ACM Symposium on Theory of Computing*]{}, STOC ’14, pages 343–352, New York, NY, USA, 2014. ACM.
S.-S. Cheung, A. M.-C. So, and K. Wang. Linear matrix inequalities with stochastically dependent perturbations and applications to chance-constrained semidefinite optimization. , 22(4):1394–1430, 2012.
J. Djolonga, A. Krause, and V. Cevher. High-dimensional [G]{}aussian process bandits. In C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, editors, [*Advances in Neural Information Processing Systems 26*]{}, pages 1025–1033. Curran Associates, Inc., 2013.
M. Fornasier, K. Schnass, and J. Vybiral. Learning functions of few arbitrary linear parameters in high dimensions. , 12(2):229–262, 2012.
G. R. Grimmett and D. R. Stirzaker. . Oxford University Press, New York, third edition, 2001.
P. R. Halmos. . Springer-Verlag, New York-Heidelberg, second edition, 1974. Undergraduate Texts in Mathematics.
R. A. Horn and C. R. Johnson. . Cambridge University Press, Cambridge, second edition, 2013.
N. J. A. Harvey and N. Olver. Pipage rounding, pessimistic estimators and matrix concentration. In [*Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms*]{}, SODA ’14, pages 926–945. SIAM, 2014.
A. S. Holevo. , volume 16 of [*De Gruyter Studies in Mathematical Physics*]{}. De Gruyter, Berlin, 2012. A mathematical introduction.
M. Junge and Q. Xu. Noncommutative [B]{}urkholder/[R]{}osenthal inequalities. , 31(2):948–995, 2003.
M. Junge and Q. Xu. Noncommutative [B]{}urkholder/[R]{}osenthal inequalities. [II]{}. [A]{}pplications. , 167:227–282, 2008.
M. Junge and Q. Zeng. Noncommutative [B]{}ennett and [R]{}osenthal inequalities. , 41(6):4287–4316, 2013.
V. Koltchinskii. , volume 2033 of [*Lecture Notes in Mathematics*]{}. Springer, Heidelberg, 2011. Lectures from the 38th Probability Summer School held in Saint-Flour, 2008, [É]{}cole d’[É]{}t[é]{} de Probabilit[é]{}s de Saint-Flour. \[Saint-Flour Probability Summer School\].
R. Lata[ł]{}a and K. Oleszkiewicz. On the best constant in the [K]{}hinchin-[K]{}ahane inequality. , 109(1):101–104, 1994.
F. Lust-Piquard. Inégalités de [K]{}hintchine dans [$C\sb p\;(1<p<\infty)$]{}. , 303(7):289–292, 1986.
D. Lopez-Paz, S. Sra, A. Smola, Z. Ghahramani, and B. Sch[ö]{}lkopf. Randomized nonlinear component analysis. In [*Proc. 31st Intl. Conf. Machine Learning*]{}, Beijing, July 2014.
M. Ledoux and M. Talagrand. . Classics in Mathematics. Springer-Verlag, Berlin, 2011. Isoperimetry and processes, Reprint of the 1991 edition.
D. G. Luenberger. . John Wiley & Sons, Inc., New York-London-Sydney, 1969.
W. B. March and G. Biros. Far-field compression for fast kernel summation methods in high dimensions. Available at <http://arXiv.org/abs/1409.2802>, Sep. 2014.
L. Mackey, M. I. Jordan, R. Y. Chen, B. Farrell, and J. A. Tropp. Matrix concentration inequalities via the method of exchangeable pairs. , 42(3):906–945, 2014.
E. Morvant, S. Ko[ç]{}o, and L. Ralaivola. -[B]{}ayesian generalization bound on confusion matrix for multi-class classification. In [*Proc. 29th Intl. Conf. Machine Learning*]{}, Edinburgh, 2012.
A. Magen and A. Zouzias. Low rank matrix-valued [C]{}hernoff bounds and approximate matrix multiplication. In [*Proceedings of the [T]{}wenty-[S]{}econd [A]{}nnual [ACM]{}-[SIAM]{} [S]{}ymposium on [D]{}iscrete [A]{}lgorithms*]{}, pages 1422–1436. SIAM, Philadelphia, PA, 2011.
S. V. Nagaev and I. F. Pinelis. Some inequalities for the distributions of sums of independent random variables. , 22(2):254–263, 1977.
R. I. Oliveira. The spectrum of random [$k$]{}-lifts of large graphs (with possibly large [$k$]{}). , 1(3-4):285–306, 2010.
R. I. Oliveira. The lower tail of random quadratic forms, with applications to ordinary least squares and restricted eigenvalue properties. Available at <http://arXiv.org/abs/1312.2903>, Dec. 2013.
G. Pisier. Non-commutative vector valued [$L\sb p$]{}-spaces and completely [$p$]{}-summing maps. , (247):vi+131, 1998.
R. T. Rockafellar. . Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 1997. Reprint of the 1970 original, Princeton Paperbacks.
B. Ros[é]{}n. On bounds on the central moments of even order of a sum of independent random variables. , 41:1074–1077, 1970.
H. P. Rosenthal. On the subspaces of [$L\sp{p}$]{} [$(p>2)$]{} spanned by sequences of independent random variables. , 8:273–303, 1970.
M. Rudelson. Random vectors in the isotropic position. , 164(1):60–72, 1999.
M. Rudelson and R. Vershynin. Sampling from large matrices: an approach through geometric functional analysis. , 54(4):Art. 21, 19 pp. (electronic), 2007.
T. Tao. , volume 132 of [*Graduate Studies in Mathematics*]{}. American Mathematical Society, Providence, RI, 2012.
J. A. Tropp. On the conditioning of random subdictionaries. , 25(1):1–24, 2008.
J. A. Tropp. Improved analysis of the subsampled randomized [H]{}adamard transform. , 3(1-2):115–126, 2011.
J. A. Tropp. An introduction to matrix concentration inequalities. , 8(1–2), May 2015.
J. A. Tropp. Second-order matrix concentration inequalities. Available at <http://arXiv.org/abs/1504.05919>, Apr. 2015.
[^1]: Open sets in $\mathbb{M}^{d_1 \times d_2}$ are defined with respect to the metric topology induced by the spectral norm.
---
abstract: 'Using the DPW method, we construct genus zero Alexandrov-embedded constant mean curvature (greater than one) surfaces with any number of Delaunay ends in hyperbolic space.'
author:
- Thomas Raujouan
title: 'Constant mean curvature $n$-noids in hyperbolic space'
---
Introduction {#introduction .unnumbered}
============
In [@dpw], Dorfmeister, Pedit and Wu introduced a loop group method (the DPW method) for constructing harmonic maps from a Riemann surface into a symmetric space. As a consequence, their method provides a Weierstrass-type representation of constant mean curvature (CMC) surfaces in Euclidean space ${\mathbb{R}}^3$, the three-dimensional sphere ${\mathbb{S}}^3$, or hyperbolic space ${\mathbb{H}}^3$. Many examples have been constructed (see for example [@newcmc; @dw; @kkrs; @dik; @heller1; @heller2]). Among them, Traizet [@nnoids; @minoids] showed how the DPW method in ${\mathbb{R}}^3$ can construct genus zero $n$-noids with Delaunay ends (as Kapouleas did with partial differential equation techniques in [@kapouleas]) and glue half-Delaunay ends to minimal surfaces (as did Mazzeo and Pacard in [@pacard], also with PDE techniques). A natural question is whether these constructions can be carried out in ${\mathbb{H}}^3$. Although properly embedded CMC annuli of mean curvature $H>1$ in ${\mathbb{H}}^3$ have been well known since the work of Korevaar, Kusner, Meeks and Solomon [@meeks], no construction similar to [@kapouleas] or [@pacard] can be found in the literature. This paper uses the DPW method in ${\mathbb{H}}^3$ to construct these surfaces. The two resulting theorems are as follows.
\[theoremConstructionNnoids\] Given a point $p\in{\mathbb{H}}^3$, $n\geq 3$ distinct unit vectors $u_1, \cdots, u_n$ in the tangent space of ${\mathbb{H}}^3$ at $p$ and $n$ non-zero real weights $\tau_1,\cdots,\tau_n$ satisfying the balancing condition $$\label{eqBalancingNnoids}
\sum_{i=1}^{n}\tau_i u_i = 0$$ and given $H>1$, there exists a smooth $1$-parameter family of CMC $H$ surfaces $\left(M_t\right)_{0<t<T}$ with genus zero, $n$ Delaunay ends and the following properties:
1. Denoting by $w_{i,t}$ the weight of the $i$-th Delaunay end, $$\lim\limits_{t\to 0} \frac{w_{i,t}}{t} = \tau_i.$$
2. Denoting by $\Delta_{i,t}$ the axis of the $i$-th Delaunay end, $\Delta_{i,t}$ converges to the oriented geodesic through the point $p$ in the direction of $u_i$.
3. If all the weights $\tau_i$ are positive, then $M_t$ is Alexandrov-embedded.
4. If all the weights $\tau_i$ are positive and if for all $i\neq j\in[1,n]$, the angle $\theta_{ij}$ between $u_i$ and $u_j$ satisfies $$\label{eqAnglesNnoid}
\left| \sin\frac{\theta_{ij}}{2} \right|>\frac{\sqrt{H^2-1}}{2H},$$ then $M_t$ is embedded.
\[theoremConstructionMinoids\] Let $M_0\subset{\mathbb{R}}^3$ be a non-degenerate minimal $n$-noid with $n\geq 3$ and let $H>1$. There exists a smooth family of CMC $H$ surfaces $\left(M_t\right)_{0<|t|<T}$ in ${\mathbb{H}}^3$ such that
1. The surfaces $M_t$ have genus zero and $n$ Delaunay ends.
2. After a suitable blow-up, $M_t$ converges to $M_0$ as $t$ tends to $0$.
3. If $M_0$ is Alexandrov-embedded, then all the ends of $M_t$ are of unduloidal type if $t>0$ and of nodoidal type if $t<0$. Moreover, $M_t$ is Alexandrov-embedded if $t>0$.
Following the proofs of [@nnoids; @minoids] gives an effective strategy to construct the desired CMC surfaces $M_t$. This is done in Sections \[sectionNnoids\] and \[sectionMinoids\]. However, showing that $M_t$ is Alexandrov-embedded requires a precise knowledge of its ends. This is the purpose of the main theorem (Section \[sectionPerturbedDelaunayImmersions\], Theorem \[theoremPerturbedDelaunay\]). We consider a family of holomorphic perturbations of the data giving rise via the DPW method to a half-Delaunay embedding $f_0: {\mathbb{D}}^*\subset {\mathbb{C}}\longrightarrow {\mathbb{H}}^3$ and show that the perturbed induced surfaces $f_t({\mathbb{D}}^*)$ are also embedded. Note that the domain on which the perturbed immersions are defined does not depend on the parameter $t$, which is stronger than $f_t$ having an embedded end, and is critical for showing that the surfaces $M_t$ are Alexandrov-embedded. The essential hypothesis on the perturbations is that they do not create a period problem on the domain ${\mathbb{D}}^*$ (which is not simply connected). The proof relies on the Frobenius method for linear differential systems with regular singular points. Although this idea has been used in ${\mathbb{R}}^3$ by Kilian, Rossman and Schmitt [@krs] and in [@raujouan], the case of ${\mathbb{H}}^3$ generates two extra resonance points that are unavoidable and make their results inapplicable. Our solution is to extend the Frobenius method to loop-group-valued differential systems.
![*Theorem \[theoremConstructionNnoids\] ensures the existence of $n$-noids with small necks. For $H>1$ small enough ($H\simeq1.5$ in the picture), there exist embedded $n$-noids with more than six ends.*](7noidmainlevee.png){width="7cm"}
Delaunay surfaces in ${\mathbb{H}}^3$ via the DPW method {#sectionNotations}
========================================================
This Section fixes the notation and recalls the DPW method in ${\mathbb{H}}^3$.
Hyperbolic space
----------------
#### Matrix model.
Let ${\mathbb{R}}^{1,3}$ denote the space ${\mathbb{R}}^4$ with the Lorentzian metric ${\left\langle x, x \right\rangle} = -x_0^2+x_1^2+x_2^2+x_3^2$. Hyperbolic space is the subset ${\mathbb{H}}^3 $ of vectors $x\in{\mathbb{R}}^{1,3}$ such that ${\left\langle x, x \right\rangle} = -1$ and $x_0>0$, with the metric induced by ${\mathbb{R}}^{1,3}$. The DPW method constructs CMC immersions into a matrix model of ${\mathbb{H}}^3$. Consider the identification $$x=(x_0,x_1,x_2,x_3)\in{\mathbb{R}}^{1,3} \simeq X = \begin{pmatrix}
x_0+x_3 & x_1+ix_2\\
x_1-ix_2 & x_0 - x_3
\end{pmatrix}\in {\mathcal{H}}_2$$ where ${\mathcal{H}}_2:=\{M\in {\mathcal{M}}(2,{\mathbb{C}})\mid M^*=M \}$ denotes the Hermitian matrices. In this model, ${\left\langle X, X \right\rangle} = -\det X$ and ${\mathbb{H}}^3$ is identified with the set ${\mathcal{H}}_2^{++}\cap {\mathrm{SL}}(2,{\mathbb{C}})$ of Hermitian positive definite matrices with determinant $1$. This fact enjoins us to write $${\mathbb{H}}^3 = \left\{ F{F}^* \mid F\in{\mathrm{SL}}(2,{\mathbb{C}}) \right\}.$$ Setting $$\label{eqPauliMatrices}
\sigma_1 = \begin{pmatrix}
0& 1\\
1 & 0
\end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix}
0& i\\
-i & 0
\end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix}
1& 0\\
0 & -1
\end{pmatrix},$$ gives us an orthonormal basis $\left(\sigma_1,\sigma_2,\sigma_3\right)$ of the tangent space $T_{{\mathrm{I}}_2}{\mathbb{H}}^3$ of ${\mathbb{H}}^3$ at the identity matrix. We choose the orientation of ${\mathbb{H}}^3$ induced by this basis.
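The identification above is straightforward to verify numerically. The following sketch (added for illustration; it assumes `numpy`, and the sample point $x$ and the matrix $F$ are arbitrary) checks that ${\left\langle x, x \right\rangle}=-\det X$ and that $F{F}^*$ is Hermitian, positive definite and of determinant $1$.

```python
# Numerical sketch of the matrix model of H^3 (illustration only; the sample
# point x and the matrix F are arbitrary).
import numpy as np

def to_matrix(x0, x1, x2, x3):
    """Identify x in R^{1,3} with the Hermitian matrix X."""
    return np.array([[x0 + x3, x1 + 1j * x2],
                     [x1 - 1j * x2, x0 - x3]])

x = (np.cosh(1.3), 0.6 * np.sinh(1.3), 0.8 * np.sinh(1.3), 0.0)   # a point of H^3
X = to_matrix(*x)
lorentz = -x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + x[3] ** 2
print(np.isclose(lorentz, -np.linalg.det(X).real))    # <x,x> = -det X
print(np.isclose(np.linalg.det(X).real, 1.0))         # det X = 1, so x lies on H^3

F = np.array([[1.0, 0.5 + 0.2j], [0.0, 1.0]])         # an element of SL(2,C)
p = F @ F.conj().T
print(np.allclose(p, p.conj().T),                     # Hermitian
      np.linalg.eigvalsh(p).min() > 0,                # positive definite
      np.isclose(np.linalg.det(p).real, 1.0))         # determinant 1
```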
#### Rigid motions.
In the matrix model of ${\mathbb{H}}^3$, ${\mathrm{SL}}(2,{\mathbb{C}})$ acts as rigid motions: for all $p\in{\mathbb{H}}^3$ and $A\in{\mathrm{SL}}(2,{\mathbb{C}})$, this action is denoted by $$A\cdot p := Ap{A}^*\in{\mathbb{H}}^3.$$ This action extends to tangent spaces: for all $v\in T_p{\mathbb{H}}^3$, $A\cdot v := Av{A}^*\in T_{A\cdot p}{\mathbb{H}}^3$. The DPW method takes advantage of this fact and constructs immersions in ${\mathbb{H}}^3$ with the moving frame method.
#### Geodesics.
Let $p\in{\mathbb{H}}^3$ and $v\in UT_p{\mathbb{H}}^3$. Define the map $$\label{eqExpMapGamma}
{
\begin{array}{ccccc}
{\mathrm{geod}}(p,v) & : & {\mathbb{R}}& \longrightarrow & {\mathbb{H}}^3 \\
& & t & \longmapsto & p\cosh t + v \sinh t. \\
\end{array}
}$$ Then ${\mathrm{geod}}(p,v)$ is the unit speed geodesic through $p$ in the direction $v$. The action of ${\mathrm{SL}}(2,{\mathbb{C}})$ extends to oriented geodesics via: $$A\cdot {\mathrm{geod}}(p,v) := {\mathrm{geod}}(A\cdot p, A\cdot v).$$
#### Parallel transport.
Let $p,q\in{\mathbb{H}}^3$ and $v\in T_p{\mathbb{H}}^3$. We denote the result of parallel transporting $v$ from $p$ to $q$ along the geodesic of ${\mathbb{H}}^3$ joining $p$ to $q$ by $\Gamma_p^qv \in T_q{\mathbb{H}}^3$. The parallel transport of vectors from the identity matrix is easy to compute with Proposition \[propParallTranspI2\].
\[propParallTranspI2\] For all $p\in{\mathbb{H}}^3$ and $v\in T_{{\mathrm{I}}_2}{\mathbb{H}}^3$, there exists a unique $S\in{\mathcal{H}}_2^{++}\cap{\mathrm{SL}}(2,{\mathbb{C}})$ such that $p=S\cdot {\mathrm{I}}_2$. Moreover, $ \Gamma_{{\mathrm{I}}_2}^p v = S\cdot v$.
The point $p$ is in ${\mathbb{H}}^3$ identified with ${\mathcal{H}}_2^{++}\cap {\mathrm{SL}}(2,{\mathbb{C}})$. Define $S$ as the unique square root of $p$ in ${\mathcal{H}}_2^{++}\cap {\mathrm{SL}}(2,{\mathbb{C}})$. Then $p= S\cdot {\mathrm{I}}_2$. Define for $t\in[0,1]$: $$S(t) := \exp\left(t \log S\right), \qquad \gamma(t) := S(t)\cdot {\mathrm{I}}_2, \qquad v(t) := S(t)\cdot v.$$ Then $v(t)\in T_{\gamma(t)}{\mathbb{H}}^3$ because $${\left\langle v(t), \gamma(t) \right\rangle} = {\left\langle S(t)\cdot {\mathrm{I}}_2, S(t)\cdot v \right\rangle} = {\left\langle {\mathrm{I}}_2, v \right\rangle} = 0$$ and $S\cdot v = v(1)\in T_p{\mathbb{H}}^3$.
Suppose that $S$ is diagonal. Then $$S(t) = \begin{pmatrix}
e^{\frac{at}{2}} & 0\\
0 & e^{\frac{-at}{2}}
\end{pmatrix} \qquad (a\in{\mathbb{R}})$$ and using Equations and , $$\gamma(t) = \begin{pmatrix}
e^{at} & 0\\
0 & e^{-at}
\end{pmatrix} = {\mathrm{geod}}({\mathrm{I}}_2,\sigma_3)(at)$$ is a geodesic curve. Write $v=v^1\sigma_1+v^2\sigma_2+v^3\sigma_3$ and compute $S(t)\cdot \sigma_i$ to find $$v(t) = v^1\sigma_1 + v^2\sigma_2 + v^3\begin{pmatrix}
e^{at} & 0\\
0 & -e^{-at}
\end{pmatrix}.$$ Compute in ${\mathbb{R}}^{1,3}$ $$\frac{Dv(t)}{dt} = \left(\frac{dv(t)}{dt}\right)^T = av^3\left(\gamma(t)\right)^T = 0$$ to see that $v(t)$ is the parallel transport of $v$ along the geodesic $\gamma$.
If $S$ is not diagonal, write $S = QDQ^{-1}$ where $Q\in{\mathrm{SU}}(2)$ and $D\in{\mathcal{H}}_2^{++}\cap {\mathrm{SL}}(2,{\mathbb{C}})$ is diagonal. Then, $$S\cdot v = Q\cdot \left( D\cdot \left( Q^{-1}\cdot v \right) \right) = Q\cdot \Gamma_{{\mathrm{I}}_2}^{D\cdot {\mathrm{I}}_2}(Q^{-1}\cdot v).$$ But for all $A\in{\mathrm{SL}}(2,{\mathbb{C}})$, $p,q\in{\mathbb{H}}^3$ and $v\in T_p{\mathbb{H}}^3$, $$A\cdot \Gamma_p^qv = \Gamma_{A\cdot p}^{A\cdot q}A\cdot v$$ and thus $$S\cdot v = \Gamma_{{\mathrm{I}}_2}^pv.$$
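The formula of Proposition \[propParallTranspI2\] can also be checked numerically, as in the following editorial sketch (it assumes `numpy` and `scipy`; the matrix $F$ and the tangent vector $v$ are arbitrary choices): the transported vector is tangent at $p$ and has the same Lorentzian norm as $v$.

```python
# Numerical check of the parallel-transport formula (illustration only;
# requires numpy and scipy, and the choices of F and v are arbitrary).
import numpy as np
from scipy.linalg import sqrtm

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, 1j], [-1j, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

def coords(X):
    """Coordinates (x0, x1, x2, x3) of a Hermitian matrix X."""
    return np.array([(X[0, 0] + X[1, 1]).real / 2, X[0, 1].real,
                     X[0, 1].imag, (X[0, 0] - X[1, 1]).real / 2])

def minkowski(X, Y):
    a, b = coords(X), coords(Y)
    return -a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3]

F = np.array([[1.2, 0.3 + 0.4j],
              [0.1 - 0.2j, (1 + (0.3 + 0.4j) * (0.1 - 0.2j)) / 1.2]])  # det F = 1
p = F @ F.conj().T                               # a point of H^3
v = 0.7 * sigma1 - 0.2 * sigma2 + 0.5 * sigma3   # a tangent vector at I_2

S = sqrtm(p)                              # the square root of p in H_2^{++}
w = S @ v @ S.conj().T                    # claimed parallel transport of v to p
print(np.isclose(minkowski(w, p), 0.0))                # w is tangent at p
print(np.isclose(minkowski(w, w), minkowski(v, v)))    # transport is an isometry
```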
The DPW method for CMC $H>1$ surfaces in ${\mathbb{H}}^3$ {#sectionDPWMethod}
---------------------------------------------------------
#### Loop groups. {#sectionLoopGroups}
In the DPW method, a whole family of surfaces is constructed, depending on a spectral parameter ${\lambda}$. This parameter will always be in one of the following subsets of ${\mathbb{C}}$ ($\rho>1$): $${\mathbb{S}}^1 = \left\{ {\lambda}\in{\mathbb{C}}\mid|{\lambda}|=1 \right\}, \quad {\mathbb{A}}_\rho = \left\{ {\lambda}\in{\mathbb{C}}\mid \rho^{-1}<|{\lambda}|<\rho \right\}, \quad {\mathbb{D}}_\rho = \left\{ {\lambda}\in{\mathbb{C}}\mid |{\lambda}|<\rho \right\}.$$ Any smooth map $f:{\mathbb{S}}^1\longrightarrow {\mathcal{M}}(2,{\mathbb{C}})$ can be decomposed into its Fourier series $$f({\lambda}) = \sum_{i\in{\mathbb{Z}}}f_i{\lambda}^i.$$ Let $|\cdot|$ denote a norm on ${\mathcal{M}}(2,{\mathbb{C}})$. Fix some $\rho>1$ and consider $${\left\Vertf\right\Vert}_\rho := \sum_{i\in{\mathbb{Z}}} |f_i|\rho^{|i|}.$$ Let $G$ be a Lie group or algebra of ${\mathcal{M}}(2,{\mathbb{C}})$. We define
- $\Lambda G$ as the set of smooth functions $f:{\mathbb{S}}^1 \longrightarrow G$.
- $\Lambda G_\rho\subset \Lambda G$ as the set of functions $f$ such that ${\left\Vertf\right\Vert}_{\rho}$ is finite. If $G$ is a group (or an algebra) then $(\Lambda G_\rho,{\left\Vert\cdot\right\Vert}_{\rho})$ is a Banach Lie group (or algebra).
- $\Lambda G_\rho^{\geq 0}\subset\Lambda G_\rho$ as the set of functions $f$ such that $f_i=0$ for all $i<0$.
- $\Lambda_+ G_\rho\subset \Lambda G_\rho^{\geq 0}$ as the set of functions such that $f_0$ is upper-triangular.
- $\Lambda_+^{\mathbb{R}}{\mathrm{SL}}(2,{\mathbb{C}})_\rho\subset \Lambda_+ {\mathrm{SL}}(2,{\mathbb{C}})_\rho$ as the set of functions that have positive elements on the diagonal.
We also define $\Lambda{\mathbb{C}}$ as the set of smooth maps from ${\mathbb{S}}^1$ to ${\mathbb{C}}$, and $\Lambda{\mathbb{C}}_\rho$ and $\Lambda {\mathbb{C}}_\rho^{\geq 0}$ as above. Note that every function of $\Lambda G_\rho$ holomorphically extends to ${\mathbb{A}}_\rho$ and that every function of $\Lambda G_\rho^{\geq 0}$ holomorphically extends to ${\mathbb{D}}_\rho$.
We will use the Frobenius norm on ${\mathcal{M}}(2,{\mathbb{C}})$: $$\left| A \right| := \left(\sum_{i,j} |a_{ij}|^2\right)^{\frac{1}{2}}.$$ Recall that this norm is sub-multiplicative. Therefore, the norm ${\left\Vert\cdot\right\Vert}_{\rho}$ is sub-multiplicative. Moreover, $$\forall A \in {\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}, \quad {\left\VertA^{-1}\right\Vert}_\rho = {\left\VertA\right\Vert}_\rho,$$ $$\forall A\in\Lambda{\mathcal{M}}(2,{\mathbb{C}})_\rho,\quad \forall {\lambda}\in{\mathbb{A}}_{\rho},\quad \left|A({\lambda})\right|\leq{\left\VertA\right\Vert}_{\rho}.$$
The DPW method relies on the Iwasawa decomposition. The following theorem is proved in [@minoids].
The multiplication map ${\Lambda {\mathrm{SU}}(2)_\rho}\times {\Lambda_+^{\mathbb{R}}{\mathrm{SL}}(2,{\mathbb{C}})}_\rho \longmapsto {\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$ is a smooth diffeomorphism between Banach manifolds. Its inverse map is called “Iwasawa decomposition” and is denoted for $\Phi\in{\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$: $${\mathop \mathrm{Iwa}}(\Phi) = \left( {\mathop \mathrm{Uni}}(\Phi), {\mathop \mathrm{Pos}}(\Phi) \right).$$
#### The ingredients. {#sectionIngredients}
Let $H>1$, $q={\mathop \mathrm{arcoth}}H>0$ and $\rho>e^q$. The DPW method takes for input data:
- A Riemann surface $\Sigma$.
- A holomorphic $1$-form on $\Sigma$ with values in ${{\Lambda {\mathfrak{sl}}(2,{\mathbb{C}})}_\rho}$ of the following form: $$\label{eqPotentielXialphabetagamma}
\xi = \begin{pmatrix}
\alpha & {\lambda}^{-1}\beta \\
\gamma & -\alpha
\end{pmatrix}$$ where $\alpha$, $\beta$, $\gamma$ are holomorphic $1$-forms on $\Sigma$ with values in $\Lambda{\mathbb{C}}_{\rho}^{\geq 0}$. The $1$-form $\xi$ is called “the potential”.
- A base point $z_0\in\Sigma$.
- An initial condition $\phi\in{\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$.
#### The recipe.
The DPW method consists in the following steps:
1. Let $\widetilde{z}_0$ be any point above $z_0$ in the universal cover $\widetilde{\Sigma}$ of $\Sigma$. Solve on $\widetilde{\Sigma}$ the following Cauchy problem: $$\label{eqCauchyProblem}
\left\{
\begin{array}{l}
d\Phi = \Phi \xi \\
\Phi(\widetilde{z}_0) = \phi.
\end{array}
\right.$$ Then $\Phi: {\widetilde{\Sigma}}\longrightarrow {\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$ is called “the holomorphic frame”.
2. Compute pointwise on ${\widetilde{\Sigma}}$ the Iwasawa decomposition of $\Phi$: $$(F(z),B(z)) := {\mathop \mathrm{Iwa}}\Phi(z).$$ The unitary part $F$ of this decomposition is called “the unitary frame”.
3. Define $f: {\widetilde{\Sigma}}\longrightarrow {\mathbb{H}}^3$ via the Sym-Bobenko formula: $$f(z) = F(z,e^{-q}){F(z,e^{-q})}^* =: { \mathrm{Sym}}_q F(z)$$ where $F(z,{\lambda}_0):=F(z)({\lambda}_0)$.
Then $f$ is a CMC $H>1$ ($H=\coth q$) conformal immersion from $\widetilde{\Sigma}$ to ${\mathbb{H}}^3$. Its Gauss map (in the direction of the mean curvature vector) is given by $$N(z) = F(z,e^{-q})\sigma_3{F(z,e^{-q})}^* =: {\mathrm{Nor}}_qF(z)$$ where $\sigma_3$ is defined in . The differential of $f$ is given by $$\label{eqdf}
df(z) = 2\sinh(q)b(z)^2 F(z,e^{-q}) \begin{pmatrix}
0 & \beta(z,0)\\
{\overline{\beta(z,0)}} & 0
\end{pmatrix} F(z,e^{-q})^*$$ where $b(z)>0$ is the upper-left entry of $B(z)\mid_{{\lambda}=0}$. The metric of $f$ is given by $$ds_f(z) = 2\sinh(q)b(z)^2 \left|\beta(z,0)\right|$$ and its Hopf differential reads $$\label{eqHopf}
-2\beta(z,0)\gamma(z,0)\sinh q ~ dz^2.$$
The results of this paper hold for any $H>1$. We therefore fix $H>1$ and $q={\mathop \mathrm{arcoth}}H$ from now on.
#### Rigid motions.
Let $H\in {\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$ and define the new holomorphic frame $\widetilde{\Phi} = H\Phi$ with unitary part $\widetilde{F}$ and induced immersion $\widetilde{f} = { \mathrm{Sym}}_q \widetilde{F}$. If $H\in{\Lambda {\mathrm{SU}}(2)_\rho}$, then $\widetilde{F} = HF$ and $\widetilde{\Phi}$ gives rise to the same immersion as $\Phi$ up to an isometry of ${\mathbb{H}}^3$: $$\widetilde{f}(z) = H(e^{-q})\cdot f(z).$$ If $H\notin {\Lambda {\mathrm{SU}}(2)_\rho}$, this transformation is called a “dressing” and may change the surface.
#### Gauging.
Let $G : {\widetilde{\Sigma}}\longrightarrow{\Lambda_+{\mathrm{SL}}(2,{\mathbb{C}})_\rho}$ and define the new potential: $$\widehat{\xi} = \xi\cdot G := G^{-1}\xi G + G^{-1}dG.$$ The potential $\widehat{\xi}$ is a DPW potential and this operation is called “gauging”. The data $\left(\Sigma,\xi,z_0,\phi\right)$ and $\left( \Sigma,\widehat{\xi},z_0,\phi\times G(z_0) \right)$ give rise to the same immersion.
#### The monodromy problem.
Since the immersion $f$ is only defined on the universal cover ${\widetilde{\Sigma}}$, one might ask for conditions ensuring that it descends to a well-defined immersion on $\Sigma$. For any deck transformation $\tau\in{\mathrm{Deck}}\left(\widetilde{\Sigma}/\Sigma\right)$, define the monodromy of $\Phi$ with respect to $\tau$ as: $${\mathcal{M}}_\tau(\Phi) := \Phi(\tau(z))\Phi(z)^{-1} \in {\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}.$$ This map is independent of $z\in\widetilde{\Sigma}$. The standard sufficient condition for the immersion $f$ to be well-defined on $\Sigma$ is the following set of equations, called the monodromy problem in ${\mathbb{H}}^3$: $$\label{eqMonodromyProblem}
\forall \tau\in{\mathrm{Deck}}\left(\widetilde{\Sigma}/\Sigma\right),\quad
\left\{
\begin{array}{l}
{\mathcal{M}}_\tau(\Phi)\in{\Lambda {\mathrm{SU}}(2)_\rho},\\
{\mathcal{M}}_\tau(\Phi)(e^{-q}) = \pm{\mathrm{I}}_2.
\end{array}
\right.$$ Use the point $\widetilde{z}_0$ defined in step 1 of the DPW method to identify the fundamental group $\pi_1(\Sigma,z_0)$ with ${\mathrm{Deck}}(\widetilde\Sigma / \Sigma)$. Let $\left\{ \gamma_i \right\}_{i\in I}$ be a set of generators of $\pi_1\left(\Sigma,z_0\right)$. Then the problem is equivalent to $$\label{eqMonodromyProblemLoop}
\forall i\in I, \quad
\left\{
\begin{array}{l}
{\mathcal{M}}_{\gamma_i}(\Phi)\in{\Lambda {\mathrm{SU}}(2)_\rho},\\
{\mathcal{M}}_{\gamma_i}(\Phi)(e^{-q}) = \pm{\mathrm{I}}_2.
\end{array}
\right.$$
#### Example: the standard sphere.
The DPW method can produce spherical immersions of $\Sigma = {\mathbb{C}}\cup\left\{\infty \right\}$ with the potential $$\xi_{\mathbb{S}}(z,{\lambda}) = \begin{pmatrix}
0 & {\lambda}^{-1}dz \\
0 & 0
\end{pmatrix}$$ and initial condition $\Phi_{\mathbb{S}}(0,{\lambda}) = {\mathrm{I}}_2$. The potential is not regular at $z=\infty$ because it has a double pole there. However, the immersion will be regular at this point because $\xi_{\mathbb{S}}$ is gauge-equivalent to a regular potential at $z=\infty$. Indeed, consider on ${\mathbb{C}}^*$ the gauge $$G(z,{\lambda}) = \begin{pmatrix}
z&0\\
-{\lambda}& \frac{1}{z}
\end{pmatrix}.$$ The gauged potential is then $$\xi_{\mathbb{S}}\cdot G (z,{\lambda}) = \begin{pmatrix}
0 & {\lambda}^{-1}z^{-2}dz \\
0&0
\end{pmatrix}$$ which is regular at $z=\infty$. The holomorphic frame is $$\label{eqPhi_Ss}
\Phi_{\mathbb{S}}(z,{\lambda}) = \begin{pmatrix}
1 & {\lambda}^{-1}z \\
0 & 1
\end{pmatrix}$$ and its unitary factor is $$F_{\mathbb{S}}(z,{\lambda}) = \frac{1}{\sqrt{1+|z|^2}}\begin{pmatrix}
1 & {\lambda}^{-1}z\\
-{\lambda}{\overline{z}} & 1
\end{pmatrix}.$$ The induced CMC-$H$ immersion is $$f_{\mathbb{S}}(z) = \frac{1}{1+|z|^2}\begin{pmatrix}
1+ e^{2q}|z|^2 & 2z\sinh q \\
2{\overline{z}}\sinh q & 1+e^{-2q}|z|^2
\end{pmatrix}.$$ It is not easy to see that $f_{\mathbb{S}}(\Sigma)$ is a sphere because it is not centered at ${\mathrm{I}}_2$. To solve this problem, notice that $F_{\mathbb{S}}(z,e^{-q}) = R(q)\widetilde{F}_{\mathbb{S}}(z)R(q)^{-1}$ where $$\label{eqR(q)}
R(q) := \begin{pmatrix}
e^{\frac{q}{2}} & 0\\
0 & e^{\frac{-q}{2}}
\end{pmatrix}\in{\mathrm{SL}}(2,{\mathbb{C}})$$ and $$\widetilde{F}_{\mathbb{S}}(z) := \frac{1}{\sqrt{1+|z|^2}}\begin{pmatrix}
1 & z\\
-{\overline{z}} & 1
\end{pmatrix}\in{\mathrm{SU}}(2).$$ Apply an isometry by setting $$\label{eqftildeSs}
\widetilde{f}_{\mathbb{S}}(z) := R(q)^{-1}\cdot f_{\mathbb{S}}(z)$$ and compute $$\widetilde{f}_{\mathbb{S}}(z) = \frac{1}{1+|z|^2}\begin{pmatrix}
e^{-q}+ e^{q}|z|^2 & 2z\sinh q \\
2{\overline{z}}\sinh q & e^{q} + e^{-q}|z|^2
\end{pmatrix} = (\cosh q){\mathrm{I}}_2 + \frac{\sinh q}{1+|z|^2} \begin{pmatrix}
|z|^2-1 & 2z \\
2{\overline{z}} & 1-|z|^2
\end{pmatrix}$$ i.e. $$\label{eqftilde_Ss=gammav_Ss}
\widetilde{f}_{\mathbb{S}}(z)= {\mathrm{geod}}({\mathrm{I}}_2,v_{\mathbb{S}}(z))(q)$$ with ${\mathrm{geod}}$ defined in and where in the basis $(\sigma_1,\sigma_2,\sigma_3)$ of $T_{{\mathrm{I}}_2}{\mathbb{H}}^3$, $$\label{eqVecteurvS}
v_{\mathbb{S}}(z) := \frac{1}{1+|z|^2}\left( 2{\mathop \mathrm{Re}}z, 2 {\mathop \mathrm{Im}}z, |z|^2-1 \right)$$ describes a sphere of radius one in the tangent space of ${\mathbb{H}}^3$ at ${\mathrm{I}}_2$ (it is the inverse stereographic projection from the north pole). Hence, $\widetilde{f}_{\mathbb{S}}(\Sigma)$ is a sphere centered at ${\mathrm{I}}_2$ of hyperbolic radius $q$ and $f_{\mathbb{S}}\left(\Sigma\right)$ is a sphere of same radius centered at ${\mathrm{geod}}({\mathrm{I}}_2, \sigma_3)(q)$.
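A direct numerical check (illustration only, assuming `numpy`; the value of $q$ and the sample points are arbitrary) confirms that every image point of $\widetilde{f}_{\mathbb{S}}$ has determinant $1$ and lies at hyperbolic distance $q$ from ${\mathrm{I}}_2$, using $\cosh d_{{\mathbb{H}}^3}({\mathrm{I}}_2,X) = \frac{1}{2}\operatorname{tr}X$.

```python
# Numerical check that the image of f~_S is a geodesic sphere of radius q
# (illustration only; q and the sample points are arbitrary).
import numpy as np

q = 0.8

def f_tilde(z):
    az = abs(z) ** 2
    return np.array([[np.exp(-q) + np.exp(q) * az, 2 * z * np.sinh(q)],
                     [2 * np.conj(z) * np.sinh(q), np.exp(q) + np.exp(-q) * az]]) / (1 + az)

for z in [0.0, 1.0, -2.0 + 1.5j, 0.3j]:
    X = f_tilde(z)
    dist = np.arccosh(np.trace(X).real / 2)   # cosh d_H3(I_2, X) = tr(X)/2
    print(np.isclose(np.linalg.det(X).real, 1.0), np.isclose(dist, q))
```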
One can compute the normal map of $f_{\mathbb{S}}$: $$N_{\mathbb{S}}(z) := {\mathrm{Nor}}_q F_{\mathbb{S}}(z) = R(q)\cdot \widetilde{N}_{\mathbb{S}}(z)$$ where $$\begin{aligned}
\widetilde{N}_{\mathbb{S}}(z) &:= {\mathrm{Nor}}_q\left(\widetilde{F}_{\mathbb{S}}(z)\right) = \frac{1}{1+|z|^2}\begin{pmatrix}
e^{-q}- e^{q}|z|^2 & -2z\cosh q \\
-2{\overline{z}}\cosh q & e^{-q}|z|^2 - e^{q}
\end{pmatrix}\\
&= -(\sinh q) {\mathrm{I}}_2 - (\cosh q) v_{\mathbb{S}}(z) = -\dot{{\mathrm{geod}}}\left( {\mathrm{I}}_2,v_{\mathbb{S}}(z) \right)(q).\end{aligned}$$ Note that this implies that the normal map ${\mathrm{Nor}}_q$ is oriented by the mean curvature vector.
Delaunay surfaces {#sectionDelaunaySurfaces}
-----------------
#### The data.
Let $\Sigma={\mathbb{C}}^*$, $\xi_{r,s}(z,{\lambda}) = A_{r,s}({\lambda})z^{-1}dz$ where $$\label{eqArs}
A_{r,s}({\lambda}) := \begin{pmatrix}
0 & r{\lambda}^{-1}+s\\
r{\lambda}+s & 0
\end{pmatrix}, \qquad r,s\in{\mathbb{R}}, \quad {\lambda}\in{\mathbb{S}}^1,$$ and initial condition $\Phi_{r,s}(1) = {\mathrm{I}}_2$. With these data, the holomorphic frame reads $$\Phi_{r,s}(z) = z^{A_{r,s}}.$$ The unitary frame $F_{r,s}$ is not explicit, but the DPW method states that the map $f_{r,s}={ \mathrm{Sym}}_q(F_{r,s})$ is a CMC $H$ immersion from the universal cover $\widetilde{{\mathbb{C}}^*}$ of ${\mathbb{C}}^*$ into ${\mathbb{H}}^3$.
#### Monodromy.
Computing the monodromy along $\gamma(\theta) = e^{i\theta}$ for $\theta\in [0,2\pi]$ gives $${\mathcal{M}}\left(\Phi_{r,s}\right):= {\mathcal{M}}_\gamma\left(\Phi_{r,s}\right) = \exp\left(2i\pi A_{r,s}\right).$$ Recall that $r,s\in{\mathbb{R}}$ to see that $iA_{r,s}\in{\Lambda {\mathfrak{su}}(2)_\rho}$, and thus ${\mathcal{M}}\left(\Phi_{r,s}\right)\in{\Lambda {\mathrm{SU}}(2)_\rho}$: the first equation of is satisfied. To solve the second one, we seek $r$ and $s$ such that $A_{r,s}(e^{-q})^2=\frac{1}{4}{\mathrm{I}}_2$, which will imply that ${\mathcal{M}}\left(\Phi_{r,s}\right)(e^{-q})=-{\mathrm{I}}_2$. This condition is equivalent to $$\label{eqrs}
r^2+s^2+2rs\cosh q = \frac{1}{4}.$$ Seeing this equation as a polynomial in $r$ and computing its discriminant ($1+4s^2\sinh^2q>0$) ensures the existence of an infinite number of solutions: given a pair $(r,s)\in{\mathbb{R}}^2$ that solves , $f_{r,s}$ is a well-defined CMC $H$ immersion from ${\mathbb{C}}^*$ into ${\mathbb{H}}^3$.
#### Surface of revolution.
Let $(r,s)\in{\mathbb{R}}^2$ satisfying and let $\theta\in{\mathbb{R}}$. Then, $$\Phi_{r,s}\left(e^{i\theta}z\right) = \exp\left(i\theta A_{r,s}\right)\Phi_{r,s}(z).$$ Using $iA_{r,s}\in{\Lambda {\mathfrak{su}}(2)_\rho}$ and diagonalising $A_{r,s}(e^{-q})$ gives $$\begin{aligned}
f_{r,s}\left(e^{i\theta} z\right) &= \exp\left(i\theta A_{r,s}(e^{-q})\right)\cdot f_{r,s}(z)\\
&=\left( H_{r,s}\exp\left(i\theta D\right)H_{r,s}^{-1}\right)\cdot f_{r,s}(z)\end{aligned}$$ where $$H_{r,s} = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & -2\left(re^{-q}+s\right)\\
2\left(re^q+s\right) & 1
\end{pmatrix}, \qquad D = \begin{pmatrix}
\frac{1}{2}& 0\\0 & \frac{-1}{2}
\end{pmatrix}.$$ Noting that $\exp\left(i\theta D\right)$ acts as a rotation of angle $\theta$ around the axis ${\mathrm{geod}}({\mathrm{I}}_2,\sigma_3)$ and that $H_{r,s}$ acts as an isometry of ${\mathbb{H}}^3$ independent of $\theta$ shows that $\exp\left(i\theta A_{r,s}(e^{-q})\right)$ acts as a rotation around the axis $H_{r,s}\cdot {\mathrm{geod}}({\mathrm{I}}_2,\sigma_3)$ and that $f_{r,s}$ is a CMC $H>1$ immersion of revolution of ${\mathbb{C}}^*$ into ${\mathbb{H}}^3$, and by definition (as in [@meeks]) a Delaunay immersion.
#### The weight as a parameter.
For a fixed $H>1$, CMC $H$ Delaunay surfaces in ${\mathbb{H}}^3$ form a family parametrised by the weight. This weight can be computed in the DPW framework: given a solution $(r,s)$ of , the weight $w$ of the Delaunay surface induced by the DPW data $({\mathbb{C}}^*,\xi_{r,s},1,{\mathrm{I}}_2)$ reads $$\label{eqPoidsDelaunayPure}
w = 2\pi \times 4 rs\sinh q$$ (see [@loopgroups] for details).
Writing $t:=\frac{w}{2\pi}$ and assuming $t\neq 0$, Equations and imply that $$\label{eqSystemers}
\left\{
\begin{array}{l}
t\leq T_1,\\
r^2= \frac{1}{8}\left( 1-2Ht \pm 2 \sqrt{T_1-t}\sqrt{T_2-t} \right),\\
s^2 = \frac{1}{8}\left( 1-2Ht \mp 2\sqrt{T_1-t}\sqrt{T_2-t} \right)
\end{array}
\right.$$ with $$T_1 = \frac{\tanh\frac{q}{2}}{2}<\frac{1}{2\tanh\frac{q}{2}}=T_2.$$
First, note that and imply $$r^2 + s^2 =\frac{1}{4}\left( 1 - 2t\coth q \right) = \frac{1}{4}(1-2Ht)$$ and thus $$\label{eqt<T2}
t\leq \frac{H}{2}<T_2.$$ If $r=0$, then $t=0$. Thus, $r\neq 0$ and $$s = \frac{t}{4r\sinh q}.$$ Equation is then equivalent to $$r^2 + \frac{t^2}{16r^2\sinh^2 q} +\frac{Ht}{2} = \frac{1}{4} \iff r^4 - \frac{1-2Ht}{4}r^2 + \frac{t^2}{16\sinh^2q} = 0.$$ Using $\coth q = H$, the discriminant of this quadratic polynomial in $r^2$ is $$\Delta(t) = \frac{1}{16}\left( 1-4Ht+4t^2 \right)$$ which in turn is a quadratic polynomial in $t$ with discriminant $$\widetilde{\Delta} = \frac{H^2-1}{16}>0$$ because $H>1$. Thus, $$\Delta(t) = \frac{(T_1-t)(T_2-t)}{4}$$ because $H=\coth q$. Using , $\Delta(t)\geq 0$ if, and only if $t\leq T_1$ and $$r^2 = \frac{1}{8}\left( 1-2Ht \pm 2\sqrt{(T_1-t)(T_2-t)} \right).$$ By symmetry of Equations and , $s^2$ is as in .
We consider the two continuous parametrisations of $r$ and $s$ for $t\in(-\infty,T_1)$ such that $(r,s)$ satisfies Equations and with $w=2\pi t$: $$\label{eqrst}
\left\{
\begin{array}{l}
r(t) := \frac{\pm 1}{2\sqrt{2}}\left(1-2Ht + 2\sqrt{T_1-t}\sqrt{ T_2-t } \right)^{\frac{1}{2}},\\
s(t) := \frac{t}{4r(t)\sinh q}.
\end{array}
\right.$$
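As a sanity check (an editorial illustration assuming `numpy`; the values of $q$ and $t$ are arbitrary, with $t<T_1$), one can verify numerically that $r(t)$ and $s(t)$ satisfy both the closing condition and the weight relation $4rs\sinh q = t$; the opposite outer sign gives the other family.

```python
# Numerical check of the parametrisation r(t), s(t) (illustration only;
# q and the values of t are arbitrary, with t < T_1).
import numpy as np

q = 0.6
H = 1.0 / np.tanh(q)                  # H = coth q
T1 = np.tanh(q / 2) / 2
T2 = 1.0 / (2 * np.tanh(q / 2))

def r_s(t, sign=+1.0):
    r = sign * np.sqrt(1 - 2 * H * t + 2 * np.sqrt(T1 - t) * np.sqrt(T2 - t)) / (2 * np.sqrt(2))
    s = t / (4 * r * np.sinh(q))
    return r, s

for t in [0.05, 0.1, -0.2]:
    r, s = r_s(t)
    closing = np.isclose(r ** 2 + s ** 2 + 2 * r * s * np.cosh(q), 0.25)
    weight = np.isclose(4 * r * s * np.sinh(q), t)
    print(t, closing, weight)
```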
Choosing the parametrisation satisfying $r>s$ maps the unit circle of ${\mathbb{C}}^*$ onto a parallel circle of maximal radius, called a “bulge” of the Delaunay surface. As $t$ tends to $0$, the immersions tend towards a parametrisation of a sphere on every compact subset of ${\mathbb{C}}^*$, which is why we call this family of immersions “the spherical family”. When $r<s$, the unit circle of ${\mathbb{C}}^*$ is mapped onto a parallel circle of minimal radius, called a “neck” of the Delaunay surface. As $t$ tends to $0$, the immersions degenerate into a point on every compact subset of ${\mathbb{C}}^*$. Nevertheless, we call this family the “catenoidal family” because applying a blowup to the immersions makes them converge towards a catenoidal immersion of ${\mathbb{R}}^3$ on every compact subset of ${\mathbb{C}}^*$ (see Section \[sectionBlowUp\] for more details). In both cases, the weight of the induced surfaces is given by $w = 2\pi t$.
Perturbed Delaunay immersions {#sectionPerturbedDelaunayImmersions}
=============================
In this section, we study the immersions induced by a perturbation of Delaunay DPW data with small non-vanishing weights in a neighbourhood of $z=0$. Our results are the same whether we choose the spherical or the catenoidal family of immersions. We thus drop the index $r,s$ in the Delaunay DPW data and replace it by a small value of $t=4rs\sinh q$ in a neighbourhood of $t=0$ such that $$t<T_{\max}:=\frac{\tanh \frac{q}{2}}{2}.$$ For all $\epsilon>0$, we denote $$D_\epsilon := \left\{ z\in{\mathbb{C}}\mid |z|<\epsilon \right\}, \qquad D_\epsilon^* := D_\epsilon \backslash\left\{0\right\}.$$
\[defPerturbedDelaunayPotential\] Let $\rho>e^q$, $0<T<T_{\max}$ and $\epsilon>0$. A perturbed Delaunay potential is a continuous one-parameter family $(\xi_t)_{t\in (-T,T)}$ of DPW potentials defined for $(t,z)\in(-T,T)\times D_{\epsilon}^*$ by $$\begin{aligned}
\xi_t(z) = A_tz^{-1}dz + C_t(z)dz
\end{aligned}$$ where $A_t\in\Lambda{\mathfrak{sl}}(2,{\mathbb{C}})_{\rho}$ is a Delaunay residue as in satisfying and $C_t(z)\in{\mathfrak{sl}}(2,{\mathbb{C}})_{\rho}$ is ${\mathcal{C}}^1$ with respect to $(t,z)$, holomorphic with respect to $z$ for all $t$ and satisfies $C_0(z)=0$ for all $z$.
\[theoremPerturbedDelaunay\] Let $\rho>e^q$, $0<T<T_{\max}$, $\epsilon>0$ and $\xi_t$ be a perturbed Delaunay potential ${\mathcal{C}}^2$ with respect to $(t,z)$. Let $\Phi_t$ be a holomorphic frame associated to $\xi_t$ for all $t$ via the DPW method. Suppose that the family of initial conditions $\phi_t$ is ${\mathcal{C}}^2$ with respect to $t$, with $\phi_0=\widetilde z_0^{A_0}$, and that the monodromy problem is solved for all $t\in (-T,T)$. Let $f_t={ \mathrm{Sym}}_q\left({\mathop \mathrm{Uni}}\Phi_t\right)$. Then,
1. For all $\delta>0$, there exist $0<\epsilon'<\epsilon$, $T'>0$ and $C>0$ such that for all $z\in D_{\epsilon'}^*$ and $t\in(-T',T')\backslash\{0\}$, $$\begin{aligned}
{d_{{\mathbb{H}}^3}\left(f_t(z),f_t^{\mathcal{D}}(z)\right)}\leq C|t||z|^{1-\delta}
\end{aligned}$$ where $f_t^{\mathcal{D}}$ is a Delaunay immersion of weight $2\pi t$.
2. There exist $T'>0$ and $\epsilon'>0$ such that for all $0<t<T'$, $f_t$ is an embedding of $D_{\epsilon'}^*$.
3. The limit axis as $t$ tends to $0$ of the Delaunay immersion $f_t^{\mathcal{D}}$ oriented towards the end at $z=0$ is given by: $$\frac{1}{\sqrt{2}}\begin{pmatrix}
1 & -e^q\\
e^{-q} & 1
\end{pmatrix} \cdot {\mathrm{geod}}\left({\mathrm{I}}_2,-\sigma_3\right)\quad \text{in the spherical family ($r>s$)},$$ $${\mathrm{geod}}\left({\mathrm{I}}_2,-\sigma_1\right)\quad \text{in the catenoidal family ($r<s$)}.$$
Let $\xi_t$ and $\Phi_t$ be as in Theorem \[theoremPerturbedDelaunay\] with $\rho$, $T$ and $\epsilon$ fixed. This section is dedicated to the proof of Theorem \[theoremPerturbedDelaunay\].
The ${\mathcal{C}}^2$-regularity of $\xi_t$ essentially means that $C_t(z)$ is ${\mathcal{C}}^2$ with respect to $(t,z)$. Together with the ${\mathcal{C}}^2$-regularity of $\phi_t$, it implies that $\Phi_t$ is ${\mathcal{C}}^2$ with respect to $(t,z)$. Thus, ${\mathcal{M}}(\Phi_t)$ is also ${\mathcal{C}}^2$ with respect to $t$. These regularities and the fact that there exists a solution $\Phi_t$ solving the monodromy problem are used in Section \[sectionPropertyXit\] to deduce an essential piece of information about the potential $\xi_t$ (Proposition \[propC3\]). This step then allows us to write in Section \[sectionzAP\] the holomorphic frame $\Phi_t$ in an $Mz^AP$ form given by the Fröbenius method (Proposition \[propzAP1\]), and to gauge this expression, in order to gain an order of convergence with respect to $z$ (Proposition \[propzAP2\]). During this process, the holomorphic frame will lose one order of regularity with respect to $t$, which is why Theorem \[theoremPerturbedDelaunay\] asks for a ${\mathcal{C}}^2$-regularity of the data. Section \[sectionDressedDelaunayFrames\] is devoted to the study of dressed Delaunay frames $Mz^A$ in order to ensure that the immersions $f_t$ will converge to Delaunay immersions as $t$ tends to $0$, and to estimate the growth of their unitary part around the end at $z=0$. Section \[sectionConvergenceImmersions\] proves that these immersions do converge, which is the first point of Theorem \[theoremPerturbedDelaunay\]. Before proving the embeddedness in Section \[sectionEmbeddedness\], Section \[sectionConvergenceNormales\] is devoted to the convergence of the normal maps. Finally, we compute the limit axes in Section \[sectionLimitAxis\].
A property of $\xi_t$ {#sectionPropertyXit}
---------------------
We begin by diagonalising $A_t$ in a unitary basis (Proposition \[propDiagA\]) in order to simplify the computations in Proposition \[propC3\], in which we use the Fröbenius method for a fixed value of ${\lambda}=e^{\pm q}$. This will ensure the existence of the ${\mathcal{C}}^1$ map $P_1\in{\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$ that will be used in Section \[sectionzAP\] to define the factor $P$ in the $Mz^AP$ form of $\Phi_t$.
\[propDiagA\] There exist $e^q<R<\rho$ and $0<T'<T$ such that for all $t\in(-T',T')$, $A_t = H_tD_tH_t^{-1}$ with $H_t\in{\Lambda {\mathrm{SU}}(2)_R}$ and $iD_t\in{\Lambda {\mathfrak{su}}(2)_R}$. Moreover, $H_t$ and $D_t$ are smooth with respect to $t$.
For all ${\lambda}\in{\mathbb{S}}^1$, $$\begin{aligned}
\label{eqDetA}
-\det A_t ({\lambda}) &= \frac{1}{4} + \frac{t{\lambda}^{-1}({\lambda}-e^q)({\lambda}-e^{-q})}{4\sinh q}\\
&= \frac{1}{4} + \frac{t}{4\sinh q}\left( {\lambda}+{\lambda}^{-1} - 2 \cosh q \right) \in {\mathbb{R}}.\nonumber
\end{aligned}$$ Extending this determinant as a holomorphic function on ${\mathbb{A}}_{{\rho}}$, there exists $T'>0$ such that $$\left|-\det A_t({\lambda}) - \frac{1}{4}\right| <\frac{1}{4} \quad \forall (t,{\lambda})\in(-T',T')\times {\mathbb{A}}_{{\rho}}$$ With this choice of $T'$, the function $\mu_t: {\mathbb{A}}_{{\rho}}\longrightarrow {\mathbb{C}}$ defined as the positive-real-part square root of $(-\det A_t)$ is holomorphic on ${\mathbb{A}}_{\rho}$ and real-valued on ${\mathbb{S}}^1$. Note that $\mu_t$ is also the positive-real-part eigenvalue of $A_t$ and thus $A_t=H_tD_tH_t^{-1}$ with $$\label{eqHtDt}
H_t({\lambda}) = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & \frac{-(r{\lambda}^{-1}+s)}{\mu_t({\lambda})}\\
\frac{r{\lambda}+s}{\mu_t({\lambda})} & 1
\end{pmatrix},\qquad D_t({\lambda}) = \begin{pmatrix}
\mu_t({\lambda}) & 0\\
0 & -\mu_t({\lambda})
\end{pmatrix}.$$ Let $e^q<R<\rho$. For all $t\in(-T',T')$, $\mu_t\in\Lambda{\mathbb{C}}_{R}$ and the map $t\mapsto \mu_t$ is smooth on $(-T',T')$. Moreover, $H_t\in{\Lambda {\mathrm{SU}}(2)}_{R}$, $iD_t\in\Lambda{\mathfrak{su}}(2)_R$ and these functions are smooth with respect to $t$.
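The diagonalisation above can be tested numerically. The sketch below assumes the off-diagonal form of the Delaunay residue, $A_t({\lambda})=\begin{pmatrix}0 & r{\lambda}^{-1}+s\\ r{\lambda}+s & 0\end{pmatrix}$ (which is only referenced, not displayed, in this section, but is the form consistent with the determinant computed in the proof above), and checks $A_t=H_tD_tH_t^{-1}$ with $H_t$ unitary on ${\mathbb{S}}^1$; the values of $q$ and $t$ are illustrative.

```python
import numpy as np

# Sanity check of the diagonalisation A_t = H_t D_t H_t^{-1}; the residue below is the
# off-diagonal form consistent with the determinant computed above (an assumption here,
# since its display lies outside this section); q and t are illustrative.
q, t = 1.0, 0.1
H = 1.0 / np.tanh(q)
T1, T2 = 0.5 * np.tanh(q / 2), 0.5 / np.tanh(q / 2)
r = np.sqrt(1 - 2 * H * t + 2 * np.sqrt((T1 - t) * (T2 - t))) / (2 * np.sqrt(2))
s = t / (4 * r * np.sinh(q))

for lam in np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 7)):
    A = np.array([[0.0, r / lam + s], [r * lam + s, 0.0]])
    mu = np.sqrt((r / lam + s) * (r * lam + s))      # positive-real-part square root of -det(A)
    Ht = np.array([[1.0, -(r / lam + s) / mu], [(r * lam + s) / mu, 1.0]]) / np.sqrt(2.0)
    Dt = np.diag([mu, -mu])
    assert np.allclose(A, Ht @ Dt @ np.linalg.inv(Ht))
    assert np.allclose(Ht.conj().T @ Ht, np.eye(2))  # H_t(lam) is unitary for |lam| = 1
print("diagonalisation verified on the unit circle")
```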
The bound $|t|<T'$ ensures that $4 \det A_t({\lambda})$ is an integer only for $t=0$ and ${\lambda}=e^{\pm q}$. These points make the Fröbenius system resonant, but they are precisely the points that bear an extra piece of information due to the hypotheses on ${\mathcal{M}}(\Phi_t)(e^q)$ and $\Phi_0$. Allowing the parameter $t$ to leave the interval $(-T',T')$ would bring other resonance points and make Section \[sectionzAP\] invalid. This is why Theorem \[theoremPerturbedDelaunay\] does not state that the end of the immersion $f_t$ is a Delaunay end for all $t$.
At $t=0$, the change of basis $H_t$ in the diagonalisation of $A_t$ takes different values depending on whether $r>s$ (spherical family) or $r<s$ (catenoidal family). One has: $$\label{eqH0spherical}
H_0({\lambda}) = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & -{\lambda}^{-1}\\
{\lambda}& 1
\end{pmatrix} \qquad \text{in the spherical case},$$ $$\label{eqH0catenoidal}
H_0({\lambda}) = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & -1\\
1 & 1
\end{pmatrix} \qquad \text{in the catenoidal case}.$$ In both cases, $\mu_0 = \frac{1}{2}$, and thus, $D_0$ is the same.
#### A basis of $\Lambda{\mathcal{M}}(2,{\mathbb{C}})_{\rho}$.
Let $R$ and $T'$ be given by Proposition \[propDiagA\]. Identify $\Lambda{\mathcal{M}}(2,{\mathbb{C}})_{{\rho}}$ with the free $\Lambda{\mathbb{C}}_\rho$-module ${\mathcal{M}}(2,\Lambda{\mathbb{C}}_{{\rho}})$ and define for all $t\in(-T',T')$ the basis $${\mathcal{B}}_t = H_t\left(E_{1},E_{2},E_{3},E_{4}\right)H_t^{-1}=:\left(X_{t,1},X_{t,2},X_{t,3},X_{t,4}\right)$$ where $$E_1 = \begin{pmatrix}
1&0\\
0&0
\end{pmatrix},\quad E_2 = \begin{pmatrix}
0&1\\
0&0
\end{pmatrix}, \quad E_3 = \begin{pmatrix}
0&0\\
1&0
\end{pmatrix}, \quad E_4 = \begin{pmatrix}
0&0\\
0&1
\end{pmatrix}.$$ For all $t\in(-T',T')$, write $$\label{eqCt0}
C_t(0) = \begin{pmatrix}
tc_1(t) & {\lambda}^{-1}tc_2(t)\\
tc_3(t) & -tc_1(t)
\end{pmatrix} = \sum_{j=1}^{4}t\widehat{c}_j(t)X_{t,j}.$$ The functions $c_j, \widehat{c}_j$ are ${\mathcal{C}}^1$ with respect to $t\in(-T',T')$ and take values in $\Lambda{\mathbb{C}}_{{R}}$. Moreover, the functions $c_j(t)$ holomorphically extend to ${\mathbb{D}}_\rho$.
\[propC3\] There exists a continuous function $\widetilde{c}_3: (-T',T')\longrightarrow\Lambda{\mathbb{C}}_{{R}}$ such that for all ${\lambda}\in{\mathbb{S}}^1$ and $t\in(-T',T')$, $$\widehat{c}_3(t) = t({\lambda}-e^q)({\lambda}-e^{-q})\widetilde{c}_3(t).$$
It suffices to show that $\widehat{c}_3(0)=0$ and that the holomorphic extension of $\widehat{c}_3(t)$ satisfies $\widehat{c}_3(t,e^{\pm q}) = 0$ for all $t$.
To show that $\widehat{c}_3(0)=0$, recall that the monodromy problem is solved for all $t$ and note that ${\mathcal{M}}(\Phi_0) = -{\mathrm{I}}_2$, which implies that, as a function of $t$, the derivative of ${\mathcal{M}}(\Phi_t)$ at $t=0$ is in ${\Lambda {\mathfrak{su}}(2)_\rho}$. On the other hand, Proposition 4 in Appendix B of [@raujouan] ensures that $$\frac{d{\mathcal{M}}(\Phi_t)}{dt}\mid_{t=0} = \left( \int_{\gamma} \Phi_0 \frac{d\xi_t}{dt}\mid_{t=0} \Phi_0^{-1} \right)\times {\mathcal{M}}(\Phi_0)$$ where $\gamma$ is a generator of $\pi_1(D_\epsilon^*,z_0)$. Expanding the right-hand side gives $$-\int_{\gamma} z^{A_0}\frac{d A_t}{dt}\mid_{t=0} z^{-A_0}z^{-1}dz - \int_{\gamma} z^{A_0}\frac{d C_t(z)}{dt}\mid_{t=0} z^{-A_0}dz \in{\Lambda {\mathfrak{su}}(2)_\rho}.$$ Using Proposition 4 of [@raujouan] once again, note that the first term is the derivative of ${\mathcal{M}}(z^{A_t})$ at $t=0$, which is in ${\Lambda {\mathfrak{su}}(2)_\rho}$ because ${\mathcal{M}}(z^{A_t})\in{\Lambda {\mathrm{SU}}(2)_\rho}$ and ${\mathcal{M}}(z^{A_0}) = -{\mathrm{I}}_2$. Therefore, the second term is also in ${\Lambda {\mathfrak{su}}(2)_\rho}$. Diagonalising $A_0$ with Proposition \[propDiagA\] and using $H_0\in\Lambda{\mathrm{SU}}(2)_{{R}}$ gives $$2i\pi{\mathop\mathrm{Res}}_{z=0} \left( z^{D_0}H_0^{-1}\frac{d}{dt}C_t(z)\mid_{t=0}H_0z^{-D_0} \right)\in \Lambda{\mathfrak{su}}(2)_{{R}}.$$ But using Equation , $$\begin{aligned}
z^{D_0}H_0^{-1}\frac{d}{dt}C_t(z)\mid_{t=0}H_0z^{-D_0} &= z^{D_0}H_0^{-1}\left(\sum_{j=1}^{4}\widehat{c}_j(0)X_{0,j}\right)H_0z^{-D_0}\\
&= \sum_{j=1}^{4}\widehat{c}_j(0)z^{D_0}E_jz^{-D_0} \\
&= \begin{pmatrix}
\widehat{c}_1(0) & z\widehat{c}_2(0)\\
z^{-1}\widehat{c}_3(0) & \widehat{c}_4(0)
\end{pmatrix}.
\end{aligned}$$ Thus, $$2i\pi\begin{pmatrix}
0 & 0\\
\widehat{c}_3(0) & 0
\end{pmatrix}\in\Lambda{\mathfrak{su}}(2)_{{R}}$$ which gives $\widehat{c}_3(0)=0$.
Let ${\lambda}_0\in\{e^q,e^{-q}\}$ and $t\neq 0$. Using the Fröbenius method (Theorem 4.11 of [@teschl] and Lemma 11.4 of [@taylor]) at the resonant point ${\lambda}_0$ ensures the existence of $\epsilon'>0$, $B,M\in{\mathcal{M}}(2,{\mathbb{C}})$ and a holomorphic map $P: D_{\epsilon'}\longrightarrow{\mathcal{M}}(2,{\mathbb{C}})$ such that for all $z\in D_{\epsilon'}^*$, $$\left\{
\begin{array}{l}
\Phi_t(z,{\lambda}_0) = Mz^Bz^{A_t({\lambda}_0)}P(z),\\
B^2=0,\\
P(0) = {\mathrm{I}}_2,\\
\left[A_t({\lambda}_0),d_zP(0)\right] + d_zP(0) = C_t(0,{\lambda}_0) - B.
\end{array}
\right.$$ Compute the monodromy of $\Phi_t$ at ${\lambda}={\lambda}_0$: $${\mathcal{M}}(\Phi_t)({\lambda}_0) = M\exp(2i\pi B)z^B\exp(2i\pi A_t({\lambda}_0))z^{-B}M^{-1} = -M\exp(2i\pi B)M^{-1}.$$ Since the monodromy problem is solved, this quantity equals $-{\mathrm{I}}_2$. Use $B^2=0$ to show that $B=0$ and thus, $$\left\{
\begin{array}{l}
\Phi_t(z,{\lambda}_0) = Mz^{A_t({\lambda}_0)}P(z),\\
P(0) = {\mathrm{I}}_2,\\
\left[A_t({\lambda}_0),d_zP(0)\right] + d_zP(0) = C_t(0,{\lambda}_0).
\end{array}
\right.$$ Diagonalise $A_t({\lambda}_0)$ with Proposition \[propDiagA\] and write $d_zP(0)=\sum p_jX_{t,j}$ to get for all $j\in[1,4]$ $$p_j\left([D_t({\lambda}_0),E_j] + E_j\right) = t\widehat{c}_j(t,{\lambda}_0)E_j.$$ In particular, using $\mu_t({\lambda}_0)=1/2$, $$t\widehat{c}_3(t,{\lambda}_0) = p_3\left([D_t({\lambda}_0),E_3] + E_3\right)=0.$$
Note that with the help of Equations and or , and one can compute the series expansion of $\widehat{c}_3(0)$: $$\widehat{c}_3(0) = \frac{-{\lambda}^{-1}}{2}c_2(0,0) + {\mathcal{O}}({\lambda}^0) \quad \text{if}\quad r<s,$$ $$\widehat{c}_3(0) = \frac{1}{2} c_3(0,0) + {\mathcal{O}}({\lambda}) \quad \text{if} \quad r>s.$$ Hence, $$\label{eqsc2+rc3=0}
sc_2(t,0)+rc_3(t,0) {\underset{t\to 0}{\longrightarrow}} 0.$$
The following map will be useful in the next section: $$\label{eqP1}
t\in(-T',T')\longmapsto P^1(t) := t\widehat{c}_1(t)X_{t,1} + \frac{t\widehat{c}_2(t)}{1+2\mu_t}X_{t,2} + \frac{t\widehat{c}_3(t)}{1-2\mu_t}X_{t,3} + t\widehat{c}_4(t)X_{t,4}.$$ For all $t$, Proposition \[propC3\] ensures that the map $P^1(t,{\lambda})$ holomorphically extends to ${\mathbb{A}}_R$. Taking a smaller value of $R$ if necessary, $P^1(t)\in \Lambda{\mathcal{M}}(2,{\mathbb{C}})_R$ for all $t$. Moreover, $${\mathop \mathrm{tr}}P^1(t) = t\widehat c_1 (t) + t\widehat c_4(t) = {\mathop \mathrm{tr}}C_t(0) = 0.$$ Thus, $P^1\in {\mathcal{C}}^1((-T',T'),{{\Lambda {\mathfrak{sl}}(2,{\mathbb{C}})}_R})$.
The $z^AP$ form of $\Phi_t$ {#sectionzAP}
---------------------------
The map $P^1$ defined above allows us to use the Fröbenius method in a loop group framework and in the non-resonant case, that is, for all $t$ (Proposition \[propzAP1\]). The techniques used in [@raujouan] will then apply in order to gauge the $Mz^AP$ form and gain an order on $z$ (Proposition \[propzAP2\]).
\[propzAP1\] There exists $\epsilon'>0$ such that for all $t\in(-T',T')$ there exist $M_t\in\Lambda{\mathrm{SL}}(2,{\mathbb{C}})_{R}$ and a holomorphic map $P_t: D_{\epsilon'}\longrightarrow {\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_R}$ such that for all $z\in D_{\epsilon'}^*$, $$\Phi_t(z) = M_tz^{A_t}P_t(z).$$ Moreover, $M_t$ is ${\mathcal{C}}^1$ with respect to $t$, $M_0={\mathrm{I}}_2$, $P_t(z)$ is ${\mathcal{C}}^1$ with respect to $(t,z)$, $P_0(z) = {\mathrm{I}}_2$ for all $z$ and $P_t(0)={\mathrm{I}}_2$ for all $t$.
For all $k\in {\mathbb{N}}^*$ and $t\in(-T',T')$, define the linear map $${
\begin{array}{ccccc}
{\mathcal{L}}_{t,k} & : & \Lambda{\mathcal{M}}(2,{\mathbb{C}})_{{\rho}} & \longrightarrow & \Lambda{\mathcal{M}}(2,{\mathbb{C}})_{{\rho}} \\
& & X & \longmapsto & [A_t,X] + kX. \\
\end{array}
}$$ Use the bases ${\mathcal{B}}_t$ and restrict ${\mathcal{L}}_{t,k}$ to $\Lambda{\mathcal{M}}(2,{\mathbb{C}})_R$ to get $$\label{eqDiagLlk}
{\mathrm{Mat}}_{{\mathcal{B}}_t}{\mathcal{L}}_{t,k} = \begin{pmatrix}
k & 0 &0&0\\
0&k+2\mu_t&0&0\\
0&0&k-2\mu_t&0\\
0&0&0&k
\end{pmatrix}.$$ Note that $$\det {\mathcal{L}}_{t,k} = k^2(k^2-4\mu_t^2).$$ Thus, for all $k\geq 2$, $\det {\mathcal{L}}_{t,k}$ is an invertible element of $\Lambda{\mathbb{C}}_{{R}}$, which implies that ${\mathcal{L}}_{t,k}$ is invertible for all $t\in(-T',T')$ and $k\geq 2$.
Write $$C_t(z) = \sum_{k\in{\mathbb{N}}}C_{t,k}z^k.$$ With $P^0 := {\mathrm{I}}_2$ and $P^1$ as in Equation , define for all $k\geq 1$ $$P^{k+1}(t) := {\mathcal{L}}_{t,k+1}^{-1}\left( \sum_{i+j=k}P^i(t)C_{t,j} \right)$$ so that the sequence $(P^k)_{k\in{\mathbb{N}}}\subset{\mathcal{C}}^1\left( (-T',T'),\Lambda{\mathfrak{sl}}(2,{\mathbb{C}})_{{\rho}} \right)$ satisfies the following recursive system for all $t\in(-T',T')$: $$\left\{
\begin{array}{l}
P^0(t)={\mathrm{I}}_2,\\
{\mathcal{L}}_{t,k+1}(P^{k+1}(t)) = \sum_{i+j=k}P^i(t)C_{t,j}.
\end{array}
\right.$$ With $P_t(z):= \sum P^k(t)z^k$, the Fröbenius method ensures that there exists $\epsilon'>0$ such that for all $z\in D_{\epsilon'}^*$ and $t\in(-T',T')$, $$\Phi_t(z,{\lambda})= M_tz^{A_t}P_t(z)$$ where $M_t\in\Lambda{\mathrm{SL}}(2,{\mathbb{C}})_{{R}}$ is ${\mathcal{C}}^1$ with respect to $t$, $P_t(z)$ is ${\mathcal{C}}^1$ with respect to $t$ and $z$, and for all $t$, $P_t: D_{\epsilon'}\longrightarrow\Lambda{\mathrm{SL}}(2,{\mathbb{C}})_{{R}}$ is holomorphic and satisfies $P_t(0)={\mathrm{I}}_2$. Moreover, note that $P_0(z)={\mathrm{I}}_2$ for all $z\in D_{\epsilon'}$ and thus $M_0={\mathrm{I}}_2$.
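The recursion above is straightforward to implement once ${\lambda}$ is fixed at a non-resonant value and the loop algebra is replaced by plain $2\times 2$ matrices. The sketch below uses random illustrative data for $A$ and for the Taylor coefficients of $C(z)$ (so it does not reproduce the resonant case handled by $P^1$); it solves ${\mathcal{L}}_{k+1}(P^{k+1})=\sum_{i+j=k}P^iC_j$ and checks that the truncated series satisfies $P'=-z^{-1}[A,P]+PC$, which is the equation encoding $\Phi=Mz^AP$ and $d\Phi=\Phi\xi$.

```python
import numpy as np

# Minimal sketch of the Frobenius recursion of Proposition [propzAP1], with plain 2x2
# matrices (i.e. at a fixed, non-resonant value of lambda).  A and the coefficients C_k
# are random illustrative data, not taken from the text.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A -= np.trace(A) / 2 * np.eye(2)                 # make A trace-free
N = 8
C = [rng.standard_normal((2, 2)) * 0.1 for _ in range(N)]

def L_inv(k, Y):
    """Solve [A, X] + k X = Y for X (k must avoid the resonances k = +-2*mu)."""
    I = np.eye(2)
    op = np.kron(I, A) - np.kron(A.T, I) + k * np.eye(4)   # vectorised X -> [A, X] + k X
    return np.linalg.solve(op, Y.reshape(-1, order="F")).reshape(2, 2, order="F")

P = [np.eye(2)]
for k in range(N):
    P.append(L_inv(k + 1, sum(P[i] @ C[k - i] for i in range(k + 1))))

# check that P(z) solves P' = -[A, P]/z + P C(z) up to the truncation order
z = 0.05
Pz = sum(P[k] * z**k for k in range(N + 1))
dPz = sum(k * P[k] * z**(k - 1) for k in range(1, N + 1))
Cz = sum(C[k] * z**k for k in range(N))
residual = dPz + (A @ Pz - Pz @ A) / z - Pz @ Cz
print(np.linalg.norm(residual))                  # small, of order z**N
```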
\[propzAP2\] There exists ${\epsilon'}>0$ such that for all $t\in(-T',T')$ there exist an admissible gauge $G_t: D_{{\epsilon}}\longrightarrow {\Lambda_+{\mathrm{SL}}(2,{\mathbb{C}})_R}$, a change of coordinates $h_t: D_{{\epsilon'}}\longrightarrow D_{\epsilon}$, a holomorphic map $\widetilde{P}_t: D_{{\epsilon'}}\longrightarrow {\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_R}$ and $\widetilde{M}_t\in{\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_R}$ such that for all $z\in D_{{\epsilon'}}^*$, $$h_t^*\left( \Phi_t G_t \right)(z) = \widetilde{M}_tz^{A_t}\widetilde{P}_t(z).$$ Moreover, $\widetilde{M}_t$ is ${\mathcal{C}}^1$ with respect to $t$, $\widetilde{M}_0={\mathrm{I}}_2$ and there exists a uniform $C>0$ such that for all $t$ and $z$, $${\left\Vert\widetilde{P}_t(z)-{\mathrm{I}}_2\right\Vert}_{\rho} \leq C|t||z|^2.$$
The proof goes as in Section 3.3 of [@raujouan]. Expand $P^1(t)$ given by Equation as a series to get (this is a tedious but simple computation): $$P^1(t,{\lambda}) = \begin{pmatrix}
0 & \frac{stc_2(t,0)+rtc_3(t,0)}{2s}{\lambda}^{-1} \\
\frac{stc_2(t,0)+rtc_3(t,0)}{2r} & 0
\end{pmatrix} + \begin{pmatrix}
{\mathcal{O}}({\lambda}^0) & {\mathcal{O}}({\lambda}^0)\\
{\mathcal{O}}({\lambda}) & {\mathcal{O}}({\lambda}^0)
\end{pmatrix}.$$ Define $$p_t:= {2\sinh q(sc_2(t,0)+rc_3(t,0))}$$ so that $$g_t := p_tA_t- P^1(t)\in\Lambda_+{\mathfrak{sl}}(2,{\mathbb{C}})_{R}$$ and recall Equation together with $P_0 = {\mathrm{I}}_2$ to show that $g_0=0$. Thus, $$G_t:=\exp\left( g_tz \right)\in\Lambda_+{\mathrm{SL}}(2,{\mathbb{C}})_{R}$$ is an admissible gauge. Let $\epsilon'<|p_t|^{-1}$ for all $t\in(-T',T')$. Define $${
\begin{array}{ccccc}
h_t & : & D_{\epsilon'} & \longrightarrow & D_\epsilon \\
& & z & \longmapsto & \frac{z}{1+p_tz}. \\
\end{array}
}$$ Then, $$\widetilde{\xi}_t:=h_t^*\left(\xi_t\cdot G_t\right) = A_t z^{-1}dz + \widetilde{C}_t(z)dz$$ is a perturbed Delaunay potential as in Definition \[defPerturbedDelaunayPotential\] such that $\widetilde{C}_t(0)=0$ for all $t\in (-T',T')$. The holomorphic frame $$\widetilde{\Phi}_t := h_t^*\left(\Phi_tG_t\right)$$ satisfies $d\widetilde{\Phi}_t = \widetilde{\Phi}_t\widetilde{\xi}_t$. With $\widetilde{C}_t(0)=0$, one can apply the Fröbenius method on $\widetilde{\xi}_t$ to get $$\widetilde{\Phi}_t(z) = \widetilde{M}_tz^{A_t}\widetilde{P}_t(z)$$ with $\widetilde{M}_0={\mathrm{I}}_2$ and $${\left\Vert\widetilde{P}_t(z) - {\mathrm{I}}_2\right\Vert}_{{R}} \leq C|t||z|^2.$$
#### Conclusion.
The new frame $\widetilde{\Phi}_t$ is associated to a perturbed Delaunay potential $(\widetilde{\xi}_t)_{t\in(-T',T')}$, defined for $z\in D_{\epsilon'}^*$, with values in ${{\Lambda {\mathfrak{sl}}(2,{\mathbb{C}})}_R}$ and of the form $$\widetilde{\xi}_t(z) = A_tz^{-1}dz + \widehat{C}_t(z)zdz$$ Note that $\widetilde{C}_t(z)\in{\mathfrak{sl}}(2,{\mathbb{C}})_{R}$ is now ${\mathcal{C}}^1$ with respect to $(t,z)$. The monodromy problem is solved for $\widetilde \Phi_t$ and for any $\widetilde z_0$ in the universal cover $\widetilde D_{\epsilon'}^*$, $\widetilde \Phi_0(\widetilde z_0) = \widetilde z_0^{A_0}$. Moreover, writing $\widetilde{f}_t:={ \mathrm{Sym}}_q ({\mathop \mathrm{Uni}}\widetilde{\Phi}_t)$ and ${f}_t:={ \mathrm{Sym}}_q ({\mathop \mathrm{Uni}}{\Phi}_t)$, then $\widetilde{f}_t=h_t^*f_t$ with $h_0(z)=z$. Hence in order to prove Theorem \[theoremPerturbedDelaunay\] it suffices to prove the following proposition.
\[propThmSimplifie\] Let $\rho>e^q$, $0<T<T_{\max}$, $\epsilon>0$ and $\xi_t$ be a perturbed Delaunay potential as in Definition \[defPerturbedDelaunayPotential\]. Let $\Phi_t$ be a holomorphic frame associated to $\xi_t$ for all $t$ via the DPW method. Suppose that the monodromy problem is solved for all $t\in (-T,T)$ and that $$\Phi_t(z) = M_tz^{A_t}P_t(z)$$ where $M_t\in{\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$ is ${\mathcal{C}}^1$ with respect to $t$, satisfies $M_0={\mathrm{I}}_2$, and $P_t:D_\epsilon\longrightarrow{\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$ is a holomorphic map such that for all $t$ and $z$, $${\left\VertP_t(z) - {\mathrm{I}}_2\right\Vert}_\rho \leq C|t||z|^2$$ where $C>0$ is a uniform constant. Let $f_t={ \mathrm{Sym}}_q\left({\mathop \mathrm{Uni}}\Phi_t\right)$. Then the three points of Theorem \[theoremPerturbedDelaunay\] hold for $f_t$.
Thus, we now reset the values of $\rho$, $T$ and $\epsilon$ and suppose that we are given a perturbed Delaunay potential $\xi_t$ and a holomorphic frame $\Phi_t$ associated to it and satisfying the hypotheses of Proposition \[propThmSimplifie\].
Dressed Delaunay frames {#sectionDressedDelaunayFrames}
-----------------------
In this section we study dressed Delaunay frames arising from the DPW data $(\widetilde{{\mathbb{C}}}^*,\xi_t^{\mathcal{D}},1,M_t)$, where $\widetilde{{\mathbb{C}}}^*$ is the universal cover of ${\mathbb{C}}^*$ and $$\xi_t^{\mathcal{D}}(z) := A_tz^{-1}dz$$ with $A_t$ as in satisfying , and $M_t$ as in Proposition \[propzAP2\]. The induced holomorphic frame is $$\Phi_t^{\mathcal{D}}(z) = M_tz^{A_t}.$$ Note that the fact that the monodromy problem is solved for $\Phi_t$ implies that it is solved for $\Phi_t^{\mathcal{D}}$ because $P_t$ is holomorphic on $D_{\epsilon}$. Let $\widetilde{D}_1^*$ be the universal cover of $D_1^*$ and let $$F_t^{\mathcal{D}}:= {\mathop \mathrm{Uni}}\Phi_t^{\mathcal{D}}, \qquad B_t^{\mathcal{D}}:= {\mathop \mathrm{Pos}}\Phi_t^{\mathcal{D}}, \qquad f_t^{\mathcal{D}}:= { \mathrm{Sym}}_q F_t^{\mathcal{D}}.$$ In this section, our goal is to prove the following proposition.
\[propConvUnitFrame\] The immersion $f_t^{\mathcal{D}}$ is a CMC $H$ Delaunay immersion of weight $2\pi t$ for $|t|$ small enough. Moreover, for all $\delta>0$ and $e^q<R<\rho$ there exist $C,T'>0$ such that $${\left\VertF_t^{\mathcal{D}}(z)\right\Vert}_{{R}} \leq C|z|^{-\delta}$$ for all $(t,z)\in(-T',T')\times \widetilde{D}_1^*$.
#### Delaunay immersion.
We will need the following lemma, inspired by [@schmitt].
\[lemmaUniComPointwise\] Let $M\in{\mathrm{SL}}(2,{\mathbb{C}})$ and ${\mathcal{A}}\in{\mathfrak{su}}(2)$ such that $$\label{eqLemmaPolar}
M\exp({\mathcal{A}})M^{-1}\in{\mathrm{SU}}(2).$$ Then there exist $U\in{\mathrm{SU}}(2)$ and $K\in{\mathrm{SL}}(2,{\mathbb{C}})$ such that $M=UK$ and $\left[ K,{\mathcal{A}}\right]=0$.
Let $$K = \sqrt{{M}^*M}, \qquad U=MK^{-1}$$ be a polar decomposition of $M$. The matrix $K$ is then hermitian and positive-definite because $\det M \neq 0$. Moreover, $U\in{\mathrm{SU}}(2)$ and Equation is then equivalent to $$\label{eqLemmaPolar2}
K\exp\left( {\mathcal{A}}\right)K^{-1} \in{\mathrm{SU}}(2).$$ Let us diagonalise $K = QDQ^{-1}$ where $Q\in{\mathrm{SU}}(2)$ and $$D=\begin{pmatrix}
x&0\\0&x^{-1}
\end{pmatrix}, \quad x>0.$$ Hence Equation now reads $$\label{eqLemmaPolar3}
D\exp\left( Q^{-1}{\mathcal{A}}Q \right)D^{-1}\in{\mathrm{SU}}(2).$$ But $Q\in{\mathrm{SU}}(2)$ and ${\mathcal{A}}\in{\mathfrak{su}}(2)$, so $Q^{-1}{\mathcal{A}}Q \in{\mathfrak{su}}(2)$ and $\exp\left( Q^{-1}{\mathcal{A}}Q \right)\in{\mathrm{SU}}(2)$. Let us write $$\exp\left( Q^{-1}{\mathcal{A}}Q \right) = \begin{pmatrix}
p & -{\overline{q}} \\ q & {\overline{p}}
\end{pmatrix}, \quad |p|^2+|q|^2 = 1$$ so that Equation is now equivalent to $$x=1 \text{ or } q=0.$$ If $x=1$ then $K={\mathrm{I}}_2$ and $\left[ K,{\mathcal{A}}\right]=0$. If $q=0$ then $Q^{-1}{\mathcal{A}}Q$ is diagonal and ${\mathcal{A}}$ is $Q$-diagonalisable. Thus $K$ and ${\mathcal{A}}$ are simultaneously diagonalisable and $\left[ K,{\mathcal{A}}\right]=0$.
\[corUnitCom\] There exists $T'>0$ such that for all $t\in (-T',T')$, $$\Phi_t^{\mathcal{D}}(z) = U_tz^{A_t}K_t$$ where $U_t\in{\Lambda {\mathrm{SU}}(2)}_{R}$ and $K_t\in {\Lambda {\mathrm{SL}}(2,{\mathbb{C}})}_{R}$ for any $e^q<R<\rho$.
Write ${\mathcal{M}}\left( \Phi_t^{\mathcal{D}}\right) = M_t\exp\left( {\mathcal{A}}_t \right)M_t^{-1}$ with ${\mathcal{A}}_t := 2i\pi A_t\in {\Lambda {\mathfrak{su}}(2)_\rho}$ continuous on $(-T,T)$. The map $$M\longmapsto \sqrt{M^*M} = \exp\left(\frac{1}{2}\log M^*M\right)$$ is a diffeomorphism from a neighbourhood of ${\mathrm{I}}_2\in{\Lambda {\mathrm{SL}}(2,{\mathbb{C}})}_{{\rho}}$ to another neighbourhood of ${\mathrm{I}}_2$. Using the convergence of $M_t$ towards ${\mathrm{I}}_2$ as $t$ tends to $0$, this allows us to use Lemma \[lemmaUniComPointwise\] pointwise on ${\mathbb{A}}_{{\rho}}$ and thus construct $K_t:=\sqrt{M_t^*M_t}\in{\Lambda {\mathrm{SL}}(2,{\mathbb{C}})}_{R}$ for all $t\in(-T',T')$ and any $e^q<R<\rho$. Let $U_t := M_tK_t^{-1}\in{\Lambda {\mathrm{SL}}(2,{\mathbb{C}})}_{R}$ and compute $U_tU_t^*$ to show that $U_t\in{\Lambda {\mathrm{SU}}(2)}_{R}$. Use Lemma \[lemmaUniComPointwise\] to show that $\left[ K_t({\lambda}), {\mathcal{A}}_t({\lambda}) \right]=0$ for all ${\lambda}\in{\mathbb{S}}^1$. Hence $\left[ K_t,{\mathcal{A}}_t \right]=0$ and thus $\Phi_t^{\mathcal{D}}= U_tz^{A_t}K_t$.
Returning to the proof of Proposition \[propConvUnitFrame\], let $\theta\in{\mathbb{R}}$, $z\in{\mathbb{C}}^*$ and $e^q<R<\rho$. Apply Corollary \[corUnitCom\] to get $$\Phi_t^{\mathcal{D}}(e^{i\theta}z) = U_t\exp(i\theta A_t)U_t^{-1}\Phi_t^{\mathcal{D}}(z)$$ and note that $U_t\in{\Lambda {\mathrm{SU}}(2)_R}$, $iA_t\in{\Lambda {\mathfrak{su}}(2)_R}$ imply $$\label{eqRt}
R_t(\theta) := U_t\exp(i\theta A_t)U_t^{-1} \in {\Lambda {\mathrm{SU}}(2)_R}.$$ Hence $$F_t^{\mathcal{D}}(e^{i\theta}z) = R_t(\theta)F_t^{\mathcal{D}}(z)$$ and $$f_t^{\mathcal{D}}(e^{i\theta}z) =R_t(\theta,e^{-q})\cdot f_t^{\mathcal{D}}(z).$$ Use Section \[sectionDelaunaySurfaces\] and note that $U_t$ does not depend on $\theta$ to see that $f_t^{\mathcal{D}}$ is a CMC immersion of revolution and hence a Delaunay immersion. Its weight can be read off its Hopf differential, which in turn can be read off the potential $\xi_t^{\mathcal{D}}$ (see Equation ). Thus, $f_t^{\mathcal{D}}$ is a CMC $H$ Delaunay immersion of weight $2\pi t$, which proves the first point of Proposition \[propConvUnitFrame\].
#### Restricting to a meridian.
Note that for all $t\in(-T',T')$ and $z\in{\mathbb{C}}^*$, $$\label{eqFtDmeridien}
{\left\VertF_t^{\mathcal{D}}(z)\right\Vert}_{{R}} \leq C {\left\VertF_t^{\mathcal{D}}(|z|)\right\Vert}_{{R}}$$ where $$C = \sup\left\{ {\left\VertR_t(\theta)\right\Vert}_{{R}} \mid (t,\theta)\in (-T',T')\times [0,2\pi] \right\}$$ depends only on $R$. We thus restrict $F_t^{\mathcal{D}}$ to ${\mathbb{R}}_+^*$ and set $\widehat{F}_t^{\mathcal{D}}(x) := F_t^{\mathcal{D}}(x)$ for $x=|z|>0$.
#### Grönwall over a period.
Recalling the Lax Pair associated to $F_t^{\mathcal{D}}$ (see Appendix C in [@raujouan]), the restricted map $\widehat{F}_t^{\mathcal{D}}$ satisfies $d\widehat{F}_t^{\mathcal{D}}= \widehat{F}_t^{\mathcal{D}}\widehat{W}_tdx$ with $$\widehat{W}_t(x,{\lambda}) = \frac{1}{x}\begin{pmatrix}
0 & {\lambda}^{-1}rb^2(x)-sb^{-2}(x)\\
sb^{-2}(x) - {\lambda}r b^2(x) & 0
\end{pmatrix}$$ where $b(x)$ is the upper-left entry of $B_t^{\mathcal{D}}(x)\mid_{{\lambda}=0}$. Recall Section \[sectionDPWMethod\] and define $$g_t(x) = 2\sinh q |r|b(x)^2x^{-1}$$ so that the metric of $f_t^{\mathcal{D}}$ reads $g_t(x)|dz|$. Let $\widetilde{f}_t^{\mathcal{D}}:=\exp^*f_t^{\mathcal{D}}$. Then the metric of $\widetilde{f}_t^{\mathcal{D}}$ satisfies $$d\widetilde{s}^2 = 4r^2(\sinh q)^2b^4(e^u)(du^2+d\theta^2)$$ at the point $u+i\theta = \log z$. Using Proposition \[propJleliLopez\] of Appendix \[appendixJleliLopez\] gives $$\int_{0}^{ S_t}2|r|b^2(e^u)du = \pi \quad \text{and} \quad \int_{0}^{S_t}\frac{du}{2\sinh q|r|b^2(e^u)} = \frac{\pi}{|t|}$$ where $S_t>0$ is the period of the profile curve of $\widetilde{f}_t$. Thus, $$\int_{1}^{e^{S_t}}|rb^2(x)x^{-1}|dx = \frac{\pi}{2} = \int_{1}^{e^{S_t}}|sb^{-2}(x)x^{-1}|dx.$$ Using $${\left\Vert\widehat{W}_t(x)\right\Vert}_{{R}} = \sqrt{2}{\left|sb^{-2}(x)x^{-1}\right|} + 2R{\left|rb^2(x)x^{-1}\right|},$$ we deduce $$\label{eqIntegraleW}
\int_{1}^{e^{S_t}}{\left\Vert\widehat{W}_t(x)\right\Vert}_{R} dx = \frac{\pi}{2}(2R+\sqrt{2}) <C$$ where $C>0$ is a constant depending only on $\rho$ and $T$. Applying Grönwall’s lemma to the inequality $${\left\Vert\widehat{F}_t^{\mathcal{D}}(x)\right\Vert}_{{R}} \leq {\left\Vert\widehat{F}_t^{\mathcal{D}}(1)\right\Vert}_{{R}} + \int_{1}^{x}{\left\Vert\widehat{F}_t^{\mathcal{D}}(u)\right\Vert}_{{R}}{\left\Vert\widehat{W}_t(u)\right\Vert}_{{R}} |du|$$ gives $${\left\Vert\widehat{F}_t^{\mathcal{D}}(x)\right\Vert}_{{R}} \leq {\left\Vert\widehat{F}_t^{\mathcal{D}}(1)\right\Vert}_{{R}}\exp\left( \int_{1}^{x}{\left\Vert\widehat{W}_t(u)\right\Vert}_{{R}} |du| \right).$$ Use Equation together with the fact that $\widetilde{F}_0^{\mathcal{D}}(0)=F_0^{\mathcal{D}}(1)={\mathrm{I}}_2$ and the continuity of Iwasawa decomposition to get $C,T>0$ such that for all $t\in(-T',T')$ and $x\in [1,e^{S_t}]$ $$\label{eqFtDPeriode}
{\left\Vert\widehat{F}_t^{\mathcal{D}}(x)\right\Vert}_{{R}} \leq C.$$
#### Control over the periodicity matrix.
Let $t\in(-T',T')$ and $\Gamma_t := \widehat{F}_t^{\mathcal{D}}(xe^{S_t})\widehat{F}_t^{\mathcal{D}}(x)^{-1}\in{\Lambda {\mathrm{SU}}(2)}_{{R}}$ for all $x>0$. The periodicity matrix $\Gamma_t$ does not depend on $x$ because the form $\widehat{W}_t(x)dx$ is invariant under $x\mapsto xe^{S_t}$ (by periodicity of the metric in the $\log$ coordinate). Moreover, $${\left\Vert\Gamma_t\right\Vert}_{{R}} = {\left\Vert\widehat{F}_t^{\mathcal{D}}(e^{S_t})\widehat{F}_t^{\mathcal{D}}(1)^{-1}\right\Vert}_{{R}} \leq {\left\Vert\widehat{F}_t^{\mathcal{D}}(e^{S_t})\right\Vert}_{{R}} {\left\Vert\widehat{F}_t^{\mathcal{D}}(1)\right\Vert}_{{R}},$$ and using Equation , $$\label{eqPeriode}
{\left\Vert\Gamma_t\right\Vert}_{{R}} \leq C.$$
#### Conclusion.
Let $x<1$. Then there exist $k\in{\mathbb{N}}^*$ and $\zeta \in \left[1,e^{S_t}\right)$ such that $x=\zeta e^{-kS_t}$. Thus, using Equations and , $${\left\Vert\widehat{F}_t^{\mathcal{D}}(x)\right\Vert}_{{R}} \leq {\left\Vert\Gamma_t^{-k}\right\Vert}_{{R}} {\left\Vert\widehat{F}_t^{\mathcal{D}}(\zeta)\right\Vert}_{{R}} \leq C^{k+1}.$$ Writing $$k = \frac{\log \zeta}{S_t}-\frac{\log x}{S_t},$$ one gets $$C^k = \exp\left(\frac{\log \zeta}{S_t}\log C\right)\exp\left( \frac{-\log C}{S_t}\log x \right) \leq Cx^{-\delta_t}$$ where $\delta_t = \frac{\log C}{S_t}$ does not depend on $x$ and tends to $0$ as $t$ tends to $0$ (because $S_t{\underset{t\to 0}{\longrightarrow}}+\infty$). Returning back to $F_t^{\mathcal{D}}$, we showed that for all $\delta>0$ there exist $T'>0$ and $C>0$ such that for all $t\in(-T',T')$ and $0<|z|<1$, $${\left\VertF_t^{\mathcal{D}}(z)\right\Vert}_{{R}} \leq C|z|^{-\delta}$$ and Proposition \[propConvUnitFrame\] is proved.
Convergence of the immersions {#sectionConvergenceImmersions}
-----------------------------
In this section, we prove the first point of Theorem \[theoremPerturbedDelaunay\]: the convergence of the immersions $f_t$ towards the immersions $f_t^{\mathcal{D}}$. Our proof relies on the Iwasawa decomposition being a diffeomorphism in a neighbourhood of ${\mathrm{I}}_2$.
#### Behaviour of the Delaunay positive part.
Let $z\in\widetilde{D}_1^*$. The Delaunay positive part satisfies $${\left\VertB_t^{\mathcal{D}}(z)\right\Vert}_{{\rho}} = {\left\VertF_t^{\mathcal{D}}(z)^{-1}M_tz^{A_t}\right\Vert}_{{\rho}} \leq {\left\VertF_t^{\mathcal{D}}(z)\right\Vert}_{{\rho}} {\left\VertM_t\right\Vert}_{{\rho}} {\left\Vertz^{A_t}\right\Vert}_{{\rho}}.$$ Diagonalise $A_t=H_tD_tH_t^{-1}$ as in Proposition \[propDiagA\]. By continuity of $H_t$ and $M_t$, and according to Proposition \[propConvUnitFrame\], there exists $C,T'>0$ such that for all $t\in(-T',T')$ $${\left\VertB_t^{\mathcal{D}}(z)\right\Vert}_{{R}} \leq C|z|^{-\delta}{\left\Vertz^{-\mu_t}\right\Vert}_{{R}}.$$ Recall Equation and extend $\mu_t^2=-\det A_t$ to ${\mathbb{A}}_{\widetilde{R}}$ with $\rho>\widetilde{R}>R$. One can thus assume that for $t\in(-T',T')$ and ${\lambda}\in{\mathbb{A}}_{\widetilde{R}}$, $$\left|\mu_t({\lambda})\right|<\frac{1}{2}+\delta,$$ which implies that $$\left|z^{-\mu_t({\lambda})}\right| \leq |z|^{\frac{-1}{2}-\delta}.$$ This gives us the following estimate in the $\Lambda{\mathbb{C}}_{{R}}$ norm (using Cauchy formula and $\widetilde{R}>R$): $${\left\Vertz^{-\mu_t}\right\Vert}_{{R}}\leq C|z|^{\frac{-1}{2}-\delta}$$ and $$\label{eqBtD}
{\left\VertB_t^{\mathcal{D}}(z)\right\Vert}_{{R}} \leq C|z|^{\frac{-1}{2}-2\delta}.$$
#### Behaviour of a holomorphic frame.
Let $$\widetilde{\Phi}_t := B_t^{\mathcal{D}}\left(\Phi_t^{\mathcal{D}}\right)^{-1}\Phi_t \left(B_t^{\mathcal{D}}\right)^{-1}.$$ Recall Proposition \[propzAP2\] and use Equation to get for all $t\in(-T',T')$ and $z\in D_\epsilon^*$: $${\left\Vert\widetilde{\Phi}_t(z) - {\mathrm{I}}_2\right\Vert}_{{R}} = {\left\VertB_t^{\mathcal{D}}(z)\left( P_t(z) - {\mathrm{I}}_2 \right)B_t^{\mathcal{D}}(z)^{-1}\right\Vert}_{{R}} \leq C|t||z|^{1-4\delta}.$$
#### Behaviour of the perturbed immersion.
Note that $$\begin{aligned}
\widetilde{\Phi}_t &= B_t^{\mathcal{D}}\left(\Phi_t^{\mathcal{D}}\right)^{-1}\Phi_t \left(B_t^{\mathcal{D}}\right)^{-1} \\
&= \left(F_t^{\mathcal{D}}\right)^{-1}F_t\times B_t\left(B_t^{\mathcal{D}}\right)^{-1}\end{aligned}$$ and recall that Iwasawa decomposition is differentiable at the identity to get $${\left\VertF_t^{\mathcal{D}}(z)^{-1}F_t(z) - {\mathrm{I}}_2\right\Vert}_{{R}} = {\left\Vert{\mathop \mathrm{Uni}}\widetilde{\Phi}_t(z) - {\mathop \mathrm{Uni}}{\mathrm{I}}_2\right\Vert}_{{R}} \leq C|t||z|^{1-4\delta}.$$ The map $$\widetilde{F}_t(z) := F_t^{\mathcal{D}}(z,e^{-q})^{-1}F_t(z,e^{-q}) \in{\mathrm{SL}}(2,{\mathbb{C}})$$ satisfies $$\label{eqFtilde-I2}
\left|\widetilde{F}_t(z) - {\mathrm{I}}_2 \right| \leq {\left\VertF_t^{\mathcal{D}}(z)^{-1}F_t(z) - {\mathrm{I}}_2\right\Vert}_{{R}} \leq C|t||z|^{1-4\delta}.$$
\[lemmaTraceAA\*\] There exists a neighbourhood $V\subset {\mathrm{SL}}(2,{\mathbb{C}})$ of ${\mathrm{I}}_2$ and $C>0$ such that for all $A\in{\mathrm{SL}}(2,{\mathbb{C}})$, $$A\in V \implies \left\vert {\mathop \mathrm{tr}}\left(A{A}^*\right) - 2 \right\vert \leq C\left|A-{\mathrm{I}}_2\right|^2.$$
Consider $\exp: U\subset{\mathfrak{sl}}(2,{\mathbb{C}}) \longrightarrow V\subset {\mathrm{SL}}(2,{\mathbb{C}})$ as a local chart of ${\mathrm{SL}}(2,{\mathbb{C}})$ around ${\mathrm{I}}_2$. Let $A\in V$. Write $${
\begin{array}{ccccc}
f & : & {\mathrm{SL}}(2,{\mathbb{C}}) & \longrightarrow & {\mathbb{R}}\\
& & X & \longmapsto & {\mathop \mathrm{tr}}\left(X{X}^*\right) \\
\end{array}
}$$ and $a=\log A\in U$. A Taylor expansion of $f\circ\exp$ at $0$ gives $$\left\vert f(A) - f({\mathrm{I}}_2) - df({\mathrm{I}}_2)\cdot a \right\vert \leq C|a|^2.$$ Notice that for all $a\in{\mathfrak{sl}}(2,{\mathbb{C}})$, $$df({\mathrm{I}}_2)\cdot a = {\mathop \mathrm{tr}}(a+{a}^*) = 0,$$ and that $|a|\leq C\left|A-{\mathrm{I}}_2\right|$ for $A$ close enough to ${\mathrm{I}}_2$, to end the proof.
\[corollarydH3\] There exists a neighbourhood $V\subset {\mathrm{SL}}(2,{\mathbb{C}})$ of ${\mathrm{I}}_2$ and $C>0$ such that for all $F_1,F_2\in{\mathrm{SL}}(2,{\mathbb{C}})$, $$F_2^{-1}F_1 \in V \implies {d_{{\mathbb{H}}^3}\left(f_1,f_2\right)}< C\left| F_2^{-1}F_1 - {\mathrm{I}}_2 \right|,$$ where $f_i = F_i\cdot {\mathrm{I}}_2 \in{\mathbb{H}}^3$.
Just remark that $${d_{{\mathbb{H}}^3}\left(f_1,f_2\right)} = \cosh^{-1}\left( -{\left\langle f_1, f_2 \right\rangle} \right) = \cosh^{-1}\left( \frac{1}{2}{\mathop \mathrm{tr}}(f_2^{-1}f_1) \right)$$ and that $${\mathop \mathrm{tr}}\left( f_2^{-1}f_1 \right) = {\mathop \mathrm{tr}}\left( \left(F_2{F_2}^*\right)^{-1}F_1{F_1}^* \right) = {\mathop \mathrm{tr}}\widetilde{F}{\widetilde{F}}^*$$ where $\widetilde{F} = F_2^{-1}F_1$. Apply Lemma \[lemmaTraceAA\*\] and use $\cosh^{-1}(1+x)\sim \sqrt{2x} $ as $x \to 0$ to end the proof.
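Corollary \[corollarydH3\] is also convenient numerically: in the matrix model, $f=F\cdot{\mathrm{I}}_2=F{F}^*$ and the distance formula above can be evaluated directly. The short sketch below (an arbitrary frame $F_2$ and an arbitrary trace-free perturbation, purely illustrative) displays the linear control of ${d_{{\mathbb{H}}^3}}$ by $\left|F_2^{-1}F_1-{\mathrm{I}}_2\right|$ near the identity.

```python
import numpy as np

# Illustration of Corollary [corollarydH3] in the model f = F.I_2 = F F^*:
# d_H3(f1, f2) = arccosh(tr(Ft Ft^*)/2) with Ft = F2^{-1} F1, controlled linearly by
# |Ft - I_2| near the identity.  F2 and X are arbitrary illustrative data.
rng = np.random.default_rng(1)

def dist_H3(F1, F2):
    Ft = np.linalg.inv(F2) @ F1
    return np.arccosh(np.real(np.trace(Ft @ Ft.conj().T)) / 2)

F2 = np.array([[1.2, 0.3 + 0.1j], [0.0, 1 / 1.2]])     # a frame in SL(2, C)
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
X -= np.trace(X) / 2 * np.eye(2)                       # trace-free perturbation
for eps in [1e-1, 1e-2, 1e-3]:
    F1 = F2 @ (np.eye(2) + eps * X)                    # so that F2^{-1} F1 = I_2 + eps X
    d = dist_H3(F1, F2)
    print(f"|F2^-1 F1 - I_2| ~ {eps:.0e}   d_H3 = {d:.3e}   ratio d/eps = {d / eps:.3f}")
```

The ratio printed in the last column stays essentially constant as $\varepsilon$ decreases, which is the linear bound of the corollary.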
Without loss of generality, we can suppose, using Equation , that $C|t||z|^{1-4\delta}$ is small enough for $\widetilde{F}_t(z)$ to be in $V$ for all $t$ and $z$. Apply Corollary \[corollarydH3\] to end the proof of the first point in Theorem \[theoremPerturbedDelaunay\]: $$\label{eqDistftDft}
{d_{{\mathbb{H}}^3}\left(f_t(z),f_t^{\mathcal{D}}(z)\right)}\leq C|t||z|^{1-4\delta}.$$
Convergence of the normal maps {#sectionConvergenceNormales}
------------------------------
Before starting the proof of the second point of Theorem \[theoremPerturbedDelaunay\], we will need to compare the normal maps of our immersions. Let $N_t := {\mathrm{Nor}}_qF_t$ and $N_t^{\mathcal{D}}:= {\mathrm{Nor}}_qF_t^{\mathcal{D}}$ be the normal maps associated to the immersions $f_t$ and $f_t^{\mathcal{D}}$. This section is devoted to the proof of the following proposition.
\[propConvNormales\] For all $\delta>0$ there exist $\epsilon',T',C>0$ such that for all $t\in(-T',T')$ and $z\in D_\epsilon^*$, $${\left\Vert\Gamma_{f_t(z)}^{f_t^{\mathcal{D}}(z)}N_t(z) - N_t^{\mathcal{D}}(z)\right\Vert}_{T{\mathbb{H}}^3} \leq C|t||z|^{1-\delta}.$$
The following lemma measures the failure of the parallel transport of unit vectors to behave as in the Euclidean setting.
\[lemmaTriangIneq\] Let $a,b,c\in{\mathbb{H}}^3$, $v_a\in T_a{\mathbb{H}}^3$ and $v_b\in T_b{\mathbb{H}}^3$ both of unit norm. Let ${\mathcal{A}}$ be the hyperbolic area of the triangle $(a,b,c)$. Then $${\left\Vert\Gamma_a^bv_a - v_b\right\Vert}_{T_b{\mathbb{H}}^3} \leq {\mathcal{A}}+ {\left\Vert\Gamma_a^cv_a - \Gamma_b^cv_b\right\Vert}_{T_c{\mathbb{H}}^3}.$$
Just use the triangular inequality and Gauss-Bonnet formula in ${\mathbb{H}}^2$ to write: $$\begin{aligned}
{\left\Vert\Gamma_a^bv_a - v_b\right\Vert}_{T_b{\mathbb{H}}^3} &= {\left\Vert\Gamma_c^a \Gamma _b^c \Gamma_a^b v_a - \Gamma_c^a \Gamma_b^c v_b\right\Vert}_{T_a{\mathbb{H}}^3} \\
&\leq {\left\Vert\Gamma_c^a \Gamma _b^c \Gamma_a^b v_a - v_a\right\Vert}_{T_a{\mathbb{H}}^3} + {\left\Vertv_a - \Gamma_c^a \Gamma_b^c v_b\right\Vert}_{T_a{\mathbb{H}}^3} \\
&\leq {\mathcal{A}}+ {\left\Vert\Gamma_a^cv_a - \Gamma_b^cv_b\right\Vert}_{T_c{\mathbb{H}}^3}.
\end{aligned}$$
Lemma \[lemmaPolarSymNor\] below clarifies how the unitary frame encodes the immersion and the normal map.
\[lemmaPolarSymNor\] Let $f = { \mathrm{Sym}}_q F$ and $N={\mathrm{Nor}}_qF$. Denoting by $(S(z),Q(z))\in{\mathcal{H}}_2^{++}\cap {\mathrm{SL}}(2,{\mathbb{C}}) \times {\mathrm{SU}}(2)$ the polar decomposition of $F(z,e^{-q})$, $$f = S^2 \quad \text{and} \quad N = \Gamma_{{\mathrm{I}}_2}^{f}(Q\cdot \sigma_3).$$
The formula for $f$ is straightforward after noticing that $QQ^*={\mathrm{I}}_2$ and $S^*=S$. The formula for $N$ is a direct consequence of Proposition \[propParallTranspI2\].
#### Proof of Proposition \[propConvNormales\].
Let $\delta>0$, $t\in(-T',T')$ and $z\in D_{\epsilon'}^*$. Using Lemma \[lemmaTriangIneq\], $${\left\Vert\Gamma_{f_t(z)}^{f_t^{\mathcal{D}}(z)}N_t(z) - N_t^{\mathcal{D}}(z)\right\Vert} \leq {\mathcal{A}}+ {\left\Vert\Gamma_{f_t(z)}^{{\mathrm{I}}_2}N_t(z) - \Gamma_{f_t^{\mathcal{D}}(z)}^{{\mathrm{I}}_2}N_t^{\mathcal{D}}(z)\right\Vert}$$ where ${\mathcal{A}}$ is the hyperbolic area of the triangle $\left({\mathrm{I}}_2,f_t(z),f_t^{\mathcal{D}}(z)\right)$. Using Heron’s formula in ${\mathbb{H}}^2$ (see [@heron], p.66), Proposition \[propConvUnitFrame\] and the first point of Theorem \[theoremPerturbedDelaunay\], $${\mathcal{A}}\leq {d_{{\mathbb{H}}^3}\left(f_t(z),f_t^{\mathcal{D}}(z)\right)}\times {d_{{\mathbb{H}}^3}\left({\mathrm{I}}_2,f_t^{\mathcal{D}}(z)\right)} \leq C|t||z|^{1-\delta}.$$ Moreover, denoting by $Q_t$ and $Q_t^{\mathcal{D}}$ the unitary parts of $F_t(e^q)$ and $F_t^{\mathcal{D}}(e^q)$ in their polar decomposition and using Lemma \[lemmaPolarSymNor\] together with Corollary \[corQ2v-Q1v\] and Equation , $$\begin{aligned}
{\left\Vert\Gamma_{f_t(z)}^{{\mathrm{I}}_2}N_t(z) - \Gamma_{f_t^{\mathcal{D}}(z)}^{{\mathrm{I}}_2}N_t^{\mathcal{D}}(z)\right\Vert} &= {\left\VertQ_t(z)\cdot \sigma_3 - Q_t^{\mathcal{D}}(z)\cdot \sigma_3\right\Vert}_{T_{{\mathrm{I}}_2}{\mathbb{H}}^3}\\
&\leq C {\left\VertF_t^{\mathcal{D}}(z)\right\Vert}_{{R}}^2{\left\VertF_t^{\mathcal{D}}(z)^{-1}F_t(z)-{\mathrm{I}}_2\right\Vert}_{{R}}\\
&\leq C |t||z|^{1-3\delta}.\end{aligned}$$
Embeddedness {#sectionEmbeddedness}
------------
In this section, we prove the second point of Theorem \[theoremPerturbedDelaunay\]. We thus assume that $t>0$. We suppose that $C,\epsilon,T,\delta>0$ satisfy Proposition \[propConvNormales\] and the first point of Theorem \[theoremPerturbedDelaunay\].
\[lemmaDelaunayPointsEloignes\] Let $r_t>0$ be such that the tubular neighbourhood of $f_t^{\mathcal{D}}({\mathbb{C}}^*)$ with hyperbolic radius $r_t$ is embedded. There exist $T>0$ and $0<\epsilon'<\epsilon$ such that for all $0<t<T$, $x\in\partial D_\epsilon$ and $y\in D_{\epsilon'}^*$, $${d_{{\mathbb{H}}^3}\left(f_t^{\mathcal{D}}(x),f_t^{\mathcal{D}}(y)\right)}>4r_t.$$
The convergence of $f_t^{\mathcal{D}}({\mathbb{C}}^*)$ towards a chain of spheres implies that $r_t$ tends to $0$ as $t$ tends to $0$. If $f_t^{\mathcal{D}}$ does not degenerate into a point, then it converges towards the parametrisation of a sphere, and for all $0<\epsilon'<\epsilon$ there exists $T>0$ satisfying the inequality. If $f_t^{\mathcal{D}}$ does degenerate into a point, then a suitable blow-up makes it converge towards a catenoidal immersion on the punctured disk $D_{\epsilon}^*$ (see Section \[sectionBlowUp\]). This implies that for $\epsilon'>0$ small enough, there exists $T>0$ satisfying the inequality.
We can now prove embeddedness with the same method as in [@raujouan]. Let ${\mathcal{D}}_t:=f_t^{\mathcal{D}}\left({\mathbb{C}}^*\right)\subset {\mathbb{H}}^3$ be the image Delaunay surface of $f_t^{\mathcal{D}}$. We denote by $\eta_t^{\mathcal{D}}:{\mathcal{D}}_t\longrightarrow T{\mathbb{H}}^3$ the Gauss map of ${\mathcal{D}}_t$. We also write ${\mathcal{M}}_t=f_t(D_{\epsilon}^*)$ and $\eta_t:{\mathcal{M}}_t\longrightarrow T{\mathbb{H}}^3$. Let $r_t$ be the maximal value of $\alpha$ such that the following map is a diffeomorphism: $${
\begin{array}{ccccc}
{\mathcal{T}}& : & \left(-\alpha,\alpha\right)\times {\mathcal{D}}_t & \longrightarrow & {\mathop \mathrm{Tub}}_\alpha{\mathcal{D}}_t\subset{\mathbb{H}}^3 \\
& & (s,p) & \longmapsto & {\mathrm{geod}}\left(p,\eta_t^{\mathcal{D}}(p)\right)(s). \\
\end{array}
}$$ According to Lemma \[lemmaTubularRadius\], the maximal tubular radius satisfies $r_t\sim t$ as $t$ tends to $0$. Using the first point of Theorem \[theoremPerturbedDelaunay\], we thus assume that on $D_{\epsilon}^*$, $${d_{{\mathbb{H}}^3}\left(f_t(z),f_t^{\mathcal{D}}(z)\right)}\leq \alpha r_t$$ where $\alpha<1$ is given by Lemma \[lemmaCompNormalesDelaunay\] of Appendix \[appendixJleliLopez\].
Let $\pi_t$ be the projection from ${\mathop \mathrm{Tub}}_{r_t}{\mathcal{D}}_t$ to ${\mathcal{D}}_t$. Then the map $${
\begin{array}{ccccc}
\varphi_t & : & D_\epsilon^* & \longrightarrow & {\mathcal{D}}_t \\
& & z & \longmapsto & \pi_t\circ f_t(z) \\
\end{array}
}$$ is well-defined and satisfies $$\label{eqDistPhitFtD}
{d_{{\mathbb{H}}^3}\left(\varphi_t(z),f_t^{\mathcal{D}}(z)\right)}\leq 2\alpha r_t$$ because of the triangular inequality.
For $t>0$ small enough, $\varphi_t$ is a local diffeomorphism on $D_{\epsilon}^*$.
It suffices to show that for all $z\in D_{\epsilon}^*$, $$\label{eqCompNormalesDiffeo}
{\left\Vert\Gamma_{{\varphi}_t(z)}^{f_t(z)}\eta_t^{\mathcal{D}}(\varphi_t(z))-N_t(z)\right\Vert}<1.$$ Using Lemma \[lemmaTriangIneq\] (we drop the variable $z$ to ease the notation), $${\left\Vert\Gamma_{{\varphi}_t}^{f_t}\eta_t^{\mathcal{D}}(\varphi_t)-N_t\right\Vert}\leq {\mathcal{A}}+ {\left\Vert\Gamma_{{\varphi}_t}^{f_t^{\mathcal{D}}}\eta_t^{\mathcal{D}}(\varphi_t)-\Gamma_{f_t}^{f_t^{\mathcal{D}}}N_t\right\Vert}$$ where ${\mathcal{A}}$ is the area of the triangle $\left( f_t,f_t^{\mathcal{D}},\varphi_t \right)$. Recall the isoperimetric inequality in ${\mathbb{H}}^2$ (see [@teufel1991]): $${\mathcal{P}}^2\geq 4\pi{\mathcal{A}}+ {\mathcal{A}}^2$$ from which we deduce $$\label{eqIsoperimetric}
{\mathcal{A}}\leq {\mathcal{P}}^2 \leq \left( 2{d_{{\mathbb{H}}^3}\left(f_t,f_t^{\mathcal{D}}\right)} + 2{d_{{\mathbb{H}}^3}\left(\varphi_t,f_t^{\mathcal{D}}\right)} \right)^2\leq \left(6\alpha r_t\right)^2$$ which uniformly tends to $0$ as $t$ tends to $0$. Using the triangular inequality and Proposition \[propConvNormales\], $${\left\Vert\Gamma_{{\varphi}_t}^{f_t^{\mathcal{D}}}\eta_t^{\mathcal{D}}(\varphi_t)-\Gamma_{f_t}^{f_t^{\mathcal{D}}}N_t\right\Vert} \leq {\left\Vert\Gamma_{{\varphi}_t}^{f_t^{\mathcal{D}}}\eta_t^{\mathcal{D}}(\varphi_t)-N_t^{\mathcal{D}}\right\Vert} + C|t||z|^{1-\delta}$$ and the second term of the right-hand side uniformly tends to $0$ as $t$ tends to $0$. Because $\alpha$ satisfies Lemma \[lemmaCompNormalesDelaunay\] in Appendix \[appendixJleliLopez\], $${\left\Vert\Gamma_{{\varphi}_t}^{f_t^{\mathcal{D}}}\eta_t^{\mathcal{D}}(\varphi_t)-N_t^{\mathcal{D}}\right\Vert} < 1$$ which implies Equation .
Let $\epsilon'>0$ be given by Lemma \[lemmaDelaunayPointsEloignes\]. The restriction $${
\begin{array}{ccccc}
\widetilde{\varphi}_t & : & \varphi_t^{-1}\left(\varphi_t(D_{\epsilon'}^*)\right)\cap D_\epsilon^* & \longrightarrow & \varphi_t\left( D_{\epsilon'}^* \right) \\
& & z & \longmapsto & \varphi_t(z) \\
\end{array}
}$$ is a covering map because it is a proper local diffeomorphism between locally compact spaces. To show properness, proceed by contradiction as in ${\mathbb{R}}^3$ (see [@raujouan]): let $(x_i)_{i\in{\mathbb{N}}}\subset \varphi_t^{-1}\left(\varphi_t(D_{\epsilon'}^*)\right)\cap D_\epsilon^*$ such that $\left(\widetilde{\varphi}_t(x_i)\right)_{i\in{\mathbb{N}}}$ converges to $p\in\varphi_t\left(D_{\epsilon'}^*\right)$. Then, up to a subsequence, $(x_i)_i$ converges to some $x\in\overline{D}_\epsilon$. Using Equation and the fact that $f_t^{\mathcal{D}}$ has an end at $0$, $x\neq 0$. If $x\in\partial D_\epsilon$, denoting $\widetilde{x}\in D_{\epsilon'}^*$ such that $\widetilde{\varphi}_t(\widetilde{x}) = p$, one has $${d_{{\mathbb{H}}^3}\left(f_t^{\mathcal{D}}(x) , f_t^{\mathcal{D}}(\widetilde{x})\right)} < {d_{{\mathbb{H}}^3}\left(f_t^{\mathcal{D}}(x) , p\right)} + {d_{{\mathbb{H}}^3}\left(f_t^{\mathcal{D}}(\widetilde{x}),\widetilde{\varphi}_t(\widetilde{x})\right)} < 4\alpha r_t < 4r_t$$ which contradicts the definition of $\epsilon'$.
Let us now prove as in [@raujouan] that $\widetilde{\varphi}_t$ is a one-sheeted covering map. Let $\gamma : [0,1] \longrightarrow D_{\epsilon'}^*$ be a loop of winding number $1$ around $0$, $\Gamma = f_t^{\mathcal{D}}(\gamma)$ and $\widetilde{\Gamma} = \widetilde{\varphi}_t(\gamma) \subset{\mathcal{D}}_t$ and let us construct a homotopy between $\Gamma$ and $\widetilde{\Gamma}$. For all $s\in\left[0,1\right]$, let $\sigma_s: \left[0,1\right]\longrightarrow {\mathbb{H}}^3$ be a geodesic arc joining $\sigma_s(0)=\widetilde{\Gamma}(s)$ to $\sigma_s(1) = \Gamma(s)$. For all $s,r\in[0,1]$, ${d_{{\mathbb{H}}^3}\left(\sigma_s(r) , \Gamma(s)\right)} \leq \alpha r_t$ which implies that $\sigma_s(r) \in {\mathop \mathrm{Tub}}_{r_t}{\mathcal{D}}_t$ because ${\mathcal{D}}_t$ is complete. One can thus define the following homotopy between $\Gamma$ and $\widetilde{\Gamma}$ $${
\begin{array}{ccccc}
H & : & [0,1]^2 & \longrightarrow & {\mathcal{D}}_t \\
& & (r,s) & \longmapsto & \pi_t\circ \sigma_s(r) \\
\end{array}
}$$ where $\pi_t$ is the projection from ${\mathop \mathrm{Tub}}_{r_t}{\mathcal{D}}_t$ to ${\mathcal{D}}_t$. Using the fact that $f_t^{\mathcal{D}}$ is an embedding, the degree of $\Gamma$ is one, and the degree of $\widetilde{\Gamma}$ is also one. Hence, $\widetilde{\varphi}_t$ is one-sheeted.
Finally, the map $\widetilde{\varphi}_t$ is a one-sheeted covering map and hence a diffeomorphism, so $f_t\left(D_{\epsilon'}^*\right)$ is a normal graph over ${\mathcal{D}}_t$ contained in its embedded tubular neighbourhood and $f_t\left(D_{\epsilon'}^*\right)$ is thus embedded, which proves the second point of Theorem \[theoremPerturbedDelaunay\].
Limit axis {#sectionLimitAxis}
----------
In this section, we prove the third point of Theorem \[theoremPerturbedDelaunay\] and compute the limit axis of $f_t^{\mathcal{D}}$ as $t$ tends to $0$. Recall that $f_t^{\mathcal{D}}= { \mathrm{Sym}}_q\left({\mathop \mathrm{Uni}}\left(M_tz^{A_t}\right)\right)$ where $M_t$ tends to ${\mathrm{I}}_2$ as $t$ tends to $0$. Hence, the limit axes of $f_t^{\mathcal{D}}$ and $\widetilde{f}_t^{\mathcal{D}}:= { \mathrm{Sym}}_q\left({\mathop \mathrm{Uni}}\left(z^{A_t}\right)\right)$ are the same. Two cases can occur, depending on whether $r> s$ or $r< s$.
#### Spherical family.
At $t=0$, $r=\frac{1}{2}$ and $s=0$. The limit potential is thus $$\xi_0(z,{\lambda}) = \begin{pmatrix}
0 & \frac{{\lambda}^{-1}}{2} \\
\frac{{\lambda}}{2} & 0
\end{pmatrix}z^{-1} dz.$$ Consider the gauge $$G(z,{\lambda}) = \frac{1}{\sqrt{2z}}\begin{pmatrix}
1 & 0\\
{\lambda}& 2z
\end{pmatrix}.$$ The gauge potential is then $$\xi_0\cdot G (z,{\lambda}) = \begin{pmatrix}
0 & {\lambda}^{-1}dz\\
0&0
\end{pmatrix} = \xi_{\mathbb{S}}(z,{\lambda})$$ where $\xi_{\mathbb{S}}$ is the spherical potential as in Section \[sectionDPWMethod\]. Let $\widetilde{\Phi}:=z^{A_0}G$ be the gauged holomorphic frame and compute $$\begin{aligned}
\widetilde{\Phi}(1,{\lambda}) &= G(1,{\lambda}) \\
&= \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & 0\\
{\lambda}& 2
\end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & -{\lambda}^{-1} \\
{\lambda}& 1
\end{pmatrix}\begin{pmatrix}
1 & {\lambda}^{-1} \\
0 & 1
\end{pmatrix}\\
&= H({\lambda})\Phi_{\mathbb{S}}(1,{\lambda})\end{aligned}$$ where $\Phi_{\mathbb{S}}$ is defined in and $H=H_0$ as in . This means that $\widetilde{\Phi} = H\Phi_{\mathbb{S}}$, ${\mathop \mathrm{Uni}}\widetilde{\Phi} = H F_{\mathbb{S}}$ and ${ \mathrm{Sym}}_q({\mathop \mathrm{Uni}}\widetilde{\Phi}) = H(e^{-q})\cdot f_{\mathbb{S}}$ because $H\in{\Lambda {\mathrm{SU}}(2)_R}$. Thus, using Equations and , $$\begin{aligned}
\widetilde{f}_0^{\mathcal{D}}(\infty) &= { \mathrm{Sym}}_q({\mathop \mathrm{Uni}}\widetilde{\Phi})(\infty) = H(e^{-q})\cdot f_{\mathbb{S}}(\infty) \\
&= \left(H(e^{-q})R(q)\right)\cdot {\mathrm{geod}}\left( {\mathrm{I}}_2,\sigma_3 \right)(q)\\
&= H(e^{-q})\cdot {\mathrm{geod}}\left( {\mathrm{I}}_2,\sigma_3 \right)(2q).\end{aligned}$$ And with the same method, $$\widetilde{f}_0^{\mathcal{D}}(0) = H(e^{-q})\cdot {\mathrm{geod}}\left({\mathrm{I}}_2,\sigma_3\right)(0).$$ This means that the limit axis of $\widetilde{f}_t^{\mathcal{D}}$ as $t\to 0$, oriented from $z=\infty$ to $z=0$ is given in the spherical family by $$H(e^{-q})\cdot {{\mathrm{geod}}}\left({\mathrm{I}}_2,-\sigma_3\right).$$
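The gauge computation above can be double-checked numerically with the usual gauge action $\xi\cdot G=G^{-1}\xi G+G^{-1}dG$; in the sketch below (the sample point $(z,{\lambda})$ and the finite-difference step are illustrative), the $dz$-coefficient of $\xi_0\cdot G$ is computed and compared with the spherical potential.

```python
import numpy as np

# Finite-difference check that the gauge G maps xi_0 to the spherical potential, i.e.
# G^{-1} xi_0 G + G^{-1} dG = [[0, dz/lam], [0, 0]] (usual gauge action).  The sample
# point (z, lam) and the step h are illustrative.
z, lam, h = 0.3 + 0.2j, np.exp(0.4j), 1e-6

A0 = np.array([[0.0, 1.0 / (2.0 * lam)], [lam / 2.0, 0.0]])
xi0 = A0 / z                                        # dz-coefficient of xi_0

def G(z):
    return np.array([[1.0, 0.0], [lam, 2.0 * z]]) / np.sqrt(2.0 * z)

dG = (G(z + h) - G(z - h)) / (2.0 * h)              # dz-coefficient of dG
gauged = np.linalg.inv(G(z)) @ (xi0 @ G(z) + dG)
print(np.round(gauged, 6))                          # expected: [[0, 1/lam], [0, 0]]
```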
#### Catenoidal family.
We cannot use the same method as above, as the immersion $\widetilde{f}_t^{\mathcal{D}}$ degenerates into the point ${\mathrm{I}}_2$. Use Proposition \[propBlowupCatenoid\] of Section \[sectionBlowUp\] to get $$\widehat{f}:=\lim\limits_{t\to 0} \frac{1}{t} \left(f_t - {\mathrm{I}}_2\right) = {\psi} \subset T_{{\mathrm{I}}_2}{\mathbb{H}}^3$$ where ${\psi}$ is the immersion of a catenoid of axis oriented by $-\sigma_1$ as $z\to 0$. This suffices to show that the limit axis oriented from the end at $\infty$ to the end at $0$ of the catenoidal family $\widetilde{f}_t^{\mathcal{D}}$ converges as $t$ tends to $0$ to the oriented geodesic $ {{\mathrm{geod}}}({\mathrm{I}}_2,-\sigma_1) $.
Gluing Delaunay ends to hyperbolic spheres {#sectionNnoids}
==========================================
In this section, we follow step by step the method Martin Traizet used in ${\mathbb{R}}^3$ ([@nnoids]) to construct CMC $H>1$ $n$-noids in ${\mathbb{H}}^3$ and prove Theorem \[theoremConstructionNnoids\]. This method relies on the Implicit Function Theorem and aims at finding a couple $(\xi_t,\Phi_t)$ satisfying the hypotheses of Theorem \[theoremPerturbedDelaunay\] around each pole of an $n$-punctured sphere. More precisely, the Implicit Function Theorem is used to solve the monodromy problem around each pole and to ensure that the potential is regular at $z=\infty$. The set of equations characterising this problem at $t=0$ is the same as in [@nnoids], and so is the partial derivative with respect to the parameters at $t=0$. Therefore, the Implicit Function Theorem can be used exactly as in [@nnoids] and we do not repeat the argument here. Showing that the surface has Delaunay ends involves slightly different computations, but the method is the same as in [@nnoids], namely, find a suitable gauge and change of coordinates around each pole of the potential in order to retrieve a perturbed Delaunay potential as in Definition \[defPerturbedDelaunayPotential\]. One can then apply Theorem \[theoremPerturbedDelaunay\]. Finally, we show that the surface is Alexandrov-embedded (and embedded in some cases) by adapting the arguments of [@minoids] to the case of ${\mathbb{H}}^3$.
The DPW data
------------
Let $H>1$, $q={\mathop \mathrm{arcoth}}H$ and $\rho>e^{q}$. Let $n\geq 3$ and let $u_1,\cdots,u_n$ be unit vectors of $T_{{\mathrm{I}}_2}{\mathbb{H}}^3$. Suppose, by applying a rotation, that $u_i\neq \pm \sigma_3$ for all $i\in\left[1,n\right]$. Let $v_{\mathbb{S}}: {\mathbb{C}}\cup\left\{ \infty \right\} \longrightarrow {\mathbb{S}}^2$ be defined as in Equation and $\pi_i := v_{\mathbb{S}}^{-1}(u_i)\in{\mathbb{C}}^*$. Consider $3n$ parameters $a_i$, $b_i$, $p_i \in \Lambda{\mathbb{C}}_\rho^{\geq 0}$ assembled into a vector ${\mathbf{x}}$ which lies in a neighbourhood of a central value ${\mathbf{x}}_0$ so that the central values of $a_i$ and $p_i$ are $\tau_i$ and $\pi_i$. Introduce a real parameter $t$ in a neighbourhood of $0$ and define $$\label{defBetatNnoids}
\beta_t({\lambda}) := t\left( {\lambda}- e^q \right)\left({\lambda}- e^{-q}\right).$$ The potential we use is $$\label{defpotentialnnoids}
\xi_{t,{\mathbf{x}}}(z,{\lambda}) := \begin{pmatrix}
0 & {\lambda}^{-1}dz\\
\beta_t({\lambda})\omega_{\mathbf{x}}(z,{\lambda}) & 0
\end{pmatrix}$$ where $$\label{defomegannoids}
\omega_{\mathbf{x}}(z,{\lambda}) := \sum_{i=1}^{n}\left( \frac{a_i({\lambda})}{(z-p_i({\lambda}))^2} + \frac{b_i({\lambda})}{z-p_i({\lambda})} \right)dz.$$ The initial condition is the identity matrix, taken at the point $z_0=0 \in \Omega$ where $$\Omega = \left\{ z\in{\mathbb{C}}\mid \forall i\in\left[ 1,n \right], |z-\pi_i|>\epsilon \right\}$$ and $\epsilon>0$ is a fixed constant such that the disks $D(\pi_i,2\epsilon)\subset{\mathbb{C}}$ are disjoint and do not contain $0$. Although the poles $p_1,\dots,p_n$ of the potential $\xi_{t,{\mathbf{x}}}$ are functions of ${\lambda}$, $\xi_{t,{\mathbf{x}}}$ is well-defined on $\Omega$ for ${\mathbf{x}}$ sufficiently close to ${\mathbf{x}}_0$. We thus define $\Phi_{t,{\mathbf{x}}}$ as the solution to the Cauchy problem with data $(\Omega,\xi_{t,{\mathbf{x}}},0,{\mathrm{I}}_2)$.
The main properties of this potential are the same as in [@nnoids], namely: it is a perturbation of the spherical potential $\xi_{0,{\mathbf{x}}}$ and the factor $\left({\lambda}- e^{-q}\right)$ in $\beta_t$ ensures that the second equation of the monodromy problem is solved.
Let $\{\gamma_1,\cdots,\gamma_{n-1}\}$ be a set of generators of the fundamental group $\pi_1(\Omega,0)$ and define for all $i\in [1,n-1]$ $$M_i(t,{\mathbf{x}}) := {\mathcal{M}}_{\gamma_i}(\Phi_{t,{\mathbf{x}}}).$$ Noting that $${\lambda}\in{\mathbb{S}}^1 \implies {\lambda}^{-1}\left( {\lambda}- e^q \right)\left({\lambda}- e^{-q}\right) = -2\left(\cosh q - {\mathop \mathrm{Re}}{\lambda}\right) \in{\mathbb{R}},$$ the unitarity of the monodromy is equivalent to $$\widetilde{M}_i(t,{\mathbf{x}})({\lambda}) := \frac{{\lambda}}{\beta_t({\lambda})}\log M_i(t,{\mathbf{x}})({\lambda})\in {\Lambda {\mathfrak{su}}(2)_\rho}.$$ Note that at $t=0$, the expression above takes the same value as in [@nnoids], and so do the regularity conditions. One can thus apply Propositions 2 and 3 of [@nnoids] which we recall in Proposition \[propPropsnnoids\] below.
\[propPropsnnoids\] For t in a neighbourhood of $0$, there exists a unique smooth map $t\mapsto {\mathbf{x}}(t) = (a_{i,t},b_{i,t},p_{i,t})_{1\leq i\leq n}\in({{\mathcal{W}}_R^{\geq 0}})^3$ such that ${\mathbf{x}}(0) = {\mathbf{x}}_0$, the monodromy problem and the regularity problem are solved at $(t,{\mathbf{x}}(t))$ and the following normalisations hold: $$\forall i\in[1,n-1],\qquad {\mathop \mathrm{Re}}(a_{i,t})\mid_{{\lambda}=0} = \tau_i \quad \text{and} \quad p_{i,t}\mid_{{\lambda}=0} = \pi_i.$$ Moreover, at $t=0$, ${\mathbf{x}}_0$ is a constant with $a_i$ real and such that $$b_i = \frac{-2a_i\overline{p_i}}{1+|p_i|^2}\quad \text{and} \quad \sum_{i=1}^{n}a_iv_{\mathbb{S}}(p_i)=0.$$
Now write $\omega_t := \omega_{{\mathbf{x}}(t)}$, $\xi_t := \xi_{t,{\mathbf{x}}(t)}$ and apply the DPW method to define the holomorphic frame $\Phi_t$ associated to $\xi_t$ on the universal cover $\widetilde{\Omega}$ of $\Omega$ with initial condition $\Phi_t(0) = {\mathrm{I}}_2$. Let $F_t:={\mathop \mathrm{Uni}}\Phi_t$ and $f_t:={ \mathrm{Sym}}_q F_t$. The monodromy problem for $\Phi_t$ being solved, $f_t$ descends to a well-defined CMC $H$ immersion on $\Omega$. Use Theorem 3 and Corollary 1 of [@nnoids] to extend $f_t$ to $\Sigma_t := {\mathbb{C}}\cup\left\{\infty\right\}\backslash\left\{p_{1,t}(0),\dots ,p_{n,t}(0) \right\}$ and define $M_t=f_t(\Sigma_t)$. Moreover, with the same proof as in [@nnoids] (Proposition 4, point (2)), $a_{i,t}$ is a real constant with respect to ${\lambda}$ for all $i$ and $t$.
Delaunay ends {#sectionDelaunayEndsNnoids}
-------------
#### Perturbed Delaunay potential.
Let $i\in\left[1,n\right]$. We are going to gauge $\xi_t$ around its pole $p_{i,t}(0)$ and show that the gauged potential is a perturbed Delaunay potential as in Definition \[defPerturbedDelaunayPotential\]. Let $(r,s):(-T,T)\longrightarrow {\mathbb{R}}^2$ be the continuous solution to (see Section \[sectionDelaunaySurfaces\] for details) $$\left\{
\begin{array}{l}
rs = ta_{i,t},\\
r^2+s^2+2rs \cosh q = \frac{1}{4},\\
r> s.
\end{array}
\right.$$ For all $t$ and ${\lambda}$, define $\psi_{i,t,{\lambda}}(z):=z+p_{i,t}({\lambda})$ and $$G_t(z,{\lambda}) := \begin{pmatrix}
\frac{\sqrt{z}}{\sqrt{r+s{\lambda}}} & 0 \\
\frac{-{\lambda}}{2\sqrt{z}\sqrt{r+s{\lambda}}} & \frac{\sqrt{r+s{\lambda}}}{\sqrt{z}}
\end{pmatrix}.$$ For $T$ small enough, one can thus define on a uniform neighbourhood of $0$ the potential $$\widetilde{\xi}_{i,t}(z,{\lambda}) := ((\psi_{i,t,{\lambda}}^*\xi_t)\cdot G_t)(z,{\lambda}) = \begin{pmatrix}
0 & r{\lambda}^{-1}+s\\
\frac{\beta_t({\lambda})}{r+s{\lambda}}(\psi_{i,t,{\lambda}}^*\omega_t(z))z^2+\frac{{\lambda}}{4(r+s{\lambda})} & 0
\end{pmatrix}z^{-1}dz.$$ Note that by definition of $r$, $s$ and $\beta_t$, $$\left(r+s{\lambda}\right)\left( r{\lambda}+s \right) = \frac{{\lambda}}{4} + \beta_t({\lambda})a_{i,t}$$ and thus $$\widetilde{\xi}_{i,t}(z,{\lambda}) = A_t({\lambda})z^{-1}dz + C_t(z,{\lambda})dz$$ with $A_t$ as in Equation satisfying Equation and $C_t$ as in Definition \[defPerturbedDelaunayPotential\]. The potential $\widetilde{\xi}_{i,t}$ is thus a perturbed Delaunay potential as in Definition \[defPerturbedDelaunayPotential\]. Moreover, using Theorem 3 of [@nnoids], the induced immersion $\widetilde{f}_{i,t}$ satisfies $$\widetilde{f}_{i,t} = \psi_{i,t,0}^*f_t.$$
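As a quick numerical sanity check (an illustration only, not part of the argument), one can solve the system defining $(r,s)$ in closed form and verify the factorisation above at a sample value of ${\lambda}$. The sketch below assumes the normalisation $\beta_t({\lambda}) = t({\lambda}-e^q)({\lambda}-e^{-q})$, which is the one consistent with the displayed identity once $rs=ta_{i,t}$ is used, and it takes a hypothetical numerical value for $a_{i,t}$.

```python
import numpy as np

# Sketch: solve  r*s = t*a,  r^2 + s^2 + 2*r*s*cosh(q) = 1/4,  r > s,
# using (r+s)^2 = 1/4 - 2*t*a*(cosh(q) - 1) and (r-s)^2 = 1/4 - 2*t*a*(cosh(q) + 1),
# which follow from the first two equations of the system.
def solve_rs(t, a, q):
    p = np.sqrt(0.25 - 2 * t * a * (np.cosh(q) - 1.0))   # r + s
    m = np.sqrt(0.25 - 2 * t * a * (np.cosh(q) + 1.0))   # r - s (positive for small t)
    return (p + m) / 2.0, (p - m) / 2.0

t, a, q = 1e-3, 0.7, 0.5        # hypothetical sample values
r, s = solve_rs(t, a, q)
assert abs(r * s - t * a) < 1e-12
assert abs(r**2 + s**2 + 2 * r * s * np.cosh(q) - 0.25) < 1e-12

# Factorisation (r + s*lam)*(r*lam + s) = lam/4 + beta_t(lam)*a at a sample lam.
lam = np.exp(0.3j)
beta = t * (lam - np.exp(q)) * (lam - np.exp(-q))
assert abs((r + s * lam) * (r * lam + s) - (lam / 4 + beta * a)) < 1e-12
```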
#### Applying Theorem \[theoremPerturbedDelaunay\].
The holomorphic frame $\widetilde{\Phi}_{i,t}:=\Phi_tG_{i,t}$ associated to $\widetilde{\xi}_{i,t}$ satisfies the regularity and monodromy hypotheses of Theorem \[theoremPerturbedDelaunay\], but at $t=0$ and $z=1$, $$\widetilde{\Phi}_{i,0}(1,{\lambda}) = \begin{pmatrix}
1 & \left(1+\pi_i\right){\lambda}^{-1}\\
0 & 1
\end{pmatrix}\begin{pmatrix}
\sqrt{2} & 0\\
\frac{-{\lambda}}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix}
1-\pi_i & (1+\pi_i){\lambda}^{-1}\\
-{\lambda}& 1
\end{pmatrix} =: M_i({\lambda}),$$ and thus $\widetilde{\Phi}_{i,0}(z) = M_iz^{A_0}$. Recall and let $$\label{eqHnnoids}
H :=H_0 = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & -{\lambda}^{-1}\\
{\lambda}& 1
\end{pmatrix}\in{\Lambda {\mathrm{SU}}(2)_\rho}$$ and $Q_i := {\mathop \mathrm{Uni}}\left( M_iH\right)$. Using Lemma 1 in [@raujouan], $Q_i$ can be made explicit and one can find a change of coordinates $h$ and a gauge $G$ such that $\widehat{\Phi}_{i,t} := HQ_i^{-1}(h^* \widetilde{\Phi}_{i,t})G$ solves $d\widehat{\Phi}_{i,t} = \widehat{\Phi}_{i,t}\widehat{\xi}_{i,t}$ where $\widehat{\xi}_{i,t}$ is a perturbed Delaunay potential and $\widehat{\Phi}_{i,0}(z) = z^{A_0}$. Explicitly, $$Q_i({\lambda}) = \frac{1}{\sqrt{1+|\pi_i|^2}}\begin{pmatrix}
1 & {\lambda}^{-1}\pi_i\\
-{\lambda}{\overline{\pi}}_i & 1
\end{pmatrix}$$ and $$h(z) = \frac{(1+|\pi_i|^2)z}{1-{\overline{\pi}}_iz},\qquad G(z,{\lambda}) = \frac{1}{\sqrt{1-{\overline{\pi}}_iz}}\begin{pmatrix}
1 & 0\\
-{\lambda}{\overline{\pi}}_iz & 1-{\overline{\pi}}_iz
\end{pmatrix}.$$ One can thus apply Theorem \[theoremPerturbedDelaunay\] on $\widehat{\xi}_{i,t}$ and $\widehat{\Phi}_{i,t}$, which proves the existence of the family $\left(M_t\right)_{0<t<T}$ of CMC $H$ surfaces of genus zero and $n$ Delaunay ends, each of weight (according to Equation ) $$w_{i,t} = 8\pi rs\sinh q = \frac{8\pi t a_{i,t}}{\sqrt{H^2-1}},$$ which proves the first point of Theorem \[theoremConstructionNnoids\] (after a normalisation on $t$). Let $\widehat{f}_{i,t}:={ \mathrm{Sym}}_q({\mathop \mathrm{Uni}}\widehat{\Phi}_{i,t})$ and $\widehat{f}_{i,t}^{\mathcal{D}}$ the Delaunay immersion given by Theorem \[theoremPerturbedDelaunay\].
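For the reader's convenience, the last equality for the weight is the one-line computation using only the first equation of the system defining $(r,s)$ and the relation $H=\coth q$ (so that $\sinh q = (H^2-1)^{-1/2}$): $$w_{i,t} = 8\pi rs\sinh q = 8\pi t a_{i,t}\sinh q = \frac{8\pi t a_{i,t}}{\sqrt{H^2-1}}.$$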
#### Limit axis.
In order to compute the limit axis of $f_t$ at the end around $p_{i,t}$, let $\widehat{\Delta}_{i,t}$ be the oriented axis of $\widehat{f}_{i,t}^{\mathcal{D}}$ at $w=0$. Then, using Theorem \[theoremPerturbedDelaunay\], $$\widehat{\Delta}_{i,0} = H(e^{-q})\cdot {\mathrm{geod}}\left( {\mathrm{I}}_2, -\sigma_3 \right).$$ And using $\widehat{f}_{i,t}(w) = H(e^{-q})Q_i(e^{-q})^{-1}\cdot \left( h^*f_t(z) \right)$, $$\widehat{\Delta}_{i,0} = H(e^{-q})Q_i(e^{-q})^{-1}\cdot\Delta_{i,0}$$ and thus $$\Delta_{i,0} = Q_i(e^{-q}) \cdot {\mathrm{geod}}({\mathrm{I}}_2,-\sigma_3).$$ Computing $M_iH = \Phi_{\mathbb{S}}(\pi_i)$ as in , one has $Q_i=F_{\mathbb{S}}(\pi_i)$. Hence $$\Delta_{i,0} = {{\mathrm{geod}}}\left( f_{\mathbb{S}}(\pi_i), -N_{\mathbb{S}}(\pi_i) \right)$$ where $N_{\mathbb{S}}$ is the normal map associated to $\Phi_{\mathbb{S}}$. Using Equation , $f_{\mathbb{S}}(z) = R(q)\cdot \widetilde{f}_{\mathbb{S}}(z)$ and $N_{\mathbb{S}}(z) =R(q) \cdot \widetilde{N}_{\mathbb{S}}(z)$ where $\widetilde{N}_{\mathbb{S}}$ is the normal map of $\widetilde{f}_{\mathbb{S}}$. Using Equation and the fact that $\widetilde{f}_{\mathbb{S}}$ is a spherical immersion gives $$\widetilde{N}_{\mathbb{S}}(z) = \Gamma_{{\mathrm{I}}_2}^{\widetilde{f}_{\mathbb{S}}(z)}\left(-v_{\mathbb{S}}(z)\right)$$ and thus $$\begin{aligned}
\Delta_{i,0} &= {{\mathrm{geod}}}\left( R(q)\cdot \widetilde{f}_{\mathbb{S}}(\pi_i), -R(q)\cdot \widetilde{N}_{\mathbb{S}}(\pi_i) \right)\\
&= R(q)\cdot {{\mathrm{geod}}}\left(\widetilde{f}_{\mathbb{S}}(\pi_i),\Gamma_{{\mathrm{I}}_2}^{\widetilde{f}_{\mathbb{S}}(\pi_i)}v_{\mathbb{S}}(\pi_i)\right)\\
&= R(q)\cdot {{\mathrm{geod}}}\left({\mathrm{I}}_2,u_i\right).\end{aligned}$$ Apply the isometry given by $R(q)^{-1}$ and note that $R(q)$ does not depend on $i$ to prove point 2 of Theorem \[theoremConstructionNnoids\].
Embeddedness {#embeddedness}
------------
We suppose that $t>0$ and that all the weights $\tau_i$ are positive, so that the ends of $f_t$ are embedded. Recall the definition of Alexandrov-embeddedness (as stated in [@minoids]):
\[defAlexandrovEmbedded\] A surface $M^2\subset{\mathcal{M}}^3$ of finite topology is Alexandrov-embedded if $M$ is properly immersed, if each end of $M$ is embedded, and if there exists a compact $3$-manifold $\overline{W}$ with boundary $\partial \overline{W} = \overline{S}$, $n$ points $p_1,\cdots,p_n\in\overline{S}$ and a proper immersion $F: W=\overline{W}\backslash\{ p_1,\cdots,p_n \}\longrightarrow {\mathcal{M}}$ whose restriction to $S=\overline{S}\backslash\{ p_1,\cdots, p_n \}$ parametrises $M$.
The following lemma is proved in [@minoids] in ${\mathbb{R}}^3$ and for surfaces with catenoidal ends, but the proof is the same in ${\mathbb{H}}^3$ for surfaces with any type of embedded ends. For any oriented surface $M$ with Gauss map $N$ and any $r>0$, the tubular map of $M$ with radius $r$ is defined by $${
\begin{array}{ccccc}
{\mathcal{T}}& : & (-r,r)\times M & \longrightarrow & {\mathop \mathrm{Tub}}_{r}M \\
& & (s,p) & \longmapsto & {\mathrm{geod}}(p,N(p))(s). \\
\end{array}
}$$
\[lemmaExtendingAlexandrov\] Let $M$ be an oriented Alexandrov-embedded surface of ${\mathbb{H}}^3$ with $n$ embedded ends. Let $r>0$ and suppose that the tubular map of $M$ with radius $r$ is a local diffeomorphism. With the notations of Definition \[defAlexandrovEmbedded\], there exist a hyperbolic $3$-manifold $W'$ containing $W$ and a local isometry $F': W'\longrightarrow {\mathbb{H}}^3$ extending $F$ such that the tubular neighbourhood ${\mathop \mathrm{Tub}}_rS$ is embedded in $W'$.
In order to show that $M_t$ is embedded, we will use the techniques of [@minoids]. Thus, we begin by lifting $M_t$ to ${\mathbb{R}}^3$ with the exponential map at the identity $\exp_{{\mathrm{I}}_2}: {\mathbb{R}}^3\longrightarrow {\mathbb{H}}^3$. This map is a diffeomorphism, so $M_t$ is Alexandrov-embedded if, and only if its lift $\widehat{M}_t$ to ${\mathbb{R}}^3$ given by the immersion $$\widehat{f}_t := \exp_{{\mathrm{I}}_2}^{-1}\circ f_t: \Sigma_t\longrightarrow{\mathbb{R}}^3$$ is Alexandrov-embedded.
Let $T,\epsilon>0$ be such that $f_t$ (and hence $\widehat{f}_t$) is an embedding of $D^*(p_{i,t},\epsilon)$ for all $i\in[1,n]$ and let $f_{i,t}^{\mathcal{D}}: {\mathbb{C}}\backslash\{ p_{i,t} \}\longrightarrow{\mathbb{H}}^3$ be the Delaunay immersion approximating $f_t$ in $D^*(p_{i,t},\epsilon)$. Let $\widehat{f}_{i,t}^{\mathcal{D}}:=\exp_{{\mathrm{I}}_2}^{-1}\circ f_{i,t}^{\mathcal{D}}$. Apply an isometry of ${\mathbb{H}}^3$ so that the limit immersion $f_0$ maps $\Sigma_0$ to an $n$-punctured geodesic sphere of hyperbolic radius $q$ centered at ${\mathrm{I}}_2$. Then $\widehat{f}_0(\Sigma_0)$ is a Euclidean sphere of radius $q$ centered at the origin. Define $${
\begin{array}{ccccc}
\widehat{N}_t & : & \Sigma_t & \longrightarrow & {\mathbb{S}}^2 \\
& & z & \longmapsto & d(\exp_{{\mathrm{I}}_2}^{-1})(f_t(z))N_t(z). \\
\end{array}
}$$ At $t=0$, $\widehat{N}_0$ is the normal map of $\widehat{f}_0$ (by Gauss Lemma), but not for $t>0$ because the Euclidean metric of ${\mathbb{R}}^3$ is not the metric induced by $\exp_{{\mathrm{I}}_2}$.
Let $${
\begin{array}{ccccc}
h_i & : & {\mathbb{R}}^3 & \longrightarrow & {\mathbb{R}}\\
& & x & \longmapsto & {\left\langle x, -\widehat{N}_0(p_{i,0}) \right\rangle} \\
\end{array}
}$$ be the height function in the direction of the limit axis.
As in [@minoids], one can show that
There exist $\delta<\delta'$ and $0<\epsilon'<\epsilon$ such that for all $i\in[1,n]$ and $0<t<T$, $$\max_{C(p_{i,t},\epsilon)} h_i\circ \widehat{f}_t < \delta < \min_{C(p_{i,t},\epsilon')}h_i\circ\widehat{f}_t \leq \max_{C(p_{i,t},\epsilon')}h_i\circ\widehat{f}_t < \delta'.$$
Define for all $i$ and $t$: $$\gamma_{i,t} := \left\{ z\in D^*_{p_{i,t},\epsilon} \mid h_i\circ\widehat{f}_t(z) = \delta \right\},\quad \gamma_{i,t}' := \left\{ z\in D^*_{p_{i,t},\epsilon'} \mid h_i\circ\widehat{f}_t(z) = \delta' \right\}.$$ From their convergence as $t$ tends to $0$,
The regular curves $\gamma_{i,t}$ and $\gamma_{i,t}'$ are topological circles around $p_{i,t}$.
Define $D_{i,t}, D_{i,t}'$ as the topological disks bounded by $\gamma_{i,t}, \gamma_{i,t}'$, and $\Delta_{i,t},\Delta_{i,t}'$ as the topological disks bounded by $\widehat{f}_t(\gamma_{i,t}), \widehat{f}_t(\gamma_{i,t}')$. Let ${\mathcal{A}}_{i,t}:=D_{i,t}\backslash D_{i,t}'$. Then $\widehat{f}_t({\mathcal{A}}_{i,t})$ is a graph over the plane $\{h_i(x)=\delta\}$. Moreover, for all $z\in {D_{i,t}'}^*$, $h_i\circ \widehat{f}_t(z)\geq \delta' > \delta$. Thus,
The intersection $\widehat{f}_t(D_{i,t}^*)\cap \Delta_{i,t}$ is empty.
Define a sequence $(R_{i,t,k})$ such that $\widehat{f}_t(D_{i,t}^*)$ transversally intersects the planes $\{ h_i(x) = R_{i,t,k} \}$. Define $$\gamma_{i,t,k} := \left\{ z\in D_{i,t}^* \mid h_i\circ\widehat{f}_{t}(z) = R_{i,t,k} \right\},$$ and the topological disks $\Delta_{i,t,k}\subset \{ h_i(x) = R_{i,t,k} \} $ bounded by $\widehat{f}_t(\gamma_{i,t,k})$. Define ${\mathcal{A}}_{i,t,k}$ as the annulus bounded by $\gamma_{i,t}$ and $\gamma_{i,t,k}$. Define ${W}_{i,t,k}\subset {\mathbb{R}}^3$ as the interior of $\widehat{f}_t({\mathcal{A}}_{i,t,k})\cup \Delta_{i,t}\cup \Delta_{i,t,k}$ and $${W}_{i,t} := \bigcup_{k\in {\mathbb{N}}} {W}_{i,t,k}.$$ Hence,
The union $\widehat{f}_t(D_{i,t}^*)\cup \Delta_{i,t}$ is the boundary of a topological punctured ball ${W}_{i,t}\subset {\mathbb{R}}^3$.
The union $$\widehat{f}_t\left( \Sigma_t\backslash \left( D_{1,t}\cup\cdots D_{n,t} \right) \right) \cup \Delta_{1,t}\cup \cdots \cup \Delta_{n,t}$$ is the boundary of a topological ball ${W}_{0,t}\subset {\mathbb{R}}^3$. Take $$W_t:= W_{0,t}\cup W_{1,t}\cup\cdots\cup W_{n,t}$$ to show that $\widehat{M}_t$, and hence $M_t$ is Alexandrov-embedded for $t>0$ small enough.
Let $S\subset{\mathbb{H}}^3$ be a sphere of hyperbolic radius $q$ centered at $p\in{\mathbb{H}}^3$. Let $n\geq 2$ and $\left\{ u_i \right\}_{i\in [1,n]}\subset T_p{\mathbb{H}}^3$. Let $\left\{ p_i \right\}_{i\in[1,n]}$ be defined by $p_i=S\cap {\mathrm{geod}}(p,u_i)({\mathbb{R}}_+)$. For all $i\in[1,n]$, let $S_i\subset{\mathbb{H}}^3$ be the sphere of hyperbolic radius $q$ such that $S\cap S_i =\{ p_i\}$. For all $(i,j)\in[1,n]^2$, let $\theta_{ij}$ be the angle between $u_i$ and $u_j$.
If for all $i\neq j$, $$\theta_{ij} > 2\arcsin\left(\frac{1}{2\cosh q}\right)$$ then $S_i\cap S_j = \emptyset$ for all $i\neq j$.
Without loss of generality, we assume that $p={\mathrm{I}}_2$. We use the ball model of ${\mathbb{H}}^3$ equipped with its metric $$ds_{\mathbb{B}}^2(x) = \frac{4ds_E^2}{\left(1-{\left\Vert x\right\Vert}_E^2\right)^2}$$ where $ds_E$ is the Euclidean metric and ${\left\Vert x\right\Vert}_E$ is the Euclidean norm. In this model, the sphere $S$ is centered at the origin and has Euclidean radius $R=\tanh \frac{q}{2}$. For all $i\in[1,n]$, the sphere $S_i$ has Euclidean radius $$r = \frac{1}{2}\left(\tanh\frac{3q}{2} - \tanh\frac{q}{2}\right) = \frac{\tanh \frac{q}{2}}{2\cosh q -1}.$$ Let $j\neq i$. In order to have $S_i\cap S_j = \emptyset$, one must solve $$(R+r)\sin\frac{\theta_{ij}}{2} \geq r,$$ which gives the expected result.
In order to prove the last point of Theorem \[theoremConstructionNnoids\], just note that $$H = \coth q \implies \frac{1}{2\cosh q}=\frac{\sqrt{H^2-1}}{2H}.$$ Suppose that the angle $\theta_{ij}$ between $u_i$ and $u_j$ satisfies Equation for all $i\neq j$. Then for $t>0$ small enough, the proper immersion $F_t$ given by Definition \[defAlexandrovEmbedded\] is injective (because of the convergence towards a chain of spheres) and hence $M_t$ is embedded.
This means for example that in hyperbolic space, one can construct embedded CMC $n$-noids with seven coplanar ends or more.
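To illustrate this remark numerically (a sketch only, with the extra assumption that the limit axes $u_i$ are coplanar and equally spaced, so that consecutive angles equal $2\pi/n$), one can compute for which mean curvatures $H>1$ the sufficient condition of the previous lemma holds:

```python
import numpy as np

# Condition for equally spaced coplanar axes:  sin(pi/n) > sqrt(H^2-1)/(2H).
# Solving for H gives the threshold below (no restriction when 2*sin(pi/n) >= 1).
def largest_admissible_H(n):
    s = np.sin(np.pi / n)
    if 2.0 * s >= 1.0:
        return np.inf                      # condition holds for every H > 1
    return 1.0 / np.sqrt(1.0 - 4.0 * s**2)

for n in (3, 7, 12):
    print(n, largest_admissible_H(n))
# For n = 7 the threshold is about 2.0, so embedded examples with seven
# coplanar ends exist for every H sufficiently close to 1.
```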
Gluing Delaunay ends to minimal $n$-noids {#sectionMinoids}
=========================================
Again, this section is an adaptation of Traizet’s work in [@minoids] applied to the proof of Theorem \[theoremConstructionMinoids\]. We first give in Section \[sectionBlowUp\] a blow-up result for CMC $H>1$ surfaces in hyperbolic space. We then introduce in Section \[sectionDataMinoids\] the DPW data giving rise to the surface $M_t$ of Theorem \[theoremConstructionMinoids\] and prove the convergence towards the minimal $n$-noid. Finally, using the same arguments as in [@minoids], we prove Alexandrov-embeddedness in Section \[sectionAlexEmbeddedMnoids\].
A blow-up result {#sectionBlowUp}
----------------
As in ${\mathbb{R}}^3$ (see [@minoids]), the DPW method accounts for the convergence of CMC $H>1$ surfaces in ${\mathbb{H}}^3$ towards minimal surfaces of ${\mathbb{R}}^3$ (after a suitable blow-up). We work with the following Weierstrass parametrisation: $$\label{eqWeierstrass}
W(z) = W(z_0) + {\mathop \mathrm{Re}}\int_{z_0}^z\left( \frac{1}{2}(1-g^2)\omega, \frac{i}{2}(1+g^2)\omega, g\omega \right)$$
\[propBlowUpDPW\] Let $\Sigma$ be a Riemann surface, $(\xi_t)_{t\in I}$ a family of DPW potentials on $\Sigma$ and $(\Phi_t)_{t\in I}$ a family of solutions to $d\Phi_t = \Phi_t\xi_t$ on the universal cover $\widetilde{\Sigma}$ of $\Sigma$, where $I\subset {\mathbb{R}}$ is a neighbourhood of $0$. Fix a base point $z_0\in\widetilde{\Sigma}$ and $\rho>e^{q} >1$. Assume that
1. $(t,z)\mapsto \xi_t(z)$ and $t\mapsto \Phi_t(z_0)$ are ${\mathcal{C}}^1$ maps into $\Omega^1(\Sigma,{{\Lambda {\mathfrak{sl}}(2,{\mathbb{C}})}_\rho})$ and ${\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$ respectively.
2. For all $t\in I$, $\Phi_t$ solves the monodromy problem .
3. $\Phi_0(z,{\lambda})$ is independent of ${\lambda}$: $$\Phi_0(z,{\lambda}) = \begin{pmatrix}
a(z) & b(z)\\
c(z) & d(z)
\end{pmatrix}.$$
Let $f_t = { \mathrm{Sym}}_q\left({\mathop \mathrm{Uni}}(\Phi_t)\right): \Sigma \longrightarrow {\mathbb{H}}^3$ be the CMC $H=\coth q$ immersion given by the DPW method. Then, identifying $T_{{\mathrm{I}}_2}{\mathbb{H}}^3$ with ${\mathbb{R}}^3$ via the basis $(\sigma_1,\sigma_2,\sigma_3)$ defined in , $$\lim\limits_{t\to 0}\frac{1}{t}\left(f_t - {\mathrm{I}}_2\right) = W$$ where $W$ is a (possibly branched) minimal immersion with the following Weierstrass data: $$g(z) = \frac{a(z)}{c(z)}, \quad \omega(z) = -4(\sinh q) c(z)^2 \frac{\partial \xi_{t,12}^{(-1)}(z)}{\partial t}\mid_{t=0}.$$ The limit is for the uniform ${\mathcal{C}}^1$ convergence on compact subsets of $\Sigma$.
With the same arguments as in [@minoids], $(t,z)\mapsto \Phi_t(z)$, $(t,z)\mapsto F_t(z)$ and $(t,z)\mapsto B_t(z)$ are ${\mathcal{C}}^1$ maps into ${\Lambda {\mathrm{SL}}(2,{\mathbb{C}})_\rho}$, ${\Lambda {\mathrm{SU}}(2)_\rho}$ and ${\Lambda_+^{\mathbb{R}}{\mathrm{SL}}(2,{\mathbb{C}})}_{\rho}$ respectively. At $t=0$, $\Phi_0$ is constant. Thus, $F_0$ and $B_0$ are constant with respect to ${\lambda}$: $$F_0 = \frac{1}{\sqrt{|a|^2+|c|^2}}\begin{pmatrix}
a & -\bar{c}\\
c & \bar{a}
\end{pmatrix}, \quad B_0 = \frac{1}{\sqrt{|a|^2+|c|^2}} \begin{pmatrix}
|a|^2+|c|^2 & \bar{a}b + \bar{c}d\\
0 & 1
\end{pmatrix}.$$ Thus, $F_0(z,e^{-q})\in{\mathrm{SU}}(2)$ and $f_0(z)$ degenerates into the identity matrix. Let $b_t := B_{t,11}\mid_{{\lambda}=0}$ and $\beta_t$ the upper-right residue at ${\lambda}=0$ of the potential $\xi_t$. Recalling Equation , $$df_t(z) = 2b_t(z)^2\sinh q F_t(z,e^{-q})\begin{pmatrix}
0 & \beta_t(z)\\
{\overline{\beta}}_t(z) & 0
\end{pmatrix}{F_t(z,e^{-q})}^*.$$ Hence $(t,z)\mapsto df_t(z)$ is a ${\mathcal{C}}^1$ map. At $t=0$, $\xi_0 = \Phi_0^{-1}d\Phi_0$ is constant with respect to ${\lambda}$, so $\beta_0=0$ and $df_0(z) = 0$. Define $\widetilde{f}_t(z) := \frac{1}{t}\left( f_t(z) - {\mathrm{I}}_2 \right)$ for $t\neq 0$. Then $d\widetilde{f}_t(z)$ extends at $t=0$, as a continuous function of $(t,z)$ by $$\begin{aligned}
d\widetilde{f}_0 = \frac{d}{dt}df_t\mid_{t=0} &= 2\sinh q\begin{pmatrix}
a & -{\overline{c}}\\
c & {\overline{a}}
\end{pmatrix}\begin{pmatrix}
0 & \beta'\\
{\overline{\beta'}} & 0
\end{pmatrix}\begin{pmatrix}
{\overline{a}} & {\overline{c}}\\
-c & a
\end{pmatrix}\\
&= 2 \sinh q\begin{pmatrix}
-ac\beta' - {\overline{ac\beta'}} & a^2\beta' - {\overline{c^2\beta'}}\\
{\overline{a^2\beta'}} - c^2\beta' & ac\beta' + {\overline{ac\beta'}}
\end{pmatrix}
\end{aligned}$$ where $\beta' = \frac{d}{dt}\beta_t\mid_{t=0}$. In $T_{{\mathrm{I}}_2}{\mathbb{H}}^3$, this gives $$d\widetilde{f}_0 = 4\sinh q{\mathop \mathrm{Re}}\left( \frac{1}{2}\beta'(a^2-c^2), \frac{-i}{2}\beta'(a^2+c^2), -ac\beta' \right).$$ Writing $g=\frac{a}{c}$ and $\omega = -4c^2\beta' \sinh q$ gives: $$\widetilde{f}_0(z) = \widetilde{f}_0(z_0) + {\mathop \mathrm{Re}}\int_{z_0}^{z} \left( \frac{1}{2}(1-g^2)\omega, \frac{i}{2}(1+g^2)\omega, g\omega \right).$$
As a useful example for Proposition \[propBlowUpDPW\], one can show the convergence of Delaunay surfaces in ${\mathbb{H}}^3$ towards a minimal catenoid.
\[propBlowupCatenoid\] Let $q>0$, $A_t = A_{r,s}$ as in with $r\leq s$ satisfying . Let $\Phi_t(z):=z^{A_t}$ and $f_t := { \mathrm{Sym}}_q\left({\mathop \mathrm{Uni}}\Phi_t\right)$. Then $$\widetilde{f} := \lim\limits_{t\to 0}\frac{1}{t}\left(f_t-{\mathrm{I}}_2\right) = \psi$$ where $\psi:{\mathbb{C}}^*\longrightarrow{\mathbb{R}}^3$ is the immersion of a catenoid centered at $(0,0,1)$, of neck radius $1$ and of axis oriented by the positive $x$-axis in the direction from $z=0$ to $z=\infty$.
Compute $$\Phi_0(z,{\lambda}) = \begin{pmatrix}
\cosh\left(\frac{\log z}{2}\right) & \sinh\left(\frac{\log z}{2}\right)\\
\sinh\left(\frac{\log z}{2}\right) & \cosh\left(\frac{\log z}{2}\right)
\end{pmatrix}$$ and $$\frac{\partial \xi_{t,12}^{(-1)}(z)}{\partial t}\mid_{t=0} = \frac{z^{-1}dz}{2\sinh q}$$ in order to apply Proposition \[propBlowUpDPW\] and get $$\widetilde{f}(z) = \widetilde{f}(1) + {\mathop \mathrm{Re}}\int_{1}^{z} \left( \frac{1}{2}(1-g^2)\omega, \frac{i}{2}(1+g^2)\omega, g\omega \right)$$ where $$g(z) = \frac{z+1}{z-1}\qquad\text{and}\qquad \omega(z) = \frac{-1}{2}\left(\frac{z-1}{z}\right)^2dz.$$ Note that $\Phi_t(1) = {\mathrm{I}}_2$ for all $t$ to show that $\widetilde{f}(1) = 0$ and get $$\widetilde{f}(z) = {\mathop \mathrm{Re}}\int_{1}^{z} \left( w^{-1}dw, \frac{-i}{2}(1+w^2)w^{-2}dw, \frac{1}{2}(1-w^2)w^{-2}dw \right).$$ Integrating gives for $(x,y)\in{\mathbb{R}}\times [0,2\pi]$: $$\widetilde{f}(e^{x+iy}) = \psi(x,y)$$ where $${
\begin{array}{ccccc}
\psi & : & {\mathbb{R}}\times [0,2\pi] & \longrightarrow & {\mathbb{R}}^3 \\
& & (x,y) & \longmapsto & \left( x,\cosh(x)\sin(y), 1 - \cosh(x)\cos(y) \right) \\
\end{array}
}$$ and hence the result.
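The last integration step can also be checked numerically. The following sketch (an illustration only, not part of the proof) integrates the three reduced $1$-forms above from $1$ to $z=e^{x+iy}$ along the path $t\mapsto e^{t(x+iy)}$ and compares the real parts with $\psi(x,y)$.

```python
import numpy as np
from scipy.integrate import quad

# Integrate  w^{-1}dw,  -(i/2)(1+w^2)w^{-2}dw,  (1/2)(1-w^2)w^{-2}dw
# from 1 to exp(x+iy) and take real parts; the result should be
# (x, cosh(x)*sin(y), 1 - cosh(x)*cos(y)) up to quadrature error.
def catenoid_point(x, y):
    def forms(t):
        w = np.exp(t * (x + 1j * y))
        dw = (x + 1j * y) * w                     # dw/dt along the chosen path
        return np.array([dw / w,
                         -0.5j * (1 + w**2) / w**2 * dw,
                         0.5 * (1 - w**2) / w**2 * dw])
    return np.array([quad(lambda t: forms(t)[k].real, 0.0, 1.0)[0] for k in range(3)])

x, y = 0.4, 1.1
print(catenoid_point(x, y))
print(np.array([x, np.cosh(x) * np.sin(y), 1.0 - np.cosh(x) * np.cos(y)]))
```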
The DPW data {#sectionDataMinoids}
------------
In this Section, we introduce the DPW data inducing the surface $M_t$ of Theorem \[theoremConstructionMinoids\]. The method is very similar to Section \[sectionNnoids\] and to [@minoids], which is why we omit the details.
#### The data.
Let $(g,\omega)$ be the Weierstrass data (for the parametrisation defined in ) of the minimal $n$-noid $M_0\subset {\mathbb{R}}^3$. If necessary, apply a Möbius transformation so that $g(\infty)\notin\left\{0,\infty \right\}$, and write $$g(z) = \frac{A(z)}{B(z)}, \qquad \omega(z) = \frac{B(z)^2dz}{\prod_{i=1}^{n}(z-p_{i,0})^2}.$$ Let $H>1$, $q>0$ so that $H=\coth q$ and $\rho>e^{q}$. Consider $3n$ parameters $a_i,b_i,p_i\in\Lambda{\mathbb{C}}_\rho$ ($i\in[1,n]$) assembled into a vector ${\mathbf{x}}$. Let $$A_{\mathbf{x}}(z,{\lambda}) = \sum_{i=1}^{n} a_i({\lambda})z^{i-1}, \qquad B_{\mathbf{x}}(z,{\lambda}) = \sum_{i=1}^{n}b_i({\lambda})z^{i-1}$$ and $$g_{\mathbf{x}}(z,{\lambda}) = \frac{A_{\mathbf{x}}(z,{\lambda})}{B_{\mathbf{x}}(z,{\lambda})}, \qquad \omega_{\mathbf{x}}(z,{\lambda}) = \frac{B_{\mathbf{x}}(z,{\lambda})^2dz}{\prod_{i=1}^{n}(z-p_i({\lambda}))^2}.$$ The vector ${\mathbf{x}}$ is chosen in a neighbourhood of a central value ${\mathbf{x}}_0\in{\mathbb{C}}^{3n}$ so that $A_{{\mathbf{x}}_0}=A$, $B_{{\mathbf{x}}_0} = B$ and $\omega_{{\mathbf{x}}_0} = \omega$. Let $p_{i,0}$ denote the central value of $p_i$. Introduce a real parameter $t$ in a neighbourhood of $0$ and write $$\beta_t({\lambda}) := \frac{t({\lambda}-e^q)({\lambda}-e^{-q})}{4\sinh q}.$$ The potential we use is $$\xi_{t,{\mathbf{x}}}(z,{\lambda}) = \begin{pmatrix}
0 & {\lambda}^{-1}\beta_t({\lambda})\omega_{\mathbf{x}}(z,{\lambda})\\
d_zg_{\mathbf{x}}(z,{\lambda}) & 0
\end{pmatrix}$$ defined for $(t,{\mathbf{x}})$ sufficiently close to $(0,{\mathbf{x}}_0)$ on $$\Omega = \left\{ z\in{\mathbb{C}}\mid \forall i\in\left[ 1,n \right], |z-p_{i,0}|>\epsilon \right\}\cup \left\{\infty \right\}$$ where $\epsilon>0$ is a fixed constant such that the disks $D(p_{i,0},2\epsilon)$ are disjoint. The initial condition is $$\phi({\lambda}) = \begin{pmatrix}
ig_{\mathbf{x}}(z_0,{\lambda}) & i\\
i & 0
\end{pmatrix}$$ taken at $z_0\in\Omega$ away from the poles and zeros of $g$ and $\omega$. Let $\Phi_{t,{\mathbf{x}}}$ be the holomorphic frame arising from the data $\left(\Omega,\xi_{t,{\mathbf{x}}},z_0,\phi\right)$ via the DPW method and $f_{t,{\mathbf{x}}}:={ \mathrm{Sym}}_q\left({\mathop \mathrm{Uni}}\Phi_{t,{\mathbf{x}}}\right)$.
Follow Section 6 of [@minoids] to show that the potential $\xi_{t,{\mathbf{x}}}$ is regular at the zeros of $B_{\mathbf{x}}$ and to solve the monodromy problem around the poles at $p_{i,0}$ for $i\in\left[1,n-1\right]$. The Implicit Function Theorem allows us to define ${\mathbf{x}}= {\mathbf{x}}(t)$ in a small neighbourhood $(-T,T)$ of $t=0$ satisfying ${\mathbf{x}}(0) = {\mathbf{x}}_0$ and such that the monodromy problem is solved for all $t$. We can thus drop from now on the index ${\mathbf{x}}$ in our data. As in [@minoids], $f_t$ descends to $\Omega$ and analytically extends to ${\mathbb{C}}\cup \{\infty\}\backslash\left\{ p_{1,0},\dots,p_{n,0} \right\}$. This defines a smooth family $(M_t)_{-T<t<T}$ of CMC $H$ surfaces of genus zero with $n$ ends in ${\mathbb{H}}^3$.
The convergence of $\frac{1}{t}\left( M_t-{\mathrm{I}}_2 \right)$ towards the minimal $n$-noid $M_0$ (point 2 of Theorem \[theoremConstructionMinoids\]) is a straightforward application of Proposition \[propBlowUpDPW\] together with $$\frac{\Phi_{0,11}(z)}{\Phi_{0,21}(z)} = g(z),\quad -4\left(\sinh q\right)\left(\Phi_{0,21}(z)\right)^2\frac{\partial\xi_{t,12}^{(-1)}(z)}{\partial t} = \omega(z).$$
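Both identities can be checked directly from the data above. Since $\beta_0=0$, the potential at $t=0$ reduces to $\xi_0 = \begin{pmatrix} 0 & 0\\ d_zg & 0 \end{pmatrix}$, and integrating it from the initial condition $\phi$ at $z_0$ gives $$\Phi_0(z) = \phi\begin{pmatrix} 1 & 0\\ g(z)-g(z_0) & 1 \end{pmatrix} = \begin{pmatrix} ig(z) & i\\ i & 0 \end{pmatrix},$$ so that $\Phi_{0,11}/\Phi_{0,21}=g$. For the second identity, note that $\beta_t(0)=\frac{t}{4\sinh q}$, so that, at least when the coefficients of $\omega_{\mathbf{x}}$ carry no negative powers of ${\lambda}$, $\xi_{t,12}^{(-1)}(z) = \frac{t}{4\sinh q}\,\omega_{{\mathbf{x}}(t)}(z,0)$; its $t$-derivative at $t=0$ equals $\omega(z)/(4\sinh q)$ (the $t$-dependence of ${\mathbf{x}}(t)$ does not contribute because of the overall factor $t$), and multiplying by $-4\left(\sinh q\right)\left(\Phi_{0,21}\right)^2 = -4\left(\sinh q\right)i^2$ indeed yields $\omega(z)$.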
#### Delaunay residue.
To show that $\xi_t$ is a perturbed Delaunay potential around each of its poles, let $i\in[1,n]$ and follow Section \[sectionDelaunayEndsNnoids\] with $$\psi_{i,t,{\lambda}}(z) = g_t^{-1}\left( z+g_t(p_{i,t}({\lambda})) \right).$$ Define $$\widetilde{\omega}_{i,t}(z,{\lambda}):=\psi_{i,t,{\lambda}}^*\omega_t(z)$$ and $$\alpha_{i,t}({\lambda}) := {\mathop\mathrm{Res}}_{z=0}(z\widetilde{\omega}_{i,t}(z,{\lambda})).$$ Use Proposition 5, Claim 1 of [@minoids] to show that for $T$ small enough, $\alpha_{i,t}$ is real and does not depend on ${\lambda}$. Set $$\left\{
\begin{array}{l}
rs = \frac{t\alpha_{i,t}}{4\sinh q},\\
r^2+s^2+2rs\cosh q = \frac{1}{4},\\
r<s
\end{array}
\right.$$ and $$G_{t}(z,{\lambda}) = \begin{pmatrix}
\frac{\sqrt{r{\lambda}+s}}{\sqrt{z}} & \frac{-1}{2\sqrt{r{\lambda}+s}\sqrt{z}}\\
0 & \frac{\sqrt{z}}{\sqrt{r{\lambda}+s}}
\end{pmatrix}.$$ Define the gauged potential $$\widetilde{\xi}_{i,t}(z,{\lambda}) := \left( (\psi_{i,t,{\lambda}}^*\xi_t)\cdot G_t \right)(z,{\lambda})$$ and compute its residue to show that it is a perturbed Delaunay potential as in Definition \[defPerturbedDelaunayPotential\].
#### Applying Theorem \[theoremPerturbedDelaunay\].
At $t=0$ and $z=1$, writing $\pi_i:=g(p_{i,0})$ to ease the notation, $$\widetilde{\Phi}_{i,0}(1,{\lambda}) = \begin{pmatrix}
i\left( 1+\pi_i \right) & i\\
i&0
\end{pmatrix}\begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \\
0 & \sqrt{2}
\end{pmatrix} = \frac{i}{\sqrt{2}}\begin{pmatrix}
1+\pi_i & 1-\pi_i\\
1 & -1
\end{pmatrix} =: M_i,$$ and thus, $\widetilde{\Phi}_{i,0}(z) = M_iz^{A_0}$. Recall and let $$H:= H_0 = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & -1 \\
1 & 1
\end{pmatrix}\in{\Lambda {\mathrm{SU}}(2)_\rho}$$ and $Q_i := {\mathop \mathrm{Uni}}\left( M_iH^{-1} \right)$. Using Lemma 1 in [@minoids], $Q_i$ can be made explicit and one can find a change of coordinates $h$ and a gauge $G$ such that $\widehat{\Phi}_{i,t} := (Q_iH)^{-1}\left(h^* \widetilde{\Phi}_{i,t}\right)G$ solves $d\widehat{\Phi}_{i,t} = \widehat{\Phi}_{i,t}\widehat{\xi}_{i,t}$ where $\widehat{\xi}_{i,t}$ is a perturbed Delaunay potential and $\widehat{\Phi}_{i,0}(z) = z^{A_0}$. One can thus apply Theorem \[theoremPerturbedDelaunay\] on $\widehat{\xi}_{i,t}$ and $\widehat{\Phi}_{i,t}$, which proves the existence of the family $\left(M_t\right)_{-T<t<T}$ of CMC $H$ surfaces of genus zero and $n$ Delaunay ends, each of weight (according to Equation ) $$w_{i,t} = 8\pi rs\sinh q = 2\pi t \alpha_{i,t},$$ which proves the first point of Theorem \[theoremConstructionMinoids\]. Let $\widehat{f}_{i,t}:={ \mathrm{Sym}}_q\left({\mathop \mathrm{Uni}}\widehat{\Phi}_{i,t}\right)$ and let $\widehat{f}_{i,t}^{\mathcal{D}}$ be the Delaunay immersion given by Theorem \[theoremPerturbedDelaunay\].
#### Limit axis.
In order to compute the limit axis of $f_t$ at the end around $p_{i,t}$, let $\widehat{\Delta}_{i,t}$ be the oriented axis of $\widehat{f}_{i,t}^{\mathcal{D}}$ at $z=0$. Then, using Theorem \[theoremPerturbedDelaunay\], $$\widehat{\Delta}_{i,0} = {\mathrm{geod}}\left( {\mathrm{I}}_2, -\sigma_1 \right).$$ And using $\widehat{f}_{i,t}(z) = (Q_iH)^{-1}\cdot \left( h^*f_t(z) \right)$, $$\widehat{\Delta}_{i,0} = (Q_iH)^{-1}\cdot\Delta_{i,0},$$ and thus, $$\Delta_{i,0} = (Q_iH) \cdot {\mathrm{geod}}({\mathrm{I}}_2,-\sigma_1).$$ Compute $H\cdot (-\sigma_1) = \sigma_3$ and note that $M_iH^{-1} = \Phi_0(p_{i,0})$ to get $$\Delta_{i,0} = {\mathrm{geod}}\left( {\mathrm{I}}_2, N_0(p_{i,0}) \right)$$ where $N_0$ is the normal map of the minimal immersion.
#### Type of the ends.
Suppose that $t$ is positive. Then the end at $p_{i,t}$ is unduloidal if, and only if its weight is positive; that is, $\alpha_{i,t}$ is positive. Use Proposition 5 of [@minoids] to show that if the normal map $N_0$ of $M_0$ points toward the inside, then $\alpha_{i,0}=\tau_i$ where $2\pi\tau_iN_0(p_{i,0})$ is the flux of $M_0$ around the end at $p_{i,0}$ ($\alpha_{i,0}=-\tau_i$ for the other orientation). Thus, if $M_0$ is Alexandrov-embedded, then the ends of $M_t$ are of unduloidal type for $t>0$ and of nodoidal type for $t<0$.
Alexandrov-embeddedness {#sectionAlexEmbeddedMnoids}
-----------------------
In order to show that $M_t$ is Alexandrov-embedded for $t>0$ small enough, one can follow the proof of Proposition 6 in [@minoids]. Note that this proposition does not use the fact that $M_t$ is CMC $H$, but relies on the fact that the ambient space is ${\mathbb{R}}^3$. This leads us to lift $f_t$ to ${\mathbb{R}}^3$ via the exponential map at the identity, hence defining an immersion $\widehat{f}_t: \Sigma_t\longrightarrow{\mathbb{R}}^3$ which is not CMC anymore, but is Alexandrov-embedded if, and only if $f_t$ is Alexandrov-embedded. Let $\psi:\Sigma_0\longrightarrow M_0\subset {\mathbb{R}}^3$ be the limit minimal immersion. In order to adapt the proof of [@minoids] and show that $M_t$ is Alexandrov-embedded, one will need the following lemma.
Let $\widetilde{f}_t := \frac{1}{t}\widehat{f}_t$. Then $\widetilde{f}_t$ converges to $\psi$ on compact subsets of $\Sigma_0$.
For all $z$, $$\exp_{{\mathrm{I}}_2}(\widehat{f}_0(z)) = f_0(z) = {\mathrm{I}}_2,$$ so $\widehat{f}_0(z) = 0$. Thus $$\lim\limits_{t\to 0} \widetilde{f}_t(z) = \frac{d}{dt}\widehat{f}_t(z)\mid_{t=0}.$$ Therefore, using Proposition \[propBlowUpDPW\], $$\begin{aligned}
\psi(z) &= \lim\limits_{t\to 0} \frac{1}{t}\left( f_t(z) - {\mathrm{I}}_2 \right)\\
&= \lim\limits_{t\to 0} \frac{1}{t}\left( \exp_{{\mathrm{I}}_2}(\widehat{f}_t(z)) - \exp_{{\mathrm{I}}_2}(\widehat{f}_0(z)) \right)\\
&= \frac{d}{dt}\exp_{{\mathrm{I}}_2}(\widehat{f}_t(z))\mid_{t=0}\\
&= d\exp_{{\mathrm{I}}_2}(0)\cdot \frac{d}{dt}\widehat{f}_t(z)\mid_{t=0}\\
&= \lim\limits_{t\to 0} \widetilde{f}_t(z).
\end{aligned}$$
CMC surfaces of revolution in ${\mathbb{H}}^3$ {#appendixJleliLopez}
==============================================
Following Sections 2.2 and 2.3 of [@jlelilopez],
\[propJleliLopez\] Let $X:{\mathbb{R}}\times\left[0,2\pi\right]\longrightarrow{\mathbb{H}}^3$ be a conformal immersion of revolution with metric $g^2(s)\left(ds^2+d\theta^2\right)$. If $X$ is CMC $H>1$, then $g$ is periodic and denoting by $S$ its period, $$\sqrt{H^2-1} \int_{0}^{S} g(s)ds = \pi \quad \text{and} \quad \int_{0}^{S}\frac{ds}{g(s)} = \frac{2\pi^2}{|w|}$$ where $w$ is the weight of $X$, as defined in [@loopgroups].
According to Equation (11) in [@jlelilopez], writing $\tau = \frac{\sqrt{|w|}}{\sqrt{2\pi}}$ and $g = \tau e^\sigma$, $$\label{eqDiffsigma}
\left(\sigma'\right)^2 = 1-\tau^2\left( \left( He^{\sigma} + \iota e^{-\sigma} \right)^2 - e^{2\sigma} \right)$$ where $\iota\in\left\{\pm 1\right\}$ is the sign of $w$. The solutions $\sigma$ are periodic with period $S>0$. Apply an isometry and a change of the variable $s\in{\mathbb{R}}$ so that $$\sigma'(0)=0 \quad \text{and}\quad \sigma(0) = \min\limits_{s\in{\mathbb{R}}} \sigma(s).$$ By symmetry of Equation , one can thus define $$a := e^{2\sigma(0)}= \min\limits_{s\in{\mathbb{R}}} e^{2\sigma(s)} \quad \text{and}\quad b := e^{2\sigma\left(\frac{S}{2}\right)} = \max\limits_{s\in{\mathbb{R}}} e^{2\sigma(s)}.$$ With these notations, Equation can be written in a factorised form as $$\label{eqDiffsigmafactorised}
\left( \sigma' \right)^2 = \tau^2(H^2-1)e^{-2\sigma}\left( b - e^{2\sigma} \right)\left(e^{2\sigma} - a\right)$$ with $$\label{eqaPeriode}
a = \frac{1-2\iota\tau^2 H - \sqrt{1-4\tau^2(\iota H - \tau^2)}}{2\tau^2 (H^2-1)}$$ and $$b = \frac{1-2\iota\tau^2 H + \sqrt{1-4\tau^2(\iota H - \tau^2)}}{2\tau^2 (H^2-1)}.$$ In order to compute the first integral, change variables $v=e^\sigma$, $y=\sqrt{b-v^2}$ and $x= \frac{y}{\sqrt{b - a}}$ and use Equation to get $$\begin{aligned}
\sqrt{H^2-1} \int_{0}^{S} \tau e^{\sigma(s)}ds &= 2 \sqrt{H^2-1} \int_{\sqrt{a}}^{\sqrt{b}} \frac{\tau v dv}{\tau\sqrt{H^2-1}\sqrt{b-v^2}\sqrt{v^2-a}} \\
&= -2\int_{\sqrt{b-a}}^{0}\frac{dy}{\sqrt{b-a-y^2}} \\
&= 2\int_{0}^{1}\frac{dx}{\sqrt{1-x^2}} \\
&= \pi.
\end{aligned}$$ In the same manner with the changes of variables $v=e^{-\sigma}$, $y=\sqrt{a^{-1}-v^2}$ and $x = \frac{y}{\sqrt{a^{-1}-b^{-1}}}$, $$\begin{aligned}
\int_{0}^{S}\frac{ds}{\tau e^{\sigma(s)}} &= \frac{-2}{\tau^2\sqrt{H^2-1}} \int_{a^{-1/2}}^{b^{-1/2}}\frac{ dv}{v\sqrt{b-v^{-2}}\sqrt{v^{-2}-a}}\\
&= \frac{2}{\tau^2\sqrt{H^2-1}}\int_{0}^{\sqrt{a^{-1}-b^{-1}}}\frac{dy}{\sqrt{b-a-aby^2}}\\
&= \frac{2}{\tau^2\sqrt{H^2-1}\sqrt{ab}}\int_{0}^{1}\frac{dx}{\sqrt{1-x^2}}\\
&= \frac{\pi}{\tau^2}
\end{aligned}$$ because $ab=\frac{1}{H^2-1}$.
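As a numerical illustration (with arbitrarily chosen sample values of $H$ and $\tau$, and not part of the proof), one can check the identity $ab=\frac{1}{H^2-1}$ from the explicit expressions for $a$ and $b$ and recover the value of the second period integral; the substitution $e^{2\sigma} = a+(b-a)\sin^2\varphi$ removes the endpoint singularities of $ds = d\sigma/\sigma'$.

```python
import numpy as np
from scipy.integrate import quad

iota, H, tau = 1.0, 2.0, 0.3                     # iota = sign of the weight w
disc = np.sqrt(1.0 - 4.0 * tau**2 * (iota * H - tau**2))
a = (1.0 - 2.0 * iota * tau**2 * H - disc) / (2.0 * tau**2 * (H**2 - 1.0))
b = (1.0 - 2.0 * iota * tau**2 * H + disc) / (2.0 * tau**2 * (H**2 - 1.0))
assert abs(a * b - 1.0 / (H**2 - 1.0)) < 1e-12   # ab = 1/(H^2 - 1)

# Second period integral, written in the variable phi after the substitution
# e^{2 sigma} = a + (b - a) sin^2(phi), over half a period (hence the factor 2).
I2 = quad(lambda p: 2.0 / (tau**2 * np.sqrt(H**2 - 1.0)
                           * (a + (b - a) * np.sin(p)**2)), 0.0, np.pi / 2.0)[0]
w = 2.0 * np.pi * tau**2 * iota                  # since tau^2 = |w| / (2*pi)
print(I2, 2.0 * np.pi**2 / abs(w))               # both numbers equal pi / tau^2
```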
\[lemmaTubularRadius\] Let ${\mathcal{D}}_t$ be a Delaunay surface in ${\mathbb{H}}^3$ of constant mean curvature $H>1$ and weight $2\pi t>0$ with Gauss map $\eta_t$. Let $r_t$ be the maximal value of $R$ such that the map $${
\begin{array}{ccccc}
T & : & \left(-R,R\right)\times {\mathcal{D}}_t & \longrightarrow & {\mathop \mathrm{Tub}}_{R}{\mathcal{D}}_t\subset{\mathbb{H}}^3 \\
& & (r,p) & \longmapsto & {\mathrm{geod}}(p,\eta_t(p))(r) \\
\end{array}
}$$ is a diffeomorphism. Then $r_t\sim t$ as $t$ tends to $0$.
The quantity $r_t$ is the inverse of the maximal geodesic curvature of the surface. This maximal curvature is attained for small values of $t$ on the points of minimal distance between the profile curve and the axis. Checking the direction of the mean curvature vector at this point, the maximal curvature curve is not the profile curve but the parallel curve. Hence $r_t$ is the minimal hyperbolic distance between the profile curve and the axis. A study of the profile curve’s equation as in Proposition \[propJleliLopez\] shows that $$r_t = \sinh^{-1}\left( \tau \exp\left( \sigma_{\min} \right) \right) = \sinh^{-1}\left( \tau \sqrt{a(\tau)} \right).$$ But using Equation , as $\tau$ tends to $0$, $a\sim \tau^2 = |t|$, which gives the expected result.
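The asymptotics $r_t\sim t$ can be illustrated numerically (a sketch only): with $\tau^2=|t|$ and $\iota=+1$, the explicit formula for $a$ gives $a\sim\tau^2$, hence $r_t=\sinh^{-1}(\tau\sqrt{a})\sim\tau^2=t$.

```python
import numpy as np

# r_t = arcsinh(tau * sqrt(a(tau))) with tau^2 = t; the ratio r_t / t tends to 1.
H = 2.0
for t in (1e-2, 1e-3, 1e-4):
    tau = np.sqrt(t)
    disc = np.sqrt(1.0 - 4.0 * tau**2 * (H - tau**2))
    a = (1.0 - 2.0 * tau**2 * H - disc) / (2.0 * tau**2 * (H**2 - 1.0))
    print(t, np.arcsinh(tau * np.sqrt(a)) / t)
```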
\[lemmaCompNormalesDelaunay\] Let ${\mathcal{D}}_t$ be a Delaunay surface in ${\mathbb{H}}^3$ of weight $2\pi t>0$ with Gauss map $\eta_t$ and maximal tubular radius $r_t$. There exist $T>0$ and $\alpha<1$ such that for all $0<t<T$ and $p,q\in{\mathcal{D}}_t$ satisfying ${d_{{\mathbb{H}}^3}\left(p,q\right)}<\alpha r_t$, $${\left\Vert\Gamma_p^q\eta_t(p) - \eta_t(q)\right\Vert}<1.$$
Let $t>0$. Then for all $p,q\in{\mathcal{D}}_t$, $${\left\Vert\Gamma_p^q\eta_t(p) - \eta_t(q)\right\Vert} \leq {\underset{s\in\gamma_t}{\sup}}{\left\Vert II_t(s)\right\Vert}\times \ell(\gamma_t)$$ where $II_t$ is the second fundamental form of ${\mathcal{D}}_t$, $\gamma_t\subset {\mathcal{D}}_t$ is any path joining $p$ to $q$ and $\ell(\gamma_t)$ is the hyperbolic length of $\gamma_t$. Using the fact that the maximal geodesic curvature $\kappa_t$ of ${\mathcal{D}}_t$ satisfies $\kappa_t\sim \coth r_t$ as $t$ tends to zero, there exists a uniform constant $C>0$ such that $${\underset{s\in{\mathcal{D}}_t}{\sup}}{\left\Vert II_t(s)\right\Vert}<C\coth r_t.$$ Let $0<\alpha < (1+C)^{-1}<1$ and suppose that ${d_{{\mathbb{H}}^3}\left(p,q\right)}<\alpha r_t$. Let $\sigma_t: [0,1]\rightarrow{\mathbb{H}}^3$ be the geodesic curve of ${\mathbb{H}}^3$ joining $p$ to $q$. Then $\sigma_t([0,1])\subset {\mathop \mathrm{Tub}}_{\alpha r_t}$ and thus the projection $\pi_t: \sigma_t([0,1])\rightarrow {\mathcal{D}}_t$ is well-defined. Let $\gamma_t:=\pi_t\circ \sigma_t$. Then $$\begin{aligned}
{\left\Vert\Gamma_p^q\eta_t(p) - \eta_t(q)\right\Vert} &\leq C\coth r_t \times {\underset{s\in \sigma_t}{\sup}}{\left\Vert d\pi_t(s)\right\Vert}\times {d_{{\mathbb{H}}^3}\left(p,q\right)}\\
&\leq C\coth r_t \times {\underset{s\in {\mathop \mathrm{Tub}}_{\alpha r_t}}{\sup}}{\left\Vert d\pi_t(s)\right\Vert}\times {d_{{\mathbb{H}}^3}\left(p,q\right)}\\
&\leq C\coth r_t \times \frac{\tanh r_t}{\tanh r_t - \tanh(\alpha r_t)}\times \alpha r_t\\
&\leq \frac{C\alpha r_t}{\tanh r_t - \tanh(\alpha r_t)}\sim \frac{C\alpha}{1-\alpha}<1
\end{aligned}$$ as $t$ tends to zero.
Remarks on the polar decomposition {#appendixDecompositionPolaire}
==================================
Let ${{\mathrm{SL}}\left(2,{\mathbb{C}}\right)^{++}}$ be the subset of ${\mathrm{SL}}(2,{\mathbb{C}})$ whose elements are hermitian positive definite. Let $${
\begin{array}{ccccc}
{\mathrm{Pol}}& : & {\mathrm{SL}}(2,{\mathbb{C}}) & \longrightarrow & {{\mathrm{SL}}\left(2,{\mathbb{C}}\right)^{++}}\times {\mathrm{SU}}(2) \\
& & A & \longmapsto & \left({\mathrm{Pol}}_1(A),{\mathrm{Pol}}_2(A)\right) \\
\end{array}
}$$ be the polar decomposition on ${\mathrm{SL}}(2,{\mathbb{C}})$. This map is differentiable and satisfies the following proposition.
\[propDiffPolar2\] For all $A\in{\mathrm{SL}}(2,{\mathbb{C}})$, ${\left\Vert d{\mathrm{Pol}}_2\left(A\right)\right\Vert} \leq |A|$.
We first write the differential of ${\mathrm{Pol}}_2$ at the identity in an explicit form. Writing $${
\begin{array}{ccccc}
d{\mathrm{Pol}}_2({\mathrm{I}}_2) & : & {\mathfrak{sl}}(2,{\mathbb{C}}) & \longrightarrow & {\mathfrak{su}}(2) \\
& & M & \longmapsto & {\mathrm{pol}}_2(M) \\
\end{array}
}$$ gives $${\mathrm{pol}}_2\begin{pmatrix}
a & b \\
c & -a
\end{pmatrix} = \begin{pmatrix}
i{\mathop \mathrm{Im}}a & \frac{b-{\overline{c}}}{2} \\
\frac{c-\bar{b}}{2} & -i{\mathop \mathrm{Im}}a
\end{pmatrix}.$$ Note that for all $M\in{\mathfrak{sl}}(2,{\mathbb{C}})$, $$\begin{aligned}
\left|{\mathrm{pol}}_2(M)\right|^2 &= 2\left({\mathop \mathrm{Im}}a\right)^2 + \frac{1}{4}\left( \left| b-{\overline{c}} \right|^2 + \left|c-{\overline{b}}\right|^2 \right)\\
&\leq \left|M\right|^2 - \frac{1}{2}\left|b+{\overline{c}}\right|^2\\
&\leq \left|M\right|^2.
\end{aligned}$$
We then compute the differential of ${\mathrm{Pol}}_2$ at any point of ${\mathrm{SL}}(2,{\mathbb{C}})$. Let $(S_0,Q_0)\in{{\mathrm{SL}}\left(2,{\mathbb{C}}\right)^{++}}\times {\mathrm{SU}}(2)$. Consider the differentiable maps $${
\begin{array}{ccccc}
\phi & : & {\mathrm{SL}}(2,{\mathbb{C}}) & \longrightarrow & {\mathrm{SL}}(2,{\mathbb{C}}) \\
& & A & \longmapsto & S_0AQ_0 \\
\end{array}
} \quad \text{and}\quad {
\begin{array}{ccccc}
\psi & : & {\mathrm{SU}}(2) & \longrightarrow & {\mathrm{SU}}(2) \\
& & Q & \longmapsto & QQ_0. \\
\end{array}
}$$ Then $\psi\circ {\mathrm{Pol}}_2 \circ \phi^{-1} = {\mathrm{Pol}}_2$ and for all $M\in T_{S_0Q_0}{\mathrm{SL}}(2,{\mathbb{C}})$, $$d{\mathrm{Pol}}_2\left(S_0Q_0\right)\cdot M = {\mathrm{pol}}_2\left( S_0^{-1}MQ_0^{-1} \right)Q_0.$$
Finally, let $A\in{\mathrm{SL}}(2,{\mathbb{C}})$ with polar decomposition ${\mathrm{Pol}}( A) = (S,Q)$. Then for all $M\in T_A{\mathrm{SL}}(2,{\mathbb{C}})$, $$\begin{aligned}
\left|d{\mathrm{Pol}}_2(A)\cdot M\right| = \left|{\mathrm{pol}}_2\left( S^{-1}MQ^{-1} \right)Q\right|\leq \left|S\right|\times \left|M\right|
\end{aligned}$$ and thus using $$S = \exp\left( \frac{1}{2}\log\left(AA^*\right) \right)$$ gives $${\left\Vert d{\mathrm{Pol}}_2(A)\right\Vert} \leq \left|S\right|\leq \left|A\right|.$$
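The estimate can be illustrated numerically (a sketch under the assumption that $|\cdot|$ denotes the Frobenius norm): for a random $A\in{\mathrm{SL}}(2,{\mathbb{C}})$, one can estimate the operator norm of $d{\mathrm{Pol}}_2(A)$ by finite differences along directions tangent to ${\mathrm{SL}}(2,{\mathbb{C}})$ at $A$ and compare it with $|A|$.

```python
import numpy as np
from scipy.linalg import polar

# scipy.linalg.polar(A) returns (U, P) with A = U P, so U is the unitary part.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A /= np.sqrt(np.linalg.det(A))                   # normalise to det(A) = 1

def pol2(mat):
    return polar(mat)[0]

eps, best = 1e-6, 0.0
for _ in range(200):
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    M[1, 1] = -M[0, 0]                           # traceless, so A @ M is tangent at A
    D = A @ M
    D /= np.linalg.norm(D)
    best = max(best, np.linalg.norm((pol2(A + eps * D) - pol2(A)) / eps))

# According to the proposition, the estimated norm should not exceed |A|.
print(best, np.linalg.norm(A))
```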
\[corQ2v-Q1v\] Let $0<q<\log \rho$ and $F_1,F_2\in{\Lambda {\mathrm{SU}}(2)_\rho}$ with unitary parts $Q_i = {\mathrm{Pol}}_2(F_i(e^{-q}))$. Let $\epsilon>0$ be such that $${\left\Vert F_2^{-1}F_1-{\mathrm{I}}_2\right\Vert}_\rho < \epsilon.$$ If $\epsilon$ is small enough, then there exists a uniform $C>0$ such that for all $v\in T_{{\mathrm{I}}_2}{\mathbb{H}}^3$, $${\left\Vert Q_2\cdot v - Q_1\cdot v\right\Vert}_{T_{{\mathrm{I}}_2}{\mathbb{H}}^3}\leq C{\left\Vert F_2\right\Vert}_{\rho}^2\epsilon.$$
Let $v\in T_{{\mathrm{I}}_2}{\mathbb{H}}^3$ and consider the following differentiable map $${
\begin{array}{ccccc}
\phi & : & {\mathrm{SU}}(2) & \longrightarrow & T_{{\mathrm{I}}_2}{\mathbb{H}}^3 \\
& & Q & \longmapsto & Q\cdot v. \\
\end{array}
}$$ Then $$\begin{aligned}
{\left\VertQ_2\cdot v - Q_1\cdot v\right\Vert}_{T_{{\mathrm{I}}_2}{\mathbb{H}}^3} &= {\left\Vert\phi(Q_2) - \phi(Q_1)\right\Vert}_{T_{{\mathrm{I}}_2}{\mathbb{H}}^3} \\
&\leq {\underset{t\in\left[0,1\right]}{\sup}}{\left\Vert d\phi(\gamma(t))\right\Vert}\times \int_{0}^{1}\left| \dot{\gamma}(t) \right|dt
\end{aligned}$$ where $\gamma : \left[0,1\right]\longrightarrow{\mathrm{SU}}(2)$ is a path joining $Q_2$ to $Q_1$. Recalling that ${\mathrm{SU}}(2)$ is compact gives $$\label{eqQ2v-q1v}
{\left\Vert Q_2\cdot v - Q_1\cdot v\right\Vert}_{T_{{\mathrm{I}}_2}{\mathbb{H}}^3} \leq C\left|Q_2-Q_1\right|$$ where $C>0$ is a uniform constant. But writing $A_i=F_i(e^{-q})\in{\mathrm{SL}}(2,{\mathbb{C}})$, $$\begin{aligned}
\left|Q_2-Q_1\right| &= \left|{\mathrm{Pol}}_2(A_2) - {\mathrm{Pol}}_2(A_1)\right|\\
&\leq {\underset{t\in\left[0,1\right]}{\sup}}{\left\Vert d{\mathrm{Pol}}_2(\gamma(t))\right\Vert}\times \int_{0}^{1}\left| \dot{\gamma}(t) \right|dt
\end{aligned}$$ where $\gamma : \left[0,1\right]\longrightarrow {\mathrm{SL}}(2,{\mathbb{C}})$ is a path joining $A_2$ to $A_1$. Take for example $$\gamma(t) := A_2\times \exp\left( t\log \left( A_2^{-1}A_1 \right) \right).$$ Suppose now that $\epsilon$ is small enough for $\log$ to be a diffeomorphism from $D({\mathrm{I}}_2,\epsilon)\cap {\mathrm{SL}}(2,{\mathbb{C}})$ to $D(0,\epsilon')\cap {\mathfrak{sl}}(2,{\mathbb{C}})$. Then $${\left\VertA_2^{-1}A_1-{\mathrm{I}}_2\right\Vert}\leq {\left\VertF_2^{-1}F_1-{\mathrm{I}}_2\right\Vert}_{\rho}<\epsilon$$ implies $$\left|\gamma(t)\right|\leq \widetilde{C}\left|A_2\right|\quad \text{and}\quad \left|\dot{\gamma}(t)\right|\leq \widetilde{C}\widehat{C}\left|A_2\right|\epsilon$$ where $\widetilde{C},\widehat{C}>0$ are uniform constants. Using Proposition \[propDiffPolar2\] gives $$\left|Q_2-Q_1\right|\leq \widehat{C}\widetilde{C}^2\left|A_2\right|^2\epsilon$$ and inserting this inequality into gives $${\left\VertQ_2\cdot v - Q_1\cdot v\right\Vert}_{T_{{\mathrm{I}}_2}{\mathbb{H}}^3} \leq C\widehat{C}\widetilde{C}^2\left|A_2\right|^2\epsilon \leq C\widehat{C}\widetilde{C}^2{\left\VertF_2\right\Vert}_{\rho}^2\epsilon.$$
Charles-Eugène Delaunay, *Sur la surface de révolution dont la courbure moyenne est constante*, Journal de Mathématiques Pures et Appliquées **6** (1841), 309–314.
Josef Dorfmeister, Franz Pedit, and Hongyong Wu, *Weierstrass type representation of harmonic maps into symmetric spaces*, Communications in Analysis and Geometry **6** (1998), no. 4, 633–668.
Josef Dorfmeister and Hongyou Wu, *Construction of constant mean curvature n-noids from holomorphic potentials*, Mathematische Zeitschrift **258** (2007), no. 4, 773.
Josef F. Dorfmeister, Jun-ichi Inoguchi, and Shimpei Kobayashi, *Constant mean curvature surfaces in hyperbolic 3-space via loop groups*, Journal für die reine und angewandte Mathematik (Crelles Journal) **2014** (2012), no. 686, 1–36.
Otto Forster, *Lectures on Riemann surfaces*, vol. 81, Springer-Verlag, New York.
Shoichi Fujimori, Shimpei Kobayashi, and Wayne Rossman, *Loop group methods for constant mean curvature surfaces*, arXiv preprint math/0602570 (2006).
Sebastian Heller, *Higher genus minimal surfaces in $S^3$ and stable bundles*, Journal für die reine und angewandte Mathematik (Crelles Journal) **2013** (2012), no. 685, 105–122.
, *Lawson’s genus two surface and meromorphic connections*, Mathematische Zeitschrift **274** (2013), no. 3, 745–760 (en).
Mohamed Jleli and Rafael López, *Bifurcating Nodoids in Hyperbolic Space*, Advanced Nonlinear Studies **15** (2016), no. 4, 849–865.
Nicolaos Kapouleas, *Complete constant mean curvature surfaces in euclidean three-space*, Annals of Mathematics **131** (1990), no. 2, 239–330.
M. Kilian, W. Rossman, and N. Schmitt, *Delaunay ends of constant mean curvature surfaces*, Compositio Mathematica **144** (2008), no. 1, 186–220.
Martin Kilian, *Constant mean curvature cylinders*.
Martin Kilian, Ian McIntosh, and Nicholas Schmitt, *New constant mean curvature surfaces*, Experiment. Math. **9** (2000), no. 4, 595–611.
Shimpei Kobayashi, *Bubbletons in 3-dimensional space forms*, Balkan Journal of Geometry and Its Applications **9**, no. 1, 44–68.
Nicholas J. Korevaar, Rob Kusner, William H. Meeks, and Bruce Solomon, *Constant Mean Curvature Surfaces in Hyperbolic Space*, American Journal of Mathematics **114** (1992), no. 1, 1.
Nicholas J. Korevaar, Rob Kusner, and Bruce Solomon, *The structure of complete embedded surfaces with constant mean curvature*, J. Differential Geom. **30** (1989), no. 2, 465–503.
Rafe Mazzeo and Frank Pacard, *Constant mean curvature surfaces with Delaunay ends*, Comm. Analysis and Geometry **9** (2001), no. 1, 169–237.
Thomas Raujouan, *On Delaunay Ends in the DPW Method*, arXiv:1710.00768 (2017).
N. Schmitt, *Constant Mean Curvature Trinoids*, ArXiv Mathematics e-prints (2004).
N. Schmitt, M. Kilian, S.-P. Kobayashi, and W. Rossman, *Unitarization of monodromy representations and constant mean curvature trinoids in 3-dimensional space forms*, Journal of the London Mathematical Society **75** (2007), no. 3, 563–581.
Richard M. Schoen, *Uniqueness, symmetry, and embeddedness of minimal surfaces*, J. Differential Geom. **18** (1983), no. 4, 791–809.
Michael E. Taylor, *Introduction to differential equations*, Pure and applied undergraduate texts, no. v. 14, American Mathematical Society, Providence, R.I, 2011.
Gerald Teschl, *Ordinary differential equations and dynamical systems*, vol. 140, American Math. Soc.
Eberhard Teufel, *A generalization of the isoperimetric inequality in the hyperbolic plane*, Archiv der Mathematik **57** (1991), no. 5, 508–513 (en).
Martin Traizet, *Opening nodes on horosphere packings*, Trans. Amer. Math. Soc. **368** (2016), no. 8, 5701–5725.
, *Construction of constant mean curvature n-noids using the DPW method*, To appear in J. Reine Angew. Math. (2017).
, *Gluing Delaunay ends to minimal n-noids using the DPW method*, arXiv:1710.09261v3 (2017).
, *Opening nodes and the DPW method*, arXiv:1808.01366 (2018).
E. B. Vinberg, *Geometry II: Spaces of Constant Curvature*, Springer-Verlag, Berlin, Heidelberg, 1993.
Thomas Raujouan\
Institut Denis Poisson\
Université de Tours, 37200 Tours, France\
`raujouan@univ-tours.fr`
---
abstract: 'We study a topological sigma-model ($A$-model) in the case when the target space is an ($m_0|m_1$)-dimensional supermanifold. We prove under certain conditions that such a model is equivalent to an $A$-model having an ($m_0-m_1$)-dimensional manifold as a target space. We use this result to prove that in the case when the target space of $A$-model is a complete intersection in a toric manifold, this $A$-model is equivalent to an $A$-model having a toric supermanifold as a target space.'
author:
- |
Albert Schwarz[^1]\
Department of Mathematics, University of California,\
Davis, CA 95616\
ASSCHWARZ@UCDAVIS.EDU
title: 'Sigma-models having supermanifolds as target spaces.'
---
Our goal is to study a two-dimensional topological $\sigma$-model ($A$-model). Sigma-models having supermanifolds as target spaces were considered in an interesting paper \[5\]. However, the approach of \[5\] leads to the conclusion that in the case when the target space of the $A$-model is a supermanifold the contribution of rational curves to correlation functions vanishes (i.e. these functions are essentially trivial). In our approach, an $A$-model having an $(m_0|m_1)$-dimensional supermanifold as a target space is not trivial, but it is equivalent to an $A$-model with an $(m_0-m_1)$-dimensional target space. We hope that this equivalence can be used to better understand mirror symmetry, because it permits us to replace the most interesting target spaces with supermanifolds having non-trivial Killing vectors and to use $T$-duality.
We start with a definition of $A$-model given in \[1\]. This definition can be applied to the case when the target space is a complex Kähler supermanifold $M$. Repeating the consideration of \[1\] we see that the correlation functions can be expressed in terms of rational curves in $M$, i.e. holomorphic maps of $CP^1$ into $M$. (We restrict ourselves to the genus $0$ case and assume that the situation is generic; these restrictions will be lifted in a forthcoming paper \[8\]).
Let us consider for simplicity the case when ($m_0|m_1$)-dimensional complex supermanifold $M$ corresponds to an $m_1$-dimensional holomorphic vector bundle $\alpha$ over an $m_0$-dimensional complex manifold $M_0$ (i.e. $M$ can be obtained from the total space of the bundle $\alpha$ by means of reversion of parity of the fibres.) The natural map of $M$ onto $M_0$ will be denoted by $\pi$. To construct the correlation functions of the $A$-model with the target space $M$ we should fix real submanifolds $N_1,...,N_k$ of $M_0$ and the points $x_1,...,x_k\in CP^1$. For every two-dimensional homology class $\lambda \in H_2(M,{\bf Z})$ we consider the space $D_{\lambda}$ of holomorphic maps $\varphi$ of $CP^1$ into $M$ that transform $CP^1$ into a cycle $\varphi (CP^1)\in \lambda$ and satisfy the conditions $\pi (\varphi (x_1))\in N_1,...,\pi(\varphi
(x_k))\in N_k$. (We identify the homology of $M$ with the homology of its body $M_0$; the condition $\varphi (CP^1)\in \lambda$ means that the image of the fundamental homology class of $CP^1$ by the homomorphism $(\pi \varphi )_*:H_2(CP^1,{\bf Z})\rightarrow H_2(M_0,{\bf Z})$ is equal to $\lambda$.) The space $D_{\lambda}$ contributes to the correlation function under consideration only if $$2m_0-(q_1+...+q_k)+<c_1(T),\lambda >=2m_1(<c_1(\alpha ),\lambda >+1)$$ where $c_1(T)$ is the first Chern class of the tangent bundle to $M,
c_1(\alpha)$ is the first Chern class of the bundle $\alpha$ and $q_i=2m_0-\dim N_i$ denotes the codimension of $N_i$. If $\varphi \in
D_{\lambda}$ then $\pi (\varphi) \in D_{\lambda}^0$ where $
D_{\lambda}^0$ is the space of holomorphic maps $\phi :CP^1\rightarrow M_0$ obeying $\phi (CP^1) \in \lambda$ and $\phi (x_1)\in N_1,...,\phi
(x_k)\in N_k$.
Let us consider a holomorphic vector bundle $\xi _{\lambda}$ over $D^0_{\lambda}$ having the vector space of holomorphic sections of the pullback of $\alpha$ by the map $\phi \in D^0_{\lambda}$ as a fiber over $\phi$. It is easy to check that $D_{\lambda}$ can be obtained from the total space of $\xi _{\lambda}$ by means of parity reversion in the fibers. It follows from the index theorem that the virtual dimension of $D^0_{\lambda}$ is equal to $d_1=2m_0-\sum q_i+2<c_1(T),\lambda >$; our assumption that the situation is generic means that $d_1=\dim
D^0_{\lambda}$. The Riemann-Roch theorem together with equation (1) permits us to say that the dimension of the fiber of $\xi _{\lambda}$ is equal to $d_2=2m_1(<c_1(\alpha),\lambda >+1)$ and coincides with $d_1$. We see that the even dimension $d_1$ of $ D_{\lambda}$ coincides with its odd dimension $d_2$. The contribution of $D_{\lambda}$ to the correlation function can be expressed in terms of the Euler number of the vector bundle $\xi _{\lambda}$ (see \[2\] or \[3\] for explanations of similar statements in slightly different situations).
Let us consider now a holomorphic section $F$ of $\alpha$. We will assume that the zero locus of $F$ is a manifold and denote this manifold by $X$. The Kähler metric on $M$ induces a Kähler metric on $X$; therefore we can consider an $A$-model with the target space $X$ . We’ll check that the correlation functions of this $A$-model coincide with the correlation functions of the $A$-model with target space $M$. More precisely, the correlation function of $A$-model with target space $M$ constructed by means of submanifolds $N_1,...,N_k \subset M_0$ coincides with the correlation function of $A$-model with target space $X$ constructed by means of submanifolds $N_1^{\prime}=N_1\cap
X,...,N_k^{\prime}=N_k\cap X$ of the manifold $X$. (Without loss of generality we can assume that $N_i^{\prime}=N_i\cap X$ is a submanifold). To prove this statement we notice that using the section $F$ of $\alpha$ we can construct a section $f_{\lambda}$ of $\xi _{\lambda}$ assigning to every map $\phi \in D_{\lambda}^0$ an element $f_{\lambda}(\phi)=F\cdot
\phi$ of the fiber of $\xi_{\lambda}$ over $\phi \in D_{\lambda}^0$. It is easy to check that zeros of the section $f_{\lambda}$ can be identified with holomorphic maps $\phi \in D_{\lambda}^0$ satisfying $\phi (CP^1)\subset X$. The number of such maps enters the expression for correlation functions of the $A$-model with target space $X$. On the other hand, this number coincides with the Euler number of $\xi
_{\lambda}$ entering corresponding expression in the case of target space $M$. This remark proves the coincidence of correlation functions for the target space $M$ with correlation functions for the target space $X$. Let us stress, however, that not all correlation functions for the target space $X$ can be obtained by means of above construction. Using the language of cohomology one can say that a correlation function of an $A$-model with the target space $X$ corresponds to a set of cohomology classes $\nu _1,...,\nu _k\in H(X,{\bf C})$. Such a correlation function is equal to a correlation function of an $A$-model with the target space $M$ if there exist cohomology classes $\tilde {\nu }_1,...,\tilde {\nu} _k\in H(M_0,{\bf C})$ obeying $\nu_1=i^*\tilde {\nu}_1,...,\nu_k=i^*\tilde {\nu}_k$. (Here $i$ denotes the embedding of $X$ into $M_0$. We used the fact that cohomology class $\nu _i\in H(X,{\bf C})$, dual to $N_i^{\prime}=N_i\cap X$, is equal to $i^*\tilde {\nu}_i$ where $\tilde {\nu}_i\in H(M_0,{\bf C})$ is dual to $N_i$).
To prove that correlation functions of $A$-model having a supermanifold as a target space coincide with correlation functions of an ordinary $A$-model we used arguments similar to the arguments, utilized in \[4\]. One can say that our consideration reveals the geometric meaning of Kontsevich’s calculation.
We define a Calabi-Yau supermanifold (CY- supermanifold) as a Kähler supermanifold having trivial canonical line bundle. (Recall that the fiber $K_x$ of canonical line bundle $K$ over a complex supermanifold $M$ can be defined as $m(T_x(M))$ where $T_x(M)$ denotes the tangent space at the point $x\in M$ and $m(E)$ denotes the one-dimensional linear space of complex measures on a complex linear space $E$.) It is easy to prove that the canonical bundle over $X$ in the construction above can be obtained by means of restriction of the canonical bundle over $M$; therefore if $M$ is a CY-supermanifold, $X$ is a CY-manifold. The proof is based on the following simple remark. If $E=E_0+E_1$ is a linear superspace with even part $E_0$ and odd part $E_1$ and $A:E_0\rightarrow E_1$ is a surjective linear operator, then $m(\ker A)$ is canonically isomorphic to $m(E)$. (This follows from the canonical isomorphisms $m(E)=m(E_0)\otimes m(E_1)^*,\
m(E_1)=m(E_0)\otimes m(\ker A)^*$).
Let us consider an example when $M_0=CP^n$. For every $k\in{\bf Z}$ we construct a line bundle $\alpha _k$ over $CP^n$ in the following way. We define the total space $E_k$ of the bundle $\alpha _k$ by taking the quotient of $({\bf C}^{n+1}\setminus \{ 0\} )\times {\bf C}$ with respect to the equivalence relation $$(z_1,...,z_{n+2})\sim (\lambda z_1,...,\lambda z_{n+1}, \lambda ^kz_{n+2}).$$ The projection map $E_k\rightarrow CP^n$ is induced by the map $(z_1,...,z_{n+1},z_{n+2})\rightarrow (z_1,...,z_{n+1})$. The $(n|1)$-dimensional complex supermanifold $M_k$ corresponding to the line bundle $\alpha _k$ can be obtained from the superspace $({\bf
C}^{n+1}\setminus \{ 0\} )\times {\bf C}^{0|1}$ by means of identification $$(z_1,...,z_{n+1},\theta)\sim (\lambda z_1,...,\lambda z_{n+1}, \lambda
^k\theta),\ \ \ \lambda \in {\bf C}\setminus \{ 0\}.$$ (Here $z_1,...,z_{n+1}$ are even coordinates in ${\bf C}^{n+1|1}$ and $\theta$ is an odd coordinate.) To give another description of the supermanifold $M_k$ we consider a hypersurface $N_k\subset {\bf C}^{n+1|1}$, determined by the equation $$\bar {z}_1z_1+...+\bar {z}_{n+1}z_{n+1}+k\bar {\theta}\theta=c,\ \ \
c>0.$$ The restriction of standard symplectic form $\omega$ on ${\bf
C}^{n+1|1}$ to the hypersurface $N_k$ is a degenerate closed $2$-form. One can factorize $N_k$ with respect to null-vectors of this $2$-form; it is easy to check that this factorization leads to the identification of points of $N_k$ given by the formula (3) with $|\lambda|=1$. This means that the manifold obtained from $N_k$ by means of factorization with respect to null-vectors of $\omega$ restricted to $N_k$ can be identified with $M_k$. This construction equips $M_k$ with a symplectic structure (depending on the choice of $c>0$). Taking into account that $M_k$ is equipped with a complex structure, we can construct a family of Kähler metrics on $M_k$.
Every homogeneous polynomial $P(z_1,...,z_{n+1})$ of degree $k$ determines a section of the line bundle $\alpha _k$. We see therefore that the $A$-model with the target space $M_k$ is equivalent to the $A$-model with the target space $X_k$ where $X_k$ is the hypersurface in $CP^n$ singled out by the equation $P(z_1,...,z_{n+1})=0$. (Of course, we should assume that this hypersurface is smooth.) For example, in the case $n=4,
\ k=5$, the $A$-model with target space $M_5$ is equivalent to the $A$-model on the quintic in $CP^4$. Notice that in this case $M_5$ is a CY-supermanifold (more generally, $M_k$ is a CY-supermanifold if $k=n+1$).
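One way to see the last claim is to note that, for the supermanifold obtained from a holomorphic line bundle $\alpha$ over $M_0$ by parity reversion of the fibres, the canonical line bundle in the above sense is naturally identified with $K_{M_0}\otimes\alpha$ ($K_{M_0}$ being the ordinary canonical bundle of $M_0$); this gives $$K_{M_k}\cong K_{CP^n}\otimes\alpha_k\cong\alpha_{-(n+1)}\otimes\alpha_k = \alpha_{k-n-1},$$ which is trivial precisely when $k=n+1$.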
It is important to emphasize that the manifold $M_k$ has $n+1$ commuting holomorphic Killing vectors (i.e. $n+1$-dimensional torus $T^{n+1}$ acts on $M_k$; corresponding transformations of $M_k$ are holomorphic and preserve the metric). This means that we can apply the machinery of $T$-duality to the $\sigma$-model with target space $M_k$; one can conjecture that $T$-duality is related to mirror symmetry in this situation. To describe the above mentioned action of $T^{n+1}$ on $M_k$ we represent a point of $T^{n+1}$ as a row $\sigma =(\sigma
_1,...,\sigma_{n+1},\sigma _{n+2})$ where $|\sigma_i|=1,\sigma_1...\sigma
_{n+2}=1$. Every point $\sigma \in T^{n+1}$ determines a transformation of $M_k$, sending $(z_1,...,z_{n+1},\theta)$ into $(\sigma
_1z_1,...,\sigma _{n+1}z_{n+1},\sigma _{n+2}\theta)$. It is easy to check that this transformation is holomorphic and isometric.
One can obtain an essential generalization of the above construction using the notion of toric supermanifold.
Let us consider an $(m|n)$-dimensional complex linear superspace ${\bf C}^{m|n}$ with the standard Kähler metric $ds^2=\sum _{a=1}^{m+n}
d\bar {z}_a\cdot dz_a$. (We denote coordinates in ${\bf C}^{m|n}$ by $z_1,...,z_{m+n}$. The coordinates $z_1,...,z_m$ are even, the coordinates $z_{m+1},...,z_{m+n}$ are odd.) The Poisson brackets of the functions $\varphi _1=\bar {z}_1z_1,...,\varphi _{m+n}=\bar {z}_{m+n}z_{m+n}$ vanish; therefore the corresponding Hamiltonian vector fields generate an action of the $(m+n)$-dimensional torus $T^{m+n}$ on ${\bf C}^{m|n}$. Let us consider the functions $\psi _i=\sum _{k=1}^{m+n} a_{ik}\varphi _k,\ 1\leq i\leq s,\ a_{ik}\in {\bf Z}$. The corresponding Hamiltonian vector fields generate an action of an $s$-dimensional torus $T^s\subset T^{m+n}$ on ${\bf C}^{m|n}$. Let us consider a subset $R_{a,c}$ of ${\bf C}^{m|n}$ determined by the equations $\psi _1=c_1,...,\psi_s=c_s$, where $a=(a_{ik})$ is an arbitrary integer matrix and $c_1,...,c_s$ are arbitrary numbers. It is easy to see that $R_{a,c}$ is invariant with respect to the action of $T^{m+n}$, and therefore with respect to the action of $T^s\subset T^{m+n}$. We define a supertoric variety $V_{a,c}$ as the quotient of $R_{a,c}$ with respect to this action of $T^s$.
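For orientation, the example considered above fits into this scheme (this specialization is only meant as a consistency check): taking $m=n+1$ even coordinates, one odd coordinate, $s=1$ and the single row $a=(1,\dots,1,k)$, the set $R_{a,c}$ is cut out by $$\psi_1=\bar z_1z_1+\dots+\bar z_{n+1}z_{n+1}+k\bar\theta\theta=c,$$ which is precisely the hypersurface $N_k$, so that $V_{a,c}=M_k$; dropping the odd coordinate and taking $a=(1,\dots,1)$ gives $CP^n$ itself.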
One can also say that $V_{a,c}$ is obtained from $R_{a,c}$ by means of factorization with respect to the null-vectors of a degenerate closed $2$-form on $R_{a,c}$ (the restriction to $R_{a,c}$ of the standard symplectic form $\omega$ on ${\bf C}^{m|n}$). We will consider the case when $V_{a,c}$ is a supermanifold; then this manifold has a natural symplectic structure (the degenerate form on $R_{a,c}$ determines a non-degenerate closed $2$-form on $V_{a,c}$). The manifold $V_{a,c}$ also has a natural complex structure; together, the complex and symplectic structures determine a Kähler metric on the toric supermanifold $V_{a,c}$. To introduce a complex structure on $V_{a,c}$ we notice that $V_{a,c}$ can be represented as the quotient $\tilde {R}_{a,c}/\tilde {T}^s$, where $\tilde {T}^s$ denotes the complexification of the torus $T^s$ and $\tilde {R}_{a,c}$ denotes the minimal domain in ${\bf C}^{m|n}$ that contains $R_{a,c}$ and is invariant with respect to the action of the complexification $\tilde {T}^{m+n}$ of the torus $T^{m+n}$. (The map corresponding to a point $(\alpha_1,...,\alpha _{m+n})\in \tilde {T}^{m+n}=({\bf C}\setminus \{ 0\} )^{m+n}$ transforms $(z_1,...,z_{m+n})$ into $(\alpha_1z_1,...,\alpha _{m+n}z_{m+n})$.) We have already mentioned that the torus $T^{m+n}$ transforms $R_{a,c}$ into itself. Taking into account that the action of $T^{m+n}$ preserves the complex and symplectic structures on ${\bf C}^{m|n}$, we arrive at the conclusion that the action of $T^{m+n}$ on $R_{a,c}$ descends to a holomorphic action of $T^{m+n}$ on $V_{a,c}$ preserving the Kähler metric. (More precisely, we have an action of $T^{m+n}/T^s$ on $V_{a,c}$, because $T^s$ acts trivially.)
If the above consideration is applied to the case $n=0$ (i.e. when instead of the complex superspace ${\bf C}^{m|n}$ we consider an ordinary complex linear space ${\bf C}^m$), we obtain the standard symplectic construction of toric varieties \[6\],\[7\]. Let us now consider an (ordinary) toric manifold $V_{a,c}$ and a holomorphic line bundle $\alpha$ over $V_{a,c}$. Then we can construct a complex supermanifold $W_{\alpha}$ corresponding to the line bundle $\alpha$ (the supermanifold obtained from the total space of $\alpha$ by means of reversion of the parity of the fibers). We will prove that $W_{\alpha}$ can be considered as a toric supermanifold. The proof uses the well known fact that every line bundle over a toric manifold $V_{a,c}$ is equivariant with respect to the action of $T^m$ on $V_{a,c}$ \[7\]. More precisely, every one-dimensional representation of $T^s$ determines an action of $\tilde {T}^s$ on ${\bf
C}$; such an action is characterized by integers $\kappa _1,...,\kappa _s$. (An element $\sigma =(\sigma
_1,...,\sigma _s)\in \tilde{T}^s=({\bf C}\setminus \{ 0\} )^s$ generates a transformation $c\rightarrow \sigma _1 ^{\kappa _1}...\sigma_s ^{\kappa
_s}c$.) Combining this action with action of $\tilde {T}^s$ on $\tilde
{R}_{a,c}$ we obtain an action of $\tilde {T}^s$ on $R_{a,c}\times {\bf
C}$. One can get a complex line bundle over $V_{a,c}$ factorizing the projection $\tilde {R}_{a,c}\times {\bf C}\rightarrow \tilde {R}_{a,c}$ with respect to the action of $\tilde {T}^s$. One can prove that an arbitrary holomorphic line bundle over $V_{a,c}$ can be obtained this way. Replacing ${\bf C}$ with ${\bf C}^{0|1}$ in this construction we can describe the supermanifold $W_{\alpha}$ as a quotient of $\tilde {R}_{a,c}\times {\bf
C}^{0|1}$ with respect to an action of $\tilde {T}^s$. Using this description we can identify $W_{\alpha}$ with the toric supermanifold $V_{\hat {a},c}$. Here $\hat {a}$ denotes the matrix $a$ with additional column $(\kappa _1,...,\kappa _s)$, where $\kappa _i$ are the integers characterizing the action of $\tilde {T}^s$ on ${\bf C}$. (In other words, if $V_{a,c}$ is obtained as a quotient $R_{a,c}/T^s$ where $R_{a,c}$ is defined by the equations $\psi_1=c_1,...,\psi _s=c_s$ in ${\bf C}^m$ then $V_{\hat{a},c}$ is a quotient $R_{\hat{a},c}/T^s$, where $R_{\hat{a},c}$ is determined by the equations $\psi_1+\kappa _1\bar {\theta}\theta =c_1,...,\psi _s+\kappa _s\bar
{\theta}\theta =c_s$ in ${\bf C}^{m|1}$). We can conclude that an $A$-model having a smooth hypersurface in a toric manifold as a target space is equivalent to an $A$-model with a toric supermanifold as a target space. A similar statement is true if the hypersurface is replaced with a smooth complete intersection in a toric manifold.
I am indebted to S. Elitzur, A. Givental, A. Giveon, E. Rabinovici and, especially, to M. Kontsevich for useful discussions.
[**References**]{}
1. Witten, E.: Mirror manifolds and topological field theory, in: Essays on mirror manifolds, ed. S. Yau, International Press (1992)
2. Witten, E.: The N matrix model and gauged WZW models. Nucl. Phys. B371, 191 (1992)
3. Vafa, C., Witten, E.: A strong coupling test of S-duality, hep-th/9408074
4. Kontsevich, M.: Enumeration of rational curves via torus action, hep-th/940535
5. Sethi, S.: Supermanifolds, rigid manifolds and mirror symmetry, hep-th/9404186
6. Audin, M.: The topology of torus actions on symplectic manifolds. Birkhäuser 1991
7. Fulton, W.: Introduction to toric varieties. Princeton Univ. Press 1993
8. Schwarz, A., Zaboronsky, O.: in preparation
[^1]: Research supported in part by NSF grant No. DMS-9201366
---
author:
- |
S. V. Larin\
Institute of Macromolecular Compounds, Russian Academy of Sciences,\
V.O. Bol’shoi pr. 31, 199004 St. Petersburg, Russian Federation\
S. V. Lyulin\
Institute of Macromolecular Compounds, Russian Academy of Sciences,\
V.O. Bol’shoi pr. 31, 199004 St. Petersburg, Russian Federation\
P. A. Likhomanova\
National Research Center “Kurchatov Institute”, 123182, Moscow, Russia\
`likhomanovapa@gmail.com`\
K. Yu. Khromov\
National Research Center “Kurchatov Institute”, 123182, Moscow, Russia and\
Moscow Institute of Physics and Technology (State University), 117303, Moscow, Russia\
`khromov_ky@nrcki.ru`\
A. A. Knizhnik\
Kintech Laboratory Ltd., 123182, Moscow, Russia and\
National Research Center “Kurchatov Institute”, 123182, Moscow, Russia\
B. V. Potapkin\
Kintech Laboratory Ltd., 123182, Moscow, Russia and\
National Research Center “Kurchatov Institute”, 123182, Moscow, Russia
title: 'Multiscale modeling of electrical conductivity of R-BAPB polyimide + carbon nanotubes nanocomposites'
---
Introduction {#introduction .unnumbered}
============
Polymer materials, while possessing unique and attractive qualities such as low weight, high strength, resistance to chemicals, and ease of processing, are for the most part insulators. If methods could be devised to turn common insulating polymers into conductors, that would open great prospects for using such materials in many more areas than those in which they are currently used. These areas may include organic solar cells, printing electronic circuits, light-emitting diodes, actuators, supercapacitors, chemical sensors, and biosensors [@Long_2011].
Since reliable methods for carbon nanotube (CNT) fabrication were developed in the 1990s, growing attention has been paid to the possibility of dispersing CNTs in polymers, where CNT junctions may form a percolation network and turn an insulating polymer into a good conductor once the percolation threshold is exceeded. An additional benefit of using such polymer/CNT nanocomposites instead of intrinsically conducting polymers, such as polyaniline [@Polyaniline] for example, is that dispersed CNTs, besides providing electrical conductivity, enhance the mechanical properties of the polymer as well.
CNT-enhanced polymer nanocomposites have been investigated intensively in experiments, including measurements of composite conductivity [@Eletskii]. Theoretical results in this area are more modest. The conductivity of a nanocomposite depends on many factors, among which are the polymer type, the CNT density, the nanocomposite preparation technique, the geometry of the CNTs and their junctions, and the possible presence of defects in the CNTs. Taking all these factors into account and obtaining quantitatively correct results in modeling is a very challenging task, since the resulting conductivity is formed at different length scales: at the microscopic level it is influenced by the contact resistance of CNT junctions, and at the mesoscopic level it is determined by percolation through a network of CNT junctions. Thus a consistent multi-scale method for the modeling of conductivity, starting from atomistic first-principles calculations of electron transport through CNT junctions, is necessary.
Due to the complexity of this multi-scale task, the majority of investigations in the area are carried out in simplified forms; this is especially true for the underlying part of the modeling, the determination of the CNT junction contact resistance. For the contact resistance, either experimental values as in [@Soto_2015] or the results of the phenomenological Simmons model as in [@Xu_2013; @Yu_2010; @Jang_2015; @Pal_2016] are usually taken, or even an arbitrary value of contact resistance that is reasonable by order of magnitude may be set [@Wescott_2007]. In [@Bao_2011; @Grabowski_2017] the tunneling probability through a CNT junction is modeled using a rectangular potential barrier and the quasi-classical approximation.
The authors of [@Castellino_2016] employed an oversimplified two-parameter expression for contact resistance, with the parameters fitted to experimental data. The most microscopic attempt that we are aware of uses the semi-phenomenological tight-binding approximation for the calculation of contact resistance [@Penazzi_2013]. However, [@Penazzi_2013] addresses only the microscopic part of the nanocomposite conductivity problem, and the conductivity of the nanocomposite is not calculated. Moreover, only the coaxial CNT configuration is considered in [@Penazzi_2013], which is hardly realistic for real polymers.
Thus, the majority of investigations concentrate on the mesoscopic part of the task: refining a percolation model or phenomenologically taking into account various geometric peculiarities of CNT junctions. Moreover, comparison with experiments is missing in some publications on this topic. A truly multi-scale study, capable of providing quantitative results comparable with experiments by combining fully first-principles calculations of contact resistance at the microscopic level with a percolation model at the mesoscopic level, therefore seems to be missing.
In our previous research [@comp_no_pol], we proposed an efficient and precise method for fully first-principles calculations of CNTs contact resistance and combined it with a Monte-Carlo statistical percolation model to calculate the conductivity of a simplified example network of CNTs junctions without polymer filling. In the current paper, we are applying the developed approach to the modeling of conductivity of the CNTs enhanced polymer polyimide R-BAPB.
R-BAPB (Fig. \[fig\_RBAPB-struct\]) is a novel polyetherimide synthesized using 1,3-bis-(3$'$,4-dicarboxyphenoxy)-benzene (dianhydride R) and 4,4$'$-bis-(4$''$-aminophenoxy)diphenyl (diamine BAPB). It is a thermostable polymer with extremely high thermomechanical properties (glass transition temperature $T_g= 453-463$ K, melting temperature $T_m= 588$ K, Young’s modulus $E= 3.2$ GPa) [@Yudin_JAPS]. This polyetherimide could be used as a binder to produce composite and nanocomposite materials demanded in shipbuilding, aerospace, and other fields of industry. The two main advantages of R-BAPB among other thermostable polymers are thermoplasticity and crystallinity. R-BAPB-based composites could be produced and processed using convenient melt technologies.
Crystallinity of R-BAPB in composites leads to improved mechanical properties of the materials, including bulk composites and nanocomposite fibers. It is well known that carbon nanofillers could act as nucleating agents for R-BAPB, increasing the degree of crystallinity of the polymer matrix in composites. As it was shown in experimental and theoretical studies [@Yudin_MRM05; @Larin_RSCADV14; @Falkovich_RSCADV14; @Yudin_CST07], the degree of crystallinity of carbon nanofiller enhanced R-BAPB may be comparable to that of bulk polymers.
![The chemical structure of R-BAPB polyimide.[]{data-label="fig_RBAPB-struct"}](figs/fig_Larin1/RBAPB.png){width="8cm"}
Ordering of polymer chains relative to the nanotube axes could certainly influence the conductance of polymer-filled nanoparticle junctions. However, such influence is expected to depend on many parameters, including the structure of a junction and the position and orientation of chain fragments on the nanotube surface close to a junction. Taking all of these parameters into account is a rather complex task that requires large computational resources for atomistic modeling and ab-initio calculations, as well as complex analysis procedures. Thus, at the current stage of the study, we consider only systems in which the polymer matrix is in an amorphous state, i.e. no significant ordering of polymer chains relative to the nanotubes is observed.
Description of the multiscale procedure {#description-of-the-multiscale-procedure .unnumbered}
=======================================
The modeling of polymer nanocomposite electrical conductivity is based on a multi-scale approach, in which different simulation models are used at different scales. For the electron transport in polymer composites with a conducting filler, the lowest scale corresponds to the contact resistance between tubes. The contact resistance is determined at the atomistic scale by tunneling of electrons between the filler particles via a polymer matrix, and hence, analysis of contact resistance requires knowledge of the atomistic structure of a contact. Therefore, at the first step, we develop an atomistic model of the contact between carbon nanotubes in a polyimide matrix using the molecular dynamics (MD) method. This method gives us the structure of the intercalated polymer molecules between carbon nanotubes for different intersection angles between the nanotubes. One should mention that, since the polymer matrix is soft, the contact structure varies with time; therefore, we use molecular dynamics to sample these structures.
Based on the determined atomistic structures of the contacts between nanotubes in the polymer matrix, we calculate electron transport through the junction using electronic structure calculations and the Green's function formalism. Since this analysis requires first-principles methods, one has to reduce the size of the atomistic structure of a contact to a value acceptable for such methods, and we developed a special procedure for cutting the contact structure from the MD results. First-principles calculations of contact resistance should be performed for all snapshots of the atomistic contact structure from the MD simulations, and an average value and a standard deviation should be extracted. In this way, one can get the dependence of the contact resistance on the intersection angle and contact distance.
Using information about contact resistances we estimate the macroscopic conductivity of a composite with nanotube fillers. For this, we used a percolation model based on the Monte Carlo method to construct a nanotube network in a polymer matrix. In this model, we used distributions of contact resistances, obtained from the first-principles calculations for the given angle between nanotubes. Using this Monte Carlo percolation model one can investigate the influence of non-uniformities of a nanotube distribution on macroscopic electrical conductivity.
In Section A we describe the details of the molecular dynamics modeling of the atomistic structure of contacts between nanotubes. In Section B we present the details of the first-principles calculations of electron transport used to estimate the contact resistance. Finally, in Section C we present the details of the Monte Carlo percolation model.
A. Preparation of the composite atomic configurations {#a.-preparation-of-the-composite-atomic-configurations .unnumbered}
-----------------------------------------------------
Initially, two metallic CNTs with chirality (5,5) were constructed and separated by 6 Å. The CNTs consisted of 20 periods along the axis, and each one had a total length of 4.92 nm. The broken bonds at the ends of the CNTs were saturated with hydrogen atoms. The distance of 6 Å was chosen because, starting from this distance, polymer molecules are able to penetrate the space between the CNTs. Three configurations of CNT junctions were prepared: the first one with parallel CNT axes (angle between nanotube axes $\varphi=0^\circ$), the second one with the axes crossing at 45 degrees ($\varphi=45^\circ$), and the third one with perpendicular axes ($\varphi=90^\circ$).
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --
![The snapshots of the the nanocomposite system with the parallel orientation of carbon nanotubes at the initial state (left picture) and after the compression procedure (right picture). The black lines represent the periodic simulation cell.[]{data-label="fig_Larin-conf"}](figs/fig_Larin_Config/composite_em1.png "fig:"){width="4cm"} ![The snapshots of the the nanocomposite system with the parallel orientation of carbon nanotubes at the initial state (left picture) and after the compression procedure (right picture). The black lines represent the periodic simulation cell.[]{data-label="fig_Larin-conf"}](figs/fig_Larin_Config/composite_compress1.png "fig:"){width="4cm"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --
To produce the polymer filled samples, we used the procedure similar to that employed for the simulations of the thermoplastic polyimides and polyimide-based nanocomposites in the previous works [@Larin_RSCADV14; @Falkovich_RSCADV14; @Nazarychev_EI; @Lyulin_MA13; @Lyulin_SM14; @Nazarychev_MA16; @Larin_RSCADV15]. First, partially coiled R-BAPB chains with the polymerization degree $N_p=8$, which corresponds to the polymer regime onset [@Lyulin_MA13; @Lyulin_SM14], were added to the simulation box at random positions avoiding overlapping of polymer chains. This results in the initial configuration of samples with a rather low overall density ($\rho\sim 100~$ kg/m$^3$) (Fig. \[fig\_Larin-conf\]). Then the molecular dynamics simulations were performed to compress the systems generated, equilibrate them and perform production runs.
The molecular dynamics simulations were carried out using Gromacs simulation package [@gromacs1; @gromacs2]. The atomistic models used to represent both the R-BAPB polyimide and CNTs were parameterized using the Gromos53a6 forcefield [@gromos]. Partial charges were calculated using the Hartree-Fock quantum-mechanical method with the 6-31G\* basis set, and the Mulliken method was applied to estimate the values of the particle charges from an electron density distribution. As it was shown recently, this combination of the force field and particle charges parameterization method allows one to reproduce qualitatively and quantitatively the thermophysical properties of thermoplastic polyimides [@Nazarychev_EI]. The model used in the present work was successfully utilized to study structural, thermophysical and mechanical properties of the R-BAPB polyimide and R-BAPB-based nanocomposites [@Larin_RSCADV14; @Falkovich_RSCADV14; @Nazarychev_EI; @Lyulin_MA13; @Lyulin_SM14].
All simulations were performed using the NpT ensemble at temperature $T=600$ K, which is higher than the glass transition temperature of R-BAPB. The temperature and pressure values were maintained using Berendsen thermostat and barostat [@Berendses_1; @Berendsen_2] with relaxation times $\tau_T= 0.1$ ps and $\tau_p=0.5$ ps respectively. The electrostatic interactions were taken into account using the particle-mesh Ewald summation (PME) method [@PME1; @PME2].
The step-wise compression procedure allows one to obtain dense samples with an overall density close to the experimental polyimide density value ($\rho \approx 1250-1300$ kg/m$^3$), as shown in Fig. \[fig\_Larin-conf\]. The system pressure $p$ during compression was increased in a step-wise manner up to $p=1000$ bar and decreased then to $p=1$ bar. After compression and equilibration, the production runs were performed to obtain the set of polymer filled CNT junction configurations.
As the conductance of polymer filled CNT junctions is influenced by the density and structure of the polymer matrix in the nearest vicinity of a contact between CNTs, the relaxation of the overall system density was used as the equilibration criterion. To estimate the equilibration time, the time dependence of the system density was calculated as well as the density autocorrelation function $C_\rho(t)$: $$C_\rho(t)=\frac{\langle \rho(0) \rho(t)\rangle}{\langle \rho^2 \rangle},$$ where $\rho(t)$ is the density of the system at time $t$ and $\langle \rho^2 \rangle$ is the mean-square density of the system averaged over the simulation.
As shown in Fig. \[fig\_Larin-density\]a, the system density does not change significantly during the simulation after the compression procedure. At the same time, the analysis of the density auto-correlation functions shows some difference in the relaxation processes in the systems studied (see Fig. \[fig\_Larin-density\]b). In the case of the system where the CNTs were placed parallel to each other ($\varphi=0^\circ$), $C_\rho(t)$ could be approximated by the exponential decay function $C_\rho(t)=\exp(-t/\tau)$ with relaxation time $\tau = 4$ ps. The density relaxation in the systems with crossed CNTs ($\varphi=45^\circ$ and $\varphi=90^\circ$) was found to be slower. For these two systems the density auto-correlation functions could be approximated by a double exponential function $C_\rho(t)=A\exp(-t/\tau_1) + (1-A)\exp(-t/\tau_2)$, and the relaxation times determined using this fitting were $\tau_1=2.7$ ps and $\tau_2=12.2$ ns (for $\varphi=90^\circ$), and $\tau_1=9.5$ ps and $\tau_2=24.6$ ns (in the case of $\varphi=45^\circ$).
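As an illustration of this analysis, the short Python sketch below computes $C_\rho(t)$ exactly as defined above from a density time series and fits it with the single- and double-exponential forms used here. The file name, column layout and initial guesses are assumptions made for the example, not part of the actual Gromacs workflow.

```python
# Sketch: density autocorrelation C_rho(t) and its exponential fits.
import numpy as np
from scipy.optimize import curve_fit

# time (ps) and density (kg/m^3); file name and format are assumed for illustration
t, rho = np.loadtxt("density.xvg", comments=("#", "@"), unpack=True)

# C(t) = <rho(0) rho(t)> / <rho^2>, as in the definition above
n = len(rho)
lags = np.arange(n // 2)
c = np.array([np.mean(rho[:n - k] * rho[k:]) for k in lags]) / np.mean(rho**2)
tau = t[lags] - t[0]

def single_exp(x, tau1):
    return np.exp(-x / tau1)

def double_exp(x, a, tau1, tau2):
    return a * np.exp(-x / tau1) + (1.0 - a) * np.exp(-x / tau2)

p1, _ = curve_fit(single_exp, tau, c, p0=[4.0])
p2, _ = curve_fit(double_exp, tau, c, p0=[0.5, 10.0, 1.0e4], maxfev=20000)
print("single exponential: tau =", p1[0], "ps")
print("double exponential: A, tau1, tau2 =", p2)
```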
Nevertheless, the results obtained after the analysis of the system density relaxation allow us to choose the system equilibration time to be 100 ns, which is higher than the longest system density relaxation times determined by the density autocorrelation function analysis. The same simulation time was used in our previous works to equilibrate the nanocomposite structure after switching on electrostatic interactions [@Larin_RSCADV14; @Nazarychev_EI; @Larin_RSCADV15]. The equilibration was followed by the 150 ns long production run. To analyze the polymer filled CNT junction conductance, 31 configurations of each simulated system, separated by 5 ns intervals, were taken from the production run trajectory.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- --
![The time dependence of the system density $\rho$ (a) and the density auto-correlation functions $C_\rho(t)$ (b) for the systems with various angles between nanotube axes $\varphi$. The dots correspond to the calculated data. The solid lines correspond to the fitting of $C_\rho(t)$ with the exponential (in case of $\varphi=0^\circ$) or double exponential (in case of of $\varphi=45^\circ$ and $\varphi=90^\circ$) functions.[]{data-label="fig_Larin-density"}](figs/fig_Larin_dens/fig_left.pdf "fig:"){width="7.25cm"}
![The time dependence of the system density $\rho$ (a) and the density auto-correlation functions $C_\rho(t)$ (b) for the systems with various angles between nanotube axes $\varphi$. The dots correspond to the calculated data. The solid lines correspond to the fitting of $C_\rho(t)$ with the exponential (in case of $\varphi=0^\circ$) or double exponential (in case of of $\varphi=45^\circ$ and $\varphi=90^\circ$) functions.[]{data-label="fig_Larin-density"}](figs/fig_Larin_dens/fig_right.pdf "fig:"){width="7.25cm"}
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- --
After the configurational relaxation is finished, we have to prepare the polymer filled CNT junction configurations for the first-principles calculations of contact resistance. The method we used for the calculation of a contact resistance is based on the solution of the ballistic electronic transport problem: finding the Volt-Ampere characteristic $I(V)$ of a device and deriving the contact resistance from the linear part of $I(V)$ corresponding to low voltages. For this purpose, we employed the Green's function method for solving the scattering problem and the Landauer-Buttiker approach to find the current through a scattering region coupled to two semi-infinite leads, as described in [@Datta]. Specific details of how these techniques are applied in the case of crossed CNTs can be found in [@comp_no_pol].
B. The first-principles calculations of the contact resistance of CNTs junctions filled with polymer {#sec_meth_fp .unnumbered}
----------------------------------------------------------------------------------------------------
For the preparation of a device for the electronic transport calculations, we first form that part of the device which consists of the atoms belonging to the CNTs used in the CNTs+polymer relaxation. Regions with the same geometry as in [@comp_no_pol] are cut from the initial 20-period long CNTs, and the rest of the atoms belonging to the CNTs are discarded. This is done to make possible a direct comparison of the results obtained for the polymer filled CNTs junctions with the results for CNTs junctions without polymer reported in [@comp_no_pol] for the same separation of CNTs equal to 6 Å.
Note that the CNTs parts of the scattering device contain atoms shifted from their positions in ideal CNTs due to the influence of the adjacent polymer molecules, and these shifts are time-dependent as a result of thermal fluctuations.
The cut regions contain two fragments of CNTs each 9 periods long, and in the case of the CNTs parallel configuration, the CNTs overlap by 7 periods. In the nonparallel configurations, one of the CNTs is rotated around the axis perpendicular to the CNTs axes in the parallel configuration and passing through the geometrical center of a device in the parallel configuration.
After the construction of the CNTs part of the scattering region, we attach to it leads that consist of 5 period long fragments of an ideal CNT. The CNTs parts of the scattering regions with the attached leads for the three considered configurations are shown in Fig. \[fig\_CNT\_conf\].
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![The CNTs parts of the junctions. Left: the parallel configuration, center: CNTs axes are crossing at 45 degrees, right: the perpendicular configuration. The leads atoms are colored by green. []{data-label="fig_CNT_conf"}](figs/fig1/l_frame.pdf "fig:"){width="2.25cm"} ![The CNTs parts of the junctions. Left: the parallel configuration, center: CNTs axes are crossing at 45 degrees, right: the perpendicular configuration. The leads atoms are colored by green. []{data-label="fig_CNT_conf"}](figs/fig1/c_frame.pdf "fig:"){width="2.25cm"} ![The CNTs parts of the junctions. Left: the parallel configuration, center: CNTs axes are crossing at 45 degrees, right: the perpendicular configuration. The leads atoms are colored by green. []{data-label="fig_CNT_conf"}](figs/fig1/r_frame.pdf "fig:"){width="2.25cm"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
After the preparation of the CNT parts of the junctions, we still have 17766 atoms in the device. A system with such a large number of atoms cannot be treated by fully first-principles atomistic methods. On the other hand, keeping all those atoms for a precise calculation of the contact resistance of polymer filled CNT junctions is not necessary, as only those polymer atoms which are close enough to a CNT will serve as tunneling bridges and contribute to the junction conductivity. Thus, for the calculations of the contact resistance, only those atoms were kept which are closer to the CNTs than a certain distance $d$. It has been established by numerical experiments that taking $d$ equal to the CNT separation, $d=6$ Å, is quite sufficient: taking into account more distant atoms does not change the contact resistance significantly.
The procedure of sorting the polymer atoms is as follows. In our molecular dynamics simulations we used 27 separate polymer molecules, consisting of 8 monomers each. If at least one of the atoms of a polymer molecule was closer to the CNTs part of a junction than $d=6$ Å, the whole molecule was kept for a while, and discarded otherwise. Having applied this first part of the procedure, we kept 4 polymer molecules for the parallel configuration, 8 molecules for the perpendicular configuration and 11 molecules for the 45 degrees configuration.
Then we looked at the monomers of each molecule that survived the first round of selection. The same procedure was applied to monomers as the one used earlier for molecules: if at least one of the monomer atoms was closer to the CNTs part of a junction than $d=6$ Å, the whole monomer was kept for a while, and discarded otherwise.
After the second round of selection with monomers was over, we dealt in the same manner with the individual residues comprising a monomer. The broken bonds that appeared in the second and third stages were saturated with hydrogen atoms. The described procedure resulted in the following numbers of atoms in the whole device, including the central scattering region and the leads: 881 for the parallel configuration, 1150 for the perpendicular configuration, and 1074 for the 45 degrees configuration. The atomic configurations obtained using the described procedure for the first time steps in the corresponding series are presented in Fig. \[fig\_junction\_conf\].
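The selection logic just described can be summarized by the following Python sketch; the nested molecule/monomer/residue data layout is an assumption made for illustration, and the saturation of broken bonds with hydrogens is not shown.

```python
# Sketch of the hierarchical pruning: keep a polymer molecule / monomer / residue
# only if at least one of its atoms lies within d = 6 A of a CNT atom.
import numpy as np

D_CUT = 6.0  # A, equal to the CNT separation

def is_close(group_xyz, cnt_xyz, d=D_CUT):
    """True if any atom of the group is within d of any CNT atom."""
    diff = group_xyz[:, None, :] - cnt_xyz[None, :, :]
    return bool(np.any(np.einsum("ijk,ijk->ij", diff, diff) < d * d))

def prune(molecules, cnt_xyz):
    """molecules: list of molecules; a molecule is a list of monomers;
    a monomer is a list of residues; a residue is an (n_atoms, 3) array."""
    kept = []
    for mol in molecules:
        mol_xyz = np.vstack([np.vstack(mon) for mon in mol])
        if not is_close(mol_xyz, cnt_xyz):
            continue                      # whole molecule discarded
        kept_mons = []
        for mon in mol:
            if not is_close(np.vstack(mon), cnt_xyz):
                continue                  # whole monomer discarded
            kept_mons.append([res for res in mon if is_close(res, cnt_xyz)])
        kept.append(kept_mons)
    return kept
```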
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![The atomic configurations for the first-principles calculations of the polymer filled CNTs junctions contact resistance. The configurations are for the first time steps in the corresponding series. Left: the parallel configuration, center: CNTs axes are crossed at 45 degrees, right: the perpendicular configuration. The Carbon atoms are gray, the Nitrogen atoms are blue, the Oxygen atoms are red, and the Hydrogen atoms are light gray. The leads atoms are colored by green. []{data-label="fig_junction_conf"}](figs/fig2/l_frame.pdf "fig:"){width="2.25cm"} ![The atomic configurations for the first-principles calculations of the polymer filled CNTs junctions contact resistance. The configurations are for the first time steps in the corresponding series. Left: the parallel configuration, center: CNTs axes are crossed at 45 degrees, right: the perpendicular configuration. The Carbon atoms are gray, the Nitrogen atoms are blue, the Oxygen atoms are red, and the Hydrogen atoms are light gray. The leads atoms are colored by green. []{data-label="fig_junction_conf"}](figs/fig2/c_frame.pdf "fig:"){width="2.25cm"} ![The atomic configurations for the first-principles calculations of the polymer filled CNTs junctions contact resistance. The configurations are for the first time steps in the corresponding series. Left: the parallel configuration, center: CNTs axes are crossed at 45 degrees, right: the perpendicular configuration. The Carbon atoms are gray, the Nitrogen atoms are blue, the Oxygen atoms are red, and the Hydrogen atoms are light gray. The leads atoms are colored by green. []{data-label="fig_junction_conf"}](figs/fig2/r_frame.pdf "fig:"){width="2.25cm"}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
A fully ab-initio method for electronic structure investigations utilizing a localized pseudo-atomic basis set, as described in [@Ozaki_DFT] and implemented in [@OpenMX], was used for the calculations of the electronic structures of the whole device and the leads. We used basis set s2p2d1, the Pseudo Atomic Orbitals (PAO) cutoff radius equal to 6.0 a. u., and the cutoff energy of 150 Ry. The pseudo-potentials generated according to Morrison, Bylander, and Kleinman scheme [@pseudopot] were used. For the density functional calculations, the exchange-correlation functional was used in PBE96 form [@PBE96].
Using the electronic structures of the whole device and the leads, we calculated the energy-dependent transmission function through the device. Then the dependence $I(V)$ of the current $I$ on the voltage $V$ between the leads was determined with the Green's function approach as described in detail in [@Datta]. Finally, the Landauer-Buttiker approach was used to find the current through a polymer filled CNTs junction.
Solving a scattering problem for a nano-device at arbitrary voltages is a computationally very complex task since it requires achieving self-consistency for both electron density and induced electrostatic potential simultaneously. Fortunately, for contact resistance calculations one can take advantage of the fact that the required voltages are very low.
According to the experimental evidence, the size of a nanocomposite specimen used in conductivity experiments is about 10 mm [@exp_CNT_length_1], and the typical voltages applied across such specimen do not exceed 100 V [@CNT_book]. Then for the size of a central scattering region about 1 nm, the voltage drop is about 10$^{-5}$ V, which is well within the range where the simplified approach is applicable.
The question of the modeling of quantum transport in the limit of low voltages was discussed in detail in [@comp_no_pol], where it was demonstrated that in the case of moderate voltages between leads, the scattering probability $T(E)$ is not sensitive to the details of the electrostatic potential distribution $V({\mathbf r})$ in the central scattering region, and some physically reasonable approximation may be chosen for $V({\mathbf r})$.
This is due to the fact that the difference of the Fermi functions $f(\varepsilon - \mu_L)$ and $f(\varepsilon - \mu_R)$ for the left and right leads with the corresponding chemical potentials $\mu_L$ and $\mu_R$, present in the original Landauer-Buttiker formula: $$I=\frac{2e}{h} \int T(\varepsilon) \left (f(\varepsilon - \mu_L) - f(\varepsilon - \mu_R) \right )d\varepsilon ,
\label{LB_formula}$$ where $e$ is the elementary charge, $h$ is the Planck constant, $\varepsilon$ is the electron energy and $T(\varepsilon)$ is the energy dependent transmission probability, is reduced in this case to a very narrow and sharp peak centered at the Fermi level of the device.
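A minimal numerical illustration of this low-voltage limit is given below; the transmission function `T_of_E` is a hypothetical placeholder (in the actual workflow it comes from the Green's-function calculation for the device), and energies are measured in eV.

```python
# Sketch of the Landauer-Buttiker formula above in the low-voltage limit.
import numpy as np

E_CHARGE = 1.602176634e-19   # C
H_PLANCK = 6.62607015e-34    # J*s
KB_T = 0.02585               # eV, ~300 K, illustrative only

def fermi(e, mu, kT=KB_T):
    return 1.0 / (1.0 + np.exp((e - mu) / kT))

def current(transmission, mu_L, mu_R, kT=KB_T):
    """I = (2e/h) * integral T(E) [f(E - mu_L) - f(E - mu_R)] dE, energies in eV."""
    e_grid = np.linspace(min(mu_L, mu_R) - 20 * kT, max(mu_L, mu_R) + 20 * kT, 4001)
    integrand = transmission(e_grid) * (fermi(e_grid, mu_L, kT) - fermi(e_grid, mu_R, kT))
    return (2 * E_CHARGE / H_PLANCK) * np.trapz(integrand, e_grid) * E_CHARGE  # A

# At low bias V the current is linear in V and G = I/V ~ (2e^2/h) * T(E_F).
T_of_E = lambda E: 1e-2 * np.ones_like(E)    # hypothetical, energy-independent T
V = 1e-4                                     # V, as in the text
I = current(T_of_E, mu_L=+V / 2, mu_R=-V / 2)
print("G =", I / V, "S;  (2e^2/h)*T =", 2 * E_CHARGE**2 / H_PLANCK * 1e-2, "S")
```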
In addition to the analysis performed in [@comp_no_pol], in this article, to verify the accuracy of our approach, we made contact resistance calculations for a simple test CNT junction in a coaxial configuration, using both the simplified method we suggest and the full NEGF method, where not only the electron charge density but also the electric potential was converged. The interlead voltage used in those test calculations was set to $10^{-4}$ V, and the gap between the CNTs tips was 0.94 Å. The atomic configurations for the test calculations are presented in Fig. \[fig\_NEGF\_compare\]. The consistent NEGF calculations produced 1.71 $\cdot 10^{-5}$ S for the conductance of the junction shown in Fig. \[fig\_NEGF\_compare\], while the modeling without searching for convergence of the potential yielded 1.72 $\cdot 10^{-5}$ S.
![The atomic configuration used in this work to validate the method of calculating quantum transport by comparing it against the consistent NEGF approach.[]{data-label="fig_NEGF_compare"}](figs/fig_conf_compare/fig_compare.pdf){width="8cm"}
Thus, in our case, the very complex task of finding the $I(V)$ characteristic of a nano-device can be significantly simplified without loss of precision. For the $I(V)$ calculations in the current paper we employed the abrupt potential model introduced in [@comp_no_pol]: the potentials $V_L$ for the left lead and $V_R$ for the right lead were set and used for all atoms of the corresponding CNT to which that lead belonged. As for the polymer atoms, either $V_L$ or $V_R$ can be safely used for them; at the considered voltages, these two options, as we have checked by direct calculations, lead to differences in current not exceeding 0.1%.
C. The percolation model {#sec_percolation .unnumbered}
------------------------
Determination of the conductivity of a polymer-CNT system is implemented in two stages. First, a percolation cluster is formed; second, the matrix problem for a random resistor circuit (network) is solved.
At the first stage, the modeling area – a cube of the linear size $L$ – is filled with CNTs. For this task, permeable capsules (cylinders with hemispheres at the ends) with a fixed length and diameter were chosen as filling objects corresponding to CNTs. The cube is filled by the successive addition of CNTs until a fixed bulk density of CNTs $$\eta = \frac{((4/3)\pi R^3 + \pi R^2 h)N}{L^3} \label{cnt_vol_frac}$$ in the cube is reached, where $R$ is the radius of the cylinder and hemisphere, $h$ is the height of the cylinder, and $N$ is the number of CNTs in the cube. The percolation problem for permeable capsules was previously solved in [@Xu_16], and in [@Schilling_15] capsules with a semipermeable shell were considered.
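For concreteness, the number of capsules corresponding to a given volume fraction follows directly from Eq. (\[cnt\_vol\_frac\]); the sketch below uses the geometric parameters quoted later in the text, so the specific numbers are only an illustration.

```python
# Number of capsules N needed to reach a target CNT volume fraction eta in a cube of side L.
import math

R = 0.015      # um, CNT radius (diameter 30 nm)
h = 3.0        # um, cylinder height, taken equal to the quoted CNT length for simplicity
L = 4.0        # um, size of the simulation cube
v_capsule = (4.0 / 3.0) * math.pi * R**3 + math.pi * R**2 * h

eta = 0.007    # e.g. the percolation threshold estimated below
N = math.ceil(eta * L**3 / v_capsule)
print("N =", N)   # about 210 capsules for these parameters
```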
In the percolation problem, we use periodic boundary conditions as, for example, in [@Bao_2011]. We use the method of finding a percolation threshold based on the Newman and Ziff algorithm [@Newman_2001], where the identification of a percolation cluster is made at the stage of its formation.
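A minimal sketch of the cluster bookkeeping, in the spirit of the Newman–Ziff approach: clusters of touching capsules are tracked with a union–find structure, and percolation is declared once a cluster connects the two opposite faces. The geometric contact test (capsule–capsule distance below $2R$) is left abstract here, and the data structures are illustrative assumptions.

```python
# Union-find based percolation check for a network of CNT capsules.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[ri] = rj

def percolates(contacts, touches_left, touches_right, n):
    """contacts: list of (i, j) CNT pairs in contact; touches_left/right:
    sets of CNT indices intersecting the two opposite boundary faces."""
    uf = UnionFind(n)
    for i, j in contacts:
        uf.union(i, j)
    left_roots = {uf.find(i) for i in touches_left}
    return any(uf.find(j) in left_roots for j in touches_right)
```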
When a percolation cluster is formed, the obtained CNT configuration is transformed into a resistor circuit (2nd stage).
The contributions to a conductance matrix resulting from inner resistance of CNTs and the junctions tunneling resistance are usually discussed in connection with constructing conductivity percolation algorithms. Direct measurements of CNTs resistance per unit length are available. In [@Chiodarelli_11], the inner resistance of CNTs is estimated as $15 \cdot 10^3$ $\Omega/\mu$m. The results of [@Ebbesen_96] give specific CNTs resistance in the range $(12-86) \cdot 10^3$ $\Omega/\mu$m. Taking into account that the characteristic CNTs lengths in nanocomposites are about several $\mu$m [@exp_CNT_length_1; @CNT_book], this results in the inner CNTs resistance approximately $10^4-10^5$ $\Omega$, which is at least one order of magnitude less than the tunneling resistance obtained in this work. The specific results on tunneling resistance will be discussed below in section \[RND\]. Thus, in our percolation model, the inner resistance of CNTs is neglected, and only tunneling resistance of CNTs junctions is taken into account. This can significantly reduce the requirements for computational time.
When the contact resistance is determined only by tunneling, the principle of compiling the matrix for the percolation problem is as follows. First, an (N, N) matrix is compiled from the bonds of the percolation elements, where N is the number of CNTs participating in percolation. Then this matrix is transformed into the conductance matrix according to Kirchhoff's current law $$\sum_j G_{ij} (V_i-V_j) = 0 \label {second_Kirchoff}$$ – the sum of currents at every internal element of a percolation network is zero – where $G_{ij}$ are the elements of the conductance matrix ${\mathbf G}$ and $V_i$ is the component of the voltage vector ${\mathbf V}$ corresponding to the $i$-th contact point in a percolation network [@Kirkpatrick_73]. The voltages on the left and right borders of a simulation volume are set to $V_L=1$ V and $V_R=0$ V, respectively.
Now finding the conductivity of the system is reduced to the problem $\mathbf{GV}=\mathbf{I}$, where $\mathbf{I}$ is the vector of the currents between the contact points.
The dimension of the matrix problem can be further reduced to (N-2, N-2) by excluding boundary elements.
After solving equation (\[second\_Kirchoff\]), with the elements of the $\mathbf{G}$ matrix obtained from the first-principles calculations, we obtain the voltage vector for all internal elements. Then, knowing this vector, we sum up all the currents on each of the boundaries. The currents on the left ($I_L$) and right ($I_R$) boundaries of a simulation volume are equal in magnitude and opposite in sign, $I_L= - I_R$. Knowing these currents, we determine the conductance of the simulated system as $G=|I_L|/(V_L-V_R) = |I_R|/(V_L-V_R)$. The conductivity of the composite is then calculated as $\sigma=GL/S$, where $L$ is the distance between the faces of a simulation volume where voltage is applied, and $S$ is the area of such a face. In our case, for a simulation volume of cubic shape, $S=L^2$ and $\sigma=G/L$.
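The second stage can be condensed into the following sketch: the nodal conductance (Laplacian) matrix is assembled from the junction conductances, the boundary voltages are imposed, Eq. (\[second\_Kirchoff\]) is solved for the internal node voltages, and the conductance and conductivity are evaluated as above. The data structures are illustrative assumptions, and the sketch assumes every internal CNT in the list belongs to the conducting cluster.

```python
# Resistor-network stage: solve Kirchhoff's current law for the CNT junction network.
import numpy as np

def network_conductivity(n, junctions, left, right, L_box, V_L=1.0, V_R=0.0):
    """n: number of CNTs; junctions: list of (i, j, g_ij) tunneling conductances;
    left/right: sets of CNT indices touching the faces where the voltage is applied."""
    G = np.zeros((n, n))
    for i, j, g in junctions:            # assemble the nodal conductance matrix
        G[i, i] += g; G[j, j] += g
        G[i, j] -= g; G[j, i] -= g
    V = np.zeros(n)
    V[list(left)] = V_L
    V[list(right)] = V_R
    fixed = sorted(set(left) | set(right))
    fixed_set = set(fixed)
    free = [k for k in range(n) if k not in fixed_set]
    # sum of currents at every internal node is zero:  G_ff V_f = -G_fb V_b
    V[free] = np.linalg.solve(G[np.ix_(free, free)], -G[np.ix_(free, fixed)] @ V[fixed])
    # total current leaving the CNTs attached to the left face
    I_L = sum(g * (V[i] - V[j]) if i in left else g * (V[j] - V[i])
              for i, j, g in junctions if (i in left) != (j in left))
    G_net = abs(I_L) / (V_L - V_R)
    return G_net / L_box                 # sigma = G L / S = G / L for a cubic box
```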
To calculate the conductivity, the following system parameters were selected: the length of a nanotube is $l=3$ $\mu$m, the diameter of a CNT is $D=30$ nm, the aspect ratio is $l/D=100$, and the size of the system is 4 $\mu$m. The same values were used in [@Yu_2010]. We adopted those values to test our realization of the percolation algorithm against the previously obtained results [@Yu_2010]. Then, for the given parameters and each fixed tube density, the Monte Carlo method (100 realizations of various configurations of CNT networks) was used to calculate the system conductivity.
The quality of CNT dispersion is one of the key factors affecting the properties of nanocomposites, and much effort is devoted to achieving a homogeneous distribution of fillers [@Mital_14]. In this work, we take into consideration the effect of inhomogeneity of the CNT distribution on composite conductivity. The spatial density of nanotubes $\rho_{CNT}$, in this case, has one peak with a Gaussian distribution: $$\rho_{CNT}=\rho_0 \exp(-(\mathbf {r-r_0})^2/\rho_\sigma^2), \label {agglo}$$ where ${\mathbf r_0}$ coincides with the geometrical center of a simulation volume, and $\rho_\sigma = L/12$. The value of the $\rho_0$ parameter is chosen so that the CNT volume fraction in the inhomogeneous case is the same as in the homogeneous distribution.
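One simple way to realize the single-peak distribution of Eq. (\[agglo\]) is rejection sampling of the capsule centres, as sketched below; the normalization via $\rho_0$ is then handled implicitly by fixing the total number of CNTs $N$, and the seed and $N$ are arbitrary illustration values.

```python
# Rejection sampling of CNT centre positions from the Gaussian density peak.
import numpy as np

rng = np.random.default_rng(0)

def sample_centres(N, L, sigma_frac=1.0 / 12.0):
    r0 = np.full(3, L / 2.0)       # peak at the geometrical centre of the cube
    sig = sigma_frac * L           # rho_sigma = L / 12
    centres = []
    while len(centres) < N:
        r = rng.uniform(0.0, L, size=3)
        if rng.uniform() < np.exp(-np.sum((r - r0) ** 2) / sig**2):
            centres.append(r)
    return np.array(centres)

centres = sample_centres(N=200, L=4.0)   # L in um; N is hypothetical
```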
Results and discussion {#RND .unnumbered}
======================
To find the contact resistance of polymer filled CNTs junctions one first needs to find their Volt-Ampere characteristics $I(V)$ and to determine the voltage range where $I(V)$ is linear and is not sensitive to the specific distribution of an electrostatic potential in the scattering region. In Fig. \[fig\_I\_V\] the $I(V)$ plot for the first time step in the atomic geometry series for the parallel configuration is shown.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![The Volt-Ampere characteristic for the polymer filled CNTs junction corresponding to the first time step in the series for the parallel configuration. Left frame: maximum inter-lead voltage is $10^{-3}$ V, right frame: $10^{-4}$ V. The circles correspond to the results of calculations, the lines are guides for an eye.[]{data-label="fig_I_V"}](figs/I_V_test/I_V.pdf "fig:"){width="3.5cm"} ![The Volt-Ampere characteristic for the polymer filled CNTs junction corresponding to the first time step in the series for the parallel configuration. Left frame: maximum inter-lead voltage is $10^{-3}$ V, right frame: $10^{-4}$ V. The circles correspond to the results of calculations, the lines are guides for an eye.[]{data-label="fig_I_V"}](figs/I_V_test_low_V/I_V_low_V.pdf "fig:"){width="3.5cm"}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
It is clearly seen from Fig. \[fig\_I\_V\] that up to about $10^{-4}$ V the $I(V)$ characteristic is linear, and beyond that value it starts to deviate from a simple linear dependence. Thus, for the calculations of the contact resistance $R$ and its inverse, the junction conductance $G$, we used the electrical current values obtained for the inter-lead voltage equal to $10^{-4}$ V. Note that, according to our estimates in section \[sec\_meth\_fp\], the characteristic voltage drop over the length of a CNT tunneling junction is about $10^{-5}$ V, which is well within the region where the linear $I(V)$ is observed.
The time dependences of the junctions conductances for the three considered configurations are presented in Fig. \[fig\_G\_t\]. One might expect that the shifts of both CNTs atoms and polymer atoms in the central scattering region due to thermal fluctuations would lead to fluctuations of junctions conductances $G$, but quantitative characteristics of this phenomenon such as minimum $G_{min}$, maximum $G_{max}$, mean values $\langle G \rangle$ and a standard deviation $G_\sigma$ can only be captured by highly precise fully atomistic first-principles methods, like those employed in the current paper. The resulting fluctuations of conductance are very high. For the parallel CNTs configuration the minimum value, $G_{min} = 2.4 \cdot 10^{-8}$ S, and the maximum value, $G_{max} = 6.8 \cdot 10^{-6}$ S, differ by more than two orders of magnitude, for the 45 degrees and perpendicular configurations the corresponding ratios are about 30. The same strong variations of conductance over time were reported in [@Penazzi_2013] for the coaxial CNTs configuration, where the results were obtained using a semi-empirical tight-binding approximation. Thus, it is obvious that for the precise determination of a conductance of polymer filled CNTs junctions one needs to use fully atomistic approaches, and phenomenological methods taking atomic configurations into account on the average are not reliable.
![The time dependence of the polymer filled junctions conductance $G$ in S. Red color correspond to the parallel configuration, the green lines – to the perpendicular configuration and the blue lines – the 45 degrees configuration. The results of the calculations are shown by circles, the saw-tooth lines serve as a guide for an eye. The straight solid lines designate the mean values of conductance $\langle G \rangle$, and the dashed ones – $\langle G \rangle \pm G_\sigma$.[]{data-label="fig_G_t"}](figs/G_t/g_t.pdf){width="8cm"}
  ----------- --------------------------------------- ------------------- ------------------- ------------------------ -------------------
  $\varphi$   $G$ without polymer [@comp_no_pol], S   $G_{min}$, S        $G_{max}$, S        $\langle G \rangle$, S   $G_\sigma$, S
  0           3.6$\cdot10^{-13}$                      2.4$\cdot10^{-8}$   6.8$\cdot10^{-6}$   1.8$\cdot10^{-6}$        1.6$\cdot10^{-6}$
  0.2$\pi$    1.4$\cdot10^{-14}$                      —                   —                   —                        —
  0.25$\pi$   —                                       4.8$\cdot10^{-9}$   1.4$\cdot10^{-7}$   3.4$\cdot10^{-8}$        2.7$\cdot10^{-8}$
  0.3$\pi$    1.2$\cdot10^{-14}$                      —                   —                   —                        —
  0.5$\pi$    4.2$\cdot10^{-14}$                      2.2$\cdot10^{-9}$   4.3$\cdot10^{-8}$   1.4$\cdot10^{-8}$        1.1$\cdot10^{-8}$
  ----------- --------------------------------------- ------------------- ------------------- ------------------------ -------------------
\[tab\_cond\]
To assign a tunneling resistance to a polymer filled CNT junction, the following algorithm was used. First, for each junction that had to be used in the percolation algorithm, a uniformly distributed random number $\varphi$ in the range \[0, $\pi/2$\] was generated, and the intersection angle of that junction was set equal to this random number. The mean values and standard deviations of the CNT tunneling resistances and conductances, calculated over the different atomic configurations corresponding to the different time steps, are known for $\varphi = 0, \pi/4$, and $\pi/2$. Analyzing figure 4 of [@comp_no_pol], one can see that, though the angle dependence of the current and hence of the conductance is a rather complex function, in the first approximation one can adopt a roughly piece-wise linear character for this function with the minimum located at $\varphi=0.25\pi$. Thus the logarithm of the mean value of conductance $\mu_\varphi$ for the generated $\varphi$ was set by linear interpolation between the logarithms of the mean values of the conductances for $\varphi=0$ and $\varphi=\pi/4$, or $\varphi=\pi/4$ and $\varphi=\pi/2$, presented in Table \[tab\_cond\]. The same algorithm was applied to finding the standard deviation value $\sigma_\varphi$ for the generated $\varphi$. After the statistical parameters for the generated $\varphi$ are estimated, the conductance of the junction is set to a random number generated using the normal distribution with the parameters $\mu_\varphi$ and $\sigma_\varphi$.
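The assignment just described amounts to the following sketch. The tabulated means and standard deviations are those of Tab. \[tab\_cond\] for the polymer-filled junctions; whether the final draw is meant in linear or logarithmic space is not completely specified in the text, so the linear-space reading is used here, with a crude positivity guard that is not part of the original recipe.

```python
# Random conductance of a polymer-filled junction from a random intersection angle.
import numpy as np

rng = np.random.default_rng()

PHI_TAB   = np.array([0.0, 0.25 * np.pi, 0.5 * np.pi])
MEAN_TAB  = np.array([1.8e-6, 3.4e-8, 1.4e-8])   # <G>, S (polymer-filled junctions)
SIGMA_TAB = np.array([1.6e-6, 2.7e-8, 1.1e-8])   # G_sigma, S

def junction_conductance():
    phi = rng.uniform(0.0, 0.5 * np.pi)
    # piece-wise linear interpolation of log(<G>) and log(G_sigma) between the table angles
    mu    = np.exp(np.interp(phi, PHI_TAB, np.log(MEAN_TAB)))
    sigma = np.exp(np.interp(phi, PHI_TAB, np.log(SIGMA_TAB)))
    g = rng.normal(mu, sigma)
    return max(g, 1e-2 * mu)   # positivity guard, an assumption beyond the original recipe
```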
In [@comp_no_pol], conductances were reported for CNT junctions with almost the same geometry as the CNT parts of the devices considered in the current paper. The only difference between the configurations is that in this work the carbon atoms belonging to the CNT part of the central scattering region are shifted somewhat from their equilibrium positions by the interaction with the polymer; the maximum values of those shifts along the $x$, $y$, and $z$ coordinates lie in the range $0.2-0.5$ Å. This allows us to compare the current results directly with the data from [@comp_no_pol] and thus to elucidate the influence of polymer filling on the junction conductance. The corresponding data and the results of a basic statistical analysis for the polymer-filled junctions are provided in Tab. \[tab\_cond\].
First, as expected, filling CNT junctions with polymer creates carrier tunneling paths and increases the junction conductance by 6-7 orders of magnitude. Second, the crossing angle between the CNT axes remains crucial for the junction conductance when polymer is present, as was the case without polymer [@comp_no_pol]. At the same time, the dependence of the polymer-filled junction conductance on the CNT crossing angle differs somewhat from the analogous dependence for junctions without polymer. While in the latter case the dependence is sharply non-monotonic, with a pronounced minimum at angles around $0.25\pi$, in the former case there is a significant difference between the conductances of the parallel and nonparallel configurations, but the configurations with the angle $\varphi$ between the CNT axes equal to $0.25\pi$ and $0.5\pi$ have very close conductances: their time-averaged mean values $\langle G \rangle_{45}$ and $\langle G \rangle_{per}$ lie within the ranges $\langle G \rangle \pm G_\sigma$ of each other. Moreover, in contrast to the geometries without polymer, for the polymer-filled CNT junctions $\langle G \rangle_{per}$ is lower than $\langle G \rangle_{45}$, by a factor of 2.4.
Note also that for the parallel configuration the influence of the polymer on the junction conductance is more pronounced than for the nonparallel ones. For the parallel configuration, adding polymer to a junction of CNTs separated by 6.0 Å, with an initial conductance of 3.6$\cdot 10^{-13}$ S, yields a mean conductance of 1.8$\cdot 10^{-6}$ S, i.e. an enhancement factor of 0.5$\cdot 10^7$; the analogous factor for the perpendicular configuration is 0.33$\cdot 10^6$.
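For concreteness, both factors follow directly from the entries of Table \[tab\_cond\]; the superscript "no pol" is introduced here only to denote the conductance of the corresponding junction without polymer:
$$\frac{\langle G \rangle_{par}}{G^{no\,pol}_{par}} = \frac{1.8\cdot 10^{-6}\,\mathrm{S}}{3.6\cdot 10^{-13}\,\mathrm{S}} = 0.5\cdot 10^{7},
\qquad
\frac{\langle G \rangle_{per}}{G^{no\,pol}_{per}} = \frac{1.4\cdot 10^{-8}\,\mathrm{S}}{4.2\cdot 10^{-14}\,\mathrm{S}} \approx 0.33\cdot 10^{6}.$$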
The probable reason why the conductance increase upon adding polymer is more effective for configurations with smaller angles between the CNT axes is that the smaller the intersection angle, the larger the overlap area between the CNTs into which polymer can penetrate and thus create tunneling bridges. The stronger fluctuation of conductance over time for the parallel configuration has the same origin: a larger CNT overlap area gives the polymer atoms more freedom to adjust their positions.
The dependence $\sigma(\eta)$ of the calculated composite conductivity $\sigma$ on the CNT volume fraction $\eta$ is presented in Fig. \[fig\_cond\_pol\]. The percolation threshold is estimated in this work as $\eta_{thresh} = 0.007$. To test our implementation of the percolation algorithm against the previous results of [@Yu_2010], we calculated the composite conductivity using a fixed junction conductance corresponding to 1 M$\Omega$ for all junctions in the percolation network. Our results, shown in Fig. \[fig\_cond\_pol\] by the red circles, coincide within graphical accuracy with the results of [@Yu_2010] shown by the red squares.
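To illustrate the final Monte-Carlo step — computing the effective conductance of a network once every junction has been assigned a conductance — the following sketch solves Kirchhoff's equations on a small junction network. It deliberately omits the CNT placement, the tunneling-distance criterion and the agglomeration weighting of formula (\[agglo\]) described in section \[sec\_percolation\]; the function, the node labelling and the toy network are ours and serve only to show the linear-algebra step.

```python
import numpy as np

def network_conductance(n_nodes, edges, source, sink):
    """Effective conductance between two electrode nodes of a junction network.

    `edges` is a list of (i, j, g) tuples: nodes i, j joined by a junction of
    conductance g (S).  Kirchhoff's equations L V = I are solved with the sink
    grounded and a unit current injected at the source.
    """
    L = np.zeros((n_nodes, n_nodes))
    for i, j, g in edges:
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    keep = [k for k in range(n_nodes) if k != sink]   # ground the sink node
    Lr = L[np.ix_(keep, keep)]
    I = np.zeros(n_nodes - 1)
    I[keep.index(source)] = 1.0            # unit current injected at the source
    V = np.linalg.solve(Lr, I)             # node potentials (sink at 0 V)
    return 1.0 / V[keep.index(source)]     # G_eff = I / (V_source - V_sink)

# Toy example: four junction nodes, fixed junction resistance R = 1 MOhm.
g = 1.0 / 1.0e6
edges = [(0, 1, g), (1, 2, g), (0, 2, g), (2, 3, g)]
print(network_conductance(4, edges, source=0, sink=3))
```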
![ The conductivity of CNT enhanced nanocomposites above the percolation threshold obtained in this work. The symbols of different shapes and colors designate the following results. Red circles: a fixed CNT tunneling junction resistance of $R=1$ M$\Omega$ is used; red squares: the conductivity results for the fixed 1 M$\Omega$ tunneling resistance from [@Yu_2010]; black triangles: the same as the red circles but for $R=0.54$ M$\Omega$, corresponding to the mean tunneling junction resistance for the parallel configuration from Table \[tab\_cond\]; blue rhombi: the angle dependence of the CNT junction resistance is taken into account; green pentagons: CNT agglomeration is considered in addition to the angle dependence. The red line is a guide for the eye.[]{data-label="fig_cond_pol"}](figs/fig9/cnt_9.pdf){width="8cm"}
The value of 1 M$\Omega$ used in various sources, for example [@Yu_2010], is not arbitrary: it is a typical contact resistance of polymer-filled CNT junctions with simple geometries. In this work we obtained $1/\langle G \rangle =0.54$ M$\Omega$ for the parallel configuration. The $\sigma(\eta)$ dependence for the fixed tunneling resistance of $0.54$ M$\Omega$ is shown in Fig. \[fig\_cond\_pol\] by the black triangles.
Taking into account the angle dependence of the CNT junction conductances, with the statistical parameters of Table \[tab\_cond\], lowers the composite conductivity just above the percolation threshold by a factor of about 30. This number correlates with the ratio of the mean conductances for the parallel, $\langle G \rangle_{par}$, and 45$^\circ$, $\langle G \rangle_{45}$, configurations, $f_G = \langle G \rangle_{par}/ \langle G \rangle_{45} = 53$, but is smaller than $f_G$ because of the presence of junctions with $\varphi < \pi/4$, whose conductances remain closer to the parallel value.
If the agglomeration of CNTs, modeled by the inhomogeneity of their distribution according to formula (\[agglo\]) and the parameter values discussed in section \[sec\_percolation\], is taken into account in addition to the angle dependence of conductance, the composite conductivity above the percolation threshold is further reduced by a factor of 2.5. A lowering of the conductivity of composites with agglomerated CNTs above the percolation threshold was also reported in [@Bao_2011]. The calculated conductivity of a percolation network of agglomerated CNTs is shown in Fig. \[fig\_cond\_pol\] by the green pentagons.
We believe that in this work we have identified some of the key factors that influence the electrical conductivity of nanocomposites: the geometry of the tunneling junctions and the changes of atomic configurations due to thermal fluctuations. Other factors that may affect the conductivity, but are not considered in this work, include the presence of defects in the CNTs and the distributions of CNTs over chiralities, lengths, aspect ratios and inter-CNT separations.
Until specific conductivity experiments for the R-BAPB polyimide become available, we can only make a preliminary comparison of our modeling results with the available experimental results for different composites. The calculated conductivity of the composite just above the percolation threshold, at $\eta=0.0075$, is equal to $3.6\cdot10^{-3}$ S/m. This is a reasonable value that falls within the range of experimentally observed composite conductivities (for a comprehensive compilation of experimental results see Table 1 of [@Eletskii]). A quantitative comparison of the modeling results with experiments would require full details of the nanocomposite structure, including the CNT parameters mentioned in the previous paragraph. All these factors can be readily incorporated into the approach proposed in this work if sufficient computational resources are available.
Conclusions {#conclusions .unnumbered}
===========
We have proposed a physically consistent, computationally simple and at the same time precise multi-scale method for calculating the electrical conductivity of CNT enhanced nanocomposites. The method starts with the atomistic determination of the positions of the polymer atoms intercalated between CNTs at a junction, proceeds with fully first-principles calculations of the conductance of polymer-filled CNT junctions at the microscale, and finally models percolation through an ensemble of CNT junctions by the Monte-Carlo technique.
The developed approach has been applied to the modeling of the electrical conductivity of a nanocomposite of the polyimide R-BAPB with single-wall (5,5) CNTs.
Our major contributions to the field are the following. We have proposed a straightforward method for calculating the contact resistance and conductance of polymer-filled CNT junctions with arbitrary atomic configurations, without resorting to any simplifying assumptions. We have demonstrated that a consistent multiscale approach based on solid microscopic physical methods can give reasonable results for the conductivity of composites, lying within the experimental range, and we have suggested a corresponding work-flow.
It is shown that the contact resistance and the nanocomposite conductivity are highly sensitive to the geometry of the junctions, including the angle between the CNT axes and subtle thermal shifts of the polymer atoms in the inter-CNT gap. We therefore argue that rigorous atomistic quantum-mechanical approaches are indispensable for precise calculations of the electrical properties of nanocomposites.
We have to admit, though, that we have not considered all possible factors that may influence the CNT junction conductance at the micro-level. We concentrated on the CNT crossing angle because it appears to be the most influential factor. Additional factors may include, for example, defects in the CNTs, CNT overlap lengths and others. On the other hand, the proposed approach can be used to take all those factors into account, provided sufficient computational resources are available.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by the State Program “Organization of Scientific Research” (project 1001140) and by the Research Center “Kurchatov Institute” (order No. 1878 of 08/22/2019).
This work has been carried out using computing resources of the federal collective usage center Complex for Simulation and Data Processing for Mega-science Facilities at NRC “Kurchatov Institute”, <http://ckp.nrcki.ru/>.
[10]{} Yun-Ze Long, Meng-Meng, Changzhi Gu, Meixiang Wan, Jean-Luc Duvail, Zongwen Liu, Zhiyong Fan, Progress in Polymer Science [**36**]{} (2011) 1415–1442. G. B. Blanchet, C. R. Fincher, and F. Gao, Appl. Phys. Lett. 82, 1290 (2003). A.V. Eletskii, A.A. Knizhnik, B.V. Potapkin, J.M. Kenny, Physics — Uspekhi, [**58**]{}, 225 (2015). Matias Soto, Milton Esteva, Oscar Martínez-Romero, Jesús Baez and Alex Elías-Zúñiga, Materials 2015, [**8**]{}, 6697–6718. S Xu, O Rezvanian, K Peters and M A Zikry, Nanotechnology [**24**]{} (2013) 155706. Y. Yu, G. Song, and L. Sun, “Determinant role of tunneling resistance in electrical conductivity of polymer composites reinforced by well dispersed carbon nanotubes”, Journal of Applied Physics [**108**]{}, 084319 (2010). Sung-Hwan Jang and Huiming Yin, Mater. Res. Express [**2**]{} (2015) 045602. G. Pal, S. Kumar, Materials and Design [**89**]{} (2016) 129–136. James T. Wescott, Paul Kung, Amitesh Maiti, Appl. Phys. Lett. [**90**]{}, 033116 (2007). WS Bao, S A Meguid, Z H Zhu and M J Meguid, Nanotechnology [**22**]{} (2011) 485704. Krzysztof Grabowski, Paulina Zbyrad, Tadeusz Uhl, Wieslaw J. Staszewski, Pawel Packo, Computational Materials Science [**135**]{} (2017) 169–180. Castellino Micaela, Rovere Massimo, Shahzad Muhammad Imran, Tagliaferro Alberto, Composites: Part A [**87**]{} (2016) 237–242. Gabriele Penazzi, Johan M. Carlsson, Christian Diedrich, Gunter Olf, Alessandro Pecchia, and Thomas Frauenheim, J. Phys. Chem. C [**117**]{} (16), pp 8020–8027 (2013). K. Khromov, A. Knizhnik, B. Potapkin, J. M. Kenny, J. Appl. Phys., [**122**]{}, (2017). V.E. Yudin, V.M. Svetlichyi, G.N. Gubanova, A.L. Didenko, T.E. Sukhanova, V.V. Kudryavtsev, S. Ratner, G. Marom, J. Appl. Polym. Sci., [**83**]{}, 2873 (2002). V.E. Yudin, V.M. Svetlichnyi, A.N. Shumakov, D.G. Letenko, A.Y. Feldman, G. Marom, Macromolecular Rapid Communications, [**26**]{}, 885 (2005). S.V. Larin, S.G. Falkovich, V.M. Nazarychev, A.A. Gurtovenko, A.V. Lyulin, S.V. Lyulin, RSC Adv., [**4**]{}, 830 (2014). S.G. Falkovich, S.V. Larin, A.V. Lyulin, V.E. Yudin, J.M. Kenny, S.V. Lyulin, RSC Adv., [**4**]{}, 48606 (2014). V.E. Yudin, A.Y. Feldman, V.M. Svetlichnyi, A.N. Shumakov, G. Marom, Compos. Sci. Tech., [**67**]{}, 789 (2007). V.M. Nazarychev, S.V. Larin, A.V. Yakimansky, N.V. Lukasheva, A.A. Gurtovenko, I.V. Gofman, V.E. Yudin, V.M. Svetlichnyi, J.M. Kenny, S.V. Lyulin, J. Polym. Sci. B, [**53**]{}, 912 (2015). S.V. Lyulin, A.A. Gurtovenko, S.V. Larin, V.M. Nazarychev, A.V. Lyulin, Macromolecules, [**46**]{}, 6357 (2013). S.V. Lyulin, S.V. Larin, A.A. Gurtovenko, V.M. Nazarychev, S.G. Falkovich, V.E. Yudin, V.M. Svetlichnyj, I.V. Gofman, A.V. Lyulin, Soft Matter, [**10**]{}, 1224 (2014). V.M. Nazarychev, A.V. Lyulin, S.V. Larin, I.V. Gofman, J.M. Kenny, S.V. Lyulin, Macromolecules, [**49**]{}, 6700 (2016). S.V. Larin, A.D. Glova, E.B. Serebryakov, V.M. Nazarychev, J.M. Kenny, S.V. Lyulin, RSC Advances, [**5**]{}, 51621 (2015). D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A.E. Mark, H.J.C. Berendsen, J. Comput. Chem., [**26**]{}, 1701 (2005). B. Hess, C. Kutzner, D. van der Spoel, E. Lindahl, J. Chem. Theory Comput., [**4**]{}, 435 (2008). B. Hess, H. Bekker, H.J.C. Berendsen, J.G.E.M. Fraaije, J. Comp. Chem., [**18**]{}, 1463 (1997). H.J.C. Berendsen, in Computer Simulations in Material Science, ed. M. Meyer and V. Pontikis, Kluwer, Dordrecht, 1991. H.J.C. Berendsen, J.P.M. Postma, W.F. van Gunsteren, A. Dinola and J. R. J. Haak, Chem. Phys., [**81**]{}, 3684 (1984). T. 
Darden, D. York, L. Pedersen, J. Chem. Phys., [**98**]{}, 10089 (1993). U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee, L. G. Pedersen, J. Chem. Phys., [**103**]{}, 8577 (1995). S. Datta, Transport in Mesoscopic Systems (Cambridge University Press, Cambridge, UK, 1995). T. Ozaki, H. Kino, Phys. Rev. B [**72**]{}, 045121 (2005). See http://www.openmx-square.org for OpenMX (Open source package for Material eXplorer) - a software package for nano-scale material simulations. I. Morrison, D.M. Bylander, L. Kleinman, Phys. Rev. B [**47**]{}, 6728 (1993). J. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. [**77**]{}, 3865 (1996). J. O. Aguilar, J. R. Bautista-Quijano, F. Avilés, “Influence of carbon nanotube clustering on the electrical conductivity of polymer composite films”, eXPRESS Polymer Letters Vol.4, No.5 (2010) 292–299. Reza Taherian and Ayesha Kausar, “Electrical Conductivity in Polymer-Based Composites: Experiments, Modelling, and Applications”, William-Andrew, 2019. Xu W., Su X., Jiao Y. et al. // Phys. Rev. E. 2016. V.93(3), 032122. DOI:10.1103/PhysRevE.94.032122. Schilling T., Miller M.A. and Schoot P. // EPL (Europhysics Letters). 2015. V.111 (5). DOI: 10.1209/0295-5075/111/56004. Newman M. E. J., Ziff R. M. // Phys. Rev. E. 2001. V. 64, 016706. DOI:10.1103/PhysRevE.64.016706. Nicolo Chiodarelli, Sugiura Masahito, Yusaku Kashiwagi, Yunlong Li, Kai Arstila, Olivier Richard, Daire J Cott, Marc Heyns, Stefan De Gendt, Guido Groeseneken, and Philippe M Vereecken, “Measuring the electrical resistivity and contact resistance of vertical carbon nanotube bundles for application as interconnects”, Nanotechnology [**22**]{} (2011) 085302. T. W. Ebbesen, H. J. Lezec, H. Hiura, J. W. Bennett, H. F. Ghaemi, T. Thio, “Electrical conductivity of individual carbon nanotubes”, Nature, [**382**]{}, 54 (1996). Scott Kirkpatrick, “Percolation and Conduction”, Rev. Mod. Phys. [**45**]{}, 574 (1973). “Polymer Nanotube Nanocomposites: Synthesis, Properties, and Applications, Second Edition”, ed. Vikas Mittal, Wiley, 2014.
---
abstract: 'Natural philosophy necessarily combines the process of scientific observation with an abstract (and usually symbolic) framework, which provides a logical structure to the development of a scientific theory. The metaphysical underpinning of science includes statements about the process of science itself, and the nature of both the philosophical and material objects involved in a scientific investigation. By developing a formalism for an abstract mathematical description of inherently non-mathematical, physical objects, an attempt is made to clarify the mechanisms and implications of the philosophical tool of *Ansatz*. Outcomes of the analysis include a possible explanation for the philosophical issue of the ‘unreasonable effectiveness’ of mathematics as raised by Wigner, and an investigation into formal definitions of the terms: principles, evidence, existence and universes that are consistent with the conventions used in physics. It is found that the formalism places restrictions on the mathematical properties of objects that represent the tools and terms mentioned above. This allows one to make testable predictions regarding physics itself (where the nature of the tools of investigation is now entirely abstract) just as scientific theories make predictions about the universe at hand. That is, the mathematical structure of objects defined within the new formalism has philosophical consequences (via logical arguments) that lead to profound insights into the nature of the universe, which may serve to guide the course of future investigations in science and philosophy, and precipitate inspiring new avenues of research.'
bibliography:
- 'dualref.bib'
date:
---
**A more general treatment of the philosophy of physics and the existence of universes**\
[Jonathan M. M. Hall[^1]\
University of Adelaide, Adelaide, South Australia 5005, Australia]{}
Introduction {#sect:intro}
============
The study of physics requires both scientific observation and philosophy. The tenets of science and its axioms of operation are not themselves scientific statements, but philosophical statements. The profound philosophical insight precipitating the birth of physics was that scientific observations and philosophical constructs, such as logic and reasoning, could be married together in a way that allowed one to make predictions of observations (in science) based on theorems and proofs (in philosophy). This natural philosophy requires a philosophical ‘leap’, in which one makes an assumption or guess about what abstract framework applies most correctly. Such a leap, called *Ansatz*, is usually arrived at through inspiration and an integrated usage of faculties of the mind, rather than a programmatic application of certain axioms. Nevertheless, a programmatic approach allows enumeration of the details of a mathematical system. It seems prudent to apply a programmatic approach to the notion of Ansatz itself and to clarify its process metaphysically, in order to gain a deeper understanding of how it is used in practice in science; but first of all, let us begin with the inspiration.
A metaphysical approach {#sect:meta}
=======================
In this work, a programme is laid out for addressing the philosophical mechanism of Ansatz. In physics, a scientific prediction is made firstly by arriving at a principle, usually at least partly mathematical in nature. The mathematical formulation is then guessed to hold in particular physical situations. The key philosophical process involved is exactly this ‘projecting’ or ‘matching’ of the self-contained mathematical formulation with the underlying principles of the universe. No proof is deemed possible outside the mathematical framework, for proof, as an abstract entity, is an inherent feature of a mathematical (and philosophical) viewpoint. Indeed, it is difficult to imagine what tools a proof-like verification in a non-mathematical context may use or require.
It may be that the current lack of clarity in the philosophical mechanism involved in applying mathematical principles to the universe has implications for further research in physics. For example, in fine-tuning problems of the Standard Model of particle interactions (such as for the mass of the Higgs boson [@DGH; @Martin:1997ns] and the magnitude of the cosmological constant [@ArkaniHamed:1998rs]) it has been speculated that the existence of multiple universes may alleviate the mystery surrounding them [@Wheeler; @Schmidhuber:1999gw; @Szabo:2004uy; @Gasperini:2007zz; @Greene:2011], in that a mechanism for obtaining the seemingly finely-tuned value of the quantity would no longer be required- it simply arises statistically. However, if such universes are causally disconnected, e.g. in disjoint ‘bubbles’ in Linde’s chaotic inflation framework [@Linde:1983gd; @Linde:1986fc], there is a great challenge in even demonstrating such universes’ existence, which draws into question the rather elaborate programme of postulating them. Setting aside for the time being the use of approaches that constitute novel applications of known theories, such as the exploitation of quantum entanglement to obtain information about the existence of other universes [@Tipler:2010ft], a more abstract and philosophical approach is postulated in this paper.
The universality of mathematics
-------------------------------
Outside our universe, one is at a loss to intuit exactly which physical principles continue to hold. For example, could one assume a Minkowski geometry, and a causality akin to our current understanding, to hold for other universes and the ‘spaces between’, if indeed the universes are connected by some sort of spacetime? Indeed, such questions are perhaps too speculative to lead to any real progress; however, if one takes the view of Mathematical Realism, which often underpins the practice of physics, as argued in Section \[sec:mr\], and the tool of Ansatz, one can at least identify *mathematical principles* as principles that should hold in any physical situation- our universe, or any other. This viewpoint is more closely reminiscent of Level IV in Tegmark’s taxonomy [@Tegmark:2003sr] of universes. One may imagine that mathematical theorems and logical reasoning hold in all situations, and that all ‘universes’ (a term in need of a careful definition to match closely with the sense it is meant in the practice of physics) are subject to mathematical inquiry. In that case, mathematics (and indeed, our own reasoning) may act as a ‘telescope to beyond the universe’ in exactly the situation where all other senses and tools are drawn into question.
To achieve the goal of examining the process of the Ansatz- of matching a mathematical idea to a non-mathematical entity (or phenomenon), one needs to be able to define a non-mathematical object abstractly, or mathematically. Of course, such an entity that can be written down and manipulated is indeed not ‘non-mathematical’. This is so in the same way that, in daily speech, an object can be referred to only by making an abstraction (c.f. ‘this object’, ‘what is *meant* by this object’ ‘what is *meant* by the phrase ‘what is meant by this object’ ’, etc.). This nesting feature is no real stumbling block, as one can simply identify it as an attribute of a particular class of abstractions- those representing non-mathematical objects. Thus a rudimentary but accurate formulation of non-mathematical objects in a mathematical way will form the skeleton outline for a new and fairly general formalism. After developing a mathematics of non-mathematical objects, one could then apply it to a simple test case. Using the formalism, one could derive a process by which an object is connected or related somehow to its description, using only the theorems and properties known to hold in the new framework. The formalism could then be applied to the search for other universes, and the development of a procedure to identify properties of such universes. In doing so, one could make a real discovery so long as the phenomenological properties are not introduced ‘by hand’. This follows the ethos of physics, whereby an inspired principle (or principles) is followed, sometimes superficially remote from a phenomenon being studied, but which has profound implications not always perceived contemporaneously (and not introduced artificially), which ultimately guide the course of an inquiry or experiment.
There is an additional motivation behind this programme beyond addressing the mechanism of the Ansatz, which is to attempt to clarify philosophically Wigner’s ‘unreasonable effectiveness’ of mathematics [@Wigner] itself. It is the hope of this paper to identify this kind of ‘effectiveness’ as a kind of fine-tuning problem, i.e. that it is simply a feature that naturally arises from the structure of the new formalism.
Evidences {#subsect:evintro}
---------
In the special situation where one uses mathematical constructs exclusively, the type of evidence required for a new discovery would also need to be mathematical in nature, and testing that it satisfies the necessary requirements to count as evidence in the usual scientific sense could be achieved by using mathematical tools within the new framework. To explain how this might be done, consider that evidence is usually taken to mean an observation (or collection of observations) about the universe that supports the implications of a mathematical formulation prescribed by a particular theory. Therefore, it is necessary to have a strict separation between objects that are considered ‘real/existing in the universe’, and those that are true mathematical statements that may be applied or *projected* (correctly or otherwise) onto the universe.
Note that, for evidence in the usual sense, any observations experienced by the scientist are indeed abstractions also. For example, in examining an object, photons reflected from its surface can interact in the eye to produce a signal in the brain, and the interpretation of such a signal is an object of an entirely different nature to that of the actual photons themselves- much observational data is, in fact, discarded, and most crucially, the observation is then fitted into an abstract framework constructed in the mind. In a very proper sense, the more abstract is the more tangible to experience, and the more material is the more alien to experience. Therefore, it seems reasonable to suggest that a definition of evidence in familiar scientific settings is already equivalent to evidence in an entirely mathematical framework; in fact, the distinction between the two is purely convention.
Mathematical Realism {#sec:mr}
====================
Historical Background
---------------------
In the ‘world of ideals’ (as developed from the notion of Plato’s ‘universals’ [@Soc], rather than Berkeley’s Idealism [@Berk1]) there are certain abstract objects (‘labels’ or ‘pointers’), which refer to material objects. The Ansatz arises by guessing and then assuming a particular connection between those pointers and other abstract objects. This allows material objects to be entirely objective (to avoid solipsism), but also, in some sense, entirely subservient to abstract objects. An observer can interact with them only indirectly, via interpretation. Thus, the abstract affects the abstract, and the abstract affects the physical.
One might argue, contrarily, that entities existing in the abstract mind are altered via natural or material means [@Jackson]. Certain mental states are invoked upon interpretation of the empirical, regardless of the existence of patterns, which are abstractions (and that this would be true even if the universe were ‘unreasonable’- not in general amenable to rational inquiry). It is the point of view expounded in this treatise, however, that material objects cannot interact so directly with other material objects, but only indirectly, since the interactions themselves would otherwise need to be material. Yet an interaction is necessarily an abstract link (i.e. it has to follow some pattern, rule or law) even when at rest. That is, the notion of ‘interaction’ is necessarily abstract. It is this positing of only indirect relations among physical objects that constitutes a view opposite to that of epiphenomenalism [@Jackson; @Huxley]. The difference in viewpoint may appear to hinge on the semantics of the terms ‘interaction’ and ‘abstract’, but the goal is simply to characterise the oft-proved successful methods of physics *as already applied* in practice, through a particular choice in philosophy. Our goal is that, by simply identifying (and thus labelling) the salient features of this philosophy, an investigation into the more general (and philosophical) aspects of the practice of physics can be conducted. On the other hand, the discussion of the soundness of such a philosophy on other grounds constitutes a tangential topic, and the presentation of a complete enumeration of various emendations or contrary viewpoints on the choice of philosophy will be left for further investigation.
Contemporary developments
-------------------------
The evolution of metaphysics from rudimentary Cartesian Dualism [@Desc] to that proposed by Bohm [@Bohm] demonstrates the usefulness of a mathematical viewpoint in clarifying and enriching philosophical ideas as they pertain to physics, and the universe as a whole. Abstract relationships are centralised, and underlying principles of matter, rather than a catalogue of its immediate properties, are interpreted to have the greater influence in accessing the fundamental nature of the physical world. The shift in perspective is that the simple and elegant descriptions of a physical system are those which incorporate its seemingly disparate features into an integrated whole. One then goes on to postulate a relationship between the physical object in question and the machinery of its abstract description.
Similar philosophical views have found success in the field of neuroscience, such as the work of Damásio in characterising consciousness [@Damasio1; @Damasio2]. Instead of treating the mind and body as separate entities, one postulates an integrated system, whereby mechanisms in the body, such as internal and external stimuli, result in neurological expressions such as emotion. Emotion and reason are thus brought together on equal footing, since both actions are the result of comparing and evaluating a variety of stimuli, including other emotions, to arrive at a response. That is, from a modern perspective, just as relationships between physical objects are fundamental in characterising the intangible properties of their whole, it is the abstract relationships between faculties in the body and brain, such as interactions and stimuli, that characterise consciousness.
The identification of phenomena with abstract descriptions, such as behaviour and interactions, was formalised in Putnam and Fodor’s Functionalism [@Putnam; @Block]. Functionalism provides the ability to consider indirect or ‘second order’ explanations for the nature of objects. Unlike Physicalism, which identifies the nature of objects with the instances themselves that occur in the real-world, Functionalism entails the generalisation of the objects in terms of their functional behaviour. These more general classes of object are identified by the features that all relevant instances of the object have in common, and so the nature of the object becomes more ubiquitous, even if more abstract. This may be nothing more than a semantic shift, in cases where one is at liberty to allow the definitions of certain abstractions more scope as needed (such as pain or consciousness), so that they more closely match their use in daily human endeavours. Further abstractions, such as ‘causes’, an integral part of many areas of science, follow naturally.
Taken on its own, Functionalism represents a deprecated metaphysic, insufficient for a complete account of internal states of a physical system [@Searle], and is therefore commonly employed simultaneously with another metaphysic (such as Physicalism). By not providing for a ‘real’ or material existence independent of an object’s functional behaviour, a bare Functional philosophy is not wholly suitable for describing the process of physics, which involves identifying material objects whilst projecting upon them an Ansatz from some (abstract) theory.
As an example of the shift in perspective that Functionalism brings, consider the following scenario of an abstract entity based on real-world observations, such as an emotion/state of affairs, etc., whose cause is in want of identification. Let this object be denoted a ‘feature’. Instead of the cause simply being identified with a specific phenomenon, or with a mechanism based on the real world, the cause is characterised by an abstract object, $G$, which represents a collection of pointers to the relevant parts of the mechanism. If we posit, for the moment, an abstract interpreting function, $i:R\rightarrow A$, relating the real-world, $R$, to the world of ideals, $A$, and similarly, a pointer function, $r:A\rightarrow R$, one can establish a relationship between the cause and the feature. For a mechanism, $M$, a set of features, $F$, and an element $g_1\in G$, define $M=r(g_1)$, which resides in $r(G)$, and then $i(r(g_1))=i(M)=F$. In this (fairly loose) symbolic description, the features and the cause are related in the statement $i(r(G))=F$. The ‘reverse-epiphenomenal’ philosophy, akin to Interactionism, is to realise that the relation itself between the two entities is an abstract one, whose attributes it will benefit us to characterise.
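The (fairly loose) symbolic description above can be made concrete in a small finite model, sketched in Python below. The sets, the dictionaries standing in for $r$ and $i$, and all names are illustrative assumptions introduced here only to exhibit the composition $i(r(g_1))=F$.

```python
# A finite toy model of the symbolic description above.  The 'real world' R and
# the 'world of ideals' A are modelled as small sets of labels; the pointer
# function r and the interpreting function i are plain dictionaries.
A = {"g1", "F"}                    # abstract objects: a cause-pointer and a feature
R = {"M"}                          # the real-world mechanism

r = {"g1": "M"}                    # pointer function      r : A -> R
i = {"M": "F"}                     # interpreting function i : R -> A

def cause_to_feature(g):
    """Return the feature obtained by interpreting the mechanism pointed to by g."""
    return i[r[g]]                 # i(r(g))

assert cause_to_feature("g1") == "F"   # i(r(g1)) = i(M) = F
```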
Formalism {#sect:form}
=========
In this section, a new formalism is outlined in order to capture the essential features of the philosophical problem at hand. A set of very general *abstraction operators* are defined, such that they may act upon each other in composition. By introducing another special kind of operation, the projection $\PP$, general objects may be constructed such that they, at the outset, obey the basic principles expected to hold for objects and attributes used in a recognisable context, such as in language. To avoid semantic trouble, when one is free to assume or assign a property in a given context, the choice made is that which is most closely aligned with ‘what is commonly understood’ by a term. Note that other definitions are (unless logically non-viable) completely acceptable also- it is simply a choice of convenience to try to align the concepts chosen to be investigated with those of a language (such as a spoken or written language). In fact, it is judicious to do so, given that any philosophical problems one may wish to address are usually cast in such a language.
Projective algebra and abstraction classes {#subsect:proj}
------------------------------------------
Though Cantor’s Theorem [@Cantor] prohibits a consistent scheme classifying the space of all such abstract entities (as echoed by Schmidhuber [@Schmidhuber:1999gw]), the abstractions considered here are limited to a set, $W$, of ‘world objects’, representing a set of a specific type of object with certain (very general) properties. Very little mandatory structure for the objects, $w$, inhabiting $W$ is assumed, and they may be represented by a set, a group or other more specific mathematical objects. Thus, one may define $W$ in a consistent fashion using an appropriate axiomatization, such as that of ZFC [@Zermelo], so long as none of the properties of the formalism is contravened.
In Section \[subsubsect:gen\], the properties of the real-world objects $w$ are clarified. Some basic rules of composition are assumed, but the spaces mapped-into in doing so are simply definitions rather than theorems; the tone of this work is not to impose any more specific details on the framework than is required in order to fulfill the aforementioned goals, namely, the construction of a mathematical-like theory in order to address the mechanism of Ansätze. Other mathematical formulations for obtaining general information about a system, such as Deutsch’s *Constructor Theory* [@Deutsch:2012], take a similar approach in determining suitable definitions for objects required for certain tasks in an inquiry.
### The labelling principle {#subsubsect:lp}
Firstly, the projection operator, $\PP$, obeys what shall be known as the *labelling principle*: $$\label{eq:lp}
\boxed{\PP\circ\PP = \PP.}$$ A direct consequence of the labelling principle is that $\PP$ has no inverse, $\PP^{-1}$.
Assume $\PP^{-1}$ exists, and note that $\PP\neq 1$. Then: $$\begin{aligned}
1 = \PP\circ\PP^{-1} &= (\PP\circ\PP)\circ\PP^{-1} = \PP\circ(\PP\circ\PP^{-1}) = \PP \\
& \neq 1. \mbox{\quad Therefore, $\PP^{-1}$ DNE.} \qedhere \end{aligned}$$
where $1$ is the identity operator. An equivalent argument follows for an operator $\PP^{-1}$ acting on the left of $\PP$.
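The labelling principle can be illustrated in a small computational model, sketched below, in which compositions of operators are represented as words and the rule $\PP\circ\PP=\PP$ is applied as a rewrite step. The model, and all names in it, are illustrative assumptions and not part of the formalism; the point is only that, after projecting, one cannot recover how many times $\PP$ was applied, which is the informal content of the non-existence of $\PP^{-1}$.

```python
# Illustrative model: operator compositions as words over {'P', 'A1', 'A2', ...},
# read left to right as written in the text (the leftmost operator acts last).
# The labelling principle is the rewrite rule P P -> P.

def reduce(word):
    """Apply the labelling principle until no adjacent pair 'P','P' remains."""
    out = []
    for op in word:
        if op == 'P' and out and out[-1] == 'P':
            continue                      # P o P = P : the second P adds nothing
        out.append(op)
    return tuple(out)

def compose(*words):
    """Composition = concatenation followed by reduction."""
    return reduce(sum(words, ()))

# P o P and P reduce to the same word ...
assert compose(('P',), ('P',)) == ('P',)
# ... so no operation applied afterwards can distinguish a single projection
# from a repeated one:
assert compose(('P',), ('P',), ('A1',)) == compose(('P',), ('A1',))
```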
The projection operator may be applied to a world object $w$, and the resultant form, $\PP(w)$, constitutes a new object, inhabiting a different space from that of $w$. Firstly, the consequences of the lack of inverse of $\PP$ directly affect the projected space $\PP W$, which will be interpreted philosophically in the next section. Suffice it to say that the judicious design of $\PP W$ lends itself to a particular view of abstractions, whereby very little information can be gained from an object in the real world *directly*, as discussed regarding the definition of evidence in Section \[subsect:evintro\].
### Abstractions
The notion of ‘abstraction’ is codified by postulating a certain operator $\A$, which may act on objects residing in a space $W$, much like the projection operator. It will, however, have different properties from those of the projection operator. Using the abstraction operator, one is able to go ‘up a level’, ($\A\circ\A(w)\neq\A(w)$), and establish new features of the object $w$. The sequential application of the abstraction operator creates a chain, in a fashion reminiscent of (co-)homologies [@Bott]; however, the properties of the abstraction operators are more general.
One may define the abstraction classes, $\Om^i$, as $$\begin{aligned}
1 &\in\Om^0,\\
\A &\in\Om^1,\\
\A\circ\A &\in \Om^2,\\
&\vdots \end{aligned}$$ For the moment, the properties of the classes are no more extensive than, say, a collection of elements (the operators). The *range* of $\A$, namely $\Om^1 \equiv \{\A_i\}$, is a class of any type of $\A$. (What is meant by the set symbols $\{$ $\}$ will be discussed in Section \[subsubsect:coll\].) It follows that, for $w\in W$, $\Om^0(w) = w\in W$.
The sequential actions of the projection and the abstraction operators do not cancel each other, and it can be shown that $\A\circ\PP(w)\neq w$:
\[thm:cochains\] $\A\circ\PP(w) \neq w$.
Assume $\A\circ\PP(w) = w$. Then: $$\begin{aligned}
&\A\circ \PP\in\Omega^0 \\
&\Rightarrow \A\circ\PP = 1 \\
&\Rightarrow \A =\PP^{-1}, \,\ro{DNE.} \qedhere \end{aligned}$$
Thus, $\A\circ\PP(w)\in\Om^1(\PP(w))$, for some $w\in\Om^0(w)$.
Consider the complex of maps: $$\begin{aligned}
&\Om^0\rightarrow\Om^1\rightarrow\Om^2\cdots\\
&\downarrow\searrow^\chi\,\,\,\downarrow\qquad\downarrow\\
&\PP\,\Om^0\,\,\,\PP\,\Om^1\,\,\,\PP\,\Om^2\cdots \end{aligned}$$ It is possible to design a function, $\chi\equiv\PP\circ\A$, which exists, and will be utilised in Section \[subsubsect:gen\]. However, it is important to note that our constraints on $\PP$ do not allow the construction of a function $\varphi:\PP\,\Om^0\rightarrow\PP\,\Om^1$, or any other mapping between projections of abstraction classes. Mathematically, $\varphi$ would have the form $\PP\circ\A\circ\PP^{-1}$, which does not exist; but philosophically, it is supposed of $\PP(w)$ that it encode the behaviour of the actual world object *meant* by $w$. One interprets the ‘non-mathematical’ object, $\PP(w)\in\PP W$, knowing that the fact it is necessarily an abstraction is already encoded in the behaviour of $\PP$ by construction. Note that the failure to construct a function $\varphi$ as a composite of abstraction operators and their inverses does not mean that such a mapping does not exist. However, the principal motivation for supposing the non-existence of such a function is to encode the features one expects in an abstract modelling of non-abstract objects.
This view of the general structure of abstraction is an opposite view to the metaphysic of epiphenomenalism [@Huxley; @Jackson], in that, colloquially speaking, changes to ‘real-world’ objects can only occur via some abstract state, and it does not make sense to set up a relationship between non-mathematical entities and insist that such a relationship must be non-abstract.
Different instances of $w$ cannot be combined in general, but their abstractions can be compared by composition. The objects $\A(w_1)$ and $\A(w_2)$ can also be defined to be comparable, via use of the *commutators*, in Section \[subsubsect:comm\].
In considering the properties of $\Om^0(W)$, one finds that $$\begin{aligned}
\Om^0(\Om^0(W)) &= W \\
\Rightarrow \Om^0\circ\Om^0 &= \Om^0. \end{aligned}$$ Generalising to higher abstraction classes, we find the *level addition property*: $$\label{eq:la}
\boxed{\Om^i\circ\Om^j = \Om^{i+j}.}$$ The non-uniqueness of $\A$ means that many abstract objects can describe an element of $W$. In general, $\A_i\circ\A_j(w)\neq \A_j\circ\A_i(w)$, so $\Om^1(\A_i(w))\neq\A_i(\Om^1(w))$, though both $\Om^1(\A_i(w))$ and $\A_i(\Om^1(w))$ are in $\Om^2(w)$. The set $\Om^0$ includes the identity operator $1$, but also contains elements constructed from abstractions and other inverses, e.g. $\A_{L,i}^{-1}\circ\A_j(w)$, to be discussed in Section \[subsubsect:li\].
### Commutators {#subsubsect:comm}
Define the commutator as an operator that takes the elements of the $i$th order abstraction space, acting on a world object $w_1$, to the same abstraction space acting on another world object $w_2$, $\Com^i_{W=\Om^0(W)}: \Om^i(w_1)\rightarrow\Om^i(w_2)$. The subscripts on the commutator symbol indicate the space inhabited by the objects whose abstractions are to be commuted, and the labels of the discarded and added objects, respectively. The superscript denotes the order of abstraction (plus one) at which the commutation takes place. As a simple example, $\Com^1_{W,1,2}\A(w_1) = \A(w_2)$. In general, let $$\begin{aligned}
\Com^1_{W,i,j}\A(w_i) &= \A(w_j),\\
\Com^2_{\Om^1(w),i,j}\A_k\circ\A_i(w) &= \A_k\circ\A_j(w),\\
\mbox{and}\,\Com^{b+1}_{\Om^b(w),i,j}
\underbrace{\A\circ\cdots\circ\A^{b}_i\circ\cdots\circ\A}_a(w) &=
\A\circ\cdots\circ\A^{b}_j\circ\cdots\circ\A(w). \end{aligned}$$ In order to construct the new object from the old object, one must successively apply ‘inverse’ operations of the relevant abstractions to the left of the old object (as discussed in the next section), and rebuild the new object by re-applying the abstractions. This is not possible in general, where objects may include operators that have no inverse, such as the projection operator.
### The left inverse of the abstraction {#subsubsect:li}
Define the left inverse $$\A_L^{-1}\circ\A = 1,$$ or more generally, $\A_L^{-1}\circ\A(w) = w$. A right inverse is not assumed to exist in general, which will be important in establishing certain kinds of properties in Section \[subsect:ext\].
As a generalization, one can define a chain of negatively indexed abstraction classes $\Om^{-|i|}$. The level addition property can accommodate this scenario. The elements of $\Om^0$ are populated by objects of the form $\A_{L,i}^{-1}\circ \A_j$, or $\A_i\circ \A_{L,j}^{-1}\circ \A_k \circ \A_{L,n}^{-1}$, etc. That is, successive abstractions and inverses in any combination such that the resulting abstraction space is order zero. This includes the identity operator.
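The level-addition bookkeeping, including the negatively indexed classes, can be made explicit with the short sketch below. The word representation and the symbol names are illustrative assumptions only; the class index of a composition is simply the sum of the individual levels.

```python
# Illustrative bookkeeping for the level addition property.  An element of an
# abstraction class is a word of symbols: 'A' for an abstraction, 'Ainv' for a
# left inverse, '1' for the identity; the class index is the sum of levels.

LEVEL = {'A': 1, 'Ainv': -1, '1': 0}

def omega_index(word):
    """Return i such that the word lies in Omega^i."""
    return sum(LEVEL[op] for op in word)

assert omega_index(('A', 'A')) == 2                 # A o A lies in Omega^2
assert omega_index(('Ainv', 'A')) == 0              # A_L^{-1} o A lies in Omega^0
assert omega_index(('A', 'A')) == omega_index(('A',)) + omega_index(('A',))
```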
By using the left inverse, it can be shown that the following theorem holds (which complements Theorem \[thm:cochains\]), as a consequence of the choice of the philosophical properties of $\PP$:
\[thm:chains\] $\PP\circ\A(w)\neq w$.
Assume $\Om^{-2}\neq\Om^{-1}$, and $\PP=\A_L^{-1}$. Then: $$\begin{aligned}
\PP\circ\PP(w) &= \A_L^{-1}\circ\A_L^{-1}(w)\\
&= \PP(w) \quad (\mbox{Eq.~(\ref{eq:lp})})\\
&= \A_L^{-1}(w)\\
\Rightarrow \A_L^{-1}\circ\A_L^{-1} &= \A_L^{-1}
\,\Rightarrow\Leftarrow \qedhere \end{aligned}$$
As a corollary, it also follows that $$\label{cor:chains}
\PP\circ\A(w) \neq \A\circ\PP(w).$$
In summary, this non-commutativity property of the abstraction operators in Eq. \[cor:chains\] is an important consequence of the reverse-epiphenomenal philosophical motivation behind the labelling principle, and it will be the starting point for the construction of the generalised objects in Section \[subsubsect:gen\].
### Auxiliary maps
In order for a successful description of the relationships among different objects in general, a definition of the mapping between objects of the form $\A\PP(w_1)$ and $\A\PP(w_2)$ is sought.
Up until now, maps of the following types have been considered:\
$\bullet\quad$ $w\rightarrow \A(w)$, where the notion of the map itself is an abstraction of $\A$ of the form:\
$\A_{\ro{map}}\circ\A\in\Om^2(w)$;\
$\bullet\quad$ $\A\circ\A(w)\rightarrow w$, where the map is now of the form $\A_{\ro{map}}\circ\A\circ\A\in\Om^3(w)$, and\
$\bullet\quad$ $\A(w)\rightarrow w$, identical to the first case, except that the direction is reversed. By convention, let the definition of map be chosen such that the direction of mapping is not important, but simply the relationship between the two objects. Therefore, the map is always taken such that it exists in a space $\Om^{i\geq 1}$, that is, we define it as the modulus of the map. Note that each of these maps is constructed as a composite of other abstractions, and shall be denoted *literal maps*.
It cannot, in general, be attested that there is no such mapping between some $w_1$ and $w_2$ in $W$. However, one is free to *define* to exist a non-composite type of map, denoted *auxiliary maps*, which relate objects of the form $\A\PP(w_1)$ and $\A\PP(w_2)$, or $\A\PP\A_1(w)$ and $\A\PP\A_2(w)$, etc. The map itself exists in the space $\Om^1(w)$, that is, the application of what is meant by the map does not change the order of the objects to which it is applied.
Using the concept of auxiliary maps, a generalized version of the commutator, denoted $\hat{\Com}$, may be defined in a way not possible for the naïve construction of the commutator $$\hat{\Com}^2_{\Om^1(w),i,j}\A\,\PP\A_i(w) = \A\,\PP\A_j(w).$$ The *auxiliary commutator* will be important for the neat formulation of the general condition for a specific kind of existence, in Sec \[subsect:ext\].
### The total set
One may define the *total set* of an object $w$ as $$\begin{aligned}
S(w) &= \bigcup_{i=0}^\infty\Om^i(w),\\
S(\PP(w)) &= \bigcup_{i=0}^\infty\Om^i\circ\PP(w).\end{aligned}$$ The $\bigcup$ symbol denotes the range of the multiple objects, indexed by an integer $i$, $j$, etc. over which the set should be specified. Here, one postulates a certain *supposition of physics*, that $\PP S(w)$ spans at least $W$.
$\PP$ is surjective, i.e. $\forall w_i \in W, \exists \sigma_i\in S(w_i)$ such that $\PP(\sigma_i) = w_i$.
This condition represents the ‘working ethos’ of the practice of physics. It is expected that there is some abstract description, however elaborate or verbose, to describe every real-world object.
Relationships {#subsect:rel}
-------------
In this section, the tools introduced in the preceding section will be used to define more general relationships between objects. In addition, a generalised object notation will be defined, and the nature of the real-world objects $w$ will be clarified.
### Ansätze
An Ansatz is formed by adding a structure, or additional layer of abstraction, and imposing it on what one considers ‘the real world’. Cast in the new framework, this is simply the successive application of an abstraction and a projection operator upon some object $$\Z_i(w) = \PP\circ\A_i(w).$$ As a consequence of the labelling principle in Eq. (\[eq:lp\]), the Ansatz of an object, $w$, and what is meant by the real-world object corresponding to $w$, cannot be related directly, recalling that $\varphi:\PP\circ\A(w)\rightarrow\PP(w)$ does not exist. This simply means that there is no way of generating the label of an object directly from the object itself; it is a free choice. The Ansatz is akin to the statement: ‘let this new label (with possible additional information) be linked to the real object’. The notion of Ansatz, particularly the special examples considered in Sec. \[subsect:ext\], will be useful in understanding the formal structure of existence as it pertains to real-world objects.
### Collections and relationships {#subsubsect:coll}
A *collection* or *set* of objects, $\{w_i\}$ (indexed by integer $i$), in the formalism, is simply treated as an abstraction, $\A_\ro{set}$, used in conjunction with a commutator: $$\{w_1,\ldots,w_N\} = \bigcup_{j=1}^N\A_\ro{set}\circ\Com^1_{W,i,j}w_i.$$ By further imposing that there should be a relationship (other than the collection itself) among the objects $w_i$, the additional information is simply added by another abstraction, say, $\A'$, and what is meant by this particular relationship is simply: $$r(w_i) = \PP\,\A'\circ\bigcup_{j=1}^N\A_\ro{set}\circ\Com^1_{W,i,j}w_i.$$ A relationship in general, $r = \A'\circ\Com^1\A\in\Om^2$, does not have to specify that there be a particular relationship among objects $w_i$.
*Example:* In identifying a ‘type’ of object, such as all objects that satisfy a particular function or requisite, one means something slightly more abstract than a particular instance of an object itself. In order to produce a notion similar to the examples: ‘all chairs’ or ‘all electrons’, one must construct a relationship among a set of $w_i$’s, each of which is a set of, say, $n$ observations: $w_A,w_B,\ldots\in W'\subset W$. Let $$\begin{aligned}
w_A &= \bigcup_{j=1}^{N_A}\A_\ro{set}\circ\Com^1_{W,i,j}w_i^{(A)}, \\
w_B &= \bigcup_{j=1}^{N_B}\A_\ro{set}\circ\Com^1_{W,i,j}w_i^{(B)}, \quad\mbox{etc.} \\
w_i^{(A)},w_i^{(B)},\ldots&\in W. \\
\mbox{Then,}\,\,\{w_A,w_B,\ldots\} &= \bigcup_{j'=1}^{n}\bigcup_{j=1}^{N_{j'}}
\A_\ro{set}\circ\Com^1_{W',i',j'}\A_\ro{set}
\circ \Com^1_{W,i,j}w_i^{(i')}. \end{aligned}$$ The relationship itself that constitutes the ‘type’ thus takes the form: $$\label{eq:type}
\boxed{r_\ro{type} = \A'\circ\bigcup_{j'=1}^{n}\bigcup_{j=1}^{N_{j'}}
\A_\ro{set}\circ\Com^1_{W',i',j'}\A_\ro{set}
\circ \Com^1_{W,i,j}w_i^{(i')} \in \Om^3(W)\,\, (= \Om^2(W')),}$$ for some $\A'$, and $W'=\Om^1(W)$[^2]. This formula represents the notion of ‘types’ of object in a fairly general fashion, in order to resemble as closely as possible the way in which objects are typically characterised and subsequently handled in the frameworks of language and thought.
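A toy realisation of this ‘type’ construction is sketched below. Individual observations are grouped into sets (one per encountered instance), and the type is modelled as the pair of that collection and a predicate playing the role of $\A'$. The particular predicate, and all names, are illustrative assumptions; they only mimic the nested structure of Eq. (\[eq:type\]).

```python
# Toy realisation of the 'type' construction: observations are grouped into
# instance sets w_A, w_B, ..., and the type pairs the collection of those sets
# with a predicate standing in for the extra abstraction A'.

w_A = frozenset({"obs1", "obs2", "obs3"})      # observations of one instance
w_B = frozenset({"obs4", "obs5"})              # observations of another instance
collection = {w_A, w_B}                        # the collection of instance sets

def a_prime(instance):
    """The extra abstraction A': here, 'has at least two observations'."""
    return len(instance) >= 2

r_type = (collection, a_prime)                 # plays the role of r_type

assert all(a_prime(inst) for inst in r_type[0])
```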
### Generalised objects {#subsubsect:gen}
Up until now, discussion of the nature of the real-world objects $\{w_i\}\in W$ has been avoided. However, in order to incorporate them in the most general way into the framework of the abstraction algebra, one may posit that the real-world objects are simply a chain of successive projection or abstraction operators. In general, one can construct ‘sandwiches’, such as: $$\A_1\circ\cdots\circ\A_{i_1}\circ\PP\circ\A_2\circ\cdots.$$ All objects considered in the framework thus far can be expressed in this form, noting that $\PP$(anything) $\in\Om^0\PP$(anything). Due to the corollary in Eq. (\[cor:chains\]), the projection operators cannot be ‘swapped’ with any of the abstraction operators, and so the structure of the object is nontrivial. Let $c$ denote a generalised object, living in the space: $$\label{eq:c}
\boxed{c \in \Om^{i_1}\PP\,\Om^{i_2}\PP\cdots\PP\,\Om^{i_n}(W) \equiv \C^{i_1 i_2\ldots i_n}_W.}$$ The space $W$ here could stand for any other general space constructed in this manner, not necessarily the space inhabited by $c$ itself; thus Eq. (\[eq:c\]) is not recursive as it may initially appear. Because the internal structure, so-to-speak, of $c$ contains a collection of a possible many abstractions, it may be expressed in terms of type. Here are two examples: $$\begin{aligned}
\mbox{Let}\,\,\,c^{(1)} &= \PP\{\PP\circ\A_1(w),\ldots,\PP\circ\A_n(w)\}
\in \Om^0\PP\,\Om^1\PP\,\Om^1(w) = \C^{011}_w\\
&= \PP\bigcup_{j=1}^n\A_\ro{set}\circ\Com^2_{\Om^1,i,j}\PP\A_i(w).\\
\mbox{Or}\,\,\,c^{(2)} &= \PP\{\PP(w_1),\ldots,\PP(w_n)\}
\in \Om^0\PP\,\Om^1\PP\,\Om^0(W) = \C^{010}_W\\
&= \PP\bigcup_{j=1}^n\A_\ro{set}\circ\Com^1_{W,i,j}\PP(w_i), \end{aligned}$$ where, in the first case, $\Om^0(w)\in\Om^0(W)$.
Consider the behaviour of an Ansatz $\Z=\PP\circ\A_Z$ acting on a generalised object $c\in\C^{i_1\ldots i_n}_W$: $$\Z\circ c = \PP\circ\A_Z\circ\A_1\circ\cdots\circ\A_{i_1}\circ\PP\circ\cdots.
\vspace{-3mm}$$ $\Z$ maps $c$ into a space $\C^{0\,(i_1+1)\,i_2\ldots i_n}_W$. If we define rank($c$)$= n$, then rank($\Z\circ c$)$= n+1$. Note that the rank of $\Z\in \C_c^{01}$ can also be read off easily: rank($\Z$)$= 2$. Objects of the form $\Z$ are the principal rank $2$ Ansätze. Note that other rank $2$ objects besides $\Z$ exist, such as objects of the form $\PP\circ\A_1\circ\A_2$.
A more general description of Ansätze also exists, analogously to the generalised objects. By constructing an object of the form: $\chi = \underbrace{\A\circ\cdots\circ\A}_{j_1}
\,\circ\,\,\PP\circ\underbrace{\A\circ\cdots\circ\A}_{j_2}\circ\PP\circ\cdots$, that is, for an object residing in a space $\C_c^{j_1\,j_2\ldots j_m}$, the composition $\chi\circ c$ lies in $\C_W^{j_1\ldots (j_m+i_1)\ldots i_n}$, which is of rank $n+m-1$.
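The rank arithmetic just stated can be checked mechanically in the illustrative word model used earlier: the rank of a generalised object is the number of its $\Om$-blocks, i.e. the number of projection operators in its word plus one, and composition is concatenation. The sketch below is only this bookkeeping; it ignores the internal distinction between different abstraction operators, and all names are ours.

```python
# Bookkeeping sketch for generalised objects: an object is a word of operators
# ('P' for a projection, anything else for an abstraction), read left to right
# as written in the text.  Its rank is the number of Omega-blocks, i.e. the
# number of projections plus one.

def rank(word):
    return word.count('P') + 1

def compose(left, right):
    return left + right

Z = ('P', 'Az')                          # a principal Ansatz, rank 2
c = ('A1', 'A2', 'P', 'A3', 'P', 'A4')   # c in C^{2 1 1}_W, rank 3
assert rank(Z) == 2 and rank(c) == 3
# rank(chi o c) = rank(chi) + rank(c) - 1:
assert rank(compose(Z, c)) == rank(Z) + rank(c) - 1
```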
By convention, an Ansatz must contain a projection operator. Therefore, there is no rank $1$ Ansatz, and we arrive at our general definition of Ansatz: $$\mbox{Any object acting on $c$, with a rank $> 1$, is an Ansatz.}$$ In addition, there are no objects with rank $\leq 0$.
Let $\xi$ exist such that rank($\xi$)$\leq 0$, and $c\in\C^{i_1\ldots i_n}_W$. Then: $$\begin{aligned}
\mbox{rank($\xi\circ c$)} &= \mbox{rank($\xi$)+rank($c$)}-1
< \mbox{rank($c$)}=n\\
\Rightarrow \xi\circ c &\in \C^{i'_1\ldots i'_{n-1}}_W \\
\Rightarrow \xi &\mbox{\,\,is of the form\,\,}
X\circ\bigcup_{j=1}^{k}\PP^{-1}\circ
\bigcup_{i=1}^{i_j} \A_{i,L}^{-1}, \,\mbox{where rank($X$)\,$\leq k$}\\
\Rightarrow \xi &\mbox{\,\,DNE, for any $X$.} \qedhere \end{aligned}$$
For example, in the case $k=1$, $X$ is a rank $1$ Ansatz, and $\xi = X\circ\PP^{-1}\circ\bigcup_{i=1}^{i_1}\A_{i,L}^{-1}$, such that $\xi\circ c \in \C^{i_2\ldots i_n}_W\cong\C^{i'_1\ldots i'_{n-1}}_W$. The rank of $\xi\circ c$ is $n-1$, and the rank of $\xi$ is therefore $0$. Because of the usage of the operation $\PP^{-1}$, such an object is inadmissible.
We would like to use the Ansätze to investigate the properties of generalised objects. However, there are a variety of properties in particular, discussion of which shall occupy the next section. The notion of ‘existence’ is a key example that urgently requires clarification, and it will be found that such a property (and those similar to it), when treated as an abstraction, must have additional constraints.
$I$-extantness {#subsect:ext}
--------------
Firstly, one must make a careful distinction between what is meant by ‘existence’ in the sense of mathematical objects, and in the sense of the ‘real world’. In the former case, one may assume that an object exists if it can be defined in a logically consistent manner. In the latter case, it is a nontrivial property of an object, which must be investigated on a case-by-case basis, and the alternative word ‘extantness’ will be used in order to avoid confusion. The goal of the formalism is to relate the two terms- that an object’s extantness can be tested by appealing to the existence (in the mathematical sense) of some construction.
We begin by assuming that extantness is an inferred property of an object, and thus added by an Ansatz. Define its abstraction, $\A_E$, such that an object $c = \PP\circ\A_E(w)$ is extant if such a construction exists; i.e. $c$ is extant if it can be written in this form (for any $w$). For $c = \PP\circ\A_E\circ\A_1\circ\cdots\in\C_W^{i_1\ldots i_n}$, the operator $\A_E$ must occur in the left-most position of all the abstractions in $c$. Clearly then, it *must not automatically be the case* that $\PP\circ\A_E(w)$ exists for every $w$, if this abstraction is to be equivalent to how extantness (or existence in the conventional sense) is understood.
*Example:* Consider the object $\PP\circ\A_E(1)$, where $1$ is the abstraction identity. $\A_E(1)=\A_E$ is the extantness itself, and $\PP\circ\A_E$ is ‘what is meant’ by extantness, which is itself extant- it is the trivial extant object.
This leads us to the first property of $\A_E$, that its right inverse, $\A_{E,R}^{-1}$, does not exist, as anticipated in Section \[subsubsect:li\].
Consider $c=\PP\circ\A\circ\cdots(w)$ such that it is not extant. Assume $\A_{E,R}^{-1}$ exists also. Then: $$\begin{aligned}
c &= \PP\circ\A_E\circ\A_{E,R}^{-1}\circ\A\circ\cdots(w) \\
&= \PP\circ\A_E(w'), \,\mbox{where}\,w'\equiv\A_{E,R}^{-1}(w)\\
\Rightarrow &\,\mbox{$c$ is extant.}\,\Rightarrow\Leftarrow \qedhere \end{aligned}$$
It is not necessary at this stage to suppose that the left inverse of $\A_E$ does not exist either; however, if that were the case, then $\A_E$ would share a property with $\PP$, in lacking an inverse. The two are unlike, however, in that $\A_E\circ\A_E\neq \A_E$. Since $\A_E$ lives in a restricted class $\tilde{\Om}^1\subset\Om^1$, indicating the additional constraint of lacking an inverse, then the level addition property of Eq. (\[eq:la\]) means $\A_E\circ\A_E\in\tilde{\Om}^2\not\equiv\tilde{\Om}^1$. A further consequence of the non-existence of $\A_{E,L}^{-1}$ is that the statement $\A_E(a) = \A_E(b)$ does not mean that $a=b$. One may interpret this as the fact that two abstractions may simply be labels for the same extant object. Note that the definition of the literal commutator requires the existence of an inverse of each abstraction operator that occurs in sequence to the left of the object being commuted, but that is not the case for auxiliary commutators.
Recalling the supposition of physics, that $\PP S(w)$ spans at least $W$, a further clarification may now be added:
\[supp:sopcor\] All extants have Ansätze, but not all elements of $\PP(W)$ or $\PP S(w)$ are extants.
From the point of view of Mathematical Realism, one would argue that projected quantities, $\PP\circ\cdots$, are those which are ‘real’ (and not dependent on their extantness), since such a definition of ‘real’ would then encompass a larger variety of objects, regardless of their particular realisation in our universe. Such a semantic choice for the word ‘real’ seems to align best with the philosophy of Mathematical Realism. Nevertheless, it is still important to have a mechanism in the formalism to determine the extantness of an object.
Although extantness has been singled out as a key property, a similar argument may be made for the truth of a statement, whose abstraction can be denoted as $\A_T$. Like extantness, the object $\PP\circ\A_T(w)$ may not exist for every $w$, and the trivially true object is $\PP\circ\A_T(1)$. Let us label all properties of this sort, ‘$I$-extantness’, since their enumeration in terms of common words is not of interest here. For any $I$-extant abstraction $\A_I$, we call $\tilde{\C}^{i_1\ldots i_n}_W$ the restricted class of generalised objects, $c_I$.
A formula is now derived, which is able to distinguish between objects that are $I$-extant and those that are not, by virtue of their mathematical existence. Consider the case that $\PP\circ\A_I(w_1)$ exists, but $\PP\circ\A_I(w_2)$ does not. $w_1$ must contain an additional property, $\A_{I'}$, that is not present in $w_2$. Unlike $\A_I$, it is not required that $\A_{I'}$ occur in a particular spot in the list of abstractions that comprise $w_1$. Nor is there a restriction in the construction of an inverse, which would prevent a commutator notation being employed. Let $w_1$ be represented by a collection of objects defined by: $w_1=\{\A_{I'}\circ\A\circ\cdots,\A\circ\A_{I'}\circ\cdots,\mbox{etc.}\}$. That is, $w_1$ takes the form of a set of generalised objects, $c$, but for the replacement of an operator, $\A$, with $\A_{I'}$. It is important to note that the $\A_{I'}$ that distinguishes $w_1$ from a non-extant object, such as $w_2$, is particular to $w_1$. For an object $c$ to be extant, it would have to include an abstraction $\A_{I'}^c$, specific to $c$; otherwise, any object related to $c$ in any way would also be extant, which does not reflect the behaviour expected of extant objects in the universe.
In commutator notation, one would need to write out a geometric composition of the form $$\begin{aligned}
w_1&= \A_{\ro{set}}\bigcup_{m=0}^{i_1-1}\Com^{i_1-m+1}_{\Om^{i_1-m}(w),m+1,I'}H_1\circ\PP\circ
\bigcup_{m'=0}^{i_2-1}\Com^{i_2-m'+1}_{\Om^{i_2-m'}(w),m'+1,I'}H_2\circ\cdots,\\
&= \A_{\ro{set}}\circ H_1\circ \bigcup_{p=2}^n\PP\bigcup_{m=0}^{i_p-1}\Com^{i_p-m+1}_{\Om^{i_p-m}(w),m+1,I'}H_p,\end{aligned}$$ for $c=H_1\circ\PP \circ H_2\circ\cdots$. A more elegant formula may be defined simply in terms of $c$ itself, without the need of introducing new symbols, $H_1, \ldots, H_n$. One can achieve this using auxiliary commutators $$\begin{aligned}
w_1&=\Big\{\hat{\Com}^{(\sum_{j=1}^n i_j)+1}_{\C_W^{i_1\ldots i_n},\,1,\,I'}c,
\hat{\Com}^{\sum_{j=1}^n i_j}_{\C_W^{i_1-1\ldots i_n},\,2,\,I'}c,
\ldots,\hat{\Com}^{(\sum_{j=1}^{n-1} i_j)+1}_{\C_W^{1\ldots i_n},\,i_1,\,I'}c,\ldots,
\hat{\Com}^{(\sum_{j=1}^{n-2} i_j)+1}_{\C_W^{1\ldots i_n},\,i_2,\,I'}c,\dots\Big\}\\
&= \bigcup_{m=0}^{i_1-1}\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^n i_j)+1-m}_{\C_W^{i_1-m\ldots i_n},\,m+1,\,I'}c
\cup \bigcup_{m'=0}^{i_2-1}\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-1} i_j)+1-m'}_{\C_W^{i_2-m'\ldots i_n},\,m'+1,\,I'}c \cup \cdots\\
&= \bigcup_{p=1}^n\bigcup_{m=0}^{i_p-1}\A_{\ro{set}}\circ
\hat{\Com}^{(\sum_{j=1}^{n-p+1}i_j)+1-m}_{\C_W^{i_1\ldots i_p-m\ldots i_n},\,m+1,\,I'}c. \end{aligned}$$ It follows then, that a generalised object that is $I$-extant takes the form $$\label{eq:cIcond}
\boxed{c_I = \PP\circ\A_I\circ \bigcup_{p=1}^n\bigcup_{m=0}^{i_p-1}\A_{\ro{set}}\circ
\hat{\Com}^{(\sum_{j=1}^{n-p+1} i_j)+1-m}_{\C_W^{i_1\ldots i_p-m\ldots i_n},\,m+1,\,I'}c,}$$ where $c_I$ is of the form $\PP\circ\A_I(w_I)$. This is a powerful formula, as it represents the condition for $I$-extantness for a generalised object, $c$. Note that it would be just as correct to define $c_I$ as an element of a set characterised by the righthand side (i.e. using ‘$\in$’ instead of ‘$=$’), but because the notion of a ‘set’, $\A_{\ro{set}}$, is simply an element of $\Om^1$, it can be incorporated into the general form of $\C^{i_1\ldots i_n}_W$.
One might wonder how to relate the properties of a proof (i.e. verifying the truth of a statement) to the existence of an abstraction, $\A_{T'}$. As an example, consider the object representing the existence of truth, $w_T$. The validity of the ‘excluded middle’ [@Aristotle] in this situation means that the proof is very simple:
$$\begin{aligned}
w_T &\Rightarrow w_T \\
\neg w_T &\Rightarrow (\neg \neg w_T) = w_T. \qedhere \end{aligned}$$
Since $w_T$ is the statement of truth itself, i.e. $w_T = \PP\circ\A_T$, the inconsistency of $\PP\circ \A_T(w_T)$ means the inconsistency of $\PP\circ\A_T$. Such a statement is not true by construction. One can now identify the abstraction, $\A_{T'}^{w_T}$ as being $\A_T$ itself. Thus, this exercise demonstrates that the proof of a statement has consequences for the abstract form of the statement, allowing one to identify more specific properties. Note that this does not, at this stage, provide extra proof methods, since there is no procedure, as yet, for acquiring knowledge of the form of an object’s relevant $\A_{T'}$ in advance. The content of the proof must rely on standard means.
Cardinality
-----------
In the derivation of the general condition for an object $c$ to be $I$-extant (Eq. (\[eq:cIcond\])), one arrives at a set of elements. In this notation, the set is not intended to specify all the possibilities that each abstraction operator, $\A$, can take. Rather, the set can be thought of as being ‘the set of alterations from a general $c$’ that encompass the required condition.
If one seeks an absolute measure of the ‘size’ of the object, in terms of the overall possibilities, one may define a type of cardinality, $|c|$, in terms of the total possible number of abstractions. Recalling Cantor’s Theorem [@Cantor], there is no consistent description of such a universal class. However, since the formalism accommodates the imposition of restrictions on the kind of objects that can be represented, let the number of possibilities for $\A$ be assumed consistently definable, and denote it by $L$. $L$ need not be finite, nor even countable; however, it can be used to obtain formulae for the cardinality of an object.
Define the number of abstractions, $\A$, in $c\in\C_W^{i_1\ldots i_n}$ as $\bar{n} \equiv \sum_{j=1}^n i_j$. Thus one finds that $$\begin{aligned}
|c| &= L^{\bar{n}},\,\mbox{and}\\
|c_I| &= \bar{n}\,L^{\bar{n}-1}. \end{aligned}$$ The latter formula is simply a consequence of there being $\bar{n}$ possibilities for restricting one abstraction operator to be $\A_{I'}$. If one enforces $N$ restrictions on the set of $\A$’s, then it follows that $$|c_N| = \left(\frac{1}{2}(\bar{n}^2 - N^2)+1\right)L^{\bar{n}-N}.$$ This formula will become relevant in the next section.
Unreasonable effectiveness
==========================
The goal of using the general framework described in Section \[sect:form\] is to encapsulate the essence of describing phenomena using a theory, in the sense used in physics. Thus, the issue of Wigner’s ‘unreasonable effectiveness’ of mathematics [@Wigner] in describing the universe may be addressed by transporting the problem to a metaphysical context. There, the tools from philosophy, such as logic and proof theory, can be directed at the question that involves not so much the behaviour of the universe, as the behaviour of descriptions of the universe (i.e. physics itself). It is important to be able to transport certain features of physics into a context where an analysis may take place, and such a context is, by definition, metaphysics.
The notion of ‘effectiveness’ is that, given a consistent set of phenomena, $v_i\in V$, one can extend $V$ to include more phenomena such that (general) Ansätze able to explain the phenomena satisfactorily can still be found. In this general context, what is meant by an ‘explanation’ will be taken to be a relationship among the phenomena, $v_i$, in the form of abstractions. The essence of the mystery of the effectiveness of mathematics is not whether one can always ‘draw a box’ around an arbitrary collection of objects, or that laws and principles (of any kind) are obeyed, but the identification of particular principle(s) such that phenomena $v_1,v_2,\ldots$ are consequences of them; and that via the principles, the whole of $V$ may be obtained, indicating a fuller explanation of the phenomena. That is, the phenomena are extant because of the truth of the underlying principles, rather than being identified ‘by hand’ (which would hold no predictive power in the scientific sense). Note that the set $V$ may, in fact, only include a subset of the possible phenomena to discover in the universe, and so would represent a subset of the set, $W$, as discussed in Section \[sect:form\].
Let $v_1,v_2\ldots$ have descriptions $\A_{v_1},\A_{v_2},\ldots
\in V\subset W$, which are extant. Let there be some principle (or even a collection of principles with complicated inter-dependencies), described by the general object, $c_{\ro{princ}}$, such that each element of $V$ may be enumerated. It is our goal to investigate under what conditions the following statement holds: $$\begin{aligned}
c_{\ro{princ}} \,\mbox{is true} &\Rightarrow v_1,\,v_2,\ldots\,\mbox{are extant},\\
\mbox{i.e.}\,\,\Big[c_{\ro{princ}}=\PP\circ\A_{T}(w_{\ro{princ}})\Big] &\Rightarrow
\Big[\A_{v_i} = \PP\circ\A_E(\A_{y_i})\in V\Big].
\label{eq:unr}\end{aligned}$$ If there is a principle that implies such a statement, we wish to identify it, and investigate whether or not it is true.
The circumstances of the truth of Eq. (\[eq:unr\]) depend on how $w_{\ro{princ}}$ is related to the phenomena, $v_i$. $w_{\ro{princ}}$ itself represents principle(s) whose truth is not added by hand in Ansatz form. This does not mean that it is not true, since the form of $w_{\ro{princ}}$ is as yet unspecified. The most general way of relating $w_{\ro{princ}}$ and all $v_i$’s is to use the method of substituting abstractions into the formula for a generalised object, such as that used to derive the general condition of $I$-extantness in Eq. (\[eq:cIcond\]). In the same way that the set of all possible locations of $\A_{I'}$ in $c$ was considered, here, all possible combinations of locations of abstractions describing $v_i$ in $c$ must be considered, such that each $\A_{v_i}$ occurs at least once. This formula can be developed inductively.
Consider only two phenomena, $v_1$ and $v_2$, with corresponding abstract descriptions defined as $\A_{v_1}$ and $\A_{v_2}$. For a generalised object, $c\in\C^{i_1\ldots i_n}_W$, one finds $$w_{\ro{princ}}^{(N=2)} = \bigcup_{p=1}^n\bigcup_{\stackrel{m'=0}{m'\neq m}}^{i_p-1}
\bigcup_{m=0}^{i_p-1}
\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-p+1}i_j)+1-m'}_{\C^{i_1\ldots i_p-m'\ldots i_n}_W,m'+1,v_2}
\hat{\Com}^{(\sum_{j=1}^{n-p+1}i_j)+1-m}_{\C^{i_1\ldots i_p-m\ldots i_n}_W,m+1,v_1} c.$$ In the case of $N$ phenomena, it is assumed that $N \leq i_p$: the number of abstractions available in the general formula for $c$ may be made arbitrarily large to accommodate the number of phenomena. One may make use of the following formula $$\bigcup_{\stackrel{m^{(N-1)}=0}{m^{(N-1)}\neq \mbox{\tiny{any other $m$'s}}}}^{i_p-1}
\cdots\bigcup_{\stackrel{m^{(2)}=0}{m^{(2)}\neq m^{(1)}}}^{i_p-1}\bigcup_{m^{(1)}=0}^{i_p-1}
= \bigcup_{k=0}^{N-1}\bigcup_{\quad m^{(k,i_p)}\in[0,i_p-1]\backslash\bigcup_{\mu=0}^k\{m^{(\mu)}\}}.$$ Here, $[0,i_p-1]$ is the closed interval from $0$ to $i_p-1$ in the set of integers, and for brevity, we define $\{m^{(0)}\}$ as the empty set: $\emptyset$. The most general form of $w_{\ro{princ}}$ may now be written as $$\boxed{
w^{\ro{G}}_{\ro{princ}} = \bigcup_{p=1}^n\bigcup_{k=0}^{N-1}
\!\!\!\!\!\!
\bigcup_{\quad m\in[0,i_p-1]\backslash\bigcup_{\mu=0}^k\{m^{(\mu)}\}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-p+1}i_j)+1-m}_{\C^{i_1\ldots i_p-m\ldots i_n}_W,m+1,v_{k+1}}
c.}
\label{eq:WPG}$$ In order for $c_{\ro{princ}}$ to be true, $w_{\ro{princ}}$ must contain information about the objects $y_i$, such that $\A_{v_i}=\PP\circ\A_E(\A_{y_i})$. Therefore, we seek only those elements of Eq. (\[eq:WPG\]) such that the phenomena $v_i$ take this form. This is a more restrictive set, as each abstraction of $y_i$ must be applied to the right of $\A_E$, which must be applied to the right of $\PP$. There are only $n-1$ such occurrences of $\PP$ in $c$, so in making this restriction, we are free to choose $$\begin{aligned}
\label{eq:r1}
\bullet\quad N&\leq n-1, \\
\bullet\quad N &\leq i_p.
\label{eq:r2}\end{aligned}$$ The form of the more restricted version of $w_{\ro{princ}}$ is thus $$\boxed{w^{\ro{R}}_{\ro{princ}} = \bigcup_{k=0}^{N-1}
\!\!\!\!\!\!
\bigcup_{\quad p\in[2,n]\backslash\bigcup_{\pi=0}^k\{p^{(\pi)}\}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-p+2}i_j)+1}_{\C^{i_1\ldots i_p\ldots i_n}_W,1,E}
\hat{\Com}^{\sum_{j=1}^{n-p+2}i_j}_{\C^{i_1\ldots i_p-1\ldots i_n}_W,2,y_{k+1}}c,}
\label{eq:WPR}$$ where $\{p^{(0)}\}=\emptyset$.
If $c_{\ro{princ}}$ can be constructed consistently, i.e. if it exists, then the form of $w_{\ro{princ}}$ must be restricted to include an abstraction, $\A_{T'}^{w_{\ro{princ}}}$, that ensures the existence of $c_{\ro{princ}}$. This uses the same argument as in deriving Eq. (\[eq:cIcond\]), with $c_T = \PP\circ\A_T(w_T)$, and involves the union of Eq. (\[eq:WPR\]) with the object $w_T$. Thus, the condition under which Eq. (\[eq:unr\]) is true can now be written.
\[thm:thcond\] The condition under which the principles of a theory describe certain phenomena takes the form $$w_{\ro{princ}} \subseteq w^{\ro{R},T'}_{\ro{princ}} = \bigcup_{k=0}^{N-1}
\bigcup_{\quad p\in[2,n]\backslash\bigcup_{\pi=0}^k\{p^{(\pi)}\}}
\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-p+2}i_j)+1}_{\C^{i_1\ldots i_p\ldots i_n}_W,1,E}
\hat{\Com}^{\sum_{j=1}^{n-p+2}i_j}_{\C^{i_1\ldots i_p-1\ldots i_n}_W,2,y_{k+1}}c\,\cup\, w_T.
\label{eq:thcond}$$
The statement of the theorem, that ‘$w_{\ro{princ}} \subseteq w^{\ro{R},T'}_{\ro{princ}}$ constitutes the condition for which Eq. (\[eq:unr\]) is true’, is only fulfilled if the general form of $w_{\ro{princ}}$ (in Eq. (\[eq:WPG\])) includes a description of extant phenomena explicitly, which takes the form shown in Eq. (\[eq:WPR\]). That is, one must show that $w^{\ro{R},T'}_{\ro{princ}}\subseteq w^{\ro{G},T'}_{\ro{princ}}$. This entails that the elements of $w^{\ro{R},T'}_{\ro{princ}}$ and $w^{\ro{G},T'}_{\ro{princ}}$ are of the same form, differing only by use of a restriction. Therefore, in this case, the abstraction $\A_{T'}^{w_{\ro{princ}}}$ is sufficient to ensure the truth of the elements in both sets. Note that the inclusion of $\A_{T'}^{w_{\ro{princ}}}$ takes the same form for both $w^{\ro{R},T'}_{\ro{princ}}$ and $w^{\ro{G},T'}_{\ro{princ}}$. Therefore, it is sufficient to show that $w^{\ro{R}}_{\ro{princ}}\subseteq w^{\ro{G}}_{\ro{princ}}$.
Express Eq. (\[eq:WPR\]) in terms of abstractions, $\A_{v_i}$, recalling that $\PP\circ\A_E(\A_{y_i}) =$\
$\PP\circ\PP\circ\A_E(\A_{y_i}) =
\PP\circ \A_{v_i}$: $$w^{\ro{R}}_{\ro{princ}} = \bigcup_{k=0}^{N-1}
\!\!\!\!\!\!
\bigcup_{\quad p\in[2,n]\backslash\bigcup_{\pi=0}^k\{p^{(\pi)}\}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-p+2}i_j)+1}_{\C^{i_1\ldots i_p\ldots i_n}_W,1,v_{k+1}}c.
\label{eq:princp1}$$ Choosing the value $m=0$ in $w_{\ro{princ}}^{\ro{G}}$ yields $$w_{\ro{princ},m=0}^{\ro{G},T'} = \bigcup_{p=1}^n\bigcup_{k=0}^{N-1}
\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-p+1}i_j)+1}_{\C^{i_1\ldots i_p\ldots i_n}_W,1,v_{k+1}}c.
\label{eq:princp2}$$ The only difference between Eqs. (\[eq:princp1\]) and (\[eq:princp2\]) is the choice of values of the iterator $p$. To obtain $w^{\ro{R}}_{\ro{princ}}\subseteq w^{\ro{G}}_{\ro{princ}}$, it is sufficient to show that $$[2,n]\backslash\bigcup_{\pi=0}^{k}\{p^{(\pi)}\} \subseteq [2,n]
\quad \forall k\in[0,N-1].$$ Recalling the restrictions of Eqs. (\[eq:r1\]) and (\[eq:r2\]), take $N \ll n$. Now, $\bigcup_{\pi=1}^{k}\{p^{(\pi)}\}$ is a finite set of integers that is a subset of $[2,n]$ $$\begin{aligned}
\bigcup_{\pi=1}^{k}\{p^{(\pi)}\} &\subseteq [2,n],\\
\mbox{where}\quad \{p^{(0)}\} &= \emptyset. \,\\
\therefore \bigcup_{\pi=0}^{k}\{p^{(\pi)}\} &\subseteq [2,n]\\
\Rightarrow w^{\ro{R},T'}_{\ro{princ}}&\subseteq w^{\ro{G},T'}_{\ro{princ}}. \qedhere\end{aligned}$$
Note that the fact that $w_{\ro{princ}}^{\ro{R},T'}$ is a more restrictive set than $w_{\ro{princ}}^{\ro{G},T'}$ does not mean that it is ‘smaller’ in the sense of cardinality. Assuming a sufficiently large $n$ value to accommodate all $N+1$ restrictions, one finds that $$\begin{aligned}
|w^{\ro{G},T'}_{\ro{princ}}| &= \left(\frac{1}{2}(\bar{n}^2 - (N+1)^2)+
1\right)L^{\bar{n}-(N+1)} = |w^{\ro{R},T'}_{\ro{princ}}| \\
&\Rightarrow w_{\ro{princ}}^{\ro{R},T'}\cong w_{\ro{princ}}^{\ro{G},T'}. \end{aligned}$$
The above observation provides a possible explanation for the appearance of the ‘unreasonable effectiveness’ of mathematics. The set of relationships among extant phenomena, $w_{\ro{princ}}^{\ro{R},T'}$, is not smaller, in any strict sense, than the general set of relationships among abstractions, $w_{\ro{princ}}^{\ro{G},T'}$. The countability of sets of phenomena filtering into a more restrictive and still countable form, $w_{\ro{princ}}^{\ro{R},T'}$, combined with the formalism for describing non-mathematical objects in a mathematical way, constitutes the metaphysical explanation for the ‘unreasonable effectiveness’ of mathematics. In other words, there is no unreasonableness at all: it is simply a mathematical consequence of the countability of phenomena, and the abstract description of objects that are not innately abstract.
This demonstrates the power of metaphysical tools, in the form of principles and proofs, to address key philosophical issues in physics. That is, the process employed here was not physics itself, but philosophical argumentation applied to the abstractions of objects used in the practice of physics.
Evidence for universes
======================
Defining evidence {#subsect:ev}
-----------------
The definition of evidence relies on the connection between a set of phenomena (called evidence), and the principles of a theory that the evidence supports. It is assumed here that the sense in which the phenomena support or demonstrate an abstraction, such as the theory, is the same sense in which a theory can be said to entail the extantness of the phenomena. The symmetry between the two arguments has not been proved, however, since it relies on the precise details of often-imprecisely defined linguistic devices.
The description of evidence, using the formalism of Section \[sect:form\], takes a similar form to that of the description of Ansatz for phenomena in Eq. (\[eq:unr\]), except that the direction of the correspondence is reversed $$\begin{aligned}
\A_{v_1},\,\A_{v_2},\ldots\,\mbox{are extant} &\Rightarrow c_{\ro{princ}}
\,\mbox{is true},\\
\mbox{i.e.}\,\,\Big[\A_{v_i} = \PP\circ\A_E(\A_{y_i})\in V\Big] &\Rightarrow
\Big[c_{\ro{princ}}=\PP\circ\A_{T}(w_{\ro{princ}})\Big].
\label{eq:ev}\end{aligned}$$
The lefthand side of Eq. (\[eq:ev\]) restricts the form of the object, $w_{\ro{princ}}$ (representing the set of principles) through $w_{\ro{princ}} \subseteq w^{\ro{R}}_{\ro{princ}}$. For the form of $w_{\ro{princ}}$ to entail the righthand side of Eq. (\[eq:ev\]), it must also include the abstraction, $\A_{T'}$. Thus, the condition under which Eq. (\[eq:ev\]) is true, where phenomena constitute evidence for a set of principles, is $$\label{eq:evcond}
\Big[w_{\ro{princ}}\subseteq w^{\ro{R},T'}_{\ro{princ}}\Big]\quad (=c_{\ro{cond}}).$$ This is the same condition obtained for the examination of principles entailing extant phenomena, in Eq. (\[eq:unr\]).
There is a duality between the two scenarios, which can be expressed in the following manner. If the condition of Eqs. (\[eq:thcond\]) and (\[eq:evcond\]) is true, then $$\Big[c_{\ro{princ}} = \PP\circ\A_T(w_{\ro{princ}})\Big]
\Leftrightarrow \Big[\A_{v_i}=\PP\circ\A_E(\A_{y_i})\Big].$$ That is, the relationship between principles and evidence is, in a sense, symmetrical. The sort of phenomena entailed by a theory is of exactly the same nature as the sort of phenomena that constitutes evidence for such a theory. This leads one to postulate an object, $c_{\ro{D}} = \PP\circ\A_T(w_{\ro{D}})$, which represents this duality, and one may identify a *Duality Theorem*, which takes the form $$\boxed{
w_{\ro{D}} = \Big\{\Big[\PP\circ\A_T(w_{\ro{cond}})\Big]
\Rightarrow\Big[\Big(c_{\ro{princ}}= \PP\circ\A_T(w_{\ro{princ}})\Big)
\Leftrightarrow
\Big(\A_{v_i}=\PP\circ\A_E(\A_{y_i})\Big)\Big]\Big\}.}$$ The theorem is a consequence of the fact that the restrictions acting upon $w_{\ro{princ}}$ commute with each other in the formalism.
Note that, in attempting to clarify a term ill-defined in colloquial usage, we have arrived at quite a strict definition of evidence: if $w_{\ro{princ}}$ is to constitute a set of principles describing the elements, $v_i$, it must at least take the form of a description based explicitly on all $v_i$ elements. Any part of $w_{\ro{princ}}$ that does not lie in $w_{\ro{princ}}^{\ro{R},T'}$ is not relevant for consideration as being supported by the evidence.
The relating theorem and the fundamental object
-----------------------------------------------
Since the $I$-extantness of some $c_I$ has been related to the mathematical existence of an object $\A_I(w)$, a primary question to investigate would be the $I$-extantness of the statement of this relation itself. The statement of ‘the tying-in of the mathematical and non-mathematical objects’ has certain properties that should render the investigation of its own $I$-extantness a nontrivial exercise.
Denote the above statement, which is an Ansatz, as $\Z_I(c_I)$, and let $c_I$ be $I$-extant. That is, for $c_I\equiv \PP\circ\A_I(w)$, $\A_I(w)$ exists; and let $\Z_I(c_I) \equiv \PP\circ\A_\Z\circ c_I$, for some $\A_\Z$. Recall that the assumed existence (in the mathematical sense) of the statement, $\Z_I(c_I)$, does not trivially entail $I$-extantness, under Supposition \[supp:sopcor\]. To show that $\Z_I$ is $I$-extant, it is required that it can be put in the form $$\Z_I(c_I) = \PP\circ\A_I(w_\Z),$$ which implies that $$\label{eq:ZI}
\PP\circ\A_\Z\circ c_I = \PP\circ\A_I(w_\Z).$$ We would like to attempt to understand under what conditions this holds.
Consider the scenario in which the $I$-extant form of $\Z_I$ does not exist. In this case, it is not possible to say that $\Z_I$ is not $I$-extant, since the statement relating existence and $I$-extantness has not been proved, and no information about $I$-extantness can be gained using this method. If, however, the $I$-extant form of $\Z_I$ does exist, then it is indeed certain that $\Z_I$ is $I$-extant. In other words, there is a logical subtlety that entails an ‘asymmetry’: the demonstration of the existence of an object is enough to prove it, but the equivalent demonstration of its non-existence is not enough to disprove it, since the relied-upon postulate would then be undermined. Therefore, in this particular situation, unless a further logical restriction is found to be necessary in later versions of the formalism, it is sufficient to show that the $I$-extant form *can* exist, for $\Z_I$ to be $I$-extant. This is not true in general, due to Supposition \[supp:sopcor\].
\[eq:ZIext\] $\Z_I(c_I)$ is $I$-extant.
The above theorem, denoted the *relating theorem*, may be verified by proving Eq. (\[eq:ZI\]). It is enough to show that $\A_\Z\circ c_I = \A_I\circ w_\Z$ for any $c_I$, where there exists an $\A_\Z$ such that $\A_I$ obeys the property: $\A_{I,R}^{-1}$ DNE, which is, in our general framework, the only distinguishing feature of $\A_I$ at this point. The demonstration is as follows:
Let $\Z_I$ exist, such that $$\begin{aligned}
\Z_I &= \A_\Z\circ\PP\circ\A_I\circ\bigcup_{p=1}^n\bigcup_{m=0}^{i_p-1}
\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-p+1}i_j)+1-m}_{C_W^{i_1\ldots i_p-m\ldots i_n},\,m+1,\,I'}c \\
&= \A_I\circ w_\Z, \end{aligned}$$ for any $w_\Z$. Due to the labelling principle, this can only be true if $w_\Z\equiv c_I$ and $\A_I\equiv \A_\Z$; that is, the abstraction of $c_I$ (above) is an $I$-abstraction: $\A_\Z \in \tilde{\Om}^1(W)$. This is a valid choice, since the existence of $\A_{Z,R}^{-1}$ was not assumed.
Therefore, the form of $\Z_I$ is now known: $$\begin{aligned}
\Z_I(c_I) &= \PP\circ\A_I(c_I)\\
&= \PP\A_I\circ\PP\A_I(w).\end{aligned}$$ In words, what has been discovered is that the Ansatz of $I$-extantness is equivalent to the Ansatz in the statement ‘the $I$-extantness of $c_I$ is related to existence’. That is, the operation associated with, say, ‘it is $I$-extant’ ($\PP\circ\A_I(w)$), when applied twice, forms the statement ‘its $I$-extantness is related to existence’ ($\PP\A_I\circ\PP\A_I(w)$); and it is the *same operation*. This needn’t be the case in general, and so it is a nontrivial result that $$\label{eq:ZIthm}
\boxed{\Z_I = \PP\circ\A_I.}$$ Note that Eq. (\[eq:ZIthm\]), in this case, is not a definition, but a *theorem*, to be known as the *correspondence corollary* to the relating theorem.
In a sense, $\Z_I$ is the fundamental $I$-extant object, in that it is the most obvious starting point for the analysis of the existence of $I$-extant objects in general. It also constitutes the first example of an object demonstrated to exist in a universe (though, a clarification of distinguishing different universes is still required, and investigated in Section \[subsec:univ\]).
Recall, in the construction of ‘types’ in Eq. (\[eq:type\]), that familiar notions such as ‘chair’, or other such objects, are brought into a recognisable shape using this formula. Though the types may not appear more recognisable at face value, the properties of such a construction align more closely with what is meant phenomenologically by such objects. In a similar fashion, the type of $\Z_I$ can be established, to create a fuller, more complete, or ‘dressed’ version of the object.
$\Z_I$ is an example of a $c_I$. Since projected objects cannot be related directly, a type will be constructed from instances of $\A_I$ (of which $\A_E$ is one), and the dressed fundamental object will be a projection of the type. Using the same argument used in deriving Eq. (\[eq:type\]), the set of $n$ observed instances of $\A_I$ takes the following form (acting on some set $W'$) $$\bigcup_{j'=1}^n\A_\ro{set}\circ\hat{\Com}^2_{\tilde{\Om}^1(W'),i',j'}\A_I^{i'}(W').$$ If each observed instance may be identified as the set of a certain $N$ characteristics residing in $W$, then our intermediate set $W'$ can be dropped, and we find $$\label{eq:Aprime}
\A_I\circ \bigcup_{j=1}^{N_{i'}}\A_\ro{set}\circ\hat{\Com}^1_{W,i,j}w_i^{(i')} \in \A_I^{i'}(\Om^1(W)).$$ In this case, an instance of $\A_I^{i'}$ contains more information than just a set of characteristics, since it is also known that it is $I$-extant. Therefore, it contains an additional abstraction operator. What is meant by the fundamental type therefore takes the form $$\label{eq:fundtype}
\boxed{R_\Z\equiv\PP r_\Z =
\PP\,\A'\circ\bigcup_{j'=1}^n\bigcup_{j=1}^{N_{j'}}
\A_\ro{set}\circ\hat{\Com}^1_{W,i',j'}
\A_I\circ\A_\ro{set}\circ\hat{\Com}^1_{W,i,j}
w_i^{(i')}\in\PP\,\Om^2\circ\tilde{\Om}^1\circ\Om^1(W),}$$ where $\A'$ is the abstraction of relationship. Note that $R_\Z$ is, in general, an element of $\PP\Om^4(W)$.
Distinguishing universes {#subsec:univ}
------------------------
In this section, we address the issue of classifying universes by their properties in a general fashion. An attempt can then be made to identify features that distinguish universes from one another, and thus clarify the definition of our own universe in a way that is convenient in the practice of physics.
Suppose the definition of a universe, $\ca{U}$, to be the ‘maximal’ list of objects that have the same character, that is, that obey the same list of basic properties. The list need not necessarily be finite, as each collection of properties could, in principle, represent a collection of infinitely many objects themselves. In the language of the formalism developed so far, a formula may be constructed from a generalised object $c$ by ensuring that each element of $\ca{U}$ is related to the content of the underlying principles, $w_{\ro{princ}}$. This formula will be analogous to Eq. (\[eq:WPG\]).
\[supp:univ\] $\exists\, \ca{U}$, such that a list of underlying principles may be configured to be enumerable as a countable set, $w_{\ro{princ}} = u_1\cup\cdots\cup u_N$, for $N$ elements. (It is not required that $u_1\cup\cdots\cup u_N\in \ca{U}$). A universe based on these principles is the object represented by the largest possible set of the form: $$\label{eq:univ}
\ca{U} = \PP\circ\bigcup_{p=1}^n\bigcup_{k=0}^{N-1}
\!\!\!\!\!\!
\bigcup_{\quad m\in[0,i_p-1]\backslash\bigcup_{\mu=0}^k\{m^{(\mu)}\}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-p+1}i_j)+1-m}
_{\C^{i_1\ldots i_p-m\ldots i_n}_W,m+1,u_{k+1}}c,$$ with respect to a generalised object, $c$.
Note that extantness is a universal property, in that it can be defined in the formalism regardless of the universe in which it is extant. It may, but is not required to, constitute one of the $N$ underlying principles of a universe.
The distinction between different universes is largely convention, based on the most convenient definition in practising physics. One such convenience is the ability to arrive at a *consistent* definition of the universe. This is not the case for a naïvely defined universe required to contain all possible objects, due to Cantor’s Theorem [@Cantor]. In order to establish two universes as distinct, the following convention is adopted:
\[supp:dist\] Consider two consistently definable universes $\ca{U}$ and $\ca{U}'$. If the universe defined as the union, $X \equiv \ca{U}\cup \ca{U}'$ is inconsistent, then the universes $\ca{U}$ and $\ca{U}'$ are distinct.
Introducing a square-bracket notation, where $\ca{U}[\ldots]$ indicates that the underlying principles to be used in defining $\ca{U}$ are listed in $[\ldots]$, one may write $\ca{U} = \ca{U}[w_{\ro{princ}}]$ and $\ca{U}' = \ca{U}[w'_{\ro{princ}}]$. Define the following lists of principles to be consistent: $$\begin{aligned}
w_{\ro{princ}} &= u_1\cup \cdots \cup u_{N-1},\\
w'_{\ro{princ}} &= u_N \cup u'_1\cup \cdots \cup u'_{N'},\end{aligned}$$ but suppose that the inclusion of both principles $u_{N-1}$ and $u_N$ leads to inconsistency. It then follows that $$X \equiv \ca{U}[u_1\cup \cdots \cup u_{N-1}]\cup \ca{U}'[u_N \cup u'_1\cup
\cdots \cup u'_{N'}]\quad\mbox{is inconsistent.}$$ *Example:* The inconsistency of a set of principles can emerge in the combination of negation and recursion, as clearly demonstrated by Gödel [@Godel] and Tarski [@Tarski]. If $u_{N-1}$ were to express a negation, such as ‘only contains elements that don’t contain themselves’, and $u_{N}$ were to enforce a recursion, such as ‘contains all elements’, then Russell’s paradox would result [@Russell].
Evidence for other universes {#subsec:den}
----------------------------
The final denouement is to demonstrate that the fundamental type constitutes evidence for a universe distinct from our own. Consider $N$ abstract objects, $\A_{v_k}$, each of which represents an element of the fundamental type in Eq. (\[eq:fundtype\]). They may be expressed analogously to the operator $\A_I^{i'}$ defined in Eq. (\[eq:Aprime\]), for $\ca{N}$ sub-characteristics: $$\A_{v_k} \equiv \PP\circ\A_I\circ \bigcup_{j=1}^{\ca{N}}\A_{\ro{set}}\circ\hat{\Com}^1
_{W,i,j}w_i^{(k)}.$$ Though the sub-characteristics themselves are not vital in this investigation, one can simply see that the objects are $I$-extant (by construction), by expressing them in the form: $$\A_{v_k} = \PP\circ\A_I(\A_{y_k})\in V',$$ where $V'$ denotes a set that contains at least all $N$ elements, $\A_{v_k}$. By determining the underlying principles describing $V'$, one may denote its maximal set as $\ca{U}'$. Since $V'$ is an abstract object existing as a subset of the objects that comprise our formalism, $F$, it follows that $\ca{U}'$ is the set of all abstracts, and cannot be consistently defined [@Cantor]. That is, by taking the maximal set of objects obeying this restriction, one arrives at a set containing itself, and all possible abstract objects, which is Cantor’s universal set. This does not mean, however, that $V'$ itself is inconsistent; since $V' \subset F \subset \ca{U}'$, we are free to assume that $V'$ is constructed in such a way as to render it consistent, just as was assumed for $F$.
Now consider our universe, $\ca{U}$, which takes the form of Eq. (\[eq:univ\]), with the restriction that one of its underlying principles must be that all elements are extant. That is, let $\ca{U}$ only contain elements of the form $w = \PP\circ\A_E(c)
\in \ca{U}$. $\ca{U}$ may still be consistently defined. Since $\ca{U}'$ is inconsistent, the definition of a composite universe $X \equiv \ca{U} \cup \ca{U}'$ is also inconsistent. *Therefore, by the convention established in Supposition \[supp:dist\], $\ca{U}$ and $\ca{U}'$ are distinct*. It follows that $V'\subset \ca{U}'$ is also distinct from $\ca{U}$.
The pertinent underlying principle for $V'$ is simply that it be consistent; or more specifically, that an extant description of it be consistent: $$w_{\ro{princ}}' = Y',\quad\mbox{such that}\quad V' = \PP\circ\A_E(Y'),$$ for some consistently definable $Y'$. It is reasonable to expect that the elements that comprise $V'$ *constitute evidence* for the extantness of $V'\in\ca{U}'$. This can be checked by determining that the condition for evidence is satisfied: $$\label{eq:finalcond}
w_{\ro{princ}}' \subseteq w_{\ro{princ}}^{\ro{R},T'}[v_k].$$ This condition, however, is not satisfied in general, due to the fact that $Y'$ must contain an abstraction, $\A_{E'}$, which is not present in the general formulation of $w_{\ro{princ}}^{\ro{R},T'}$. This makes sense, as the truth of a statement does not necessarily entail extantness. In the specific case above, though, we have considered the fundamental type of the general $I$-extantness, encompassing all abstractions of a specific form, as described in Section \[subsect:ext\]. In this special case, the extantness required by both sides of Eq. (\[eq:finalcond\]) is the same, and we are required to demonstrate that $$\begin{aligned}
\label{eq:finalproof}
w_{\ro{princ}}' &= Y' \subseteq w_{\ro{princ}}^{\ro{R},I'}[v_k],\\
\mbox{for}\,\, V' &= \PP\circ\A_I(Y').\end{aligned}$$
$Y'$ is a set containing $N+1$ abstractions, $\A_{v_k}$, with $\A_{N+1}\equiv \A_{I'}$: $$Y' = \{\A_{v_1},\ldots,\A_{v_N},\A_{I'}\} = \bigcup_{k=0}^N\A_{\ro{set}}\circ
\hat{\Com}^1_{V',i,k} \A_{v_i}.$$ The righthand side of Eq. (\[eq:finalproof\]) takes the form: $$w^{\ro{R},I'}_{\ro{princ}} = \bigcup_{k=0}^{N-1}
\!\!\!\!\!\!
\bigcup_{\quad p\in[2,n]\backslash\bigcup_{\pi=0}^k\{p^{(\pi)}\}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\A_{\ro{set}}\circ\hat{\Com}^{(\sum_{j=1}^{n-p+2}i_j)+1}_{\C^{i_1\ldots i_p\ldots i_n}_W,1,v_{k+1}}c
\cup w_I.$$ The left term can be pared down by choosing $n=2$, $i_1=0$ and $i_2=1$, and the right term, $w_I$, by choosing $n=1$, $i_1=1$: $$\begin{aligned}
&\Rightarrow \Big(\bigcup_{k=1}^N \A_{\ro{set}}\circ\hat{\Com}^2_{\PP\Om^1(W),1,v_{k+1}}c
\Big)\,\,\,\cup \,\,
\Big(\A_{\ro{set}}\circ\hat{\Com}^2_{\Om^1(W),m+1,I'}c\Big) \\
&= \{\PP\A_{v_1}(w),\ldots,\PP\A_{v_N}(w)\}\cup\{\A_{I'}(w)\} \\
&= \bigcup_{k=0}^N\A_{\ro{set}}\circ
\hat{\Com}^1_{V',i,k} \A_{v_i}, \quad\mbox{for}\,\,\A_{N+1}\equiv \A_{I'}.
\qedhere\end{aligned}$$
The final line follows from the labelling principle, $\PP\circ\PP = \PP$, and the fact that each $\A_{v_k}$ is in extant form, $\PP\circ\A_I(\A_{y_k})$.
Conclusion
==========
This manuscript has attempted to address key issues in physics from the point of view of philosophy. By adopting a metaphysical framework closely aligned with that used in the practice of physics, the philosophical tool of Ansatz was examined, and the process of its use was clarified. In this context, a mathematical formalism for describing intrinsically non-mathematical objects was expounded. In examining the consistency of such a framework, a careful distinction between existence (in the mathematical sense) and ‘extantness’ (in the sense of phenomena existing in the universe) was made. Using the formalism, a general condition for extantness was derived in terms of a generalised object, which incorporates the salient features of abstraction and projection to the non-mathematical world in a way easily manipulated. In principle, the formalism may make verifiable predictions, since properties (and the consequences of combinations of properties in the form of theorems) can be arranged to make strict statements about the behaviour or nature of a system or other general objects.
As an example, a possible explanation of Wigner’s ‘unreasonable effectiveness’ of mathematics was derived. This demonstrates the ability of a metaphysical framework to address important mysteries inaccessible from within the physics itself being described.
Lastly, an attempt was made to classify other universes in a general fashion, and to clarify the characteristics and role of evidence for theories that provide at least a partial description of a universe. The connection between phenomena that constitute evidence and the theory itself was established in a Duality Theorem. Instead of focusing on attempting an ad-hoc identification of extra-universal phenomena from experiment, the formalism was used to *derive* basic properties of objects that do not align with our universe. As a first example toward such a goal, a fundamental object was identified, which satisfies the necessary properties of evidence, and whose extantness does not coincide with our universe. This paves the way for future investigations into more precise details of the properties (perhaps initially bizarre and unexpected) that objects may possess outside our universe.
[^1]: jonathan.hall@adelaide.edu.au
[^2]: The form, $W'=\Om^1(W)$, makes sense, in that the notion of ‘being a subset’ is a single-level abstraction, residing in $\Om^1$.
**KERNEL ESTIMATION OF DENSITY LEVEL SETS**
[Benoît CADRE]{}[[^1]]{}
Laboratoire de Mathématiques, Université Montpellier II,
CC 051, Place E. Bataillon, 34095 Montpellier cedex 5, FRANCE
[**Abstract.**]{} Let $f$ be a multivariate density and $f_n$ be a kernel estimate of $f$ drawn from the $n$-sample $X_1,\cdots,X_n$ of i.i.d. random variables with density $f$. We compute the asymptotic rate of convergence towards 0 of the volume of the symmetric difference between the $t$-level set $\{f\geq t\}$ and its plug-in estimator $\{f_n\geq t\}$. As a corollary, we obtain the exact rate of convergence of a plug-in type estimate of the density level set corresponding to a fixed probability for the law induced by $f$.
[**Key-words:**]{} Kernel estimate, Density level sets, Hausdorff measure.
[**2000 Mathematics Subject Classification:**]{} 62H12, 62H30.
[**1. Introduction.**]{} Recent years have witnessed an increasing interest in estimation of density level sets and in related multivariate mapping problems. The main reason is the recent advent of powerful mathematical tools and computational machinery that render these problems much more tractable. One of the most powerful applications of density level set estimation is in unsupervised [*cluster analysis*]{} (see Hartigan \[1\]), where one tries to break a complex data set into a series of piecewise similar groups or structures, each of which may then be regarded as a separate class of data, thus reducing overall data complexity. But there are many other fields where the knowledge of density level sets is of great interest. For example, Devroye and Wise \[2\], Grenander \[3\], Cuevas \[4\] and Cuevas and Fraiman \[5\] used density support estimation for pattern recognition and for detection of the abnormal behavior of a system.
In this paper, we consider the problem of estimating the $t$-level set ${\cal L}(t)$ of a multivariate probability density $f$ with support in ${I\!\!R}^k$ from independent random variables $X_1,\cdots,X_n$ with density $f$. Recall that for $t\geq 0$, the $t$-level set of the density $f$ is defined as follows : $${\cal L}(t)=\{x\in{I\!\!R}^k\, :\, f(x)\geq t\}.$$
The question now is how to define the estimates of ${{\cal L}}(t)$ from the $n$-sample $X_1,\cdots,X_n$ ? Even in a nonparametric framework, there are many possible answers to this question, depending on the restrictions one can impose on the level set and the density under study. Mainly, there are two families of such estimators : the [*plug-in*]{} estimators and the estimators constructed by an [*excess mass*]{} approach. Assume that an estimator $f_n$ of the density $f$ is available. Then a straightforward estimator of the level set ${{\cal L}}(t)$ is $\{f_n\geq t\}$, the plug-in estimator. Molchanov \[6, 7\] and Cuevas and Fraiman \[5\] proved consistency of these estimators and obtained some rates of convergence. The excess mass approach suggests first considering the empirical mapping $M_n$ defined for every borel set $L\subset{I\!\!R}^k$ by $$M_n(L)=\frac{1}{n} \sum_{i=1}^n {\bf 1}_{\{X_i\in L\}}-t \lambda(L),$$ where $\lambda$ denotes the Lebesgue measure on ${I\!\!R}^k$. A natural estimator of ${{\cal L}}(t)$ is a maximizer of $M_n(L)$ over a given class of borel sets $L$. For different classes of level sets (mainly star-shaped or convex level sets), estimators based on the excess mass approach were studied by Hartigan \[8\], Müller \[9\], Müller and Sawitzki \[10\], Nolan \[11\] and Polonik \[12\], who proved consistency and found certain rates of convergence. When the level set is star-shaped, Tsybakov \[13\] recently proved that the excess mass approach gives estimators with optimal rates of convergence in an asymptotically minimax sense, within the studied classes of densities. Though this result has a great theoretical interest, assuming the level set to be convex or star-shaped appears to be somewhat unsatisfactory for the statistical applications. Indeed, such an assumption does not allow one to consider the important case where the density under study is multimodal with a finite number of modes, and hence the results cannot be applied to cluster analysis in particular. In comparison, the plug-in estimators do not depend on the specific shape of the level set. Moreover, another advantage of the plug-in approach is that it leads to easily computable estimators. We emphasize that, while the excess mass approach often gives estimators with optimal rates of convergence, the computational complexity of such an estimator is high, due to the presence of the maximizing step (see the computational algorithm proposed by Hartigan, \[8\]).
In this paper, we study a plug-in type estimator of the density level set ${{\cal L}}(t)$, using a kernel density estimate of $f$ (Rosenblatt, \[14\]). Given a kernel $K$ on ${I\!\!R}^k$ ([*i.e.*]{}, a probability density on ${I\!\!R}^k$) and a bandwidth $h=h(n) >0$ such that $h\to 0$ as $n$ grows to infinity, the kernel estimate of $f$ is given by $$f_n(x)=\frac{1}{nh^k}\sum_{i=1}^n K\Big(\frac{x-X_i}{h}\Big), \ x\in{I\!\!R}^k.$$ We let the plug-in estimate ${{\cal L}}_n(t)$ of ${{\cal L}}(t)$ be defined as $${{\cal L}}_n(t)=\{x\in{I\!\!R}^k\, : \, f_n(x)\geq t\}.$$
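For concreteness, a minimal numerical sketch of these two objects (the kernel estimate $f_n$ and the plug-in set ${{\cal L}}_n(t)$) might look as follows in Python. The simulated bimodal sample, the particular compactly supported radial kernel (one admissible choice under [**H3**]{}), the bandwidth and the evaluation grid are our own illustrative choices, not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sample in R^2: n i.i.d. points from a bimodal Gaussian mixture
# (an arbitrary illustrative choice of the unknown density f).
n = 2000
X = np.vstack([rng.normal(loc=(-1.5, 0.0), scale=0.6, size=(n // 2, 2)),
               rng.normal(loc=(+1.5, 0.0), scale=0.6, size=(n - n // 2, 2))])

def kernel(u):
    """Radial, compactly supported, C^1 kernel on R^2: K(x) = (3/pi) (1 - ||x||^2)_+^2."""
    r2 = np.sum(u ** 2, axis=-1)
    return (3.0 / np.pi) * np.clip(1.0 - r2, 0.0, None) ** 2

def f_n(points, X, h):
    """Kernel estimate f_n(x) = (n h^k)^{-1} sum_i K((x - X_i)/h), evaluated at `points`."""
    k = X.shape[1]
    out = np.zeros(len(points))
    for Xi in X:                      # accumulate one summand at a time (memory friendly)
        out += kernel((points - Xi) / h)
    return out / (len(X) * h ** k)

# Evaluate f_n on a grid over [-4, 4]^2 and form the plug-in level set {f_n >= t}.
h, t = 0.35, 0.05
grid_axis = np.linspace(-4.0, 4.0, 150)
xx, yy = np.meshgrid(grid_axis, grid_axis)
grid = np.column_stack([xx.ravel(), yy.ravel()])
cell_area = (grid_axis[1] - grid_axis[0]) ** 2
vals = f_n(grid, X, h)
in_level_set = vals >= t              # grid indicator of L_n(t)
print("approximate volume of L_n(t):", in_level_set.sum() * cell_area)
```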
Throughout the paper, the distance between two borel sets in ${I\!\!R}^k$ is a measure (in particular the volume, or Lebesgue measure $\lambda$, on ${I\!\!R}^k$) of the symmetric difference, denoted $\Delta$ ([*i.e.*]{}, $A\Delta B=(A\cap B^c)\cup(A^c\cap B)$ for all sets $A,B$). Our main result (Theorem 2.1) deals with the limit law of $$\sqrt {nh^k}\,\lambda\Big({{\cal L}}_n(t)\Delta{{\cal L}}(t)\Big),$$ which is proved to be degenerate.
Consider now the following statistical problem. In cluster analysis for instance, it is of interest to estimate the density level set corresponding to a fixed probability $p\in [0,1]$ for the law induced by $f$. The data contained in this level set can then be regarded as the most important data if $p$ is far enough from 0. Since $f$ is unknown, the level $t$ of this density level set is unknown as well. The natural estimate of the target density level set ${{\cal L}}(t)$ becomes ${{\cal L}}_n(t_n)$, where $t_n$ is such that $$\int_{{{\cal L}}_n(t_n)} f_nd\lambda=p.$$ As a consequence of our main result, we obtain in Corollary 2.1 the exact asymptotic rate of convergence of ${{\cal L}}_n(t_n)$ to ${{\cal L}}(t)$. More precisely, we prove that for some $\beta_n$ which only depends on the data, one has : $$\beta_n\sqrt {nh^k}\,\lambda\Big({{\cal L}}_n(t_n)\Delta{{\cal L}}(t)\Big)\to \sqrt {\frac{2}{\pi}\int K^2d\lambda}$$ in probability.
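Continuing the sketch above, the level $t_n$ might be approximated in practice by a simple bisection, with the integral $\int_{{{\cal L}}_n(t)} f_nd\lambda$ replaced by a Riemann sum over the evaluation grid; the grid approximation and the tolerance are, again, our own shortcuts rather than part of the procedure studied in the paper.

```python
def mass_above(level, vals, cell_area):
    """Riemann-sum approximation of the integral of f_n over {f_n >= level}."""
    return vals[vals >= level].sum() * cell_area

def level_for_probability(p, vals, cell_area, tol=1e-6):
    """Bisection for t_n such that the estimated mass above t_n equals p.

    Relies on the fact that level -> mass_above(level) is nonincreasing.
    """
    lo, hi = 0.0, vals.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mass_above(mid, vals, cell_area) > p:
            lo = mid              # too much mass above: raise the level
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_n = level_for_probability(0.5, vals, cell_area)   # level set holding half the mass
print("t_n for p = 0.5:", t_n)
```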
The precise formulations of Theorem 2.1 and Corollary 2.1 are given in Section 2. Section 3 is devoted to the proof of Theorem 2.1 while the proof of Corollary 2.1 is given in Section 4. The appendix is dedicated to a change of variables formula involving the $(k$-$1)$-dimensional Hausdorff measure (Proposition A).
[**2. The main results.**]{}
[**2.1 Estimation of $t$-level sets.**]{} In the following, $\Theta\subset (0,\infty)$ denotes an open interval and $\|.\|$ stands for the euclidean norm over any finite dimensional space. Let us introduce the hypotheses on the density $f$ :
- $f$ is twice continuously differentiable and $f(x)\to 0$ as $\|x\|\to\infty$;
- For all $t\in\Theta$, $$\inf_{f^{-1}(\{t\})} \|\nabla f\|>0,$$
where, here and in the following, $\nabla\psi(x)$ denotes the gradient at $x\in{I\!\!R}^k$ of the differentiable function $\psi\,:\, {I\!\!R}^k\to{I\!\!R}$. Next, we introduce the assumptions on the kernel $K$ :
- $K$ is a continuously differentiable and compactly supported function. Moreover, there exists a monotone nonincreasing function $\mu\, :\, {I\!\!R}_+\to{I\!\!R}$ such that $K(x)=\mu(\|x\|)$ for all $x\in{I\!\!R}^k$.
The assumption on the support of $K$ is only provided for simplicity of the proofs. As a matter of fact, one could consider a more general class of kernels, including the gaussian kernel for instance. Moreover, as we will use Pollard’s results \[15\], $K$ is assumed to be of the form $\mu(\|.\|)$.
Throughout the paper, ${{\cal H}}$ denotes the $(k$-$1)$-dimensional Hausdorff measure on ${I\!\!R}^k$ ([*cf.*]{} Evans and Gariepy, \[16\]). Recall that ${{\cal H}}$ agrees with ordinary “$(k$-$1)$-dimensional surface area” on nice sets. Moreover, $\partial A$ is the boundary of the set $A\subset{I\!\!R}^k$, $$\alpha(k) = \left\{ \begin{array}{ll}
3 & \mbox{if $k=1$;} \\
k+4 & \mbox{if $k\geq 2$.}
\end{array}
\right.$$ and for any bounded borel function $g\, :\, {I\!\!R}^k\to{I\!\!R}_+$, $\lambda_g$ stands for the measure defined for each borel set $A\subset{I\!\!R}^k$ by $$\lambda_g(A)=\int_A g\, d\lambda.$$ Finally, the notation ${\stackrel{\rm P}{\to}}$ denotes the convergence in probability.
It can be proved that if [**H1**]{}, [**H3**]{} hold and if $\lambda(\partial {{\cal L}}(t))=0$, one has : $$\lambda\Big({{\cal L}}_n(t)\Delta {{\cal L}}(t)\Big){\stackrel{\rm P}{\to}}0.$$ The aim of Theorem 2.1 below is to obtain the exact rate of convergence.
[**Theorem 2.1.**]{} [*Let $g\, :\, {I\!\!R}^k\to{I\!\!R}_+$ be a bounded borel function and assume that [**H1**]{}-[**H3**]{} hold. If $nh^k/(\log n)^{16}\to\infty$ and $nh^{\alpha(k)} (\log n)^2\to 0$, then for almost every (a.e.) $t\in\Theta$ : $$\sqrt {nh^k}\, \lambda_g\Big({{\cal L}}_n(t)\Delta{{\cal L}}(t)\Big){\stackrel{\rm P}{\to}}\sqrt{\frac{2t}{\pi} \int K^2d\lambda}\, \int_{\partial {{\cal L}}(t)} \frac{g}{\|\nabla f\|}d{{\cal H}}.$$* ]{}
[**Remarks 2.1.**]{} $\bullet$ Notice that the rightmost integral is defined because $g$ is bounded and ${{\cal L}}(t)$ is a compact set for all $t>0$ according to [**H1**]{}.
$\bullet$ In practice, this result is mainly interesting when $g\equiv 1$, since we then have the asymptotic behavior of the volume of the symmetric difference between the two level sets. The general case is provided for the proof of Corollary 2.1 below.
$\bullet$ If we only assume $f$ to be Lipschitz instead of [**H1**]{}, then $f$ is an almost everywhere continuously differentiable function by Rademacher’s theorem and Theorem 2.1 holds under the additional assumption on the bandwidth : $nh^{k+2}(\log n)^2\to 0$.
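To illustrate the quantity appearing in Theorem 2.1 in the case $g\equiv 1$, the simulation sketch of Section 1 can be continued: since the data there were simulated, the true density $f$ is known and the volume of the symmetric difference can be approximated on the grid. The mixture density below simply restates that simulation design and is not part of the theorem.

```python
def true_density(points):
    """Density of the simulated mixture: 0.5 N((-1.5,0), 0.36 I_2) + 0.5 N((1.5,0), 0.36 I_2)."""
    s2 = 0.6 ** 2
    def component(mean):
        d2 = np.sum((points - mean) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * s2)) / (2.0 * np.pi * s2)
    return 0.5 * component(np.array([-1.5, 0.0])) + 0.5 * component(np.array([1.5, 0.0]))

f_true = true_density(grid)
sym_diff = (vals >= t) != (f_true >= t)          # grid indicator of L_n(t) Delta L(t)
sym_diff_volume = sym_diff.sum() * cell_area
# Theorem 2.1 (g = 1, k = 2) asserts that sqrt(n h^k) times this volume converges
# in probability to a constant involving t, K and the gradient of f on the contour.
print("sqrt(n h^k) * lambda(L_n(t) Delta L(t)) ~", np.sqrt(n * h ** 2) * sym_diff_volume)
```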
[**2.2 Estimation of level sets with fixed probability.**]{} In order to derive the corollary, we need an additional condition on $f$.
- For all $t\in (0,\sup_{{I\!\!R}^k} f]$, $\lambda(f^{-1}[t-{\varepsilon},t+{\varepsilon}])\to 0$ as ${\varepsilon}\to 0$. Moreover, $\lambda(f^{-1}(0,{\varepsilon}])\to 0$ as ${\varepsilon}\to 0$.
Roughly speaking, [**H4**]{} means that the sets where $f$ is constant are negligible for the Lebesgue measure on ${I\!\!R}^k$. Many densities with a finite number of local extrema satisfy [**H4**]{}. However, notice that if $f$ is a continuous density such that $\lambda(f^{-1}(0,{\varepsilon}])\to 0$ as ${\varepsilon}\to 0$, then it is compactly supported.
Let us now denote by ${\cal P}$ the map $${\cal P} : \begin{array}{ll}
[0,\sup_{{I\!\!R}^k} f] & \to [0,1] \\
\qquad t & \mapsto \lambda_f({{\cal L}}(t)).\\
\end{array}$$ Observe that $\cal P$ is one-to-one if $f$ satisfies [**H1**]{}, [**H4**]{}. Then, for all $p\in [0,1]$, let $t^{(p)}\in [0,\sup_{{I\!\!R}^k} f]$ be the unique real number such that $\lambda_f({{\cal L}}(t^{(p)}))=p$. Moreover, let $t_n^{(p)}\in [0,\sup_{{I\!\!R}^k} f_n]$ be such that $\lambda_{f_n}({{\cal L}}_n(t_n^{(p)}))=p$. Notice that $t_n^{(p)}$ does exist since $f_n$ is a density on ${I\!\!R}^k$.
The aim of Corollary 2.1 below is to obtain the exact rate of convergence of ${{\cal L}}_n(t_n)$ to ${{\cal L}}(t)$. We also introduce an estimator of the unknown integral in Theorem 2.1.
[**Corollary 2.1.**]{} [*Let $k\geq 2$, $(\alpha_n)_n$ be a sequence of positive real numbers such that $\alpha_n\to 0$ and assume that [**H1**]{}-[**H4**]{} hold. If $nh^{k+2}/\log n\to \infty$, $nh^{k+4} (\log n)^2\to 0$ and $\alpha_n^2nh^k/(\log n)^2\to \infty$ then, for a.e. $p\in{\cal P}(\Theta)$ : $$\sqrt {nh^k}\frac {\beta_n}{\sqrt {t_n^{(p)}}}\, \lambda\Big({{\cal L}}_n(t_n^{(p)})\Delta{{\cal L}}(t^{(p)})\Big){\stackrel{\rm P}{\to}}\sqrt {\frac{2}{\pi}\int K^2d\lambda},$$ where $\beta_n=\alpha_n/\lambda\big({{\cal L}}_n(t_n^{(p)})-{{\cal L}}_n(t_n^{(p)}+\alpha_n)\big).$* ]{}
[**Remarks 2.2.**]{} $\bullet$ It is of statistical interest to mention the fact that under the assumptions of the corollary, we have for all $p\in [0,1]$ : $t_n^{(p)}\to t^{(p)}$ with probability 1 (see Lemma 4.3).
$\bullet$ When $k=1$, the conditions of Theorem 2.1 on the bandwidth $h$ do not allow one to derive Corollary 2.1. In practice, the estimation of density level sets and its applications, to cluster analysis for instance, are mainly of interest in high-dimensional problems.
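Continuing the same simulation sketch, the data-driven normalisation $\beta_n$ appearing in Corollary 2.1 can also be approximated on the evaluation grid; the value of $\alpha_n$ below is purely illustrative and is not claimed to satisfy the corollary's rate conditions for this sample size.

```python
def beta_n(vals, level, alpha_n, cell_area):
    """Grid approximation of beta_n = alpha_n / lambda( L_n(level) - L_n(level + alpha_n) )."""
    band = (vals >= level) & (vals < level + alpha_n)   # kept in L_n(level) but not in L_n(level + alpha_n)
    return alpha_n / (band.sum() * cell_area)

alpha_n = 0.01
print("beta_n:", beta_n(vals, t_n, alpha_n, cell_area))
```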
[**3. Proof of Theorem 2.1.**]{}
[**3.1. Auxiliary results and proof of Theorem 2.1.**]{} For all $t>0$, let $${{\cal V}}_n^t=f^{-1} \Big[t-\frac{(\log n)^{\beta}}{\sqrt {nh^k}},t\Big] \quad {\rm and} \quad {{\overline {\cal V}}}_n^t=f^{-1} \Big[t, t+\frac{(\log n)^{\beta}}{\sqrt {nh^k}}\Big],$$ where $\beta >1/2$ is fixed. Moreover, ${{\tilde K}}$ stands for the real number : $${{\tilde K}}=\int K^2 d\lambda.$$
[**Proposition 3.1.**]{} [*Let $g\, :\, {I\!\!R}^k\to {I\!\!R}_+$ be a bounded borel function and assume that [**H1**]{}-[**H3**]{} hold. If $nh^k/(\log n)^{31\beta}\to\infty$ and $nh^{\alpha(k)} (\log n)^{2\beta}\to 0$, then for a.e. $t\in\Theta$ : $$\begin{aligned}
\lim_n \sqrt {nh^k} \int_{{{\cal V}}_n^t} P(f_n(x)\geq t)d\lambda_g(x) & = & \lim_n \sqrt {nh^k} \int_{{{\overline {\cal V}}}_n^t} P(f_n(x)< t)d\lambda_g(x)\\
& = & \sqrt{\frac{t{{\tilde K}}}{2\pi}}\, \int_{\partial {{\cal L}}(t)} \frac{g}{\|\nabla f\|}d{{\cal H}}.\end{aligned}$$* ]{}
[**Proposition 3.2.**]{} [*Let $g\, :\, {I\!\!R}^k\to {I\!\!R}_+$ be a bounded borel function and assume that [**H1**]{}-[**H3**]{} hold. If $nh^k/(\log n)^{5\beta}\to\infty$ and $nh^{\alpha(k)} (\log n)^{2\beta}\to 0$, then for a.e. $t\in\Theta$ : $$\lim_n nh^k {{\rm var}}\,\Big[\lambda_g\Big({{\cal V}}_n^t\cap {{\cal L}}_n(t)\Big)\Big]=0=\lim_n nh^k {{\rm var}}\,\Big[ \lambda_g\Big({{\overline {\cal V}}}_n^t\cap {{\cal L}}_n(t)^{c}\Big)\Big].$$*]{}
[**Proof of Theorem 2.1.**]{} Let $t\in\Theta$ be such that both conclusions of Propositions 3.1 and 3.2 hold. According to [**H3**]{} and Pollard (\[15\], Theorem 37 and Problem 28, Chapter II), we have almost surely (a.s.) : $$\sup_{{I\!\!R}^k}|f_n-Ef_n|\to 0.$$ Moreover, since both $\sup_n Ef_n(x)$ and $f(x)$ vanish as $\|x\|\to\infty$ by [**H1**]{}, [**H3**]{}, we have : $$\sup_{{I\!\!R}^k}|Ef_n-f|\to 0.$$ Thus, a.s. and for $n$ large enough : $$\sup_{{I\!\!R}^k} |f_n-f|\leq \frac{t}{2}.$$ Consequently, ${{\cal L}}_n(t)\subset {{\cal L}}(t/2)$ and since ${{\cal L}}(t)\subset{{\cal L}}(t/2)$, we get : $$\begin{aligned}
\lambda_g\Big({{\cal L}}_n(t)\Delta{{\cal L}}(t)\Big)=\int_{{{\cal L}}(t/2)} {\bf 1}_{\{f_n<t,f\geq t\}}d\lambda_g+\int_{{{\cal L}}(t/2)} {\bf 1}_{\{f_n\geq t,f< t\}}d\lambda_g. \qquad (3.1)\end{aligned}$$ Let $$A_n=\Big\{ \sqrt{nh^k} \sup_{{{\cal L}}(t/2)} |f_n-f|\leq (\log n)^{\beta}\Big\}.$$ Since ${{\cal L}}(t/2)$ is a compact set by [**H1**]{}, it is a classical exercise to prove that $P(A_n)\to 1$ under the assumptions of the theorem. Hence, one only needs to prove that the result of Theorem 2.1 holds on the event $A_n$. But on $A_n$, one has according to (3.1): $\lambda_g\big({{\cal L}}_n(t)\Delta{{\cal L}}(t)\big)=J_n^1+J_n^2$, where : $$J_n^1=\lambda_g\Big({{\overline {\cal V}}}_n^t\cap{{\cal L}}_n(t)^{c}\Big) \ {\rm and} \ J_n^2=\lambda_g\Big({{\cal V}}_n^t\cap{{\cal L}}_n(t)\Big).$$ By Propositions 3.1 and 3.2, if $j=1$ or $j=2$ : $$\sqrt {nh^k} J_n^j{\stackrel{\rm P}{\to}}\sqrt{\frac{t{{\tilde K}}}{2\pi}}\, \int_{\partial {{\cal L}}(t)} \frac{g}{\|\nabla f\|}d{{\cal H}}, \qquad (3.2)$$ if the bandwidth $h$ satisfies $nh^{\alpha(k)}(\log n)^{2\beta}\to 0$ and $nh^k/(\log n)^{31\beta}\to\infty$. Letting $\beta=16/31$, the theorem is proved $\bullet$
[**3.2. Proof of Proposition 3.1.**]{} Let $X$ be a random variable with density $f$, $$V_n(x)={{\rm var}}\, K\Big(\frac{x-X}{h}\Big) \ {\rm and} \ Z_n(x)=\frac {h^k\sqrt n}{\sqrt {V_n(x)}}(f_n(x)-Ef_n(x)),$$ for all $x\in{I\!\!R}^k$ such that $V_n(x)\neq 0$. Moreover, $\Phi$ denotes the distribution function of the ${\cal N} (0,1)$ law.
In the proofs, $c$ denotes a positive constant whose value may vary from line to line.
[**Lemma 3.1.**]{} [*Assume that [**H1**]{}, [**H3**]{} hold and let ${\cal C}\subset {I\!\!R}^k$ be a compact set such that $\inf_{{\cal C}}f >0$. Then, there exists $c>0$ such that for all $n\geq 1$, $x\in{\cal C}$ and $u\in{I\!\!R}$: $$|P(Z_n(x)\leq u)-\Phi(u)|\leq \frac{c}{\sqrt {nh^k}}.$$* ]{}
[**Proof.**]{} By the Berry-Esseen inequality ([*cf.*]{} Feller, \[17\]), one has for all $n\geq 1$, $u\in{I\!\!R}$ and $x\in{I\!\!R}^k$ such that $V_n(x)\neq 0$: $$|P(Z_n(x)\leq u)-\Phi(u)|\leq \frac{3}{\sqrt {nV_n(x)^3}} E\Big|K\Big(\frac{x-X}{h}\Big)-EK\Big(\frac{x-X}{h}\Big)\Big|^3.$$ It is a classical exercise to deduce from [**H1**]{}, [**H3**]{} that $$\sup_{x\in{\cal C}} E\Big|K\Big(\frac{x-X}{h}\Big)-EK\Big(\frac{x-X}{h}\Big)\Big|^3\leq c\, h^k \ {\rm and} \ \inf_{x\in{\cal C}} V_n(x)\geq c\, h^k,$$ hence the lemma $\bullet$
For all borel bounded function $g\, :\, {I\!\!R}^k\to{I\!\!R}_+$, we let $\Theta_0(g)$ be the set of $t\in\Theta$ such that : $$\lim_{{\varepsilon}\searrow 0} \frac{1}{{\varepsilon}} \lambda_g\Big(f^{-1} [t-{\varepsilon},t]\Big)=
\lim_{{\varepsilon}\searrow 0} \frac{1}{{\varepsilon}} \lambda_g\Big(f^{-1} [t, t+{\varepsilon}]\Big)=
\int_{\partial {{\cal L}}(t)} \frac{g}{\|\nabla f\|}d{{\cal H}}.$$
[**Lemma 3.2.**]{} [*Let $g\, :\, {I\!\!R}^k\to{I\!\!R}_+$ be a borel bounded function and assume that [**H1**]{}, [**H2**]{} hold. Then we have : $\Theta_0(g)=\Theta$ a.e.*]{}
[**Proof.**]{} According to [**H1**]{}, [**H2**]{}, for all $t\in\Theta$, there exists $\eta >0$ such that : $$\inf_{f^{-1}[t-\eta,t+\eta]}\|\nabla f\|>0.$$ We deduce from Proposition A that for all $t\in\Theta$ and ${\varepsilon}>0$ small enough : $$\frac{1}{{\varepsilon}} \lambda_g\Big(f^{-1} [t-{\varepsilon},t]\Big)=\frac{1}{{\varepsilon}} \int_{t-{\varepsilon}}^t \int_{\partial {{\cal L}}(s)} \frac{g}{\|\nabla f\|} d{{\cal H}}\, ds.$$ Using the Lebesgue-Besicovitch theorem ([*cf.*]{} Evans and Gariepy, \[16\], Theorem 1, Chapter I), we then have for a.e. $t\in\Theta$: $$\lim_{{\varepsilon}\searrow 0}\frac{1}{{\varepsilon}} \lambda_g\Big(f^{-1} [t-{\varepsilon},t]\Big)=\int_{\partial {{\cal L}}(t)} \frac{g}{\|\nabla f\|} d{{\cal H}},$$ and the same result holds for $\lambda_g\big(f^{-1} [t,t+{\varepsilon}]\big)$ instead of $\lambda_g\big(f^{-1} [t-{\varepsilon},t]\big)$, hence the lemma $\bullet$
It is a straightforward consequence of Lemma 3.2 above that $\lambda(\partial {{\cal L}}(t))=0$ for a.e. $t\in\Theta$. For simplicity, we shall assume throughout that this is true for all $t\in\Theta$. Since $\Theta$ is an open interval, we have in particular $$\lambda\Big(f^{-1}[t-{\varepsilon},t+{\varepsilon}]\Big)=\lambda\Big(f^{-1}(t-{\varepsilon},t+{\varepsilon})\Big),$$ for all $t\in\Theta$ and ${\varepsilon}>0$ small enough.
We now let for $t\in\Theta$ and $x\in{I\!\!R}^k$ such that $f(x)V_n(x)\neq 0$ : $$t_n(x)=\sqrt {\frac{nh^k}{{{\tilde K}}f(x)}}(t-f(x)) \ {\rm and} \ {{\overline t}_n}(x)=\frac{h^k\sqrt n}{\sqrt {V_n(x)}}(t-Ef_n(x)),$$ and finally, ${\overline {\Phi}}(u)=1-\Phi(u)$ for all $u\in{I\!\!R}$.
[**Lemma 3.3.**]{} [*Let $g\, :\, {I\!\!R}^k\to{I\!\!R}_+$ be a bounded borel function and assume that [**H1**]{}, [**H2**]{} hold. If $nh^k/(\log n)^{2\beta}\to\infty$ and $nh^{k+4} (\log n)^{2\beta}\to 0$, then for all $t\in\Theta_0(g)$ : $$\begin{aligned}
& & \lim_n \sqrt {nh^k} \Big[ \int_{{{\cal V}}_n^t} P(f_n(x)\geq t)d\lambda_g(x)-\int_{{{\cal V}}_n^t} {\overline {\Phi}}(t_n(x))d\lambda_g(x)\Big]=0 \\
& {\rm and} &
\lim_n \sqrt {nh^k} \Big[ \int_{{{\overline {\cal V}}}_n^t} P(f_n(x)< t)d\lambda_g(x) -\int_{{{\overline {\cal V}}}_n^t} \Phi(t_n(x))d\lambda_g(x)\Big]=0.\end{aligned}$$* ]{}
[**Proof.**]{} We only prove the first equality. Let $t\in\Theta_0(g)$. First note that for all $x\in{I\!\!R}^k$ such that $V_n(x)\neq 0$ : $$P(f_n(x)\geq t)=P(Z_n(x)\geq {{\overline t}_n}(x)).$$ There exists a compact set ${\cal C}\subset{I\!\!R}^k$ such that $\inf_{{\cal C}} f>0$ and ${{\cal V}}_n^t\subset{\cal C}$ for all $n$. Observe that by Lemma 3.1 and the above remarks, $$\sqrt {nh^k}\Big[ \int_{{{\cal V}}_n^t} P(f_n(x)\geq t)d\lambda_g(x)-\int_{{{\cal V}}_n^t} {\overline {\Phi}}({{\overline t}_n}(x))d\lambda_g(x)\Big]\leq c\, \lambda_g({{\cal V}}_n^t).$$ Since $\lambda_g({{\cal V}}_n^t)\to 0$ by Lemma 3.2, one only needs now to prove that : $$E_n:=\sqrt {nh^k} \int_{{{\cal V}}_n^t}|{\overline {\Phi}}({{\overline t}_n}(x))-{\overline {\Phi}}(t_n(x))|d\lambda_g(x)\to 0.$$ One deduces from the Lipschitz property of $\Phi$ that $$E_n\leq c\sqrt {nh^k} \lambda_g({{\cal V}}_n^t) \sup_{x\in{{\cal V}}_n^t}|{{\overline t}_n}(x)-t_n(x)|. \qquad (3.3)$$ But, by definitions of ${{\overline t}_n}(x)$ and $t_n(x)$, we have for all $x\in{{\cal V}}_n^t$ : $$\begin{aligned}
& & \frac{1}{\sqrt {nh^k}}|{{\overline t}_n}(x)-t_n(x)|\\
& \leq & \Bigg(|t-f(x)|\Bigg|\frac{1}{\sqrt {{{\tilde K}}f(x)}}-\frac{1}{\sqrt {V_n(x)h^{-k}}}\Bigg|+\sqrt {\frac{h^k}{V_n(x)}}|Ef_n(x)-f(x)|\Bigg)\\
& \leq & \Bigg( \frac{(\log n)^{\beta}}{\sqrt {nh^k}} \sqrt {\frac{|{{\tilde K}}f(x)-V_n(x)h^{-k}|}{{{\tilde K}}f(x)V_n(x)h^{-k}}}+\sqrt {\frac{h^k}{V_n(x)}}|Ef_n(x)-f(x)|\Bigg). \ (3.4)\end{aligned}$$ It is a classical exercise to deduce from [**H1**]{}, [**H3**]{} that, since ${{\cal V}}_n^t$ is contained in $\cal C$, $$\sup_{x\in{{\cal V}}_n^t} |Ef_n(x)-f(x)|\leq c\, h^2,$$ and similarly, that $$\sup_{x\in{{\cal V}}_n^t}|{{\tilde K}}f(x)-V_n(x)h^{-k}| \leq c\, h.$$ One deduces from (3.4) and above that $$\sup_{x\in{{\cal V}}_n^t} |{{\overline t}_n}(x)-t_n(x)|\leq c\,\big(\sqrt h\, (\log n)^{\beta}+\sqrt {nh^{k+4}}\big).$$ Thus, by (3.3) and since $t\in\Theta_0(g)$, one has for all $n$ large enough : $$E_n\leq c\,(\log n)^{\beta}\big(\sqrt h\,(\log n)^{\beta}+\sqrt {nh^{k+4}}\big),$$and the latter term vanishes by assumptions on $h$, hence the lemma $\bullet$
[**Proof of Proposition 3.1.**]{} By Lemma 3.2, one only needs to prove Proposition 3.1 for all $t\in\Theta_0(g)$. Fix $t\in\Theta_0(g)$, and let $$I_n:=\int_{{{\cal V}}_n^t} {\overline {\Phi}}(t_n(x))d\lambda_g(x) \ {\rm and} \ {\overline I}_n:=\int_{{{\overline {\cal V}}}_n^t} \Phi(t_n(x))d\lambda_g(x).$$ By Lemma 3.3, the task is now to prove that $$\lim_n \sqrt {nh^k}\, I_n=\sqrt {\frac{t{{\tilde K}}}{2\pi}} \int_{\partial {{\cal L}}(t)}\frac{g}{\|\nabla f\|} d{{\cal H}}=\lim_n \sqrt {nh^k}\, {\overline I}_n.$$ We only show the first equality. One has $$I_n=\frac{1}{\sqrt {2\pi{{\tilde K}}}} \int_{{{\cal V}}_n^t}
\int_{b_n(x)}^{\infty} \exp\Big(-\frac {u^2}{2{{\tilde K}}}\Big)du \, d\lambda_g(x),$$ where for all $x\in{I\!\!R}^k$ such that $f(x)>0$, $b_n(x)=\sqrt {nh^k}(t-f(x))/f(x)^{1/2}$. By Fubini’s theorem : $$I_n=\frac{1}{\sqrt {2\pi{{\tilde K}}}}\int_0^{\infty} \exp\Big(-\frac{u^2}{2{{\tilde K}}}\Big)\lambda_g\Big(f^{-1}\Big[\max\Big(t-\frac{(\log n)^{\beta}}{\sqrt {nh^k}},\chi\Big(\frac{u}{\sqrt {nh^k}}\Big)^2\Big),t\Big]\Big)du,$$ where for all $v\geq 0$, $\chi(v)=-v/2+(1/2)\sqrt {v^2+4t}$. It is straightforward to prove the equivalence : $$u\in [0,r_n] \Leftrightarrow \chi\Big(\frac{u}{\sqrt {nh^k}}\Big)^2\geq t-\frac{(\log n)^{\beta}}{\sqrt {nh^k}},$$ where $r_n=(\log n)^{\beta}/\sqrt {t-(\log n)^{\beta}(nh^k)^{-1/2}}$, so that one can split $I_n$ into two terms, [*i.e.*]{}, $I_n=I_n^1+I_n^2$, where $$\begin{aligned}
I_n^1 & = & \frac{1}{\sqrt {2\pi{{\tilde K}}}} \int_0^{r_n} \exp\Big(-\frac{u^2}{2{{\tilde K}}}\Big) \lambda_g\Big(f^{-1}\Big[\chi\Big(\frac{u}{\sqrt {nh^k}}\Big)^2,t\Big]\Big)du \\
{\rm and} \ I_n^2 & = & \frac{1}{\sqrt {2\pi{{\tilde K}}}} \int_{r_n}^{\infty} \exp\Big(-\frac{u^2}{2{{\tilde K}}}\Big) \lambda_g\Big( f^{-1}\Big[t-\frac{(\log n)^{\beta}}{\sqrt {nh^k}},t\Big]\Big)du.\end{aligned}$$ Since $t\in\Theta_0(g)$, one has for all $n$ large enough : $$\sqrt {nh^k}\, I_n^2\leq c\,(\log n)^{\beta} \int_{r_n}^{\infty} \exp\Big(-\frac{u^2}{2{{\tilde K}}}\Big)du, \qquad (3.5)$$ and the rightmost term vanishes. Thus, it remains to compute the limit of $\sqrt {nh^k} I_n^1$. Using an expansion of $\chi$ in a neighborhood of the origin, we get $$\lim_n \sqrt {nh^k}\, \lambda_g\Big(f^{-1} \Big[\chi\Big(\frac{u}{\sqrt {nh^k}}\Big)^2,t\Big]\Big)=u\sqrt t \int_{\partial {{\cal L}}(t)} \frac{g}{\|\nabla f\|} d{{\cal H}}, \qquad (3.6)$$ for all $u\geq 0$, since $t\in\Theta_0(g)$. Moreover, one deduces from Lemma 3.2 that for all $n$ large enough and for all $u\in [0,r_n]$ : $$\begin{aligned}
\sqrt {nh^k}\, \lambda_g\Big(
f^{-1}\Big[\chi\Big(\frac{u}{\sqrt {nh^k}}\Big)^2,t\Big]\Big) & \leq & c\sqrt {nh^k} \Big(t-\chi\Big(\frac{u}{\sqrt {nh^k}}\Big)^2\Big)\\
& \leq & c\, u, \qquad (3.7)\end{aligned}$$ because $r_n/\sqrt {nh^k}\to 0$. Thus, according to (3.5)-(3.7) and the Lebesgue theorem : $$\begin{aligned}
\lim_n \sqrt {nh^k}\, I_n & = & \lim_n \sqrt {nh^k}\, I_n^1 \\
& = & \frac{1}{\sqrt {2\pi{{\tilde K}}}} \int_0^{\infty}
\exp \Big(-\frac{u^2}{2{{\tilde K}}}\Big) u \sqrt t \int_{\partial {{\cal L}}(t)}
\frac{g}{\|\nabla f\|}d{{\cal H}}\, du\\
& = & \sqrt {\frac{t{{\tilde K}}}{2\pi}} \int_{\partial {{\cal L}}(t)} \frac{g}{\|\nabla f\|}d{{\cal H}},\end{aligned}$$ hence the proposition $\bullet$
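The last step of this computation only uses the elementary identity $\int_0^{\infty}u\exp(-u^2/(2{{\tilde K}}))du={{\tilde K}}$; the following short check, with arbitrary values for ${{\tilde K}}$, $t$ and the surface integral, confirms the resulting constant $\sqrt{t{{\tilde K}}/(2\pi)}$ numerically.

```python
# Quadrature check (illustrative constants) of the closing Gaussian integral:
# (2 pi K)^(-1/2) * [Int_0^inf u exp(-u^2/(2K)) du] * sqrt(t) * C
# equals sqrt(t K / (2 pi)) * C, since the u-integral equals K.
import numpy as np
from scipy.integrate import quad

K_tilde, t, C = 0.8, 0.3, 1.7            # K_tilde, level t, value of the surface integral
lhs, _ = quad(lambda u: u * np.exp(-u**2 / (2 * K_tilde)), 0, np.inf)
lhs *= np.sqrt(t) * C / np.sqrt(2 * np.pi * K_tilde)
rhs = np.sqrt(t * K_tilde / (2 * np.pi)) * C
print(lhs, rhs)                          # the two values should agree
```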
[**3.3. Proof of Proposition 3.2.**]{} From now on, we introduce two random variables $N_1$, $N_2$ with law ${\cal N} (0,1)$ such that $N_1,N_2,X_1,X_2,\cdots$ are independent. We let $$\sigma_n=\frac{1}{(\log n)^{2\beta}\log \log n}, \ \forall n\geq 2.$$ (As we will see later, the random variable $Z_n(x)+\sigma_n N_1$, for instance, has a density with respect to the Lebesgue measure.) For simplicity, we assume in the following that under [**H3**]{}, the support of $K$ is contained in the Euclidean unit ball of ${I\!\!R}^k$.
[**Lemma 3.4.**]{} [*Let $g\, :\, {I\!\!R}^k\to{I\!\!R}_+$ be a borel bounded function and assume that [**H2**]{} holds. If $nh^k/(\log n)^{2\beta}\to\infty$, then for all $t\in\Theta_0(g)$ there exists $c>0$ such that for $n$ large enough : $$\begin{aligned}
& & \int_{{{\cal V}}_n^t} P\Big(\Big\{ Z_n(x)\geq {{\overline t}_n}(x)\Big\}\Delta\Big\{Z_n(x)+\sigma_n N_1\geq {{\overline t}_n}(x)\Big\}\Big) d\lambda_g(x)\leq c\, w_n; \\
& {\rm and} & \int_{{{\overline {\cal V}}}_n^t} P\Big(\Big\{ Z_n(x)< {{\overline t}_n}(x)\Big\}\Delta\Big\{Z_n(x)+\sigma_n N_1< {{\overline t}_n}(x)\Big\}\Big) d\lambda_g(x)\leq c\, w_n,\end{aligned}$$ where $w_n=(\log n)^{\beta}/(nh^k)+\sigma_n (\log n)^{\beta}/\sqrt {nh^k}$.*]{}
[**Proof.**]{} We only prove the first inequality. Let $t\in\Theta_0(g)$ and $$P_n := \int_{{{\cal V}}_n^t} P\Big(\Big\{ Z_n(x)\geq {{\overline t}_n}(x)\Big\}\Delta\Big\{Z_n(x)+\sigma_n N_1\geq {{\overline t}_n}(x)\Big\}\Big) d\lambda_g(x).$$ By independence of $N_1$ and $Z_n(x)$, $P_n$ is smaller than $$\int_{{{\cal V}}_n^t} \int \exp\Big(-\frac{z^2}{2}\Big) P\Big(\Big\{ Z_n(x)\geq {{\overline t}_n}(x)\Big\}\Delta\Big\{Z_n(x)+\sigma_n z\geq {{\overline t}_n}(x)\Big\}\Big) dz\, d\lambda_g(x),$$ and consequently, $$P_n\leq \int_{{{\cal V}}_n^t} \int\exp\Big(-\frac{z^2}{2}\Big) P\Big(|Z_n(x)-{{\overline t}_n}(x)|\leq \sigma_n |z|\Big) dz\, d\lambda_g(x).$$ Since $t\in\Theta_0(g)$, one deduces from Lemma 3.1 that for $n$ large enough : $$\begin{aligned}
P_n & \leq & c\, \frac{\lambda_g({{\cal V}}_n^t)}{\sqrt {nh^k}}+\int_{{{\cal V}}_n^t} \int \exp\Big(-\frac{z^2}{2}\Big) P\Big(|N_1-{{\overline t}_n}(x)|\leq \sigma_n|z|\Big) dz\, d\lambda_g(x)\\
& \leq & c\,\Big( \frac{(\log n)^{\beta}}{nh^k}+\frac{\sigma_n (\log n)^{\beta}}{\sqrt {nh^k}}\Big),\end{aligned}$$ hence the lemma $\bullet$
[**Lemma 3.5.**]{} [*Fix $t\in\Theta$ and assume that [**H1**]{}, [**H3**]{} hold. Then, there exists a polynomial function $Q$ of degree 5 defined on ${I\!\!R}^2$ such that for all $(u_1,u_2)\in{I\!\!R}^2$ and $n$ large enough : $$\Big|E\exp\Big(i\Big(u_1 Z_n(x)+u_2Z_n(y)\Big)\Big)-E\exp\Big(iu_1Z_n(x)\Big)E\exp\Big(iu_2Z_n(y)\Big)\Big|$$ $$\leq \frac{Q(|u_1|,|u_2|)}{\sqrt {nh^k}},$$ if $x,y\in{{\cal V}}_n^t\cup{{\overline {\cal V}}}_n^t$ are such that $\|x-y\|\geq 2h$.* ]{}
[**Proof.**]{} First of all, fix $u_1,u_2\in{I\!\!R}$, $x,y\in{{\cal V}}_n^t\cup{{\overline {\cal V}}}_n^t$ and consider the following quantities : $$\begin{aligned}
M_1 & := & \frac{u_1}{\sqrt {nV_n(x)}}\Big[K\Big(\frac{x-X}{h}\Big)-E K\Big(\frac{x-X}{h}\Big)\Big]\\
{\rm and} \ M_2 & := & \frac{u_2}{\sqrt {nV_n(y)}}\Big[K\Big(\frac{y-X}{h}\Big)-E K\Big(\frac{y-X}{h}\Big)\Big].\end{aligned}$$ One deduces from the inequality $|\exp(iw)-1-iw+w^2/2|\leq |w|$ $\forall w\in{I\!\!R}$ that $$\Big|E\exp\Big(i\Big(M_1+M_2\Big)\Big)-1+\frac{1}{2}E(M_1+M_2)^2\Big|$$ $$= \Big|E\Big[\exp\Big(i\Big(M_1+M_2\Big)\Big)-1-i(M_1+M_2)+\frac{1}{2}(M_1+M_2)^2\Big]\Big|\leq E|M_1+M_2|^3.$$ In a similar fashion, if $j=1$ or $j=2$ : $$\Big|E\exp(iM_j)-1+\frac{1}{2}EM_j^2\Big|=\Big|E\Big[\exp(iM_j)-1-iM_j+\frac{1}{2} M_j^2\Big]\Big|\leq E|M_j|^3.$$ Consequently, $$\begin{aligned}
& & \Big|E\exp\Big(i\Big(M_1+M_2\Big)\Big)-E\exp\Big(iM_1\Big)E\exp\Big(iM_2\Big)\Big|\\
& \leq & E|M_1+M_2|^3+\Big|\Big(1-\frac{1}{2}E|M_1+M_2|^2\Big)-\Big(1-\frac{1}{2}EM_1^2\Big)\Big(1-\frac{1}{2}EM_2^2\Big)\Big|\\
& & + \Big|1-\frac{1}{2}EM_1^2\Big|E|M_2|^3+ \Big|1-\frac{1}{2}EM_2^2\Big|E|M_1|^3. \qquad (3.8)\end{aligned}$$ It is an easy exercise to prove that for all $n$ large enough, one has $\inf V_n(x)\geq ch^k$, the infimum being taken over all $x\in{{\cal V}}_n^t\cup{{\overline {\cal V}}}_n^t$. Consequently, if $j=1$ or $j=2$ : $$E|M_j|^3\leq c\,\frac{|u_j|^3}{\sqrt {n^3h^k}},$$ from which we deduce that : $$E|M_1+M_2|^3\leq c\,\frac{|u_1|^3+|u_2|^3}{\sqrt {n^3h^k}}.$$ Moreover, $EM_1^2=u_1^2/n$, $EM_2^2=u_2^2/n$ and for all $x,y\in{{\cal V}}_n^t\cup{{\overline {\cal V}}}_n^t$ such that $\|x-y\|\geq 2h$ : $$E(M_1+M_2)^2=EM_1^2+EM_2^2-\frac{u_1u_2}{n\sqrt {V_n(x)V_n(y)}} EK\Big(\frac{x-X}{h}\Big)EK\Big(\frac{y-X}{h}\Big),$$ because the support of $K$ is contained in the unit ball and hence $$EK\Big(\frac{x-X}{h}\Big)K\Big(\frac{y-X}{h}\Big)=0.$$ One deduces from above and (3.8) that for all $x,y\in{{\cal V}}_n^t\cup{{\overline {\cal V}}}_n^t$ such that $\|x-y\|\geq 2h$ : $$\begin{aligned}
& & \Big|E\exp\Big(i\Big(M_1+M_2\Big)\Big)-E\exp\Big(iM_1\Big)E\exp\Big(iM_2\Big)\Big|\\
& \leq & c\,\frac{|u_1|^3+|u_2|^3}{\sqrt {n^3h^k}}+\frac{(u_1u_2)^2}{n^2}+c\,\frac{|u_2|^3(1+u_1^2)+|u_1|^3(1+u_2^2)}{\sqrt {n^3h^k}}+c\,\frac{|u_1u_2|h^k}{n}.\end{aligned}$$ By assumption, $nh^{3k}\to 0$ so that for $n$ large enough : $h^k\leq 1/\sqrt {nh^k}$. Consequently, $$\Big|E\exp\Big(i\Big(M_1+M_2\Big)\Big)-E\exp\Big(iM_1\Big)E\exp\Big(iM_2\Big)\Big|\leq \frac{Q(|u_1|,|u_2|)}{n\sqrt {nh^k}},$$ where $Q$ is defined for all $u_1,u_2\in{I\!\!R}$ by : $$Q(u_1,u_2)=c\big(u_1^3+u_2^3+(u_1u_2)^2+u_1u_2+u_1^2u_2^3+u_1^3u_2^2\big).$$ Consequently, for all $u_1,u_2\in{I\!\!R}$ and $x,y\in{{\cal V}}_n^t\cup{{\overline {\cal V}}}_n^t$ such that $\|x-y\|\geq 2h$ : $$\begin{aligned}
& & \Big|E\exp\Big(i\Big(u_1Z_n(x)+u_2Z_n(y)\Big)\Big)-E\exp\Big(iu_1Z_n(x)\Big)E\exp\Big(iu_2Z_n(y)\Big)\Big|\\
& = & \Big|\Big(E\exp\Big(i\Big(M_1+M_2\Big)\Big)\Big)^n-\Big(E\exp\Big(iM_1\Big)E\exp\Big(iM_2\Big)\Big)^n\Big|\\
& \leq & n\Big|E\exp\Big(i\Big(M_1+M_2\Big)\Big)-E\exp\Big(iM_1\Big)E\exp\Big(iM_2\Big)\Big|\\
& \leq & \frac{Q(|u_1|,|u_2|)}{\sqrt {nh^k}},\end{aligned}$$ hence the lemma $\bullet$
In the following, $uv$ stands for the usual scalar product of $u,v\in{I\!\!R}^2$.
[**Lemma 3.6.**]{} [*Let $x,y\in{I\!\!R}^k$ be such that $V_n(x)V_n(y)\neq 0$. Then, the bivariate random variable $$\left( \begin{array}{clcr} Z_n(x)+\sigma_n N_1 \\ Z_n(y)+\sigma_n N_2 \\ \end{array} \right)$$ has a density $\varphi_n^{x,y}$ defined for all $u\in{I\!\!R}^2$ by $$\varphi_n^{x,y}(u)=\frac{1}{4\pi^2} \int E\Big[\exp\Big(i\Big(v_1Z_n(x)+v_2Z_n(y)\Big)\Big)\Big]\exp\Big(-i\, uv-\frac{1}{2}\sigma_n^2\|v\|^2\Big)dv.$$* ]{}
[**Proof.**]{} By independence of $X_1,\cdots,X_n,N_1$ and $N_2$, the random variable $$\left( \begin{array}{clcr} Z_n(x)\\ Z_n(y)\\ \end{array} \right) +
\sigma_n \left( \begin{array}{clcr} N_1\\ N_2\\ \end{array} \right)$$ has a density $\varphi_n^{x,y}$ defined for all $u=(u_1,u_2)\in{I\!\!R}^2$ by $$\varphi_n^{x,y}(u) = \frac{1}{2\pi\sigma_n^2} E\Big[\exp\Big(-\frac{(u_1-Z_n(x))^2}{2\sigma_n^2}\Big)\exp\Big(-\frac{(u_2-Z_n(y))^2}{2\sigma_n^2}\Big)\Big].$$ Using the equality $$\frac{1}{\sqrt {2\pi\sigma_n^2}}\exp\Big(-\frac{z^2}{2\sigma_n^2}\Big)=\frac{1}{2\pi} \int \exp\Big(-izw-\frac{1}{2}\sigma_n^2w^2\Big)dw \ \forall z\in{I\!\!R},$$ we deduce from the Fubini theorem that $$\varphi_n^{x,y}(u) = \frac{1}{4\pi^2} \int E\Big[\exp\Big(i\Big(v_1Z_n(x)+v_2Z_n(y)\Big)\Big)\Big] \exp\Big(-iuv-\frac{1}{2}\sigma_n^2\|v\|^2\Big)dv,$$ hence the lemma $\bullet$
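The Gaussian Fourier inversion identity used in the middle of this proof is easy to check numerically; the values of $\sigma$ and $z$ below are arbitrary.

```python
# Numerical check of the identity
#   (2 pi sigma^2)^(-1/2) exp(-z^2/(2 sigma^2))
#     = (2 pi)^(-1) Int exp(-i z w - sigma^2 w^2 / 2) dw.
# The imaginary part of the integrand integrates to zero by symmetry, so only
# the cos(z w) part is kept in the quadrature.
import numpy as np
from scipy.integrate import quad

sigma, z = 0.3, 0.7                      # illustrative values
lhs = np.exp(-z**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
rhs, _ = quad(lambda w: np.cos(z * w) * np.exp(-0.5 * sigma**2 * w**2) / (2 * np.pi),
              -np.inf, np.inf)
print(lhs, rhs)                          # the two values should agree
```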
[**Proof of Proposition 3.2.**]{} We only prove the first equality of Proposition 3.2. According to Lemma 3.2, one only needs to prove the result for each $t\in\Theta_0(g)$. Hence we fix $t\in\Theta_0(g)$ and we put : $$A_n(x)=\Big\{Z_n(x)\geq {{\overline t}_n}(x)\Big\}, \ A_n^j(x)=\Big\{Z_n(x)+\sigma_nN_j\geq {{\overline t}_n}(x)\Big\}, \ j=1,2,$$ for all $x\in{I\!\!R}^k$ such that $V_n(x)\neq 0$. First note that since the events $A_n(x)$ and $\{f_n(x)\geq t\}$ are equal, one has $$\begin{aligned}
& &{{\rm var}}\Big[\lambda_g\Big({{\cal V}}_n^t\cap{{\cal L}}_n(t)\Big)\Big]\\
& = & \int_{({{\cal V}}_n^t)^{\times 2}} \Big(P(A_n(x)\cap A_n(y))-P(A_n(x))P(A_n(y))\Big)
d\lambda_g^{\otimes 2}(x,y). \quad (3.9)\end{aligned}$$ But, by Lemma 3.4 and since $t\in\Theta_0(g)$, one has for all $n$ large enough : $$\begin{aligned}
& & nh^k \int_{({{\cal V}}_n^t)^{\times 2}} \Big(P(A_n(x)\cap A_n(y))-P(A_n^1(x)\cap A_n^2(y))\Big)d\lambda_g^{\otimes 2}(x,y)\\
& \leq & 2nh^k \lambda_g({{\cal V}}_n^t)\int_{{{\cal V}}_n^t} P(A_n(x)\Delta A_n^1(x)) d\lambda_g(x)\\
& \leq & c\,(\log n)^{\beta} \sqrt {nh^k} \Big( \frac{(\log n)^{\beta}}{nh^k}+\frac{\sigma_n(\log n)^{\beta}}{\sqrt {nh^k}}\Big)\\
& \leq & c\,\Big( \frac{(\log n)^{2\beta}}{\sqrt {nh^k}}+\sigma_n (\log n)^{2\beta}\Big),\end{aligned}$$ and the latter term tends to 0 by assumption. In a similar fashion, one can prove that $$nh^k \int_{({{\cal V}}_n^t)^{\times 2}} \Big(P(A_n(x))P(A_n(y))-P(A_n^1(x))P(A_n^2(y))\Big)d\lambda_g^{\otimes 2}(x,y)\to 0.$$ By the above results and (3.9), it remains to show that $$nh^k\int_{({{\cal V}}_n^t)^{\times 2}} \Big( P(A_n^1(x)\cap A_n^2(y))-P(A_n^1(x))P(A_n^2(y)) \Big)d\lambda_g^{\otimes 2}(x,y) \to 0.\qquad (3.10)$$ Let $T(h)=\{(x,y)\in({I\!\!R}^k)^{\times 2} :\, \|x-y\|\leq 2h\}$. According to the Fubini theorem, $$\begin{aligned}
nh^k \lambda_g^{\otimes 2}\Big(({{\cal V}}_n^t)^{\times 2}\cap T(h)\Big) & = & nh^k \int_{{{\cal V}}_n^t} \lambda_g\Big({{\cal V}}_n^t\cap B(x,2h)\Big)d\lambda_g(x)\\
& \leq & nh^k \int_{{{\cal V}}_n^t} \lambda_g(B(x,2h))d\lambda_g(x),\end{aligned}$$ where $B(z,r)$ stands for the euclidean closed ball with center at $z\in{I\!\!R}^k$ and radius $r>0$. Since $t\in\Theta_0(g)$, one deduces that $$\begin{aligned}
nh^k \lambda_g^{\otimes 2}\Big(({{\cal V}}_n^t)^{\times 2}\cap T(h)\Big) & \leq & c\, n h^k \frac{(\log n)^{\beta}}{\sqrt {nh^k}} h^k\\
& \leq & c\,\sqrt {nh^{3k} (\log n)^{2\beta}},\end{aligned}$$ so that, by assumption on the bandwidth $h$ : $$\lim_n nh^k \lambda_g^{\otimes 2}\Big(({{\cal V}}_n^t)^{\times 2}\cap T(h)\Big)=0.$$ Let now ${\cal S}_n=({{\cal V}}_n^t)^{\times 2}\cap T(h)^c$. According to (3.10) and the above result, one only needs now to prove that : $$nh^k\int_{{\cal S}_n}\Big(P(A_n^1(x)\cap A_n^2(y))-P(A_n^1(x))P(A_n^2(y))\Big)d\lambda_g^{\otimes 2}(x,y)\to 0.\quad (3.11)$$ By Lemmas 3.5 and 3.6, one has for all $x,y\in{\cal S}_n$: $$\begin{aligned}
& & \Big|P(A_n^1(x)\cap A_n^2(y))-P(A_n^1(x))P(A_n^2(y))\Big| \\
& \leq & \int \Big|E\exp\Big(i\Big(u_1 Z_n(x)+u_2Z_n(y)\Big)\Big)\\
& & \qquad -E\exp\Big(iu_1Z_n(x)\Big)E\exp\Big(iu_2Z_n(y)\Big)\Big|
\exp\Big(-\frac{1}{2}\sigma_n^2\|u\|^2\Big)du_1du_2\\
& \leq & \frac{1}{\sqrt {nh^k}}\int Q(|u_1|,|u_2|)\exp\Big(-\frac{1}{2}\sigma_n^2\|u\|^2\Big)du_1du_2\\
& \leq & \frac{c}{\sigma_n^7 \sqrt {nh^k}},\end{aligned}$$ where $Q$ is the polynomial function defined in Lemma 3.5. Consequently, one has for all $n$ large enough : $$\begin{aligned}
& & nh^k\int_{{\cal S}_n}\Big(P(A_n^1(x)\cap A_n^2(y))-P(A_n^1(x))P(A_n^2(y))\Big)d\lambda_g^{\otimes 2}(x,y)\\
& \leq & c\, \frac{\sqrt {nh^k}}{\sigma_n^7} \lambda_g^{\otimes 2}({\cal S}_n)\\
& \leq & c\,\frac{\sqrt {nh^k}}{\sigma_n^7} \lambda_g({{\cal V}}_n^t)^2\\
& \leq & c\,\frac{(\log n)^{2\beta}}{\sigma_n^7\sqrt {nh^k}},\end{aligned}$$ which tends to 0 by assumption, hence (3.11) $\bullet$
[**4. Proof of Corollary 2.1.**]{}
[**Lemma 4.1.**]{} [*Let $k\geq 2$ and assume that [**H1**]{}-[**H3**]{} hold. If $nh^{k+4}(\log n)^2\to 0$ and $nh^k/(\log n)^{16}\to\infty$, then for a.e. $t\in\Theta$ : $$\sqrt {nh^k} \Big( \lambda_{f_n}({{\cal L}}(t))-\lambda_{f_n}({{\cal L}}_n(t))\Big){\stackrel{\rm P}{\to}}0.$$* ]{}
[**Proof.**]{} Let $t\in\Theta$ be such that the conclusion of Theorem 2.1 holds both for $g\equiv f$ and $g\equiv 1$. Notice that $$\begin{aligned}
\lambda_{f_n}({{\cal L}}(t))-\lambda_{f_n}({{\cal L}}_n(t)) & = & \int f_n\Big({\bf 1}_{\{f\geq t\}}-{\bf 1}_{\{f_n\geq t\}}\Big)d\lambda\\
& = & \int_{{{\cal L}}(t)} f_n {\bf 1}_{\{f_n<t\}}d\lambda-\int_{{{\cal L}}(t)^{c}} f_n {\bf 1}_{\{f_n\geq t\}}d\lambda.\end{aligned}$$ As in the proof of Theorem 2.1, we see that the result of the lemma will hold if we show that $\sqrt {nh^k} K_n{\stackrel{\rm P}{\to}}0$, where $$K_n := \int_{{{\overline {\cal V}}}_n^t} f_n{\bf 1}_{\{f_n<t\}}d\lambda-\int_{{{\cal V}}_n^t} f_n{\bf 1}_{\{f_n\geq t\}}d\lambda.$$ Split $K_n$ into four terms as follows : $$\begin{aligned}
K_n & = &\int_{{{\overline {\cal V}}}_n^t} (f_n-f){\bf 1}_{\{f_n<t\}}d\lambda-\int_{{{\cal V}}_n^t} (f_n-f){\bf 1}_{\{f_n\geq t\}}d\lambda\\
& & +\int_{{{\overline {\cal V}}}_n^t}{\bf 1}_{\{f_n<t\}}d\lambda_f-\int_{{{\cal V}}_n^t}{\bf 1}_{\{f_n\geq t\}}d\lambda_f. \qquad (4.1)\end{aligned}$$ On one hand, it is a classical exercise to deduce from [**H1**]{}, [**H3**]{} that $$\sup_{{{\overline {\cal V}}}_n^t} |f_n-f|{\stackrel{\rm P}{\to}}0.$$ Thus, using (3.2), $$\sqrt {nh^k} \int_{{{\overline {\cal V}}}_n^t} (f_n-f){\bf 1}_{\{f_n<t\}}d\lambda {\stackrel{\rm P}{\to}}0.$$ In a similar fashion : $$\sqrt {nh^k} \int_{{{\cal V}}_n^t} (f_n-f){\bf 1}_{\{f_n\geq t\}}d\lambda {\stackrel{\rm P}{\to}}0.$$ On the other hand, we get from (3.2) that : $$\lim_n \sqrt {nh^k} \int_{{{\cal V}}_n^t} {\bf 1}_{\{f_n\geq t\}}d\lambda_f=\lim_n \sqrt {nh^k} \int_{{{\overline {\cal V}}}_n^t} {\bf 1}_{\{f_n < t\}}d\lambda_f,$$ where the limits are in probability. By the above results and (4.1), $\sqrt {nh^k}K_n$ tends to 0 in probability, hence the lemma $\bullet$
[**Lemma 4.2.**]{} [*Let $k\geq 2$, $t\in\Theta$ and assume that [**H1**]{}, [**H3**]{} hold. If $nh^{k+4}\to 0$, then : $$\sqrt {nh^k} \Big( \lambda_f({{\cal L}}(t))-\lambda_{f_n}({{\cal L}}(t))\Big){\stackrel{\rm P}{\to}}0.$$* ]{}
[**Proof.**]{} Observe that $$\lambda_f({{\cal L}}(t))-\lambda_{f_n}({{\cal L}}(t))=\int_{{{\cal L}}(t)} (f-Ef_n)d\lambda+\int_{{{\cal L}}(t)}(Ef_n-f_n)d\lambda.$$ According to [**H1**]{}, [**H3**]{}, we have : $$\int_{{{\cal L}}(t)} |f-Ef_n|d\lambda\leq ch^2,$$ and since $nh^{k+4}\to 0$, we only need to prove that $$\sqrt {nh^k} \int_{{{\cal L}}(t)} (Ef_n-f_n)d\lambda{\stackrel{\rm P}{\to}}0.$$ We prove that this convergence holds in quadratic mean. We have : $$\begin{aligned}
E\Big(\sqrt {nh^k} \int_{{{\cal L}}(t)} (Ef_n-f_n)d\lambda\Big)^2
& \leq & \frac{1}{h^k} E\Big( \int_{{{\cal L}}(t)} K\Big(\frac{x-X}{h}\Big)dx\Big)^2\\
& \leq & \frac{1}{h^k} \int_{{{\cal L}}(t)^{\times 2}} EK\Big(\frac{x-X}{h}\Big)K\Big(\frac{y-X}{h}\Big) dxdy.\end{aligned}$$ Recall that we assume in Section 3.3 that the support of $K$ is contained in the unit ball so that if $\|x-y\|\geq 2h$, $$EK\Big(\frac{x-X}{h}\Big)K\Big(\frac{y-X}{h}\Big)=0.$$ Letting $R(h)=\{(x,y)\in{{\cal L}}(t)^{\times 2} :\, \|x-y\|\leq 2h\}$, one deduces from above that $$\begin{aligned}
E\Big(\sqrt {nh^k} \int_{{{\cal L}}(t)} (Ef_n-f_n)d\lambda\Big)^2 & \leq &
\frac{c}{h^k} \int_{R(h)} \int K\Big(\frac{x-u}{h}\Big)f(u)dudxdy\\
& \leq & c\int_{R(h)} \int K(v)f(x-hv)dvdxdy\\
& \leq & c\, \lambda^{\otimes 2} (R(h))\\
& \leq & c \int_{{{\cal L}}(t)} \lambda\Big({{\cal L}}(t)\cap B(x,2h)\Big)dx,\end{aligned}$$ according to the Fubini theorem. Thus, we get : $$E\Big(\sqrt {nh^k} \int_{{{\cal L}}(t)} (Ef_n-f_n)d\lambda\Big)^2 \leq ch^k,$$ hence the lemma $\bullet$
[**Lemma 4.3.**]{} [*Let $p\in [0,1]$ and assume that [**H1**]{}, [**H3**]{} and [**H4**]{} hold. If $nh^k/\log n\to \infty$, then $t_n^{(p)}\to t^{(p)}$ a.s.*]{}
[**Proof.**]{} Let $t=t^{(p)}$ and $t_n=t_n^{(p)}$. As seen in the proof of Theorem 2.1, $\sup_{{I\!\!R}^k} |f_n-f|\to 0$ a.s. Hence, one can fix $$\omega\in\Big\{\sup_{{I\!\!R}^k}|f_n-f|\to 0\Big\}.$$ For notational convenience, we omit $\omega$ until the end of this proof. Since $f$ is bounded, one has $\sup_n\sup_{{I\!\!R}^k} f_n<\infty$ and consequently $\sup_n t_n<\infty$. Thus, from each sequence of integers, one can extract a subsequence $(n_k)_k$ such that $t_{n_k}\to t^*$. On one hand, according to Scheffé’s theorem, $$\lim_n \Big(\lambda_{f_{n_k}}({{\cal L}}_{n_k}(t_{n_k}))-\lambda_f({{\cal L}}_{n_k}(t_{n_k}))\Big)=0, \qquad (4.2)$$ since both $f$ and $f_{n_k}$ are density functions on ${I\!\!R}^k$ and $$\Big|\lambda_{f_{n_k}}({{\cal L}}_{n_k}(t_{n_k}))-\lambda_{f}({{\cal L}}_{n_k}(t_{n_k}))\Big|\leq \int |f_{n_k}-f|d\lambda.$$ On the other hand, letting ${\varepsilon}_k=\sup_{{I\!\!R}^k}|f_{n_k}-f|$, one observes that $$\begin{aligned}
\Big|\lambda_f({{\cal L}}(t_{n_k}))-\lambda_f({{\cal L}}_{n_k}(t_{n_k}))\Big| & = &
\int f\Big| {\bf 1}_{\{f\geq t_{n_k}\}}-{\bf 1}_{\{ f_{n_k}\geq t_{n_k}\}}\Big| d\lambda\\
& \leq & \int f {\bf 1}_{\{t_{n_k}-{\varepsilon}_k\leq f\leq t_{n_k}+{\varepsilon}_k\}}d\lambda\\
& \leq & c\,\lambda\Big( f^{-1}([t_{n_k}-{\varepsilon}_k,t_{n_k}+{\varepsilon}_k]\cap (0,\sup_{{I\!\!R}^k}f])\Big),\end{aligned}$$ and the latter term tends to 0 as $k\to\infty$ under [**H4**]{} (consider separately the two cases : $t^*=0$ and $t^*>0$). One deduces from (4.2) that : $$\begin{aligned}
\lim_n \Big(\lambda_f({{\cal L}}(t))-\lambda_f({{\cal L}}(t_{n_k}))\Big) & = & \lim_n \Big(p-\lambda_f({{\cal L}}(t_{n_k}))\Big) \\
& = & \lim_n \Big( \lambda_{f_{n_k}}({{\cal L}}_{n_k}(t_{n_k}))-\lambda_f({{\cal L}}_{n_k}(t_{n_k}))\Big)\\
& & + \lim_n \Big(\lambda_f({{\cal L}}_{n_k}(t_{n_k}))-\lambda_f({{\cal L}}(t_{n_k}))\Big)\\
& = & 0. \qquad (4.3)\end{aligned}$$ Moreover, the map $s\mapsto\lambda_f({{\cal L}}(s))$ defined on $[0,\sup_{{I\!\!R}^k}f]$ is continuous according to [**H4**]{}. Consequently, one has $$\lim_n \lambda_f({{\cal L}}(t_{n_k}))=\lambda_f({{\cal L}}(t^*)),$$ and thus, by (4.3), $\lambda_f({{\cal L}}(t))=\lambda_f({{\cal L}}(t^*))$ and hence $t=t^*$ because $\cal P$ is one-to-one. One concludes that $t_n\to t$, since we have proved that from each sequence of integers, one can extract a subsequence $(n_k)_k$ such that $t_{n_k}\to t$. The lemma is proved $\bullet$
[**Lemma 4.4.**]{} [*Let $k\geq 2$ and assume that [**H1**]{}-[**H4**]{} hold. If $nh^{k+4}(\log n)^2\to 0$ and $nh^{k+2}/\log n\to \infty$, then for a.e. $p\in {\cal P}(\Theta)$ : $$\sqrt {nh^k} \int_{t_n^{(p)}}^{t^{(p)}} \int_{\partial {{\cal L}}_n(s)} \frac{1}{\|\nabla f_n\|}d{{\cal H}}\, ds{\stackrel{\rm P}{\to}}0.$$* ]{}
[**Proof.**]{} One only needs to choose $p\in{\cal P}(\Theta)$ such that the conclusion of Lemma 4.1 holds for $t^{(p)}$. For simplicity, let $t=t^{(p)}$ and $t_n=t_n^{(p)}$. It is a classical exercise to prove that since $nh^{k+2}/\log n\to \infty$ and $nh^{k+4}\to 0$, $$\|\nabla f_n\|\to \|\nabla f\| \ {\rm a.s.},$$ uniformly over the compact sets. Thus, by Lemma 4.3 and [**H2**]{}, we have a.s. and for $n$ large enough : $$\inf_{f^{-1}[\min(t_n,t),\max(t_n,t)]} \|\nabla f_n\|>0. \qquad (4.4)$$ We deduce from Proposition A that a.s. and for $n$ large enough : $$\begin{aligned}
\lambda_{f_n}({{\cal L}}_n(t_n))-\lambda_{f_n}({{\cal L}}_n(t)) & = & \int \Big( {\bf 1}_{\{f_n\geq t_n\}}-{\bf 1}_{\{f_n\geq t\}}\Big)d\lambda_{f_n}\\
& = & \int {\bf 1}_{\{t_n\leq f_n<t\}}d\lambda_{f_n}-\int {\bf 1}_{\{t\leq f_n < t_n\}}d\lambda_{f_n}\\
& = & \int_{t_n}^t \int_{\partial {{\cal L}}_n(s)} \frac{f_n}{\|\nabla f_n\|} d{{\cal H}}\, ds,\end{aligned}$$ where the latter integral is defined according to (4.4). Consequently, $$\Big|\lambda_{f_n}({{\cal L}}_n(t_n))-\lambda_{f_n}({{\cal L}}_n(t))\Big| = \int_{\min(t_n,t)}^{\max(t_n,t)} s \int_{\partial {{\cal L}}_n(s)}
\frac{1}{\|\nabla f_n\|} d{{\cal H}}\, ds.$$ By Lemma 4.3, one has a.s. and for $n$ large enough : $t_n\geq t/2$. Since $\lambda_{f_n}({{\cal L}}_n(t_n))=p=\lambda_f({{\cal L}}(t))$, one deduces that : $$\Big|\lambda_f({{\cal L}}(t))-\lambda_{f_n}({{\cal L}}_n(t))\Big|\geq \frac{t}{2} \int_{\min(t_n,t)}^{\max(t_n,t)}\int_{\partial
{{\cal L}}_n(s)} \frac{1}{\|\nabla f_n\|} d{{\cal H}}\, ds.$$ We can now conclude the proof of the lemma because $$\sqrt {nh^k} \Big|\lambda_f({{\cal L}}(t))-\lambda_{f_n}({{\cal L}}_n(t))\Big|{\stackrel{\rm P}{\to}}0,$$ by Lemmas 4.1 and 4.2 $\bullet$
[**Lemma 4.5**]{} [*Assume that [**H1**]{}-[**H3**]{} hold. If $nh^k/(\log n)^2\to\infty$, then for a.e. $p\in{\cal P}(\Theta)$ : $$\frac{\sqrt {nh^k}}{\log n}\, |t_n^{(p)}-t^{(p)}|{\stackrel{\rm P}{\to}}0.$$*]{}
[**Proof.**]{} By [**H2**]{} and the Lebesgue-Besicovitch theorem (Evans and Gariepy, \[16\], Theorem 1, Chapter I), we have for a.e. $p\in{\cal P}(\Theta)$ : $$\frac{1}{{\varepsilon}} \int_{t^{(p)}-{\varepsilon}}^{t^{(p)}} \int_{\partial {{\cal L}}(s)} \frac{f}{\|\nabla f\|} d{{\cal H}}\, ds\to \int_{\partial {{\cal L}}(t^{(p)})} \frac{f}{\|\nabla f\|} d{{\cal H}},$$ as ${\varepsilon}\searrow 0$. Thus, one only needs to prove the lemma for $p\in{\cal P}(\Theta)$ such that the above result holds. For convenience, let $t=t^{(p)}$ and $t_n=t_n^{(p)}$. It suffices to show that $$\frac{\sqrt {nh^k}}{\log n}\, |t_n^{(p)}-t^{(p)}|{\stackrel{\rm P}{\to}}0$$ on the event $A_n$ defined by $$A_n=\Big\{ \sup_{{{\cal L}}(t/2)}|f_n-f|\leq r_n\Big\},$$ where $r_n=(\log n)^{3/4}/\sqrt {nh^k}$, because $P(A_n)\to 1$ (see the proof of Theorem 2.1). According to Lemma 4.3, one has a.s. and for $n$ large enough : ${{\cal L}}(t_n)\cup{{\cal L}}_n(t_n)\subset {{\cal L}}(t/2)$ on the event $A_n$. Then, $$\begin{aligned}
|\lambda_f({{\cal L}}(t_n))-\lambda_{f_n}({{\cal L}}_n(t_n))| & = & \Big| \int_{{{\cal L}}(t_n)} fd\lambda-\int_{{{\cal L}}_n(t_n)} f_nd\lambda\Big|\\
& \leq & \int_{{{\cal L}}(t/2)} |f_n-f|d\lambda+\int f\Big|{\bf 1}_{{{\cal L}}(t_n)}-{\bf 1}_{{{\cal L}}_n(t_n)}\Big|d\lambda\\
& \leq & c\, r_n+c\,\lambda\Big({{\cal L}}(t_n)\Delta{{\cal L}}_n(t_n)\Big). \qquad (4.5)\end{aligned}$$ But, on $A_n$ : $$\lambda\Big({{\cal L}}(t_n)\Delta{{\cal L}}_n(t_n)\Big)\leq \lambda\Big(\Big\{t_n-r_n\leq f\leq t_n+r_n\Big\}\Big).$$ By [**H1**]{}, [**H2**]{}, there exists a neighborhood $V$ of $t$ such that $$\inf_{f^{-1} (V)} \|\nabla f\|>0,$$ thus, by Lemma 4.3, one has a.s. and for $n$ large enough : $$\begin{aligned}
\lambda\Big({{\cal L}}(t_n)\Delta{{\cal L}}_n(t_n)\Big) & \leq & \sup_{s\in V} \lambda\Big(\Big\{s-r_n\leq f\leq s+r_n\Big\}\Big)\\
& \leq & c\,r_n,\end{aligned}$$ where the latter inequality is a consequence of Proposition A. According to (4.5), one has on $A_n$ and for $n$ large enough : $$|\lambda_f({{\cal L}}(t_n))-\lambda_f({{\cal L}}(t))|=|\lambda_f({{\cal L}}(t_n))-\lambda_{f_n}({{\cal L}}_n(t_n))|\leq c\,r_n.$$ Observe now that by Proposition A and our choice of $t$, one has a.s. : $$\frac{\lambda_f({{\cal L}}(t_n))-\lambda_f({{\cal L}}(t))}{t_n-t} \to \int_{\partial {{\cal L}}(t)} \frac{f}{\|\nabla f\|} d{{\cal H}}\neq 0,$$ thus on $A_n$, $$|t_n-t|\leq c\, r_n,$$ for $n$ large enough, hence the lemma $\bullet$
[**Lemma 4.6.**]{} [*Assume that [**H1**]{}-[**H4**]{} hold and let $(\alpha_n)_n$ be a sequence of positive real numbers. If $\alpha_n\to 0$, $\alpha_n^2nh^k/(\log n)^2\to\infty$ and $nh^k/(\log n)^2\to\infty$, then for a.e. $p\in{\cal P}(\Theta)$ : $$\frac{1}{\alpha_n}\lambda\Big({{\cal L}}_n(t_n^{(p)})-{{\cal L}}_n(t_n^{(p)}+\alpha_n)\Big){\stackrel{\rm P}{\to}}\int_{\partial {{\cal L}}(t^{(p)})} \frac{1}{\|\nabla f\|}d{{\cal H}}.$$*]{}
[**Proof.**]{} According to Proposition A and [**H1**]{}, [**H2**]{}, [**H4**]{}, one has for a.e. $t\in\Theta$ : $$\frac{1}{{\varepsilon}}\lambda\Big({{\cal L}}(t)-{{\cal L}}(t+{\varepsilon})\Big)=\frac{1}{{\varepsilon}}\lambda\Big(\Big\{t\leq f\leq t+{\varepsilon}\Big\}\Big)\to \int_{\partial {{\cal L}}(t)} \frac{1}{\|\nabla f\|}d{{\cal H}},$$ as ${\varepsilon}\searrow 0$. Hence, it suffices to prove the lemma for all $p\in{\cal P}(\Theta)$ such that the above result holds with $t=t^{(p)}$. For convenience, let $t=t^{(p)}$ and $t_n=t_n^{(p)}$. By Lemma 4.5, one only needs to prove that $$\frac{1}{\alpha_n}\lambda\Big({{\cal L}}_n(t_n)-{{\cal L}}_n(t_n+\alpha_n)\Big)=\frac{1}{\alpha_n}\lambda\Big(\Big\{t_n\leq f_n < t_n+\alpha_n\Big\}\Big){\stackrel{\rm P}{\to}}\int_{\partial {{\cal L}}(t^{(p)})} \frac{1}{\|\nabla f\|}d{{\cal H}},$$ on the event $B_n$ defined by $$B_n=\Big\{\sup_{{{\cal L}}(t/2)}|f_n-f|\leq v_n, \ |t_n-t|\leq v_n\Big\},$$ where $v_n=\log n/\sqrt {nh^k}$, because $P(B_n)\to 1$. But, for $n$ large enough, one has ${{\cal L}}_n(t_n)\cup{{\cal L}}(t)\subset {{\cal L}}(t/2)$ on $B_n$. Consequently, $$\frac{1}{\alpha_n}\Big|\lambda\Big(\Big\{t_n\leq f_n<t_n+\alpha_n\Big\}\Big)-
\lambda\Big(\Big\{t\leq f\leq t+\alpha_n\Big\}\Big)\Big|$$ $$\leq \frac{1}{\alpha_n} \lambda\Big(\Big\{t-2v_n\leq f\leq t+2v_n\Big\}\Big) \leq c\,\frac{v_n}{\alpha_n},$$ and the latter term tends to 0 by assumption on $\alpha_n$. Finally, the choice of $t$ implies that $$\frac{1}{\alpha_n}\lambda\Big(\Big\{t\leq f\leq t+\alpha_n\Big\}\Big)\to \int_{\partial {{\cal L}}(t^{(p)})} \frac{1}{\|\nabla f\|}d{{\cal H}},$$ so that on $B_n$ : $$\frac{1}{\alpha_n}\lambda\Big(\Big\{t_n\leq f_n<t_n+\alpha_n\Big\}\Big){\stackrel{\rm P}{\to}}\int_{\partial {{\cal L}}(t^{(p)})} \frac{1}{\|\nabla f\|}d{{\cal H}},$$ hence the lemma $\bullet$
[**Proof of Corollary 2.1.**]{} According to Lemma 4.3, Lemma 4.6 and Theorem 2.1, one only needs to prove that for a.e. $p\in {\cal P}(\Theta)$ : $$\sqrt {nh^k}\Big[\lambda\Big({{\cal L}}_n(t_n^{(p)})\Delta {{\cal L}}(t^{(p)})\Big)-\lambda\Big({{\cal L}}_n(t^{(p)})\Delta {{\cal L}}(t^{(p)})\Big)\Big]{\stackrel{\rm P}{\to}}0.$$ Moreover, it suffices to show the above result for each $p\in {\cal P}(\Theta)$ such that the conclusion of Lemma 4.4 holds. Fix such a $p\in{\cal P}(\Theta)$ and, for simplicity, let $t=t^{(p)}$ and $t_n=t_n^{(p)}$. A straightforward computation gives the relation : $$D_n :=\lambda\Big({{\cal L}}_n(t_n)\Delta {{\cal L}}(t)\Big)-\lambda\Big({{\cal L}}_n(t)\Delta {{\cal L}}(t)\Big)=\int \Big({\bf 1}_{\{f_n\geq t_n\}}-{\bf 1}_{\{f_n\geq t\}}\Big)\eta\, d\lambda,$$ where $\eta=1-2{\bf 1}_{\{f\geq t\}}$. Then, $$D_n= \int {\bf 1}_{\{t_n\leq f_n<t\}}\eta\, d\lambda- \int {\bf 1}_{\{t\leq f_n< t_n\}}\eta\, d\lambda.$$ By (4.4) and [**H3**]{}, one can now apply Proposition A, which gives : $$D_n=\int_{t_n}^t \int_{\partial {{\cal L}}_n(s)} \frac{\eta}{\|\nabla f_n\|}d{{\cal H}}\, ds.$$ Consequently, $$|D_n|\leq \int_{\min(t_n,t)}^{\max (t_n,t)} \int_{\partial {{\cal L}}_n(s)} \frac{1}{\|\nabla f_n\|}d{{\cal H}}\, ds,$$ so that by Lemma 4.4 : $$\sqrt {nh^k} D_n=\sqrt {nh^k} \Big[\lambda\Big({{\cal L}}_n(t_n)\Delta {{\cal L}}(t)\Big)-\lambda\Big({{\cal L}}_n(t)\Delta {{\cal L}}(t)\Big)\Big]{\stackrel{\rm P}{\to}}0,$$ hence the corollary $\bullet$
[**Appendix : A change of variables formula.**]{} Proposition A below is a consequence of the change of variables formula given in Evans and Gariepy (\[16\], Chapter III, Theorem 2). For a similar proof, see also Chapter III, Proposition 3 in the same book.
[**Proposition A.**]{} [*Let $\varphi\,:\, {I\!\!R}^k\to {I\!\!R}_+$ be a continuously differentiable function such that $\varphi(x)\to 0$ as $\|x\|\to\infty$, and $I\subset {I\!\!R}_+$ be an interval such that $\inf I >0$ and $$\inf_{\varphi^{-1}(I)} \|\nabla \varphi\|>0.$$ Then, for all borel bounded function $g\, :{I\!\!R}^k\to {I\!\!R}$ : $$\int_{\varphi^{-1}(I)} gdx=\int_I \int_{\varphi^{-1}(\{s\})} \frac{g}{\|\nabla \varphi\|} d{{\cal H}}\, ds.$$*]{}
[**Proof.**]{} Notice that $\varphi$ is a locally Lipschitz function and $$g{\bf 1}_{\varphi^{-1}(I)}$$ is integrable because $\varphi^{-1}(I)$ is bounded. Proposition A is then an easy consequence of Theorem 2 in Evans and Gariepy (\[16\], Chapter III) $\bullet$
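As an illustration of Proposition A (and not as part of its proof), one can verify the formula numerically when $\varphi$ is the standard Gaussian density on ${I\!\!R}^2$ and $g\equiv 1$. For this $\varphi$, the level set $\{\varphi=s\}$ is the circle of radius $r(s)=\sqrt{-2\log(2\pi s)}$, on which $\|\nabla\varphi\|=s\,r(s)$, so the inner surface integral equals $2\pi/s$; the interval $I=[t_1,t_2]$ below is an arbitrary choice inside $(0,(2\pi)^{-1})$.

```python
# Numerical sketch of Proposition A for phi = the standard 2-d Gaussian density
# and g = 1.  Both sides should equal 2*pi*log(t2/t1).
import numpy as np
from scipy.integrate import quad

t1, t2 = 0.02, 0.10

# Left-hand side: Lebesgue measure of {t1 <= phi <= t2}, computed on a grid.
xs = np.linspace(-5, 5, 1200)
xx, yy = np.meshgrid(xs, xs)
phi = np.exp(-(xx**2 + yy**2) / 2) / (2 * np.pi)
lhs = ((phi >= t1) & (phi <= t2)).sum() * (xs[1] - xs[0])**2

# Right-hand side: integral over s of the surface integral of 1/||grad phi||,
# which equals 2*pi/s on the level circle of phi.
rhs, _ = quad(lambda s: 2 * np.pi / s, t1, t2)

print(lhs, rhs, 2 * np.pi * np.log(t2 / t1))  # all three should agree
```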
[**Acknowledgements.**]{} The author thank André Mas and Nicolas Molinari for many helpful comments.
**REFERENCES**
\[1\] J.A. Hartigan, Clustering Algorithms, Wiley, New York, 1975.
\[2\] L. Devroye and G.L. Wise, Detection of abnormal behavior via nonparametric estimation of the support, SIAM J. Appl. Math. 38 (1980) 480-488.
\[3\] U. Grenander, Abstract Inference, Wiley, New York, 1981.
\[4\] A. Cuevas, On pattern analysis in the non-convex case, Kybernetes 19 (1990) 26-33.
\[5\] A. Cuevas and R. Fraiman, Pattern analysis via nonparametric density estimation, Unpublished manuscript (1993).
\[6\] I.S. Molchanov, Empirical estimation of distribution quantiles of random closed sets, Theory Probab. Appl. 35 (1990) 594-600.
\[7\] I.S. Molchanov, A limit theorem for solutions of inequalities, Unpublished manuscript (1993).
\[8\] J.A. Hartigan, Estimation of a convex density contour in two dimensions, J. Amer. Statist. Assoc. 82 (1987) 267-270.
\[9\] D.W. Müller, The excess mass approach in statistics, Beiträge zur Statistik, Univ. Heidelberg, 1993.
\[10\] D.W. Müller and G. Sawitzki, Excess mass estimates and tests of multimodality, J. Amer. Statist. Assoc. 86 (1991) 738-746.
\[11\] D. Nolan, The excess-mass ellipsoid, J. Multivariate Anal. 39 (1991) 348-371.
\[12\] W. Polonik, Measuring mass concentration and estimating density contour clusters - an excess mass approach, Ann. Statist. 23 (1995) 855-881.
\[13\] A.B. Tsybakov, On nonparametric estimation of density level sets, Ann. Statist. 25 (1997) 948-969.
\[14\] M. Rosenblatt, Remarks on some nonparametric estimates of a density function, Ann. Math. Statist. 27 (1956) 832-837.
\[15\] D. Pollard, Convergence of Stochastic Processes, Springer, New York, 1984.
\[16\] L.C. Evans and R.F. Gariepy, Measure Theory and Fine Properties of Functions, CRC Press, Boca Raton, 1992.
\[17\] W. Feller, An Introduction to Probability Theory and Its Applications, Wiley, New York, 1992.
[^1]: cadre@math.univ-montp2.fr
|
---
author:
- Justin Lovegrove
title: Reverberation Mapping of the Optical Continua of 57 MACHO Quasars
---
Abstract
========
Autocorrelation analyses of the optical continua of 57 of the 59 MACHO quasars reveal structure at proper time lags of $544 \pm 5.2$ days with a standard deviation of 77 light days. Interpreted in the context of reverberation from elliptical outflow winds as proposed by Elvis (2000) [@E00], this implies an approximate characteristic size scale for winds in the MACHO quasars of $544 \pm 5.2$ light days. The internal structure variable of these reflecting outflow surfaces is found to be $11.87^o \pm 0.40^o$ with a standard deviation of $2.03^o$.
Introduction
============
Brightness fluctuations of the UV-optical continuum emission of quasars were recognised shortly after the initial discovery of the objects in the 1960s [@ms]. Although several programmes were undertaken to monitor these fluctuations, little is yet known about their nature or origin. A large number of these have focused on comparison of the optical variability with that in other wavebands and less on long-timescale, high temporal resolution optical monitoring. Many studies have searched for oscillations on the $\sim$ day timescale in an attempt to constrain the inner structure size (eg [@wm]). This report, however, is concerned with variability on the year timescale to evidence global quasar structure.
In a model proposed by Elvis [@E00] to unite the various spectroscopic features associated with different “types” of quasars and AGN (eg broad absorption lines, X-ray/UV warm absorbers, broad blue-shifted emission lines), the object’s outer accretion disc has a pair of bi-conical extended narrow emission line regions in the form of outflowing ionised winds. Absorption and emission lines and the so-called warm absorbers result from orientation effects in observing these outflowing winds. Supporting evidence for this is provided by a correlation between polarisation and broad absorption lines found by [@O]. Outflowing accretion disc winds are widely considered to be a strong candidate for the cause of feedback (for a discussion of the currently understood properties of feedback see [@FEA]). Several models have been developed [@p00; @p08] to simulate these winds. [@p00] discusses different launch mechanisms for the winds - specifically the balance between magnetic forces and radiation pressure - but finds no preference for one or the other, while [@p08] discusses the effect of rotation and finds that a rotating wind has a larger thermal energy flux and lower mass flux, making a strong case for these winds as the source of feedback. The outflow described by [@E00] is now usually identified with the observationally-invoked “dusty torus” around AGN [@ra].
[@mp] demonstrated for the MACHO quasars that there is no detectable lag time between the V and R variability, which can be interpreted as indicating that all of the optical continuum variability originates in the same region of the quasar.
[@ST97] observed the gravitationally lensed quasar Q0957+561 to measure the time delay between the two images and measure microlensing effects. In doing so, they found a series of autocorrelation subpeaks initially attributed to either microlensing or accretion disc structure. These results were then re-interpreted by [@S05] as Elvis’ outflowing winds at a distance of $2 \times 10^{17} cm$ from the quasar’s central compact object. A model applied by [@VA07] to the quasar Q2237+0305, to simulate microlensing, found that the optimal solution for the system was one with a central bright source and an extended structure with double the total luminosity of the central source, though the outer structure has a lower surface brightness as the luminosity is emanating from a larger source, later determined by [@slr08] to lie at $8.4 \times 10^{17} cm$.
[@slr08] continued on to argue that since magnetic fields can cause both jets and outflows, they therefore must be the dominant effect in AGN. [@lslp] however pointed out that the magnetic field required to power the observed Elvis outflows is too great to be due to the accretion disc alone. They therefore argue that all quasars and AGN have an intrinsically magnetic central compact object, which they refer to as a MECO, as proposed by [@rl07], based upon solutions of the Einstein-Maxwell equations by [@rl03]. One compelling aspect to this argument is that it predicts a power-law relationship between Elvis outflow radius and luminosity, which was found in work by Kaspi et al [@ks] and updated by Bentz et al [@b], if one assumes the source of quasar broad emission lines to be outflow winds powered by magnetic fields. The [@ks] and [@b] results were in fact empirically derived for AGN of $Z < 0.3$ and [@ks] postulates that there may be some evolution of this relation with luminosity (and indeed one might expect some time-evolution of quasar properties which may further modify this scaling relation) so generalising these results to quasars may yet prove a fallacy. The radius of the broad line region was found to scale initially by [@ks] as $R_{blr}
\propto L^{0.67}$, while [@b] found $R_{blr} \propto L^{0.52}$.
Another strength of the MECO argument is that while [@wu] found quasar properties to be uncorrelated using the current standard black hole models, [@lslp] and [@sll] found a homogeneous population of quasars using the [@rl03] model. [@p] used microlensing observations of 10 quadruply-lensed quasars, 9 of which were of known redshift including Q2237+0305, to demonstrate that standard thin accretion disc models, such as the widely-accepted Shakura-Sunyaev (S-S) disc [@ss73], underestimate the optical continuum emission region thickness by a factor of between 3 and 30, finding an average calculated thickness of $3.6 \times 10^{15} cm$, while observed values average $5.3 \times 10^{16} cm$. [@bp] found a radius of the broad line region for the Seyfert galaxy NGC 5548 of just under 13 light days when the average of several spectral line reverberations were taken, corresponding to $R_{blr} = 3.3 \times 10^{16} cm$. When the scaling of [@ks] and [@b] is taken into account, the [@bp] result is comparable to the [@slr08] and [@S05] results (assuming $\frac{L_{quasar}}{L_{seyfert}} \sim 10^4$, then [@b] would predict a quasar $R_{blr}$ of approximately $3 \times 10^{18}$). Also, given that black hole radius scales linearly with mass, as does the predicted radius of the inner edge of the accretion disc, a linear mass-$R_{blr}$ relationship might also be expected. Given calculated Seyfert galaxy black hole masses of order $10^8M_o$ and average quasar masses of order $10^9M_o$, this would scale the Seyfert galaxy $R_{blr}$ up to $3.3 \times 10^{17} cm$. While these relations are not self-consistent, either of them may be found consistent with the existing quasar structure sizes. [@r] also found structure on size scales of $10^{16} cm$ from microlensing of SDSS J1004+4112 which would then scale to $10^{18}cm$. These studies combined strongly evidence the presence of the Elvis outflow at a radial distance of approximately $10^{18} cm$ from the central source in quasars which may be detected by their reverberation of the optical continuum of the central quasar source.
The [@VA07] result is also in direct conflict with the S-S accretion disc model, which has been applied in several unsuccessful attempts to describe microlensing observations of Q2237. First a simulation by [@w] used the S-S disc to model the microlensing observations but predicted a large microlensing event that was later observed not to have occurred. [@k] then attempted to apply the S-S disc in a new simulation but another failed prediction of large-amplitude microlensing resulted. Another attempt to simulate the Q2237 light curve by [@ei] produced the same large-amplitude microlensing events. These events are an inherent property of the S-S disc model where all of the luminosity emanates from the accretion disc, hence causing it all to be lensed simultaneously. Only by separating the luminosity into multiple regions, eg two regions, one inner and one outer, as in [@VA07], can these erroneous large-amplitude microlensing events be avoided.
Previous attempts have been made to identify structure on the year timescale, including structure function analysis by Trevese et al [@t] and by Hawkins [@h96; @h06]. [@t] found strong anticorrelation on the $\sim 5$ year timescale but no finer structure; this is unsurprising as their results were an average of the results for multiple quasars taken at low temporal resolution, whereas the size scales of the Elvis outflow winds should be dependent on various quasar properties which differ depending on the launch mechanism and also should be noticed on smaller timescales than their observations were sensitive to. [@h96] also found variations on the $\sim
5$-year timescale again with poor temporal resolution but then put forth the argument that the variation was found to be redshift-independent and therefore was most likely caused by gravitational microlensing. However, [@cs] demonstrated that microlensing occurs on much shorter timescales and at much lower luminosity amplitudes than these long-term variations. [@h06] used structure function analysis to infer size scales for quasar accretion discs but again encountered the problem of too infrequent observations. In this paper and [@h07] it was argued that Fourier power spectra are of more use in the study of quasars, which were then used in [@h07] to interpret quasar variability. However, since reverberation is not expected to be periodic, Fourier techniques are not suited to its detection. Hawkins’ observations of long-timescale variability were recognised in [@slp] as a separate phenomenon from the reverberation expected from the Elvis model; this long-term variability remains as-yet unexplained. Hawkins’ work proposed this variation to be indicative of a timescale for accretion disc phenomena, while others explained it simply as red noise.
[@u] demonstrated that there is a correlation between the optical and x-ray variability in some but not all AGN, arguing that x-ray reprocessing in the accretion disc is a viable source of the observed variability, combined with viscous processes in the disc which would cause an inherent mass-timescale dependence, as in the S-S disc the temperature at a given radius is proportional to mass$^{- \frac{1}{4}}$. In this case a red-noise power spectrum would describe all quasar variability. For the purposes of this report however, the S-S disc model is regarded as being disproven by the [@VA07] and [@p] results in favour of the Elvis outflow model and so the temperature-mass relation is disregarded. A later investigation [@as] demonstrated that the correlation between X-ray and optical variability cannot be explained by simple reprocessing in an optically thick disc model with a corona around the central object. This lends further support to the assumption that the [@VA07] model of quasars is indeed viable.
[@gi] demonstrated from Fourier power spectra that shorter-duration brightness events in quasars statistically have lower amplitudes but again their temporal resolution was too low to identify reverberation on the expected timescales. Also as previously discussed, non-periodic events are extremely difficult to detect with Fourier techniques. All of these works are biased by the long-term variability recognised by [@slp] as problematic in QSO variability study. A survey by [@r04] proposed that quasars could be identified by their variability on this timescale but the discovery by [@slp] that long-timescale variation is not quite a universal property of quasars somewhat complicates this possibility. The spread of variability amplitudes is also demonstrated in [@n] for 44 quasars observed at the Wise Observatory. The [@n] observations are also of low temporal resolution and uncorrected for long-term variability thus preventing the detection of reverberation patterns.
This project therefore adopts the [@E00] and [@VA07] model of quasars consisting of a central compact, luminous source with an accretion disc and outflowing winds of ionised material with double the intrinsic luminosity of the central source. The aim is to search for said winds via reverberation mapping. Past investigations such as [@bot] have attempted reverberation mapping of quasars but have had large gaps in their data, primarily due to the fact that telescope time is usually allocated only for a few days at a time but also due to seasonal dropouts. Observations on long timescales, lacking seasonal dropouts and with frequent observations are required for this purpose, to which end the MACHO programme data have been selected. Past results would predict an average radius of the wind region of order $10^{18} cm$.
The assumption will be made that all structure on timescales in the region of $c \Delta t = 10^{18} cm$ is due to reverberation and that the reverberation process is instantaneous. The aim is simply to verify whether this simplified version of the model is consistent with observation, not to compare the model to other models.
Theoretical Background
======================
Reverberation mapping is a technique whereby structure size scales are inferred in an astrophysical object by measuring delay times from strong brightness peaks to subsequent, lower-amplitude peaks. The subsequent peaks are then assumed to be reflections (or absorptions and re-emissions) of the initial brightness feature by some external structure (eg the dusty torus in AGN, accretion flows in compact objects). Reverberation mapping may be used to study simple continuum reflection or sources of specific absorption/emission features or even sources in entirely different wavebands, by finding the lag time between a brightness peak in the continuum and a subpeak in the emission line or waveband of interest. This technique has already been successfully applied by, among many others, [@bp] and [@S05].
When looking for continuum reverberation one usually makes use of the autocorrelation function, which gives the mean amplitude, in a given light curve, at a time $t + \Delta t$ relative to the amplitude at time $t$. The amplitude of the autocorrelation is the product of the probability of a later brightening event at dt with the relative amplitude of that event, rendering it impossible to distinguish by autocorrelation alone, eg a 50% probability of a 50% brightening from a 100% probability of a 25% brightening, since both will produce the same mean brightness profile. The mathematical formulation of this function is: $$AC(\Delta t) = \frac{1}{N_{obs}} \cdot \sum _t \frac{I(t + \Delta t) \cdot I(t)}{\sigma ^2}$$ where I(t) represents the intensity at a time t relative to the mean intensity, such that for a dimming event I(t) is negative. Hence $AC(\Delta t)$ becomes negative if a brightening event at t is followed by a dimming event at $t+
\Delta t$ or if a dimming event at t is followed by a brightening event at $t+\Delta t$. $\sigma$ is the standard deviation of I; in rigorous mathematics, the $\sigma ^2$ term is in fact the product of the standard deviations for I(t) and I($t + \Delta t$) but since they are identical in this case, $\sigma ^2$ may be used. The nature of the autocorrelation calculation also has a tendency to introduce predicted brightness peaks unrelated to the phenomena of interest. Given autocorrelation peaks at lag times $t_1$ and $t_2$, a third peak will also be created at lag $t_2 - t_1$, of amplitude $\frac{A(t_2)}{A(t_1)}$, which is then divided by the number of data points in the brightness record.
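A minimal implementation of this autocorrelation, for a uniformly sampled and mean-subtracted light curve, is sketched below; the toy light curve (white noise plus a delayed, scaled copy of itself) is purely illustrative and is not MACHO data.

```python
# Sketch of the discrete autocorrelation defined above, normalised by the
# number of observations and by the variance of the mean-subtracted intensity.
import numpy as np

def autocorrelation(flux, max_lag):
    """AC(dt) = (1/N_obs) * sum_t I(t + dt) * I(t) / sigma^2, with I = flux - mean."""
    I = flux - flux.mean()
    sigma2 = I.var()
    n = len(I)
    lags = np.arange(1, max_lag + 1)
    ac = np.array([(I[:-dt] * I[dt:]).sum() / (n * sigma2) for dt in lags])
    return lags, ac

# Toy usage: white noise plus an echo of itself 50 bins later.
rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
flux = x + 0.4 * np.roll(x, 50)
lags, ac = autocorrelation(flux, 200)
print("strongest subpeak at lag:", lags[np.argmax(ac)])   # expected near 50
```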
Observational Details
=====================
The MACHO survey operated from 1992 to 1999, observing the Magellanic Clouds for gravitational lensing events by Massive Compact Halo Objects - compact objects in the galactic halo, one of the primary dark matter candidates. The programme was undertaken from the Mount Stromlo Observatory in Australia, where the Magellanic Clouds are circumpolar, giving brightness records free from seasonal dropouts. Information on the equipment used in the programme can be found at http://wwwmacho.anu.edu.au/ or in [@macho]. 59 quasars were within the field of view of this programme and as a result highly sampled light curves for all of these objects were obtained. These quasars were observed over the duration of the programme in both V and R filters. The brightness records for the 59 quasars in both V and R filters are freely available at http://www.astro.yale.edu/mgeha/MACHO/ while the entire MACHO survey data are available from http://wwwmacho.anu.edu.au/Data/MachoData.html.
Theoretical Method
==================
Throughout this study the following assumptions were made:
1. That all autocorrelation structure on the several hundred day timescale was due to reverberation from the Elvis outflows, which are also the source of the broad emission lines in quasars. We denote the distance to this region as $R_{blr}$ - the “radius of the broad line region”.
2. That the timescale for absorption and re-emission of photons was negligible compared to the light travel time from the central source to the outflow surface.
3. That the internal structure variable was less than $45^o$ for all objects as found by [@S05] and [@slr08]. Note that an internal structure variable of $\epsilon$ and inclination angle of $\theta$ cannot be distinguished by autocorrelation alone from an internal structure variable of $90^o - \epsilon$ with an inclination angle of $90^o - \theta$, as demonstrated in Fig. 1. The assumed low $\epsilon$ is further justified by the fact that the [@E00] model predicts the winds to be projected at an angle of $30^o$ from the accretion disc plane, constraining $\epsilon \leq 30^o$ as the reverberating surfaces must lie at lower inclinations if they are in fact part of the outflow structure.
4. That extinction was negligible for all objects. Reliable extinction maps of the Magellanic Clouds are not available and information about the quasars’ host galaxies is also unavailable, so extinction calculations are not possible. This assumption should be reasonable as any extinction would have a noticeable impact on the colour of the quasar (given the accepted relation $3.2 \cdot
E(B-V) = A_V$ where $E(B-V)$ is the colour excess and $A_V$ is the V absorption).
First a series of predictions were made for the relative luminosities of the 59 quasars in the sample using their redshifts (presented in [@mg]) and apparent magnitudes, combined with the mean quasar SED presented in [@SED]. Two estimates were produced for each object, one from the V data and one from the R data. This calculation was of interest as it would predict the expected ratios of $R_{blr}$ between the objects in our sample via the [@ks] and [@b] relations. These relations have so far only been applied to nearby AGN, so this investigation was carried out to test their universality. Firstly, the apparent magnitudes were converted to flux units via the equation $$m = -2.5 \cdot \log{f}$$ where m is the apparent magnitude and f the flux (the photometric zero point is omitted since only luminosity ratios are used below). Then the redshifts of the objects were used to calculate their distances using the tool at\
http://astronomy.swin.edu.au/~elenc/Calculators/redshift.php with $H_0 = 71$ km/s/Mpc. Using this distance and the relation $$f = \frac{L}{4 \pi d^2}$$ where L is the luminosity, f the flux and d the distance, the luminosity of each source in the observed waveband was calculated. It was recognised, however, that simply taking the ratio between these luminosities was not a fair representation of the ratio of their absolute luminosities, as cosmological redshift would cause the observed region of the quasar spectrum to shift with distance and the luminosity varies over the range of this spectrum. Hence the mean quasar SED of [@SED] was used to convert the observed luminosities of the objects to expected luminosities at a common frequency - in this case 50 GHz. These values were then divided by the minimum calculated luminosity so that their relative values are presented in Table 1. For this purpose it was assumed that the spectral shape of the quasar is independent of bolometric luminosity.
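The chain of conversions just described (apparent magnitude to flux, flux to luminosity via the distance, and a shift to a common rest-frame frequency using the mean SED) can be summarised in a short sketch. This is an illustrative reimplementation only: the `sed_ratio` function is a hypothetical stand-in for the tabulated mean quasar SED of [@SED], and the adopted effective V-band frequency is an assumption.

```python
import numpy as np

def relative_luminosities(mags, lum_dist_cm, z, sed_ratio):
    """Convert apparent magnitudes to luminosities at a common rest-frame
    frequency (50 GHz) and normalise by the smallest value in the sample."""
    mags = np.asarray(mags, float)
    z = np.asarray(z, float)
    # m = -2.5 log10(f); the zero point cancels when only ratios are used
    flux = 10.0 ** (-0.4 * mags)
    lum_band = 4.0 * np.pi * np.asarray(lum_dist_cm, float) ** 2 * flux
    # shift from the observed (redshifted) band to the common frequency
    nu_obs = 5.5e14                      # assumed effective V-band frequency [Hz]
    nu_emitted = nu_obs * (1.0 + z)
    # sed_ratio(nu) = L(50 GHz) / L(nu), a stand-in for the mean quasar SED
    lum_common = lum_band * np.array([sed_ratio(nu) for nu in nu_emitted])
    return lum_common / lum_common.min()
```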
Next a calculation was made to predict the dependence of the reverberation pattern of a quasar on its orientation to the observer’s line of sight. This was produced using the geometric equations initially presented in [@S05] but reformulated to become more general. [@S05] discusses “case 1” and “case 2” quasars with different orientations - “case 1” being where the nearest two outflow surfaces lie on the near side of the accretion disc plane and “case 2” being where the nearest two surfaces are on the near side of the rotation axis. These equations become generalised by recognising that the two cases are in fact degenerate and from reverberation alone one cannot distinguish a $13^o$ internal structure variable in “case 1” from a $77^o$ internal structure variable in “case 2”. This is also demonstrated in Fig. 1. For the purposes of Fig. 1, the [@S05] interpretation of $\epsilon$ is adopted but later it will be demonstrated that in fact there are several possible meanings of $\epsilon$, though this degeneracy is the same for all interpretations.
The generalisation is then $$t_1 = \frac{R_{blr}}{c} \cdot (1 - \cos(\theta - \epsilon))$$ $$t_2 = \frac{R_{blr}}{c} \cdot (1 - \cos(\theta + \epsilon))$$ $$t_3 = \frac{R_{blr}}{c} \cdot (1 + \cos(\theta + \epsilon))$$ $$t_4 = \frac{R_{blr}}{c} \cdot (1 + \cos(\theta - \epsilon))$$
This prediction could later be compared to the MACHO observations. A value of $\epsilon$ was not inserted until the project had produced results, so that the mean calculated value of $\epsilon$ could be inserted into the simulation for comparison with the results.
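For reference, the lag equations above can be evaluated directly; a minimal sketch (angles in degrees, $R_{blr}$ in light days so that $R_{blr}/c$ is simply a number of days) is:

```python
import numpy as np

def predicted_lags(theta_deg, eps_deg, r_blr_ld):
    """Reverberation lags t1..t4 (in days) for inclination theta and internal
    structure variable eps (both in degrees), with R_blr given in light days."""
    th, ep = np.radians(theta_deg), np.radians(eps_deg)
    t1 = r_blr_ld * (1.0 - np.cos(th - ep))
    t2 = r_blr_ld * (1.0 - np.cos(th + ep))
    t3 = r_blr_ld * (1.0 + np.cos(th + ep))
    t4 = r_blr_ld * (1.0 + np.cos(th - ep))
    return t1, t2, t3, t4

# example: values close to the sample averages found later in the paper
# (theta ~ 70 deg, eps ~ 12 deg, R_blr ~ 544 light days)
print(predicted_lags(70.0, 12.0, 544.0))
```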
In the data analysis it was quickly noticed that one of the 59 MACHO quasars - MACHO 42.860.123 - was very poorly observed by the survey, totalling only 50 observations over the programme’s seven-year lifetime. The V and R data for the remaining objects were processed in IDL to remove any data points deviating by more than 10 $\sigma$ – 5 $\sigma$ would seem a more appropriate threshold at first glance, but inspection of the data showed relevant brightness peaks between 5 and 10 $\sigma$. At the same time all null entries (nights with no observation) were also removed. The data were then interpolated to give a uniformly-spaced brightness record over time, with the number of bins to interpolate over equal to the number of observations. The case was made for using a fixed number of bins for all objects, e.g. 1000, but for some data records as few as 250 observations were made and so such an interpolation would serve only to reinforce any remaining spurious points. By the employed method, a small amount of smoothing of the data was introduced. The data then had their timescales corrected for cosmological redshift to ensure that all plots produced and any structure inferred were on proper timescales.
In the beginning of the project, to understand the behaviour of the data and appreciate the complexities and difficulties of the investigation, a list was compiled of the most highly-sampled ($> 600$ observations) quasars, yielding 30 objects. Each of these objects had their light curves and autocorrelations examined and a key feature presented itself immediately; the data showed in some cases a long-timescale ($\sim 1000$ proper days), large-amplitude ($\sim
0.8$ magnitude) variation that dominated the initial autocorrelation function. Since this variation was seen only in some quasars (and is on a longer timescale than the predicted reverberation), this signal was removed by applying a 300-day running boxcar smooth algorithm over the data, before subtracting this smoothed data (which would now be low-frequency variation dominated) from the actual brightness record. The timescale for smoothing was determined by examination of the brightness profiles of the objects, which showed deviations from this long-timescale signature on timescales below 300 days. Autocorrelation analysis of the uncorrected data also found the first autocorrelation minimum to lie before 300 days. Further, if the central brightness pulse has a duration beyond 300 days it will be extremely difficult (if even possible) to resolve the delay to the first reverberation peak, which is expected to be at a maximum lag time of several hundred days. The new autocorrelation calculations for the long-term variability-corrected data showed from visual inspection the autocorrelation patterns expected from reverberation. However, as is described in section 2, the presence of autocorrelation peaks alone does not necessarily constitute a detection of reverberation.
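A sketch of the preprocessing chain described above (outlier rejection, removal of null entries, interpolation onto a uniform grid, correction to proper time, subtraction of a 300-day running boxcar, and autocorrelation) is given below. The original analysis was performed in IDL; this Python/numpy version is only an illustrative reimplementation, and details such as the clipping statistic are assumptions.

```python
import numpy as np

def preprocess(t_days, mag, z, clip_sigma=10.0, boxcar_days=300.0):
    """Outlier rejection, uniform resampling, proper-time correction,
    long-term variability removal and autocorrelation for one light curve."""
    t_days, mag = np.asarray(t_days, float), np.asarray(mag, float)
    # drop null entries and points deviating by more than clip_sigma
    good = np.isfinite(mag)
    t_days, mag = t_days[good], mag[good]
    keep = np.abs(mag - mag.mean()) < clip_sigma * mag.std()
    t_days, mag = t_days[keep], mag[keep]
    # correct observed timescales to proper time at the quasar
    t_prop = t_days / (1.0 + z)
    order = np.argsort(t_prop)
    t_prop, mag = t_prop[order], mag[order]
    # interpolate onto a uniform grid with as many bins as observations
    grid = np.linspace(t_prop[0], t_prop[-1], len(t_prop))
    mag_u = np.interp(grid, t_prop, mag)
    # subtract a 300-day running boxcar to remove long-term variability
    dt = grid[1] - grid[0]
    win = max(int(round(boxcar_days / dt)), 1)
    trend = np.convolve(mag_u, np.ones(win) / win, mode="same")
    resid = mag_u - trend
    # autocorrelation of the detrended, mean-subtracted record
    r = resid - resid.mean()
    acf = np.correlate(r, r, mode="full")[len(r) - 1:]
    return grid, resid, acf / acf[0]
```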
To determine the reality of the autocorrelation peaks, a routine was written to identify the ten largest-amplitude brightening and fading events in each long-term variability-corrected brightness record and produce the mean brightness profiles following them - a mean brightening profile and a mean dimming profile. Inspection of these brightness profiles demonstrated that any spurious autocorrelation peaks were a small effect, easily removed by ignoring any peaks within one hundred days of a larger-amplitude peak. The justification for this is simple - if two reverberations occur within one hundred days of each other (which has a low probability of occurrence when one considers the required quasar orientation), resolving them within the brightness record or autocorrelation will become difficult, particularly for the most poorly-sampled, low-redshift objects with a time resolution of $\sim 10$ days and given that the average half-width of a central brightness peak was found to be approximately 70 days. There is also the consideration of the shape of the reverberation signal from each elliptical surface, which has not yet been calculated but may have differing asymmetry for each of the four surfaces.
Finally it was noticed that the brightening and dimming profiles do not perfectly match and in some cases seem to show structure of opposing sign on the same timescales - if a brightening event was usually followed by another brightening event at a time lag $t$, a dimming event was sometimes also followed by a brightening event at a lag of $t$. The fact that this effect is not seen in every object makes it difficult to interpret or even identify as a real effect. The result of this is that the mean brightness profile does not always perfectly match the autocorrelation, but usually demonstrates the same approximate shape. These arguments are not quantifiable and so are discarded from future investigation, which will focus on the results of an automated analysis. At this point it was also noted that the highest-redshift quasar in the survey, MACHO 208.15799.1085 at a redshift of 2.77, contained only $\sim 700$ proper days’ worth of observations. It was felt that this would not be sufficient time to observe a significant number of reverberation events, which were observed to occur on timescales of order $\sim 500$ proper days, and so the object was discarded from further analysis. Fig. 2 gives one example of the manually-produced light curves, its autocorrelation and the effect of long-term variability.
![**Plots produced in manual inspection of the data for one quasar.** Top left is the raw data, top right its autocorrelation. On the bottom left is the data remaining after subtraction of long-term variability and bottom right is the corrected data’s autocorrelation function.](fig2.eps){width="\textwidth"}
In the automatically-processed, long-term variability-corrected data, reverberation patterns were recognised in the autocorrelation by an IDL routine designed to smooth the autocorrelation on a timescale of 50 days and identify all positive peaks with lags less than 1000 days. If two or more peaks were identified within 100 days of each other, the more prominent peak was returned. As a result, we estimate the accuracy of any reverberation signatures to be approximately 50 days. Therefore a 50-day smooth will reduce the number of peaks for the program to sort through while not detracting from the results, simultaneously removing any double-peaks caused by dips in autocorrelation caused by unrelated phenomena. The 50-day resolution simply reflects the fact that the occurrence of two peaks within one hundred days cannot be identified with two separate reverberations.
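The peak-identification step can be sketched as follows; this is a Python stand-in for the IDL routine, and the prominence-based tie-breaking for peaks closer than 100 days is an assumption about how "the more prominent peak was returned" in the original implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def reverberation_peaks(acf, dt_days, smooth_days=50.0, max_lag_days=1000.0,
                        min_sep_days=100.0):
    """Smooth the ACF on ~50 days, keep positive peaks with lag < 1000 days,
    and merge peaks closer than 100 days, keeping the higher one."""
    win = max(int(round(smooth_days / dt_days)), 1)
    sm = np.convolve(acf, np.ones(win) / win, mode="same")
    idx, props = find_peaks(sm, height=0.0)            # positive peaks only
    lags = idx * dt_days
    keep = lags <= max_lag_days
    lags, heights = lags[keep], props["peak_heights"][keep]
    # merge peaks closer than min_sep_days, keeping the more prominent one
    order = np.argsort(-heights)
    selected = []
    for i in order:
        if all(abs(lags[i] - lags[j]) >= min_sep_days for j in selected):
            selected.append(i)
    return np.sort(lags[np.array(selected, dtype=int)])
```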
This technique was also applied to a ’Brownian noise’ or simple red noise simulation with a spectral slope of -2 for comparison. This was generated using a random number generator in the following way. First an array (referred to as R) of N random numbers was produced and the mean value subtracted from each number in said array. A new, blank array (S) of length N+1 was created, with values then appended as follows: $$\mbox{if } R[i] > 0 \mbox{ then } S[i + 1] = S[i] + 1$$ $$\mbox{if } R[i] < 0 \mbox{ then } S[i + 1] = S[i] - 1$$
The seeds for the random number generator were taken as the raw data for each quasar in each filter, so that 114 different simulations were produced to aid in later comparison with results. [@tk] outline a method for producing more rigorous red noise simulations of different spectral slopes but to allow more time to focus on the observations and their interpretation, only this simplified calculation was performed.
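A sketch of this random-walk red-noise generator is given below. The mapping from the raw data to an integer seed is an assumption, since the original seeding procedure is not specified in detail.

```python
import numpy as np

def red_noise_walk(n, seed=None):
    """Random-walk ('Brownian', spectral slope ~ -2) series of length n + 1."""
    rng = np.random.default_rng(seed)
    r = rng.random(n)
    r -= r.mean()                     # zero-mean random numbers
    steps = np.where(r > 0, 1.0, -1.0)
    s = np.zeros(n + 1)
    s[1:] = np.cumsum(steps)          # S[i+1] = S[i] +/- 1
    return s

def seeded_simulation(raw_mags):
    """One simulation per light curve, seeded from the raw data
    (the exact mapping to an integer seed is an assumption)."""
    seed = int(np.abs(np.asarray(raw_mags, float)).sum() * 1e3) % (2**32)
    return red_noise_walk(len(raw_mags), seed)
```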
Basic white noise simulations were also generated but produced none of the predicted autocorrelation structure and so were only followed through the early data processing stages. An example white noise autocorrelation is given in Fig. 3 with a quasar autocorrelation curve overplotted. Fig. 4 shows the light curve and autocorrelation function for a star observed by the MACHO programme compared to one of the quasars, to demonstrate the difference in autocorrelation function and to show that the observed structure is a feature of the quasars themselves and not induced by observational effects.
![**Quasar (+) and white noise (.) autocorrelation functions**](fig3.eps){width="\textwidth"}
![**Top: Quasar (+) and star data (.)** demonstrating that the long-term variability observed in the quasars is not an observational effect. **Bottom: Quasar (bold) and star autocorrelation functions** after 300-day boxcar smooths have been applied, showing that the autocorrelation structure observed in quasars is intrinsic to the quasars themselves](fig4.eps){width="\textwidth"}
Results
=======
Once the data were processed and autocorrelation peaks identified, the positions of the peaks were then used to calculate the radial distance from central source to outflow region ($R_{blr}$), the inclination angle of the observer’s line of sight from the accretion disc plane ($\theta$) and a factor termed the ’internal structure variable’ which is the angle from the accretion disc plane to the reverberating outflow regions ($\epsilon$). These are calculated from equations (4-7). These can then produce $$(4) + (5) + (6) + (7) = t_1 + t_2 + t_3 + t_4 = 4 \cdot \frac{R_{blr}}{c}$$ or $R_{blr} = c \cdot \langle t \rangle$ $$(6) - (5) = t_3 - t_2 = 2 \cdot \frac{R_{blr}}{c} \cdot \cos(\theta + \epsilon)$$ or $\theta + \epsilon = \arccos(\frac{t_3 - t_2}{2 \cdot \langle t \rangle})$ $$(7) - (4) = t_4 - t_1 = 2 \cdot \frac{R_{blr}}{c} \cdot \cos(\theta - \epsilon)$$ or $\theta - \epsilon = \arccos(\frac{t_4 - t_1}{2 \cdot \langle t \rangle})$
From (11) and (12) we then obtain $$\theta = \frac{\arccos(\frac{t_3 - t_2}{2 \cdot \langle t \rangle}) + \arccos(\frac{t_4 - t_1}{2 \cdot \langle t \rangle})}{2}$$ $$\epsilon = \frac{\arccos(\frac{t_3 - t_2}{2 \cdot \langle t \rangle}) - \arccos(\frac{t_4 - t_1}{2 \cdot \langle t \rangle})}{2}$$
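A sketch of the inversion from a measured four-peak lag pattern to $R_{blr}$, $\theta$ and $\epsilon$ (equations 10, 13 and 14), with lags in days and $R_{blr}$ returned in light days, is:

```python
import numpy as np

def invert_four_peaks(t1, t2, t3, t4):
    """Recover (R_blr in light days, theta in degrees, eps in degrees)
    from the four reverberation lags in days, following eqs. (10), (13), (14)."""
    t_mean = 0.25 * (t1 + t2 + t3 + t4)          # <t> = R_blr / c
    a = np.arccos((t3 - t2) / (2.0 * t_mean))    # theta + eps
    b = np.arccos((t4 - t1) / (2.0 * t_mean))    # theta - eps
    theta = np.degrees(0.5 * (a + b))
    eps = np.degrees(0.5 * (a - b))
    return t_mean, theta, eps

# round-trip check with lags from the forward model at theta = 70 deg,
# eps = 12 deg, R_blr = 544 light days:
print(invert_four_peaks(255.7, 468.3, 619.7, 832.3))   # ~ (544, 70, 12)
```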
Consideration of the geometry involved also shows that three special cases must be considered. First is the case where $\theta = 90^o - \epsilon$. In this case, peaks two and three are observed at the same time and so only three distinct brightness pulses are seen. In analysing the data, the simplifying assumption was made that all 3-peak autocorrelation signatures were caused by this effect. Another case to be considered is when $\theta = 90^o$, in which case pulses 1 and 2 are indistinguishable, as are pulses 3 and 4. Again, the assumption was made that all two-peak reverberation patterns were caused by this orientation effect. Notice that in these cases the equation for $R_{blr}$ is not modified, but the equations for $\epsilon$ in the 3-peak and 2-peak cases become $$\epsilon = \frac{\arcsin(\frac{t_3 - t_1}{2 \cdot \langle t \rangle})}{2}$$ $$\epsilon = \frac{\arcsin(\frac{t_2 - t_1}{2 \cdot \langle t \rangle})}{2}$$ respectively. Finally there is the case of zero inclination angle, where peaks one and two arrive at the same time as the initial brightening event, followed by peaks three and four which then arrive simultaneously. In this situation no information about the internal structure variable can be extracted. Thankfully, no such events were observed. The results calculated for the 57 studied quasars, from equations (10) and (13-16), are presented in Table 1. The redshifts presented in Table 1 are as given in [@mg]. Additionally a column has been included showing $R_{blr}$ in multiples of the minimum measured $R_{blr}$ for comparison with the relative luminosities of the quasars.
This yields an average $R_{blr}$ of 544 light days with an RMS of 74 light days and an average $\epsilon$ of $11.87^o$ with an RMS of $2.03^o$. A plot of $t_1$, $t_2$, $t_3$, $t_4$ as a function of $\theta$ for the 57 objects was then compared to the predictions of a simulation using the calculated average $\epsilon$ from these observations, for the data from both filters. This entire process was then repeated for the red noise simulations, of which 114 were produced such that the resulting plots could be directly compared. The simulations produced by the R data were treated as R data while the simulations produced by V data were treated as V data, to ensure the simulations were treated as similarly to the actual data as possible. The process was then repeated for 10 stars in the same field as the example quasar that was studied in detail, MACHO 13.5962.237. Notice that while the quasar data show trends very close to the model, the star data exhibit much stronger deviations (see Fig. 5).
[lcccccccccr]{} MACHO ID & $z$ & $N_{obs}$ (V) & V (mag) & $N_{obs}$ (R) & R (mag) & $R_{blr}$ (light days) & $\theta$ ($^o$) & $\epsilon$ ($^o$) & $R_{blr}/R_{blr,min}$ & $L/L_{min}$\
2.5873.82 & 0.46 & 959 & 17.44 & 1028 & 17.00 & 608 & 71 & 9.5 & 1.67 & 37\
5.4892.1971 & 1.58 & 958 & 18.46 & 938 & 18.12 & 560 & 70 & 12.0 & 1.53 & 54\
6.6572.268 & 1.81 & 988 & 18.33 & 1011 & 18.08 & 578 & 71 & 12.5 & 1.58 & 79\
9.4641.568 & 1.18 & 973 & 19.20 & 950 & 18.90 & 697 & 72 & 11.0 & 1.91 & 24\
9.4882.332 & 0.32 & 995 & 18.85 & 966 & 18.51 & 589 & 64 & 12.0 & 1.61 & 6\
9.5239.505 & 1.30 & 968 & 19.19 & 1007 & 18.81 & 579 & 64 & 12.0 & 1.59 & 28\
9.5484.258 & 2.32 & 990 & 18.61 & 396 & 18.30 & 481 & 83 & 10.5 & 1.32 & 78\
11.8988.1350 & 0.33 & 969 & 19.55 & 978 & 19.23 & 541 & 67 & 14.0 & 1.48 & 3\
13.5717.178 & 1.66 & 915 & 18.57 & 509 & 18.20 & 572 & 59 & 13.0 & 1.57 & 62\
13.6805.324 & 1.72 & 952 & 19.02 & 931 & 18.70 & 594 & 85 & 9.5 & 1.63 & 41\
13.6808.521 & 1.64 & 928 & 19.04 & 397 & 18.74 & 510 & 81 & 11.0 & 1.40 & 38\
17.2227.488 & 0.28 & 445 & 18.89 & 439 & 18.58 & 608 & 82 & 8.0 & 1.67 & 4\
17.3197.1182 & 0.90 & 431 & 18.91 & 187 & 18.59 & 567 & 73 & 16.0 & 1.55 & 24\
20.4678.600 & 2.22 & 348 & 20.11 & 356 & 19.87 & 439 & 68 & 14.0 & 1.20 & 18\
22.4990.462 & 1.56 & 542 & 19.94 & 519 & 19.50 & 556 & 64 & 12.5 & 1.52 & 17\
22.5595.1333 & 1.15 & 568 & 18.60 & 239 & 18.30 & 565 & 73 & 9.0 & 1.54 & 41\
25.3469.117 & 0.38 & 373 & 18.09 & 363 & 17.82 & 558 & 65 & 12.0 & 1.53 & 15\
25.3712.72 & 2.17 & 369 & 18.62 & 365 & 18.30 & 517 & 72 & 12.0 & 1.42 & 73\
30.11301.499 & 0.46 & 297 & 19.46 & 279 & 19.08 & 546 & 68 & 12.5 & 1.50 & 6\
37.5584.159 & 0.50 & 264 & 19.48 & 258 & 18.81 & 562 & 70 & 12.5 & 1.54 & 7\
48.2620.2719 & 0.26 & 363 & 19.06 & 352 & 18.73 & 605 & 65 & 13.0 & 1.66 & 3\
52.4565.356 & 2.29 & 257 & 19.17 & 255 & 18.96 & 447 & 85 & 8.0 & 1.22 & 44\
53.3360.344 & 1.86 & 260 & 19.30 & 251 & 19.05 & 496 & 67 & 10.0 & 1.36 & 33\
53.3970.140 & 2.04 & 272 & 18.51 & 105 & 18.24 & 404 & 69 & 13.5 & 1.11 & 75\
58.5903.69 & 2.24 & 249 & 18.24 & 322 & 17.97 & 491 & 80 & 8.5 & 1.35 & 104\
58.6272.729 & 1.53 & 327 & 20.01 & 129 & 19.61 & 518 & 70 & 12.5 & 1.42 & 15\
59.6398.185 & 1.64 & 279 & 19.37 & 291 & 19.01 & 539 & 83 & 9.5 & 1.48 & 29\
61.8072.358 & 1.65 & 383 & 19.33 & 219 & 19.05 & 471 & 67 & 16.0 & 1.29 & 29\
61.8199.302 & 1.79 & 389 & 18.94 & 361 & 18.68 & 475 & 69 & 14.0 & 1.30 & 40\
63.6643.393 & 0.47 & 243 & 19.71 & 243 & 19.29 & 536 & 67 & 11.0 & 1.47 & 5\
63.7365.151 & 0.65 & 250 & 18.74 & 243 & 18.40 & 625 & 83 & 10.5 & 1.71 & 19\
64.8088.215 & 1.95 & 255 & 18.98 & 240 & 18.73 & 464 & 67 & 17.5 & 1.27 & 46\
64.8092.454 & 2.03 & 242 & 20.14 & 238 & 19.94 & 485 & 73 & 10.5 & 1.33 & 16\
68.10972.36 & 1.01 & 267 & 16.63 & 245 & 16.34 & 670 & 84 & 10.0 & 1.84 & 216\
75.13376.66 & 1.07 & 241 & 18.63 & 220 & 18.37 & 561 & 67 & 10.0 & 1.54 & 36\
77.7551.3853 & 0.85 & 1328 & 19.84 & 1421 & 19.61 & 489 & 69 & 14.0 & 1.34 & 9\
78.5855.788 & 0.63 & 1457 & 18.64 & 723 & 18.42 & 491 & 77 & 12.0 & 1.35 & 19\
206.16653.987 & 1.05 & 741 & 19.56 & 581 & 19.28 & 486 & 69 & 14.5 & 1.33 & 15\
206.17052.388 & 2.15 & 803 & 18.91 & 781 & 18.68 & 365 & 37 & 13.5 & 1.00 & 53\
207.16310.1050 & 1.47 & 841 & 19.17 & 885 & 18.85 & 530 & 90 & 8.5 & 1.45 & 31\
207.16316.446 & 0.56 & 809 & 18.63 & 880 & 18.44 & 590 & 66 & 12.5 & 1.62 & 16\
208.15920.619 & 0.91 & 836 & 19.34 & 759 & 19.17 & 621 & 70 & 13.0 & 1.70 & 15\
208.16034.100 & 0.49 & 875 & 18.10 & 259 & 17.81 & 742 & 82 & 9.0 & 2.03 & 23\
211.16703.311 & 2.18 & 733 & 18.91 & 760 & 18.56 & 374 & 90 & 6.5 & 1.02 & 57\
211.16765.212 & 2.13 & 791 & 18.16 & 232 & 17.87 & 447 & 76 & 13.0 & 1.22 & 108\
1.4418.1930 & 0.53 & 960 & 19.61 & 340 & 19.42 & 577 & 64 & 11.5 & 1.58 & 6\
1.4537.1642 & 0.61 & 1107 & 19.31 & 367 & 19.15 & 635 & 79 & 9.0 & 1.74 & 9\
5.4643.149 & 0.17 & 936 & 17.48 & 943 & 17.15 & 699 & 80 & 9.0 & 1.92 & 8\
6.7059.207 & 0.15 & 977 & 17.88 & 392 & 17.41 & 613 & 83 & 11.5 & 1.68 & 4\
13.5962.237 & 0.17 & 879 & 18.95 & 899 & 18.47 & 524 & 66 & 12.5 & 1.44 & 2\
14.8249.74 & 0.22 & 861 & 18.90 & 444 & 18.60 & 579 & 69 & 13.5 & 1.59 & 3\
28.11400.609 & 0.44 & 313 & 19.61 & 321 & 19.31 & 504 & 70 & 13.5 & 1.38 & 5\
53.3725.29 & 0.06 & 266 & 17.64 & 249 & 17.20 & 553 & 69 & 14.0 & 1.52 & 1\
68.10968.235 & 0.39 & 243 & 19.92 & 261 & 19.40 & 566 & 83 & 12.0 & 1.55 & 3\
69.12549.21 & 0.14 & 253 & 16.92 & 244 & 16.50 & 497 & 68 & 12.5 & 1.36 & 9\
70.11469.82 & 0.08 & 243 & 18.25 & 241 & 17.59 & 544 & 79 & 10.5 & 1.49 & 1\
82.8403.551 & 0.15 & 836 & 18.89 & 857 & 18.55 & 556 & 56 & 12.0 & 1.52 & 1
The red-noise simulation process described in section 5 was repeated ten times (so that in total 1140 individual red noise simulations were created), so that a statistic on the expected bulk properties of red noise sources could be gathered. It was found that on average, $18.5 \%$ of pure red-noise sources had no autocorrelation peaks detected by the program used to survey the MACHO quasars, with an RMS of $\pm 6.3 \%$. Since all 57 studied quasars show relevant autocorrelation structure, it may be concluded that the observed structure is a $9 \sigma$ departure from the red-noise distribution, indicating a greater than $99 \%$ certainty that the effect is not caused by red-noise of this type.
With an error of $\pm 50$ days in the determination of each lag time, an error of $\pm 35$ days is present in the calculation of $t_2 - t_1$, $t_3 - t_2$, etc. This also yields an error of $\pm 25$ days in the calculated radius of the outflow region. If the error in $\cos(\theta \pm \epsilon)$ is small, i.e. $\sigma(R_{blr}) \ll R_{blr}$, the small angle approximation may be applied in calculating the error in radians, before converting into degrees. Using the average value of $R_{blr} = 544$ light days and therefore an error in $\cos(\theta \pm \epsilon)$ of $\pm 0.0643$, the error in $\theta \pm \epsilon$ is found to be $\pm 3.69^o$ - sufficiently low to justify the small angle approximation. The errors in $\theta$ and $\epsilon$ are therefore both equal to $\pm 2.61^o$. The errors in the calculated mean values for $R_{blr}$ and $\epsilon$ are then $\pm 3.3$ light days and $\pm 0.35^o$ respectively. The 100-day resolution limit for reverberation peaks also restricted the value of $\theta$ to greater than $50^o$ - at lower inclinations the program was unable to resolve neighbouring peaks as they would lie within 100 days of each other for a quasar with $R_{blr} \sim 544$ light days. For this reason, the calculated inclination angles are inherently uncertain. The contribution to the RMS of $\epsilon$ is surprisingly small for the two- and three-peak objects.
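The error propagation quoted above can be reproduced numerically with a few lines (assuming independent lag errors and the small-angle treatment described in the text):

```python
import numpy as np

n_obj = 57                                    # number of quasars in the sample
lag_err = 50.0                                # error on each lag time [days]
r_err = lag_err / np.sqrt(4)                  # error on <t>, i.e. on R_blr: 25 d
cos_err = 0.0643                              # quoted error on cos(theta +/- eps)
sum_err = np.degrees(np.arcsin(cos_err))      # ~3.69 deg on theta +/- eps
ang_err = sum_err / np.sqrt(2)                # ~2.61 deg on theta and on eps
print(r_err / np.sqrt(n_obj), ang_err / np.sqrt(n_obj))   # ~3.3 ld and ~0.35 deg
```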
The availability of data in two filters enables an additional statistic to be calculated - the agreement of the R and V calculated values. For one quasar, MACHO 206.17052.388, one autocorrelation peak was found in the V data while three were found in the R. Inspection of the data themselves showed that the R data were much more complete and so the V result for this object was discarded. The mean deviation of the R or V data from their average was found to be 44 days, giving an error on the mean due to the R/V agreement of $\pm
4.1$ days. The mean deviation of the calculated $\theta$ due to the R/V agreement is $7.1^o$ and that for $\epsilon$ is $2.04^o$, yielding an error in $\epsilon$ due to the difference in R and V autocorrelation functions of $\pm
0.19^o$.
The RMS deviation of the observed lag times from those predicted by a model with internal structure variable $\epsilon = 11.87^o$ is found to be $4.29^o$, which is acceptable given a systematic error of $\pm 2.61^o$ and an RMS in the calculated $\epsilon$ of $2.03^o$. Combining the two errors on each of the mean $R_{blr}$ and $\epsilon$ yields total errors on their means of $\pm 5.26$ light days and $0.40^o$ respectively.
The interpretation of $\epsilon$ is not entirely clear - it is presented in [@S05] as the angle made by the luminous regions of the outflowing winds from the accretion disc. However, as is demonstrated in Fig. 6a, $\epsilon$ may in fact represent the projection angle of the shadow of the accretion disc onto the outflowing winds if the central luminosity source occupies a region with a radius lower than the disc thickness, thus giving by geometry the ratio of the inner accretion disc radius to the disc thickness. If the central source is extended above and below the accretion disc, however, then $\epsilon$ may represent, as shown in Fig. 6b, the ratio of disc thickness to $R_{blr}$. There may also be a combination of effects at work.
An accurate way of discriminating between these two interpretations may be found by analysing the mean reverberation profiles from the four outflow surfaces and producing a relation between the timing of the central reverberation peak and that of the onset of reverberation by a specific outflow surface. The two figures above demonstrate how the timing of reverberation onset is interpreted in the two cases of compact vs extended central source but to discriminate, models must be produced of the expected reverberation profiles for these two cases. Results from [@slr06] would predict the former, as indeed would [@cs] and [@VA07] and so figure 6a would seem favourable, though analysis of the reverberation profiles will enable a more concrete determination of the meaning of $\epsilon$. Models may then be produced of the expected brightness patterns for differing outflow geometries and compared to the observed light curves to determine the curvature and projection angles of the outflows.
The calculated luminosities may now be compared to the calculated $R_{blr}$ for this sample to determine whether the [@ks] or [@b] relations hold for quasars. Fig. 7 demonstrates that there is absolutely no correlation found for this group of objects. This is not entirely surprising as even for nearby AGN both [@ks] and [@b] had enormous scatters in their results, though admittedly not as large as those given here. One important thing to note is that given a predicted factor 200 range in luminosity, one might take [@b] as a lower limit on the expected spread, giving a predicted range of $R_{blr}$ of approximately a factor 15. The sensitivity range of this project in $R_{blr}$ is a function of the inclination angle of the quasar, but in theory the maximum $R_{blr}$ detectable at zero inclination would be 500 light days, with a minimum detectable $R_{blr}$ at this inclination of 50 light days. This factor 10 possible spread of course obtains for any inclination angle, giving possibly an infinite spread of $R_{blr}$. For the calculated RMS of 77 light days, the standard error of the mean is 10.2 light days, 2% of the calculated mean. The probability of NOT finding a factor 15 range of $R_{blr}$ if it exists is therefore less than 1%. Therefore it can be stated to high confidence that this result demonstrates that the [@ks] and [@b] results do not hold for quasars. The failure of [@ks] for this data set is evident from the fact that it predicts an even larger spread in $R_{blr}$ than [@b], which is therefore even less statistically probable from our data. It is possible that this is a failure of the SED of [@SED], that there are some as-yet unrecognised absorption effects affecting the observations or that the assumption of negligible absorption/re-emission times is incorrect, but the more likely explanation would seem to be that no correlation between $R_{blr}$ and luminosity exists for quasars.
![**Relative luminosities and broad line radii.** Absolutely no correlation is seen between $R_{rel}$ (the relative $R_{blr}$) and $L_{rel}$ (the relative luminosity), demonstrating that the [@ks] and [@b] results do not hold for quasars. The error bars are so small that they are contained within the data points.](fig7.eps){width="\textwidth"}
A calculated average $R_{blr}$ of 544 light days corresponds to $1.4 \times
10^{18} cm$, compared to the predicted average of $\sim 10^{18} cm$. The value found here is in remarkable agreement with those previously calculated, made even more surprising by the fact that so few results were available upon which to base a prediction. Many quasar models predict a large variation in quasar properties, see for example [@wu], so we conclude that quasars and perhaps AGN in general are an incredibly homogeneous population.
Conclusions
===========
Brightness records of 57 quasars taken from the MACHO survey in R and V colour filters have been analysed to show the presence of autocorrelation structure consistent with biconic outflowing winds at an average radius of $544 \pm 5.2$ light days with an RMS of 74 light days. An internal structure variable of $11.87 \pm 0.40^o$ was found, with an RMS of $2.9^o$. The accuracy of the program designed to determine the timing of the reverberation peaks limited its temporal resolution to 100 days, resulting in the quoted systematic errors in the mean values calculated. With longer-timescale, more regularly sampled data this temporal resolution can be improved - this may also be achieved with more sophisticated computational techniques combined with brightness models not available in this project.
The correlations between radius of the broad emission line region and luminosity found by [@ks] and [@b] for nearby AGN do not appear to hold for quasars. This may be indicative of some time or luminosity evolution of the function as no redshift-independent correlation is found in this data set. If there is some relation it is more likely to be time-evolving since any luminosity dependence would most likely be noticeable in Fig. 7, which it is not.
The presence of reverberation in 57 of the 57 quasars analysed implies that the outflowing wind is a universal structure in quasars - a verifiable result since this structure may be identified in regularly-sampled quasars in other surveys. While it is acknowledged that red noise may yet be responsible for the brightness fluctuations observed, the results are so close to the initial model’s prediction that noise seems an unlikely explanation, especially given the corroborating evidence for the theory [@S05; @VA07; @p; @bp; @r].
The continuum variability of quasars, though well observed, is still not well understood. The results of this study would suggest that an understanding of these fluctuations can only be found by recognising that several physical processes are at work, of which reverberation is of secondary importance in many cases. It does however appear to be universally present in quasars and possibly in all AGN. Given that quasars are defined observationally by the presence of broad, blue-shifted emission lines, of which outflowing winds are the proposed source, this result is strong support for the [@E00] model.
Future Work
===========
Several phenomena identified in the MACHO quasar light curves remain as yet unexplained.
1. What is the source of the long-term variability of quasars? Is it a random noise process or is there some underlying physical interpretation? It has been suggested that perhaps a relativistic orbiting source of thermal emission near the inner accretion disc edge may be the source of such fluctuation. Modelling of the expected emission from such a source must be undertaken before such a hypothesis can be tested.
2. Why is it that the brightness profile following a dimming event sometimes agrees perfectly with the brightening profile while at other times it is in perfect disagreement? Again, is this a real physical process? Work by [@ge] on stratified wind models presents a situation where the central object brightening could increase the power of an inner wind, increasing its optical depth and thus shielding outer winds. This would result in negative reverberation. Further investigation may demonstrate a dependence of this effect on whether the central variation is a brightening or fading.
3. What is the mean profile of each reverberation peak? This profile may yield information about the geometry of the outflowing wind, thus enabling constraints to be placed on the physical processes that originate them.
4. Can quasars be identified by reverberation alone? Or perhaps by the long-term variability they exhibit? With surveys such as Pan-STARRS and LSST on the horizon, there is growing interest in developing a purely photometric method of identifying quasars.
5. LSST and Pan-STARRS will also produce light curves for thousands of quasars which can then be analysed in bulk to produce a higher statistical accuracy for the long-term variability properties of quasars. It is evident that the sampling rate will only be sufficient for reverberation mapping to be performed with LSST and not Pan-STARRS.
6. Is there a time- or luminosity-evolving relation between $R_{blr}$ and luminosity? Comparison of $R_{blr}$, luminosity and redshift may yet shed light on this question.
Acknowledgements
================
I would like to thank Rudy Schild for proposing and supervising this project, Pavlos Protopapas for his instruction in IDL programming, Tsevi Mazeh for his advice on the properties and interpretation of the autocorrelation function and Phil Uttley for discussion and advice on stochastic noise in quasars. This paper utilizes public domain data originally obtained by the MACHO Project, whose work was performed under the joint auspices of the U.S. Department of Energy, National Nuclear Security Administration by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, the National Science Foundation through the Center for Particle Astrophysics of the University of California under cooperative agreement AST-8809616, and the Mount Stromlo and Siding Spring Observatory, part of the Australian National University.
\[1\] Elvis, M., ApJ, 545, 63 (2000)
\[2\] Matthews, T. & Sandage, A., ApJ, 138, 30 (1963)
\[3\] Webb, W. & Malkan, M., ApJ, 540, 652 (2000)
\[4\] Ogle, P., PhD Thesis, CalTech (1998)
\[5\] Fabian, A. et al., arXiv: 0903.4424 (2009)
\[6\] Proga, D., ApJ, 538, 684 (2000)
\[7\] Proga, D., Ostriker, J. & Kurosawa, R., ApJ, 676, 101 (2008)
\[8\] Antonucci, R., Ann. Rev. Ast. & Ap., 31, 473 (1993)
\[9\] Magdis, G. & Papadakis, I., ASPC, 360, 37 (2006)
\[10\] Schild, R. & Thomson, D., AJ, 113, 130 (1997)
\[11\] Schild, R., AJ, 129, 1225 (2005)
\[12\] Vakulik, V. et al., MNRAS, 382, 819 (2007)
\[13\] Schild, R., Leiter, D. & Robertson, S., AJ, 135, 947 (2008)
\[14\] Lovegrove, J. et al., in preparation (2009)
\[15\] Robertson, S. & Leiter, D., in New Developments in Black Hole Research (Nova Science Publishers, New York, 2007) pp1-48
\[16\] Robertson, S. & Leiter, D., ApJ, 596, 203 (2003)
\[17\] Kaspi, S. et al., ApJ, 629, 61 (2005)
\[18\] Bentz, M. et al., arXiv: 0812.2283 (2008)
\[19\] Woo, J.-H. & Urry, C. M., ApJ, 579, 530 (2000)
\[20\] Schild, R., Leiter, D. & Lovegrove, J., in preparation (2009)
\[21\] Pooley, D. et al., ApJ, 661, 19 (2007)
\[22\] Shakura, N. & Sunyaev, R., A&A, 24, 337 (1973)
\[23\] Peterson, B. et al., ApJ, 425, 622 (1994)
\[24\] Richards, G. et al., ApJ, 610, 671 (2004)
\[25\] Wyithe, S., Webster, R. & Turner, E., MNRAS, 318, 1120 (2000)
\[26\] Kochanek, C., ApJ, 605, 58 (2004)
\[27\] Eigenbrod, A. et al., A&A, 490, 933 (2008)
\[28\] Trevese, D. et al., ApJ, 433, 494 (1994)
\[29\] Hawkins, M., MNRAS, 278, 787 (1996)
\[30\] Hawkins, M., ASPC, 360, 23 (2006)
\[31\] Colley, W. & Schild, R., ApJ, 594, 97 (2003)
\[32\] Hawkins, M., A&A, 462, 581 (2007)
\[33\] Schild, R., Lovegrove, J. & Protopapas, P., astro-ph/0902.1160 (2009)
\[34\] Uttley, P. et al., PTPS, 155, 170 (2004)
\[35\] Arevalo, P. et al., MNRAS, 389, 1479 (2008)
\[36\] Giveon, U. et al., MNRAS, 306, 637 (1999)
\[37\] Rengstorf, A. et al., ApJ, 606, 741 (2004)
\[38\] Netzer, H., MNRAS, 279, 429 (1996)
\[39\] Botti, I. et al., astro-ph/0805.4664 (2008)
\[40\] Alcock, C. et al., PASP, 111, 1539 (1999)
\[41\] Geha, M. et al., AJ, 125, 1 (2003)
\[42\] Richards, G. et al., astro-ph/0601558 (2006)
\[43\] Timmer, J. & Koenig, M., A&A, 300, 707 (1995)
\[44\] Schild, R., Leiter, D. & Robertson, S., AJ, 132, 420 (2006)
\[45\] Gallagher, S. & Everett, J., ASPC, 373, 305 (2007)
---
abstract: 'We present Nesterov-type acceleration techniques for Alternating Least Squares (ALS) methods applied to canonical tensor decomposition. While Nesterov acceleration turns gradient descent into an optimal first-order method for convex problems by adding a *momentum* term with a specific weight sequence, a direct application of this method and weight sequence to ALS results in erratic convergence behaviour or divergence. This is so because the tensor decomposition problem is non-convex and ALS is accelerated instead of gradient descent. We investigate how line search or restart mechanisms can be used to obtain effective acceleration. We first consider a cubic line search (LS) strategy for determining the momentum weight, showing numerically that the combined Nesterov-ALS-LS approach is competitive with or superior to other recently developed nonlinear acceleration techniques for ALS, including acceleration by nonlinear conjugate gradients (NCG) and LBFGS. As an alternative, we consider various restarting techniques, some of which are inspired by previously proposed restarting mechanisms for Nesterov’s accelerated gradient method. We study how two key parameters, the momentum weight and the restart condition, should be set. Our extensive empirical results show that the Nesterov-accelerated ALS methods with restart can be dramatically more efficient than the stand-alone ALS or Nesterov accelerated gradient method, when problems are ill-conditioned or accurate solutions are required. The resulting methods perform competitively with or superior to existing acceleration methods for ALS, and additionally enjoy the benefit of being much simpler and easier to implement. On a large and ill-conditioned 71$\times$1000$\times$900 tensor consisting of readings from chemical sensors used for tracking hazardous gases, the restarted Nesterov-ALS method outperforms any of the existing methods by a large factor.'
author:
- 'Drew Mitchell[^1]'
- 'Nan Ye[^2]'
- 'Hans De Sterck[^3]'
bibliography:
- 'ref.bib'
title: Nesterov Acceleration of Alternating Least Squares for Canonical Tensor Decomposition
---
Introduction
============
Canonical tensor decomposition.
-------------------------------
Tensor decomposition has wide applications in machine learning, signal processing, numerical linear algebra, computer vision, natural language processing and many other fields [@kolda2009tensor]. This paper focuses on the CANDECOMP/PARAFAC (CP) decomposition of tensors [@kolda2009tensor], which is also called the canonical polyadic decomposition. CP decomposition approximates a given tensor $T \in R^{I_{1} \times \ldots \times I_{N}}$ by a low-rank tensor composed of a sum of $r$ rank-one terms, $\widetilde{T} = \sum_{i=1}^{r} a_{1}^{(i)} \circ \ldots \circ a_{N}^{(i)}$, where $\circ$ is the vector outer product. Specifically, we minimize the error in the Frobenius norm, $$\begin{aligned}
\norm{T - \sum_{i=1}^{r} a_{1}^{(i)} \circ \ldots \circ a_{N}^{(i)}}_{F}.
\label{eq:cp}\end{aligned}$$ Finding efficient methods for computing tensor decomposition is an active area of research, but the alternating least squares (ALS) algorithm is still one of the most efficient algorithms for CP decomposition. ALS finds a CP decomposition in an iterative way. In each iteration, ALS sequentially updates a block of variables at a time by minimizing expression (\[eq:cp\]), while keeping the other blocks fixed: first $A_{1} = (a_{1}^{(1)}, \ldots, a_{1}^{(r)})$ is updated, then $A_{2} = (a_{2}^{(1)}, \ldots, a_{2}^{(r)})$, and so on. Updating a factor matrix $A_{i}$ is a linear least-squares problem that can be solved in closed form. Collecting the matrix elements of the $A_{i}$’s in a vector $x$, we shall use $ALS(x)$ to denote the updated variables after performing one full ALS iteration starting from $x$.
When the CP decomposition problem is ill-conditioned, ALS can be slow to converge [@acar2011scalable], and recently a number of methods have been proposed to accelerate ALS. One approach uses ALS as a nonlinear preconditioner for general-purpose nonlinear optimization algorithms, such as nonlinear GMRES [@sterck2012nonlinear], nonlinear conjugate gradients (NCG) [@sterck2015nonlinearly], and LBFGS [@sterck2018nonlinearly]. Alternatively, the general-purpose optimization algorithms can be seen as nonlinear accelerators for ALS. In [@wang2018accelerating], an approach was proposed based on the Aitken-Stefensen acceleration technique. These acceleration techniques can substantially improve ALS convergence speed when problems are ill-conditioned or an accurate solution is required.
Nesterov’s accelerated gradient method. {#subsec:Nesterov}
---------------------------------------
In this paper, we adapt Nesterov’s acceleration method for gradient descent to the ALS method for CP tensor decomposition. Nesterov’s method of accelerating gradient descent is a celebrated method for speeding up the convergence rate of gradient descent, achieving the optimal convergence rate obtainable for first order methods on convex problems [@nesterov1983method].
Consider the problem of minimizing a function $f(x)$, $$\min_{x} f(x).$$ Nesterov’s accelerated gradient descent starts with an initial guess $x_{1}$. For $k \ge 1$, given $x_{k}$, a new iterate $x_{k+1}$ is obtained by first adding a multiple of the *momentum* $x_{k} - x_{k-1}$ to $x_{k}$ to obtain an auxiliary variable $y_{k}$, and then performing a gradient descent step at $y_{k}$. The update equations at iteration $k \ge 1$ are as follows: $$\begin{aligned}
y_{k} &= x_{k} + \beta_{k} (x_{k} - x_{k-1}), \\
x_{k+1} &= y_{k} - \alpha_{k} {\nabla}f(y_{k}), \label{eq:nesterov}\end{aligned}$$ where the gradient descent step length $\alpha_{k}$ and the momentum weight $\beta_{k}$ are suitably chosen numbers, and $x_{0} = x_{1}$ so that the first iteration is simply gradient descent.
There are a number of ways to choose the $\alpha_{k}$ and $\beta_{k}$ so that Nesterov’s accelerated gradient descent converges at the optimal $O(1/k^{2})$ in function value for smooth convex functions. For example, when $f(x)$ is a convex function with $L$-Lipschitz gradient, by choosing $\alpha_{k} = \frac{1}{L}$, and $\beta_{k}$ as $$\begin{aligned}
\lambda_{0} &= 0, \quad
\lambda_{k} = \frac{1 + \sqrt{1 + 4 \lambda_{k-1}^{2}}}{2}, \label{eq:lambda} \\
\beta_{k} &= \frac{\lambda_{k-1} - 1}{\lambda_{k}}, \label{eq:momentum1}\end{aligned}$$ one obtains the following $O(1/k^{2})$ convergence rate: $$f(x_{k}) - f(x^{*}) \le \frac{2 L \norm{x_1 - x^{*}}^{2}}{k^{2}},$$ where $x^{*}$ is a minimizer of $f$. See, e.g., [@su2016differential] for more discussion on the choices of momentum weights.
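For concreteness, Nesterov's accelerated gradient descent with the weight sequence (\[eq:lambda\]) and (\[eq:momentum1\]) can be written as the following short routine (a generic sketch for a smooth convex $f$ with known Lipschitz constant $L$; the function and variable names are ours):

```python
import numpy as np

def nesterov_gd(grad_f, x1, L, n_iter):
    """Nesterov's accelerated gradient descent with the standard momentum
    weight sequence: lambda_0 = 0, beta_k = (lambda_{k-1} - 1) / lambda_k."""
    x_prev, x = x1.copy(), x1.copy()      # x_0 = x_1, so the first step is GD
    lam_prev = 0.0
    for _ in range(n_iter):
        lam = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * lam_prev**2))
        beta = (lam_prev - 1.0) / lam
        y = x + beta * (x - x_prev)       # momentum step
        x_prev, x = x, y - grad_f(y) / L  # gradient step with alpha = 1/L
        lam_prev = lam
    return x
```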
Main approach and contributions of this paper.
----------------------------------------------
Recent work has seen extensions of Nesterov’s accelerated gradient method in several ways: either the method is extended to the non-convex setting [@ghadimi2016accelerated; @li2015accelerated], or Nesterov’s approach is applied to accelerate convergence of methods that are not directly of gradient descent-type, such as the Alternating Direction Method of Multipliers (ADMM) [@goldstein2014fast].
This paper attacks both of these challenges at the same time for the canonical tensor decomposition problem: we develop Nesterov-accelerated algorithms for the non-convex CP tensor decomposition problem, and we do this by accelerating ALS steps instead of gradient descent steps.
Our basic approach is to apply Nesterov acceleration to ALS in a manner that is equivalent to replacing the gradient update in the second step of Nesterov’s method, Eq. (\[eq:nesterov\]), by an ALS step. Replacing gradient directions by update directions provided by ALS is essentially also the approach taken in [@sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly] to obtain nonlinear acceleration of ALS by NGMRES, NCG and LBFGS; in the case of Nesterov’s method the procedure is extremely simple and easy to implement. However, applying this procedure directly fails for several reasons. First, it is not clear to which extent the $\beta_{k}$ momentum weight sequence of (\[eq:momentum1\]), which guarantees optimal convergence for gradient acceleration in the convex case, applies at all to our case of ALS acceleration for a non-convex problem. Second, and more generally, it is well-known that optimization methods for non-convex problems require mechanisms to safeguard against ‘bad steps’, especially when the solution is not close to a local minimum. The main contribution of this paper is to propose and explore two such safeguarding mechanisms for Nesterov acceleration applied to ALS, namely, line search, and restart with momentum weight selection. This leads to a family of acceleration methods for ALS that are competitive with or outperform the best currently existing nonlinear acceleration methods for ALS.
As further motivation for the problem that we address and our approach, Fig. \[fig:intro\] illustrates the convergence difficulties that ALS may experience for ill-conditioned CP tensor decomposition problems, and how nonlinear acceleration may allow these convergence difficulties to be removed. For the standard ill-conditioned synthetic test problem that is the focus of Fig. \[fig:intro\] (the problem is described later in the paper), ALS converges slowly (black curve). It is known that standard gradient-based methods such as gradient descent (GD), NCG or LBFGS, which do not rely on ALS, perform more poorly than ALS [@acar2011scalable], so it is no surprise that applying Nesterov’s accelerated gradient method to the problem (for example, with the gradient descent step length $\alpha_k$ determined by a standard cubic line search as in [@acar2011scalable; @sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly], cyan curve) leads to worse performance than ALS. Nonlinear acceleration of ALS, however, can substantially improve convergence [@sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly], and we pursue this using Nesterov acceleration in this paper. However, as expected, applying Nesterov acceleration directly to ALS by replacing the gradient step in the Nesterov formula by a step in the ALS direction does not work and leads to erratic convergence behaviour (magenta curve), because the problem is non-convex and the Nesterov momentum weight sequence that guarantees optimal convergence in the convex case is inadequate in the non-convex case.
As the first key contribution of this paper, we show that we can obtain an efficient Nesterov-based acceleration of ALS by determining the Nesterov momentum weight $\beta_k$ in each iteration using a cubic line search (LS) (red curve). The resulting Nesterov-ALS-LS method is competitive with or superior to other recently developed nonlinear acceleration techniques for ALS that use line searches, such as NGMRES-ALS (green curve), with the advantage that Nesterov-ALS-LS is much easier to implement. However, the line searches may require multiple evaluations of $f(x)$ and its gradient and can be expensive.
As the second key contribution of the paper, we consider restart mechanisms as an alternative to the line search, and we study how two key parameters, the momentum weight and the restart condition, should be set. The blue curves in Fig. \[fig:intro\] show two examples of the acceleration that can be provided by two variants of the family of restarted Nesterov-ALS methods we consider. One of these variants (Nesterov-ALS-RG-SN-D2) uses Nesterov’s sequence for the momentum weights, and another successful variant simply always uses momentum weight one (Nesterov-ALS-RG-S1-E). The naming scheme for the Nesterov-ALS variants that we consider will be explained later in the paper. Extensive numerical tests provided later in the paper show that the best-performing Nesterov-ALS scheme is achieved when using the gradient ratio as momentum weight (as in [@nguyen2018accelerated]), and restarting when the objective value increases.
The convergence theory of Nesterov’s accelerated gradient method for convex problems does not apply in our case due to the non-convex setting of the CP problem, and because we accelerate ALS steps instead of gradient steps. In fact, in the context of nonlinear convergence acceleration for ALS, few theoretical results on convergence are available [@sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly]. We will, however, demonstrate numerically, for representative synthetic and real-world test problems, that our Nesterov-accelerated ALS methods are competitive with or outperform existing acceleration methods for ALS. In particular, our best-performing Nesterov-ALS method outperforms any existing acceleration method for ALS when applied to a large real-world ill-conditioned 71$\times$1000$\times$900 tensor.
The remainder of this paper is structured as follows. Section \[sec:related\] discusses related work. Section \[sec:algo\] presents our general Nesterov-ALS scheme and discusses its instantiations. We then perform an extensive experimental study of our algorithm by comparing it with a number of acceleration schemes on several benchmark datasets, and a final section concludes the paper.
Related Work {#sec:related}
============
Nesterov’s accelerated gradient descent is a celebrated optimal first-order algorithm for convex optimization [@nesterov1983method]. Recently, a number of works have applied Nesterov’s acceleration technique to non-convex problems. In [@ghadimi2016accelerated], a modified Nesterov accelerated gradient descent method was developed that enjoys the same convergence guarantees as gradient descent on non-convex optimization problems, and maintains the optimal first order convergence rate on convex problems. A Nesterov accelerated proximal gradient algorithm was developed in [@li2015accelerated] that is guaranteed to converge to a critical point, and maintains the optimal first order convergence rate on convex problems.
Nesterov’s technique has also been used to accelerate non-gradient based methods. In [@goldstein2014fast] it was used to accelerate ADMM, and [@ye2017nesterov] used it to accelerate an approximate Newton method.
Nesterov’s accelerated gradient method is known to exhibit oscillatory behavior on convex problems. An interesting discussion on this is provided in [@su2016differential] which formulates an ODE as the continuous time analogue of Nesterov’s method. Such oscillatory behavior happens when the method approaches convergence, and can be alleviated by restarting the algorithm using the current iterate as the initial solution, usually resetting the sequence of momentum weights to its initial state close to 0. In [@su2016differential] an explanation is provided of why resetting the momentum weight to a small value is effective using the ODE formulation of Nesterov’s accelerated gradient descent. In [@odonoghue2015adaptive] the use of adaptive restarting was explored for convex problems, and [@nguyen2018accelerated] explored the use of adaptive restarting and adaptive momentum weight for nonlinear systems of equations resulting from finite element approximation of PDEs. Our work is the first study of a general Nesterov-accelerated ALS scheme.
Several ALS-specific nonlinear acceleration techniques have been developed recently as discussed in the introduction [@sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly]. These algorithms often have complex forms and incur significant computational overhead. Our Nesterov-ALS scheme is simple and straightforward to implement, and only incurs a small amount of computational overhead.
As far as we know, the recent paper [@wang2018accelerating] is the only one that has started to explore the application of Nesterov acceleration to ALS. However, they only tried the vanilla Nesterov technique with a standard Nesterov momentum sequence $\beta_k$ and without restarting or line search mechanisms (as for the magenta curve in Fig. \[fig:intro\]), and, not surprisingly, they failed to obtain acceleration of ALS.
Nesterov-Accelerated ALS Methods {#sec:algo}
================================
Our general strategy is to replace the gradient descent step $x_{k+1} = y_{k} - \alpha_{k} {\nabla}f(y_{k})$ in Eq. (\[eq:nesterov\]), the second step of Nesterov’s accelerated gradient descent, with the ALS update $x_{k+1} = ALS(y_{k})$. This simply results in the following accelerated ALS update formula: $$x_{k+1} = ALS(x_{k} + \beta_{k} (x_{k} - x_{k-1})). \label{eq:NALS-update}$$ However, a direct application of a standard Nesterov momentum weight sequence for convex problems does not work. A typical behavior is illustrated by the magenta curve in Fig. \[fig:intro\], which suggests that the algorithm gets stuck in a highly suboptimal region. Such erratic behavior is due to the fact that CP decomposition is non-convex, and that the ALS update is very different from gradient descent as seen in previous works [@acar2011scalable]. Below we propose two general ways to safeguard against bad steps: line search and restart.
Nesterov-ALS with line search.
------------------------------
Inspired by line search methods for nonlinear optimization, such as NCG or LBFGS, we propose using *line search* to determine the momentum weight $\beta_k$ in a way that safeguards against bad steps introduced by the $\beta_{k} (x_{k} - x_{k-1})$ term. (Note that ALS itself always reduces $f(x)$ and is not prone to introducing bad steps.) In each iteration, we determine $\beta_{k}$ as an approximate solution of $$\begin{aligned}
\beta_{k} & \approx \arg\min_{\beta \ge 0} f(x_{k} + \beta (x_{k} - x_{k-1})). \label{eq:lsw}\end{aligned}$$ We use the standard Moré-Thuente cubic line search that is also used for tensor decomposition methods in [@acar2011scalable; @sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly]. This inexact line search finds a value of $\beta_k$ that satisfies the Wolfe conditions, which impose a sufficient descent condition and a curvature condition. Each iteration of this iterative line search requires the computation of the function value, $f(x)$, and its gradient. As such, the line search can be quite expensive. In our numerical tests, we use the following line search parameters: $10^{-4}$ for the descent condition, $10^{-2}$ for the curvature condition, a starting search step length of 1, and a maximum of 20 line search iterations. Since the line search is potentially expensive, we also consider restart mechanisms as an alternative.
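A sketch of the resulting Nesterov-ALS-LS iteration is shown below. Here `als_step` stands for one full ALS sweep over the vectorized factor matrices, and SciPy's Wolfe-condition line search is used as a stand-in for the Moré-Thuente routine used in our experiments; the two are not identical, so this is an illustrative approximation only.

```python
import numpy as np
from scipy.optimize import line_search

def nesterov_als_ls(f, grad_f, als_step, x1, n_iter):
    """Nesterov-ALS with the momentum weight beta_k chosen by a Wolfe line
    search along the momentum direction x_k - x_{k-1}."""
    x_prev = x1.copy()
    x = als_step(x_prev)                      # first iteration: plain ALS
    for _ in range(n_iter):
        d = x - x_prev                        # momentum direction
        beta, *_ = line_search(f, grad_f, x, d, c1=1e-4, c2=1e-2)
        if beta is None:                      # line search failed: fall back to ALS
            beta = 0.0
        y = x + beta * d
        x_prev, x = x, als_step(y)
    return x
```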
Nesterov-ALS with restart.
--------------------------
Our general Nesterov-ALS scheme with restart is shown below. Besides incorporating the momentum term in the ALS update step, there are two other important ingredients in our algorithm: adaptive restarting, and an adaptive momentum weight $\beta_k$. The precise expressions we use for restarting and computing the momentum weight are explained in the following subsections. In each iteration $k$ of the algorithm we compute a new update according to the update rule (\[eq:NALS-update\]) with momentum term. Before computing the update, we check whether a restart is needed due to a bad current iterate. When we restart, we discard the current bad iterate and compute a simple ALS update instead (ALS always reduces $f(x)$ and is thus well-behaved), by setting $\beta_k$ equal to zero so that the update step reduces to a plain ALS update. Note that, when a bad iterate is discarded, we don’t decrease the iteration index $k$ by one, but instead set the current iterate $x_k$ equal to the previously accepted iterate $x_{k-1}$, which then occurs twice in the sequence of iterates. We wrote the algorithm down this way because we can then use $k$ to count work (properly keeping track of the cost to compute the rejected iterate), but the algorithm can of course also be written without duplicating the previous iterate when an iterate is rejected. The index $i$ keeps track of the number of iterates since restarting, which is used for some of our strategies to compute the momentum weight $\beta_k$, see Section \[sec:momentum\]. The $\beta_{k-1} \neq 0$ condition is required in the restart check to make sure that each restart (computing an ALS iteration) is followed by at least one other iteration before another restart can be triggered (because otherwise the algorithm could get stuck in the same iterate).
    initialize $x_{1}$;  $x_{2} \gets ALS(x_{1})$;  $i \gets 2$;  $\beta_{1} \gets 0$
    for $k = 2, 3, \ldots$ until the termination criterion is met:
        if the restart condition holds and $\beta_{k-1} \neq 0$ then
            $x_{k} \gets x_{k-1}$
            $\beta_{k} \gets 0$;  $i \gets 1$
        else
            compute $\beta_{k}$ using $i$ and/or previous iterates
        $x_{k+1} \gets ALS(x_{k} + \beta_{k} (x_{k} - x_{k-1}))$;  $i \gets i + 1$
Various termination criteria may be used. In our experiments, we terminate when the scaled gradient 2-norm falls below a set tolerance: $$\norm{{\nabla}f(x_k)} /n_{X} \leq tol, \label{eq:terminate}$$ where $n_{X}$ is the number of variables in the low-rank tensor approximation.
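A compact Python sketch of this scheme, with the restart test and the momentum weight rule passed in as user-supplied functions (concrete choices for both are given in the next two subsections), is:

```python
import numpy as np

def nesterov_als_restart(als_step, grad_f, momentum, restart, x1,
                         n_vars, tol=1e-9, max_iter=500):
    """Restarted Nesterov-ALS (sketch). `momentum(i, xs)` returns beta_k;
    `restart(xs)` returns True when the last iterate should be rejected."""
    xs = [x1.copy(), als_step(x1)]            # x_1 and x_2 = ALS(x_1)
    i, beta_prev = 2, 0.0
    for _ in range(max_iter):
        if np.linalg.norm(grad_f(xs[-1])) / n_vars <= tol:
            break                             # termination criterion (eq:terminate)
        if restart(xs) and beta_prev != 0.0:
            xs[-1] = xs[-2].copy()            # discard the bad iterate
            beta, i = 0.0, 1                  # next update is a plain ALS step
        else:
            beta = momentum(i, xs)
        y = xs[-1] + beta * (xs[-1] - xs[-2])
        xs.append(als_step(y))
        i, beta_prev = i + 1, beta
    return xs[-1]
```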
The momentum weight $\beta_{k}$ and the restart condition need to be specified to turn the scheme into concrete algorithms. We discuss the choices we use in Sections \[sec:momentum\] and \[sec:restart\] below.
Momentum weight choices for Nesterov-ALS with restart. {#sec:momentum}
------------------------------------------------------
Naturally, we can ask whether a momentum weight sequence that guarantees optimal convergence for convex problems is applicable in our case. We consider the momentum weight rule defined in (\[eq:momentum1\]), but adapted to take restart into account: $$\begin{aligned}
\beta_{k} &\gets (\lambda_{i-1} - 1)/\lambda_{i}, \label{eq:fixedw}\end{aligned}$$ where $\lambda_{i}$ is defined in (\[eq:lambda\]). Restart is taken into account by using $i$ instead of $k$ as the index on the RHS.
Following [@nguyen2018accelerated], we also consider using the *gradient ratio* as the momentum weight $$\begin{aligned}
\beta_{k} &\gets \frac{\norm{{\nabla}f(x_{k})}}{\norm{{\nabla}f(x_{k-1})}}. \label{eq:grw}\end{aligned}$$ This momentum weight rule can be motivated as follows [@nguyen2018accelerated]. When the gradient norm drops significantly, that is, when convergence is fast, the algorithm performs a step closer to the ALS update, because momentum may not really be needed and may in fact be detrimental, potentially leading to overshoots and oscillations. When the gradient norm does not change much, that is, when the algorithm is not making much progress, acceleration may be beneficial and a $\beta_k$ closer to 1 is obtained by the formula.
Finally, since we observe that Nesterov’s sequence produces $\beta_k$ values that are always of the order of 1 and approach 1 steadily as $k$ increases, we can simply consider a choice of $\beta_k=1$ for our non-convex problems, where we rely on the restart mechanism to correct any bad iterates that may result, replacing them by an ALS step. Perhaps surprisingly, the numerical results to be presented below show that this simplest of choices for $\beta_k$ may work well, if combined with suitable restart conditions.
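As an illustration, the three momentum-weight rules can be written as follows; the Nesterov recursion for $\lambda_i$ is assumed here to be the standard one, $\lambda_1=1$, $\lambda_j=(1+\sqrt{1+4\lambda_{j-1}^2})/2$, since the referenced definition is not reproduced in this excerpt.

```python
# Sketch of the momentum-weight choices: Nesterov sequence adapted to restarts (SN),
# gradient ratio (SG), and the constant choice beta_k = 1 (S1).
import numpy as np

def beta_nesterov(i):
    # SN: beta_k = (lambda_{i-1} - 1)/lambda_i, with i the number of iterations
    # since the last restart; standard recursion assumed (see lead-in).
    if i < 2:
        return 0.0
    lams = [1.0]                                  # lambda_1
    for _ in range(i - 1):                        # build up to lambda_i
        lams.append(0.5 * (1.0 + np.sqrt(1.0 + 4.0 * lams[-1] ** 2)))
    return (lams[i - 2] - 1.0) / lams[i - 1]

def beta_gradient_ratio(grad_norm_cur, grad_norm_prev):
    # SG: beta_k = ||grad f(x_k)|| / ||grad f(x_{k-1})||.
    return grad_norm_cur / grad_norm_prev

def beta_constant():
    # S1: beta_k = 1, relying on the restart mechanism to reject bad iterates.
    return 1.0
```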
Restart conditions for Nesterov-ALS. {#sec:restart}
------------------------------------
One natural restarting strategy is *function restarting* (see, e.g., [@odonoghue2015adaptive; @su2016differential]), which restarts when the algorithm fails to sufficiently decrease the function value. We consider condition $$\begin{aligned}
f(x_{k}) &> \eta f(x_{k-d}). \label{eq:fr}\end{aligned}$$ Here, we normally use $d=1$, but $d>1$ can be used to allow for *delay*. We normally take $\eta=1$, but we have found that it sometimes pays off to allow for modest increase in $f(x)$ before restarting, and a value of $\eta>1$ facilitates that. If $d=1$ and $\eta=1$, the condition guarantees that the algorithm will make some progress in each iteration, because the ALS step that is carried out after a restart is guaranteed to decrease $f(x)$. However, requiring strict decrease may preclude accelerated iterates (the first accelerated iterate may always be rejected in favor of an ALS update), so either $\eta>1$ or $d>1$ allows for a few accelerated iterates to initially increase $f(x)$, after which they may decrease $f(x)$ in further iterations in a much faster way than ALS, potentially resulting in substantial acceleration of ALS. While function restarting (with $d=1$ and $\eta=1$) has been observed to significantly improve convergence for convex problems, no theoretical convergence rate has been obtained [@odonoghue2015adaptive; @su2016differential].
Following [@su2016differential], we also consider the *speed restarting* strategy which restarts when $$\begin{aligned}
\norm{x_{k} - x_{k-1}} &< \eta \norm{x_{k-d} - x_{k-d-1}}. \label{eq:sr}\end{aligned}$$ Similarly to function restarting, we also have the $d$ parameter and the $\eta$ parameter in speed restarting. Intuitively, this condition means that the speed along the convergence trajectory, as measured by the change in $x$, drops. [@su2016differential] showed that speed restarting leads to guaranteed improvement in convergence rate for convex problems.
Another natural strategy is to restart when the gradient norm satisfies $$\begin{aligned}
\norm{{\nabla}f(x_{k})} &> \eta \norm{{\nabla}f(x_{k-d})}, \label{eq:gr}\end{aligned}$$ where, as above, $\eta$ can be chosen to be equal to or greater than one. This *gradient restarting* strategy (with $\eta=1$) has been used in conjunction with gradient ratio momentum weight by [@nguyen2018accelerated], and a similar condition on the residual has been used for ADMM acceleration in [@goldstein2014fast].
When we use a value of $\eta>1$ in the above restart conditions, we have found in our experiments that it pays off to allow for a larger $\eta$ immediately after the restart, and then decrease $\eta$ in subsequent steps. In particular, in our numerical tests below, we set $\eta$ = 1.25, and decrease $\eta$ in every subsequent step by 0.02, until $\eta$ reaches 1.15.
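A sketch of the three restart tests and of the $\eta$ schedule just described is given below; `f_hist`, `x_hist` and `g_hist` are assumed to hold the recent objective values, iterates and gradient norms.

```python
# Sketch of the restart conditions (function, speed, gradient) and the eta schedule.
import numpy as np

def function_restart(f_hist, eta=1.0, d=1):
    """RF: restart when f(x_k) > eta * f(x_{k-d})."""
    return f_hist[-1] > eta * f_hist[-1 - d]

def speed_restart(x_hist, eta=1.0, d=1):
    """RX: restart when ||x_k - x_{k-1}|| < eta * ||x_{k-d} - x_{k-d-1}||."""
    step_now = np.linalg.norm(x_hist[-1] - x_hist[-2])
    step_old = np.linalg.norm(x_hist[-1 - d] - x_hist[-2 - d])
    return step_now < eta * step_old

def gradient_restart(g_hist, eta=1.0, d=1):
    """RG: restart when ||grad f(x_k)|| > eta * ||grad f(x_{k-d})||."""
    return g_hist[-1] > eta * g_hist[-1 - d]

def update_eta(eta, just_restarted, eta_max=1.25, eta_min=1.15, step=0.02):
    """eta schedule for eta > 1: reset to 1.25 right after a restart, then
    decrease by 0.02 per iteration until 1.15 is reached."""
    return eta_max if just_restarted else max(eta_min, eta - step)
```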
Numerical Tests {#sec:expt}
===============
We evaluated our algorithm on a set of synthetic CP test problems that have been carefully designed and used in many papers, and three real-world datasets of different sizes and originating from different applications. All numerical tests were performed in Matlab, using the Tensor Toolbox [@TTB_Software] and the Poblano Toolbox for optimization [@SAND2010-1422]. Matlab code for our methods and tests will be made available on the authors’ webpages, and as an extension to the Poblano Toolbox.
Naming convention for Nesterov-ALS schemes.
-------------------------------------------
We use the following naming conventions for the restarting strategies and momentum weight strategies defined in . The line search Nesterov-ALS scheme is denoted Nesterov-ALS-LS. For the restarted Nesterov-ALS schemes, we append Nesterov-ALS with the abbreviations in to denote the restarting scheme used, and the choice for the momentum weight $\beta_k$.
For example, Nesterov-ALS-RF-SG means using restarting based on function value (RF) and momentum step based on gradient ratio (SG). For most tests we don’t use delay (i.e., $d=1$ in or ), and $\eta$ is usually set to 1 in or . Appending D$n$ or E to the name indicates that a delay $d=n>1$ is used, and that $\eta\ne1$ is used, respectively.
Abbreviation Explanation
-------------- ----------------------------------
RF function restarting as in
RG gradient restarting as in
RX speed restarting as in
SN Nesterov step as in
SG gradient ratio step as in
S1 constant step 1
D$n$ delay $n>1$ in restart condition
E $\eta\ne1$ in restart condition
: Abbreviations used in naming convention for restarted Nesterov-ALS variants.[]{data-label="tbl:abbr"}
Baseline algorithms.
--------------------
We compare the new Nesterov-ALS schemes with the recently proposed nonlinear acceleration methods for ALS using GMRES [@sterck2012nonlinear], NCG [@sterck2015nonlinearly], and LBFGS [@sterck2018nonlinearly]. These methods will be denoted in the result figures as GMRES-ALS, NCG-ALS, and LBFGS-ALS, respectively.
Synthetic test problems and results. {#subsec:synthetic}
------------------------------------
We use the synthetic tensor test problems considered by [@acar2011scalable] and used in many papers as a standard benchmark test problem for CP decomposition [@sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly]. As described in more detail in [@sterck2012nonlinear], we generate six classes of random three-way tensors with highly collinear columns in the factor matrices. We add two types of random noise to the tensors generated from the factor matrices (homoscedastic and heteroscedastic noise, see [@acar2011scalable; @sterck2012nonlinear]), and then compute low-rank CP decompositions of the resulting tensors.
Due to the high collinearity, the problems are ill-conditioned and ALS is slow to converge [@acar2011scalable]. All tensors have equal size $s=I_1=I_2=I_3$ in the three tensor dimensions. The six classes differ in their choice of tensor sizes ($s=20, \ 50, \ 100$), decomposition rank ($R=3, \ 5$), and noise parameters $l_1$ and $l_2$ ($l_1=0,\ 1$ and $l_2=0,\ 1$), in combinations that are specified in Table \[table:test\_problems\] in the Supplementary Materials.
To compare how various methods perform on these synthetic problems, we generate 10 random tensor instances with an associated random initial guess for each of the six problem classes, and run each method on each of the 60 test problems, with a convergence tolerance $tol=10^{-9}$. We then present so-called $\tau$-plot performance profiles (as also used in [@sterck2012nonlinear]) to compare the relative performance of the methods over the test problem set.
*Optimal restarted Nesterov-ALS method*. Our extensive experiments on both the synthetic and real world datasets (as indicated in further tests below and in the supplement) suggest that the optimal restarted Nesterov-ALS method is the one using function restarting (RF) and gradient ratio momentum steps (SG), i.e., Nesterov-ALS-RF-SG. As a comparison, for gradient descent, the study of [@nguyen2018accelerated] suggests that gradient restarting and gradient ratio momentum weights work well.
![Synthetic test problems. $\tau$-plot comparing the optimal restarted algorithm, Nesterov-ALS-RF-SG, with other variants of restarted Nesterov-ALS.[]{data-label="fig:compare-nes-als-variants"}](figs/New_plot2_HardALG1mod3_l){width="\linewidth"}
Fig. \[fig:compare-nes-als-variants\] shows the performance of our optimal Nesterov-ALS-RF-SG method on the synthetic test problems, with an ablation analysis that compares it with those variants obtained by varying one hyper-parameter of Nesterov-ALS-RF-SG at a time. In this $\tau$-plot, we display, for each method, the fraction of the 60 problem runs for which the method execution time is within a factor $\tau$ of the fastest method for that problem. For example, for $\tau=1$, the plot shows the fraction of the 60 problems for which each method is the fastest. For $\tau=2$, the plot shows, for each method, the fraction of the 60 problems for which the method reaches the convergence tolerance in a time within a factor of two of the fastest method for that problem, etc. As such, the area between curves is a measure for the relative performance of the methods.
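For reference, a $\tau$-plot of this kind can be computed from a matrix of measured runtimes as in the following sketch (an illustration, not the plotting code used for the figures).

```python
# Performance profile: times[i, j] is the time method j needed on problem i
# (np.inf if the method did not reach the tolerance).
import numpy as np

def performance_profile(times, taus):
    """Return fractions[t, j]: fraction of problems for which method j is
    within a factor taus[t] of the fastest method on that problem."""
    best = times.min(axis=1, keepdims=True)        # fastest time per problem
    ratios = times / best                          # performance ratios >= 1
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Example with 60 problems, 5 methods and tau ranging from 1 to 8:
# fractions = performance_profile(times, np.linspace(1, 8, 100))
```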
We can see that several variants have performance comparable to Nesterov-ALS-RF-SG, so the method is not very sensitive to the choice of restart mechanism and momentum weight. For these tests, changing the delay parameter has the least effect on the performance. Interestingly, this is followed by changing the momentum weight to the constant value 1, and then by changing function restarting to gradient restarting and speed restarting, respectively. More detailed numerical results comparing Nesterov-ALS-RF-SG with a broader variation of restarted Nesterov-ALS are shown in the supplement, further confirming that Nesterov-ALS-RF-SG generally performs the best among the family of restarted Nesterov-ALS methods for the synthetic test problems.
![Synthetic test problems. $\tau$-plot comparing the optimal restarted algorithm, Nesterov-ALS-RF-SG, with the line search version, Nesterov-ALS-LS, and existing accelerated ALS methods.[]{data-label="fig:compare-acc-methods-synthetic"}](figs/New_Competetive_method_HardALG1mod3_l.pdf){width="\linewidth"}
Fig. \[fig:compare-acc-methods-synthetic\] compares Nesterov-ALS-RF-SG with the line search version Nesterov-ALS-LS, and with several existing accelerated ALS methods that use line search strategies, namely GMRES-ALS, NCG-ALS, and LBFGS-ALS. Nesterov-ALS-RF-SG performs similarly to the best performing existing method, LBFGS-ALS [@sterck2018nonlinearly]. It performs substantially better than Nesterov-ALS-LS, since it avoids the expensive line searches. Nevertheless, Nesterov-ALS-LS is competitive with the existing NGMRES-ALS [@sterck2012nonlinear], and superior to NCG-ALS [@sterck2015nonlinearly].
The Enron dataset and results.
------------------------------
The Enron dataset is a subset of the corporate email communications that were released to the public as part of the 2002 Federal Energy Regulatory Commission (FERC) investigations following the Enron bankruptcy. After various steps of pre-processing as described in [@chi2012tensors], a sender$\times$receiver$\times$month tensor of size 105$\times$105$\times$28 was obtained. We perform rank-10 CP decompositions for Enron. Fig. \[fig:enron\] shows gradient norm convergence for one typical test run, and a $\tau$-plot for 60 runs with different random initial guesses and convergence tolerance $tol=10^{-7}\,\|\nabla f(x_0)\|\approx0.0081$. For this well-conditioned problem (see discussion below), ALS converges fast and does not need acceleration. In fact, due to the acceleration overhead, plain ALS is faster than any of the accelerated methods. This is consistent with results in [@acar2011scalable; @sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly] for well-conditioned problems.
![Enron data. Gradient convergence and $\tau$-plot.[]{data-label="fig:enron"}](figs/New_enron_time_l.pdf "fig:"){width="\linewidth"} ![Enron data. Gradient convergence and $\tau$-plot.[]{data-label="fig:enron"}](figs/New_Enron_Competetive_method_e-7_relative_drop_l.pdf "fig:"){width="\linewidth"}
The claus dataset and results.
------------------------------
The claus dataset is a 5$\times$201$\times$61 tensor consisting of fluorescence measurements of 5 samples containing 3 amino acids, taken for 201 emission wavelengths and 61 excitation wavelengths. Each amino acid corresponds to a rank-one component [@andersen2003practical]. We perform a rank-3 CP decomposition for claus. Fig. \[fig:claus\] shows gradient norm convergence for one test run, and a $\tau$-plot for 60 runs with different random initial guesses and convergence tolerance $tol=10^{-7}\,\|\nabla f(x_0)\|\approx0.1567$. For this medium-conditioned problem (see discussion below), substantial acceleration of ALS can be obtained if high accuracy is required, and Nesterov-ALS-RF-SG performs as well as the best existing methods, but it is much easier to implement.
![Claus data. Gradient convergence and $\tau$-plot.[]{data-label="fig:claus"}](figs/New_claus_time_l.pdf "fig:"){width="\linewidth"} ![Claus data. Gradient convergence and $\tau$-plot.[]{data-label="fig:claus"}](figs/New_Claus_Competetive_method_e-7_relative_drop_l.pdf "fig:"){width="\linewidth"}
The Gas3 dataset and results.
-----------------------------
Gas3 is relatively large and has multiway structure. It is a 71$\times$1000$\times$900 tensor consisting of readings from 71 chemical sensors used for tracking hazardous gases over 1000 time steps [@vergara2013performance]. There were three gases, and 300 experiments were performed for each gas, varying fan speed and room temperature. We perform a rank-5 CP decomposition for Gas3.
Fig. \[fig:gas3\] shows gradient norm convergence for one typical test run, and a $\tau$-plot for 20 runs with different random initial guesses and convergence tolerance $tol=10^{-7}\,\|\nabla f(x_0)\|\approx349.48$. For this highly ill-conditioned problem (see discussion below), ALS converges slowly, and NGMRES-ALS, NCG-ALS and LBFGS-ALS behave erratically. Our newly proposed Nesterov-ALS-RF-SG very substantially outperforms all other methods (not only for the convergence profile shown in the top panel, but for the large majority of the 20 tests with random initial guesses). Nesterov-ALS-RF-SG performs much more robustly for this highly ill-conditioned problem than any of the other accelerated methods, and reaches high accuracy much faster than any other method.
![Gas3 data. Gradient convergence and $\tau$-plot.[]{data-label="fig:gas3"}](figs/New_gas3_time_l.pdf "fig:"){width="\linewidth"} ![Gas3 data. Gradient convergence and $\tau$-plot.[]{data-label="fig:gas3"}](figs/New_Gas3_Competetive_method_e-7_relative_drop_no_norm_l.pdf "fig:"){width="\linewidth"}
Discussion on real-world problems.
----------------------------------
We speculated that our accelerated ALS methods may work best for ill-conditioned problems. To verify this, we computed the condition number of the initial Hessians for the three real-world problems. These were 58,842, 3,094,000, and 119,220,000 for Enron, Claus, and Gas3, respectively. This ordering agrees with the observed advantage of Nesterov-ALS-RF-SG, which grows as the problems become more ill-conditioned.
Conclusion {#sec:conclude}
==========
We have derived Nesterov-ALS methods that are simple and easy to implement as compared to several existing nonlinearly accelerated ALS methods, such as GMRES-ALS, NCG-ALS, and LBFGS-ALS [@sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly]. The optimal variant, using function restarting and gradient ratio momentum weight, is competitive with or superior to ALS and GMRES-ALS, NCG-ALS, and LBFGS-ALS.
Simple nonlinear iterative optimization methods like ALS and coordinate descent (CD) are widely used in a variety of application domains. There is clear potential for extending our approach to accelerating such simple optimization methods for other non-convex problems. A specific example is Tucker tensor decomposition [@kolda2009tensor]. NCG and NGMRES acceleration have been applied to Tucker decomposition in [@Hans_Tucker_decomp], and LBFGS acceleration in [@sterck2018nonlinearly], using a manifold approach to maintain the Tucker orthogonality constraints, and this approach can directly be extended to Nesterov acceleration.
Supplementary materials for “Nesterov Acceleration of Alternating Least Squares for Canonical Tensor Decomposition” {#supplementary-materials-for-nesterov-acceleration-of-alternating-least-squares-for-canonical-tensor-decomposition .unnumbered}
===================================================================================================================
Parameters for synthetic CP test problems {#parameters-for-synthetic-cp-test-problems .unnumbered}
=========================================
Table \[table:test\_problems\] lists the parameters for the standard ill-conditioned synthetic test problems used in the main paper. The test problems are described in [@acar2011scalable], and the specific choices of parameters for the six classes in Table \[table:test\_problems\] correspond to test problems 7-12 in [@sterck2012nonlinear]. All tensors have equal size $s=I_1=I_2=I_3$ in the three tensor dimensions, and have high collinearity $c$. The six classes differ in their choice of tensor sizes ($s$), decomposition rank ($R$), and noise parameters $l_1$ and $l_2$.
problem $s$ $c$ $R$ $l_1$ $l_2$
--------- ----- ----- ----- ------- -------
1 20 0.9 3 0 0
2 20 0.9 5 1 1
3 50 0.9 3 0 0
4 50 0.9 5 1 1
5 100 0.9 3 0 0
6 100 0.9 5 1 1
: List of parameters for synthetic CP test problems.\[table:test\_problems\]
Detailed comparisons for different restarting strategies {#detailed-comparisons-for-different-restarting-strategies .unnumbered}
========================================================
The figures on the following pages show $\tau$-plots for variants of the restarted Nesterov-ALS schemes, for the cases of function restart (RF), gradient restart (RG), and speed restart (RX), applied to the synthetic test problems.
[.8]{} ![image](figs/New_Function_Restart_delay_HardALG1mod3_l.pdf){width="\linewidth"}
[.8]{} ![image](figs/New_Function_Restart_no_delay_HardALG1mod3_l.pdf){width="\linewidth"}
[.8]{} ![image](figs/New_Gradient_Delay_HardALG1mod3_l.pdf){width="\linewidth"}
[.8]{} ![image](figs/New_Gradient_Restart_no_delay_HardALG1mod3_l.pdf){width="\linewidth"}
For each of the restart mechanisms, several of the restarted Nesterov-ALS variants typically outperform ALS, NCG-ALS [@sterck2015nonlinearly] and NGMRES-ALS [@sterck2012nonlinear].
Several of the best-performing restarted Nesterov-ALS variants are also competitive with the best existing nonlinear acceleration method for ALS, LBFGS-ALS [@sterck2018nonlinearly], and they are much easier to implement.
![image](figs/New_X_Restart_no_delay_HardALG1mod3_l.pdf){width=".8\linewidth"}
Among the restart mechanisms tested, function restart substantially outperforms gradient restart and, in particular, speed restart.
The $\tau$-plots confirm that Nesterov-ALS-RF-SG, using function restarting and gradient ratio momentum weight, consistently performs as one of the best methods, making it our recommended choice for ALS acceleration.
[^1]: School of Mathematical Sciences, Monash University, demit3@student.monash.edu
[^2]: School of Mathematics and Physics, University of Queensland, nan.ye@uq.edu.au
[^3]: School of Mathematical Sciences, Monash University, hans.desterck@monash.edu
---
abstract: 'We investigate the microscopic origin of the ferromagnetic and antiferromagnetic spin exchange couplings in the quasi one-dimensional cobalt compound Ca$_3$Co$_2$O$_6$. In particular, we establish a local model which stabilizes a ferromagnetic alignment of the $S=2$ spins on the cobalt sites with trigonal prismatic symmetry, for a sufficiently strong Hund’s rule coupling on the cobalt ions. The exchange is mediated through a $S=0$ cobalt ion at the octahedral sites of the chain structure. We present a strong coupling evaluation of the Heisenberg coupling between the $S=2$ Co spins on a separate chain. The chains are coupled antiferromagnetically through super-superexchange via short O-O bonds.'
author:
- Raymond Frésard
- Christian Laschinger
- Thilo Kopp
- Volker Eyert
title: 'The Origin of Magnetic Interactions in Ca$_3$Co$_2$O$_6$'
---
Recently there has been renewed interest in systems exhibiting magnetization steps. In classical systems such as CsCoBr$_3$ one single plateau is typically observed in the magnetization versus field curve at one third of the magnetization at saturation.[@Hida94] This phenomenon attracted considerable attention, and Oshikawa, Yamanaka and Affleck demonstrated that Heisenberg antiferromagnetic chains exhibit such magnetization plateaus when embedded in a magnetic field.[@Oshikawa97] These steps are expected when $N_c (S-m)$ is an integer, where $N_c$ is the number of sites in the magnetic unit cell, $S$ the spin quantum number, and $m$ the average magnetization per spin, which we shall refer to as the OYA criterion. The steps can be stable when chains are coupled, for instance in a ladder geometry. In that case the magnetic frustration is an important ingredient to their stability.[@Mila98] Plateaus according to the OYA criterion are also anticipated for general configurations, provided gapless excitations do not destabilize them.[@Oshikawa00] Indeed several systems exhibiting magnetization steps are now known;[@Shiramura98; @Narumi98] they all obey the OYA criterion, they are usually far from exhausting all the possible $m$ values, they all are frustrated systems, and they all can be described by an antiferromagnetic Heisenberg model. Related behavior has been recently found in other systems. For example, up to five plateaus in the magnetization vs. field curve have been observed in Ca$_3$Co$_2$O$_6$ at low temperature[@Aasland97; @Kageyama97; @Maignan00]. However there is to date no microscopic explanation to this phenomenon, even though the location of the plateaus is in agreement with the OYA criterion.
Ca$_3$Co$_2$O$_6$ belongs to the wide family of compounds A’$_3$ABO$_6$, and its structure belongs to the space group R3c. It consists of infinite chains formed by alternating face sharing AO$_6$ trigonal prisms and BO$_6$ octahedra — where Co atoms occupy both A and B sites. Each chain is surrounded by six chains separated by Ca atoms. As a result a Co ion has two neighboring Co ions on the same chain, at a distance of $2.59$ Å, and twelve Co neighbors on the neighboring chains at distances $7.53$ Å (cf. Fig. \[Fig:plane\]).[@Fjellvag96] Concerning the magnetic structure, the experiment points toward a ferromagnetic ordering of the magnetic Co ions along the chains, together with antiferromagnetic correlations in the buckling a-b plane.[@Aasland97] The transition into the ordered state is reflected by a cusp-like singularity in the specific heat at 25 K,[@Hardy03] — at the temperature where a strong increase of the magnetic susceptibility is observed. Here we note that it is particularly intriguing to find magnetization steps in a system where the dominant interaction is ferromagnetic.
In order to determine the effective magnetic Hamiltonian of a particular compound one typically uses the Kanamori-Goodenough-Anderson (KGA) rules[@Goodenough]. Knowledge of the ionic configuration of each ion allows one to estimate the various magnetic couplings. When applying this program to Ca$_3$Co$_2$O$_6$ one faces a series of difficulties, specifically when one tries to reconcile the neutron scattering measurements, which indicate that every second Co ion is non-magnetic. Even the assumption that every other Co ion is in a high spin state does not settle the intricacies related to the magnetic properties; one still has to address issues such as: i) what are the ionization degrees of the Co ions? ii) how is an electron transferred from one cobalt ion to a second? iii) which of the magnetic Co ions are magnetically coupled? iv) which mechanism generates a ferromagnetic coupling along the chains?
These questions are only partially resolved by ab initio calculations. In particular, one obtains that both Co ions are in 3+ configurations.[@Whangbo03] Moreover both Co-O and direct Co-Co hybridizations are unusually large, and low spin and high spin configurations for the Co ions along the chains alternate.[@Eyert03]
Our publication addresses the magnetic couplings, and in particular the microscopic origin of the ferromagnetic coupling of two Co ions through a non-magnetic Co ion. In view of the plethoric variety of iso-structural compounds,[@Stitzer01] the presented mechanism is expected to apply to many of these systems. We now derive the magnetic inter-Co coupling for Ca$_3$Co$_2$O$_6$ from microscopic considerations. The high-spin low-spin scenario confronts us with the question of how a ferromagnetic coupling can establish itself, taking into account that the high spin Co ions are separated by over 5 Å, linked via a non-magnetic Co and several oxygens.
Let us first focus on the Co-atoms in a single Co-O chain of Ca$_3$Co$_2$O$_6$. As mentioned above the surrounding oxygens form two different environments in an alternating pattern. We denote the Co ion in the center of the oxygen-octahedron Co1, and the Co ion in the trigonal prisms Co2. The variation in the oxygen-environment leads to three important effects. First, there is a difference in the strength of the crystal field splitting, being larger in the octahedral environment. As a result Co1 is in the low spin state and Co2 in the high spin state. Second, the local energy levels are in a different sequence. For the octahedral environment we find the familiar $t_{2g}$–$e_g$ splitting, provided the axes of the local reference frame point towards the surrounding oxygens. The trigonal prismatic environment accounts for a different set of energy levels. For this local symmetry one expects a level scheme with $d_{3z^2-r^2}$ as lowest level, followed by two twofold degenerate pairs $d_{xy}$, $d_{x^2-y^2}$ and $d_{xz}$, $d_{yz}$. However, our LDA calculations[@Eyert03] show that the $d_{3z^2-r^2}$ level is actually slightly above the first pair of levels. Having clarified the sequence of the energy levels, we now turn to the microscopic processes which link the Co ions. Two mechanisms may be competing: either the coupling involves the intermediate oxygens, or direct Co-Co overlap is more important. Relying on electronic structure calculations, we may safely assume that the direct Co-Co overlap dominates.[@Eyert03] The identification of the contributing orbitals is more involved. Following Slater and Koster[@Slater54] one finds that only the $3z^2$-$r^2$ orbitals along the chains have significant overlap. However, we still have to relate the Koster-Slater coefficients and the coefficients for the rotated frame since the natural reference frames for Co1 and Co2 differ. On the Co2 atoms with the triangular prismatic environment the $z$-axis is clearly defined along the chain direction, and we choose the $x$ direction to point toward one oxygen. This defines a reference frame $S$. The $x$ and $y$ directions are arbitrary and irrelevant to our considerations. The octahedral environment surrounding the Co1 atoms defines the natural coordinate system, which we call $S'$. By rotating $S'$ onto $S$ one obtains the $3z^2$-$r^2$ orbital in the reference frame $S$ as an equally weighted sum of $x'y'$, $x'z'$, $y'z'$ orbitals in $S'$. The above observation that the only significant overlap is due to the $3z^2$-$r^2$ orbitals on both Co ions now translates into an overlap of the $3z^2$-$r^2$ orbital on high spin cobalt with all $t_{2g}$ orbitals on low spin cobalt.
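This frame-rotation statement can be checked symbolically. The short sketch below expresses the chain axis $z$ of $S$ along the $(1,1,1)$ direction of the octahedral frame $S'$ (an assumption consistent with the geometry described above) and expands the angular part of $d_{3z^2-r^2}$, which indeed reduces, up to a common normalization factor, to an equal-weight combination of $x'y'$, $x'z'$ and $y'z'$.

```python
# Symbolic check with sympy: d_{3z^2-r^2} in the chain frame S, written in the
# octahedral coordinates of S', is an equal-weight sum of x'y', x'z', y'z'.
import sympy as sp

xp, yp, zp = sp.symbols("x' y' z'")
r2 = xp**2 + yp**2 + zp**2
z_S = (xp + yp + zp) / sp.sqrt(3)          # chain axis expressed in S'
d_3z2_r2 = 3*z_S**2 - r2                   # angular part of d_{3z^2-r^2} in S
print(sp.expand(d_3z2_r2))                 # -> 2*x'*y' + 2*x'*z' + 2*y'*z'
```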
![Typical hopping paths for a) ferromagnetic and b) antiferromagnetic ordering. The displayed ferromagnetic path is the only one for ferromagnetic ordering and has the highest multiplicity of all, ferromagnetic and antiferromagnetic. There are similar paths for antiferromagnetic ordering but with a Hund’s rule penalty and lower multiplicity. The path in (b) is unique for the antiferromagnetic case and has low energy but also low multiplicity. \[Fig:levelscheme\]](fig_1.eps){width=".47\textwidth"}
We proceed with a strong coupling expansion to identify the magnetic coupling along the chain. This amounts to determine the difference in energy, between the ferromagnetic and antiferromagnetic configurations, to fourth order in the hopping, since this is the leading order to the magnetic interaction between the high spin Co ions. As explained above we only have to take into account the $3z^2$-$r^2$ level on Co2 and the $t_{2g}$ levels on Co1. In an ionic picture all $t_{2g}$ levels on Co1 are filled while the $d_{3z^2-r^2}$ level on Co2 is half-filled and we therefore consider hopping processes from the former to the latter. In the ferromagnetic configuration we include processes where two down spin electrons hop from Co1 to both neighboring Co2 and back again as displayed in Fig. \[Fig:levelscheme\]a. There are in total $3\times 2\times 2\times 2\times 2 = 48$ such processes. The intermediate spin state for Co1 is in agreement with Hund’s rule. The energy gain per path is given by: $${ E_f=\frac{t^4}{E_0^2} \;
\left(3U-5J_{\rm{H}}+4 E_{\rm loc}(\Delta_{\rm{cf}},J_{\rm{H}},-1)\right)^{-1}}$$ with $${ E_0 = U-J_{\rm{H}}+2E_{\rm loc}(\Delta_{\rm{cf}},J_{\rm{H}},-2)}$$ $$\Delta_{\rm{cf}} = \Delta_{\rm{Co1}} + \frac{4}{10} \Delta_{\rm{Co2}}$$ and $${ E_{\rm loc}(\Delta_{\rm{cf}},J_{\rm{H}},l)}=
{\frac{\Delta_{\rm{cf}}J_{\rm{H}}^2}{(\Delta_{\rm{cf}}-
\frac{1}{2}lJ_{\rm{H}})(2\Delta_{\rm{cf}}+3J_{\rm{H}})}}$$ where $\Delta_{\rm{Co1}}$ and $\Delta_{\rm{Co2}}$ denote the crystal field splittings on Co1 and Co2, respectively. The Hund’s coupling is $J_{\rm{H}}$, assumed to be identical on both, Co1 and Co2, and $U$ denotes the local Coulomb repulsion. There are no further paths in this configuration, besides the one which twice iterates second order processes. In the antiferromagnetic case the situation is slightly more involved. Here three different classes of paths have to be distinguished. The first class, denoted $a1$ in the following, consists of hopping events of one up spin and one down spin electron from the same Co1 level. (There are $3\times
2\times 2 = 12$ such paths). The second class ($a2$) consists of hopping events of one down spin and one up spin electron from different Co1 levels (There are $3\times
2\times 2 \times 2 = 24$ such paths). The third class ($a3$), shown in Fig. \[Fig:levelscheme\]b, consists of hopping processes where one electron is hopping from Co1 to Co2 and then another electron is hopping from the other Co2 to the same Co1 and back again (There are $3\times 2= 6 $ such paths). In total this sums up to 42 paths in the antiferromagnetic configuration. Consequently, we have more ferromagnetic than antiferromagnetic exchange paths. However the energy gain depends on the path. For the classes $a1$ and $a2$, the intermediate Co1 state violates the Hund’s rule, and we identify an energy gain per path given by: $$\begin{aligned}
\nonumber
E_{a1}&=&\frac{t^4}{E_0^2}\left(3U-5J_{\rm{H}}+(4-\frac{6J_{\rm{H}}}{\Delta_{\rm{cf}}})
E_{\rm loc}(\Delta_{\rm{cf}},J_{\rm{H}},1)\right)^{-1}\\
E_{a2}&=&\frac{t^4}{E_0^2}\left(3U-2J_{\rm{H}}-\rm{F}(\Delta_{\rm{cf}},J_{\rm{H}})\right)^{-1}\end{aligned}$$ Here F is a positive function which is smaller than $J_{\rm{H}}$. The expression $3U-2J_{\rm{H}}-F$ is the lowest eigenvalue of $\langle i|H_{\rm Co}|j \rangle$ where the states $i$ and $j$ are all possible states on Co1 consistent with two of the $d$-orbitals filled and three empty. For the class $a3$ one observes that one does not need to invoke a Co1 ion with four electrons as an intermediate state, in contrast to all other processes we considered so far. We find the energy gain as: $$E_{a3}=\frac{t^4}{E_0^2\left(U+2J_{\rm H}\right)}\\$$ Altogether we obtain the difference in energy gain between the ferromagnetic and the antiferromagnetic configurations as: $$E^{\rm F} - E^{\rm AF}=48 E_f-24E_{a1}-12E_{a2}-6E_{a3}$$ The dependence of $E^{\rm F} - E^{\rm AF}$ on $J_{\rm H}$ for different values of the local interaction $U$ is shown in Fig. \[Fig:stability\]. Using $J_{\rm H}=0.6~\rm{eV}$,[@Laschinger03] $U=5.3~\rm{eV}$,[@Sawatzky91] $t=1.5~\rm{eV}$,[@Eyert03] $\Delta_{\rm{Co1}}=2.5~\rm{eV}$ and $\Delta_{\rm{Co2}}=1.5~\rm{eV}$,[@Eyert03] we obtain an estimate for the Heisenberg exchange coupling (for the Co2 spin $S=2$): $$J^{\rm F}=(E^{\rm F} - E^{\rm AF})/2S^2\approx 2 \rm{meV}$$ which is in reasonable agreement with the experimental transition temperature of 25 K.
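The estimate can be checked numerically; the sketch below (not the authors' code) evaluates $E_f$, $E_{a1}$, $E_{a2}$, $E_{a3}$ and $J^{\rm F}$ from the displayed formulas with the quoted parameters, treating the unspecified function $F$ in $E_{a2}$ as a free value between $0$ and $J_{\rm H}$.

```python
# Strong-coupling estimate of the intrachain ferromagnetic exchange J^F (in eV).
import numpy as np

def e_loc(d_cf, j_h, l):
    # E_loc(Delta_cf, J_H, l) as defined in the text
    return d_cf * j_h**2 / ((d_cf - 0.5 * l * j_h) * (2 * d_cf + 3 * j_h))

def exchange_JF(j_h=0.6, u=5.3, t=1.5, d_co1=2.5, d_co2=1.5, f_a2=0.3, s=2):
    d_cf = d_co1 + 0.4 * d_co2                       # effective crystal field splitting
    e0 = u - j_h + 2 * e_loc(d_cf, j_h, -2)
    pref = t**4 / e0**2
    e_f  = pref / (3*u - 5*j_h + 4 * e_loc(d_cf, j_h, -1))
    e_a1 = pref / (3*u - 5*j_h + (4 - 6*j_h/d_cf) * e_loc(d_cf, j_h, 1))
    e_a2 = pref / (3*u - 2*j_h - f_a2)               # F = f_a2, with 0 < F < J_H
    e_a3 = pref / (u + 2*j_h)
    # multiplicities as in the displayed energy-difference formula
    dE = 48*e_f - 24*e_a1 - 12*e_a2 - 6*e_a3
    return dE / (2 * s**2)                           # Heisenberg J^F for S = 2

print(1e3 * exchange_JF())   # about 2 meV for F in (0, J_H), as quoted in the text
```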
In this context one should realize that a one-dimensional chain does not support a true phase transition into the magnetic state. However, as the length $L$ of the chains is finite, a crossover into the ferromagnetic state may be observed when the correlation length is approximately $L$.[@chain]
![Energy gain $E^{\rm F} - E^{\rm AF}$ of a nearest neighbor Co2 ferromagnetic alignment with respect to an antiferromagnetic orientation as a function of the Hund’s coupling $J_{\rm H}$, for a typical parameter set of Hubbard $U$. The grey shaded area indicates the interval of $J_{\rm H}$ for which a Co2-high-spin Co1-low-spin configuration can be stabilized for the considered crystal field splittings.[@Laschinger03] []{data-label="Fig:stability"}](fig_2.eps){width="48.00000%"}
To emphasize the importance of the chain geometry we now briefly discuss the hypothetical case where the $z$-axis of the octahedra corresponds to that of the prism. In this geometry there is only one orbital on each Co ion which contributes to the hopping processes. In this situation the process favoring ferromagnetism shown in Fig. \[Fig:levelscheme\]a does not exist, in contrast to the process $a3$ shown in Fig. \[Fig:levelscheme\]b, and the resulting coupling is therefore antiferromagnetic.
In the large class of known isostructural compounds[@Stitzer01] the non-magnetic ion is not necessarily a Co ion. If the non-magnetic ion is in a $3d^2$ (or $4d^2$) configuration, the above argument applies, and the coupling is antiferromagnetic. If the configuration is $3d^4$, all the discussed electronic processes contribute, however with different multiplicities. Moreover, additional paths have to be considered for the antiferromagnetic case. They represent exchange processes through an empty orbital on the non-magnetic ion. As a result, the ferromagnetic scenario has fewer paths than the antiferromagnetic, and the coupling becomes antiferromagnetic. Correspondingly, a ferromagnetic interaction can only occur when all three orbitals on the nonmagnetic ion participate in the exchange process. Obviously the situation we consider differs from the standard 180 degree superexchange mechanism in many respects.
Turning to the [*interchain*]{} magnetic interaction, one first notices that each magnetic Co ion has twelve neighboring Co ions on different chains. However, as displayed in Fig. \[Fig:plane\], there is an oxygen bridge to only six neighbors, one per chain. Here the coupling $J^{\rm AF}$ results from the super-superexchange mechanism (with exchange via two oxygen sites), and it is antiferromagnetic. Since the Co-O hybridization is unusually large in this system, we expect the interchain magnetic coupling to be sufficiently strong to account for the observed antiferromagnetic correlations.
From our previous considerations we introduce the minimal magnetic Hamiltonian: $$\begin{aligned}
\label{Eq:Hamiltonian}
H &=& \sum_{i,j} \left(J^{\rm F}_{i,j} {\vec S}_i \cdot {\vec S}_j + J^{\rm AF}_{i,j}
{\vec S}_i \cdot {\vec S}_j \right) -D \sum_{i} S_{z,i}^2 \\
{\rm with}&& J^{\rm F}_{i,j} = \left\{ \begin{array}{ll}
J^{\rm F}& \mbox{if ${\vec j} - \vec{i} = \pm 2 {\vec d}$}\\
0& \mbox{otherwise}
\end{array} \right. \nonumber \\
{\rm and} \nonumber\\
J^{\rm AF}_{i,j}\! &=& \!\! \left\{ \begin{array}{ll}
J^{\rm AF}& \mbox{if ${\vec j} - \vec{i} = \pm ( {\vec a}+{\vec d}), \pm ( {\vec
b}+{\vec d}), \pm ( {\vec c}+{\vec d}) $}\nonumber \\
0& \mbox{otherwise.}
\end{array} \right. \\ \nonumber\end{aligned}$$ Here we use the site vectors ${\vec a} = a (-1/2,\sqrt{3}/2,c/(12 a))$, ${\vec b} = a
(-1/2,-\sqrt{3}/2,c/(12 a))$, ${\vec c} = a (1,0,c/(12 a))$, and ${\vec d} = {\vec a}+{\vec b}+{\vec c}$ where $a=9.06$ Å and $c=10.37$ Å are the lattice constants of the hexagonal unit cell. The Hamiltonian, Eq. (\[Eq:Hamiltonian\]), also includes a phenomenological contribution $D S_{z}^2$ which accounts for the anisotropy observed, for example, in the magnetic susceptibility.[@Kageyama97a; @Maignan00]
![ Projection of Ca$_3$Co$_2$O$_6$ in the (100) plane, with those oxygen atoms which are closest to the plane. Shadows indicate the shortest O-O bonds, along which super-superexchange processes take place. Yellow (red) large spheres denote Co1 (Co2) atoms, dark (light) small blue spheres oxygen atoms located above (below) the Co plane. \[Fig:plane\]](fig_3.eps){width="49.00000%"}
The stability of the magnetization steps results from the magnetic frustration which is introduced through the antiferromagnetic interchain coupling. Indeed, the lattice structure suggests that this magnetic system is highly frustrated, since the chains are arranged on a triangular lattice. However investigating the Hamiltonian (\[Eq:Hamiltonian\]) reveals that the microscopic mechanism leading to frustration is more complex. It is visualized when we consider a closed path $TLRT'T$ — where the sites $T$ and $T'$ are next nearest-neighbor Co2 sites on the same chain and the sequence of sites $TLR$ is located on a triangle of nearest neighbor chains. One advances from $T$ to $L$ to $R$ to $T'$ on a helical path[@comment] formed by the oxygen bridges from Fig. \[Fig:plane\]. Since the structure imposes $T$ and $T'$ to be next nearest-neighbors, the frustration occurs independently of the sign of the intrachain coupling.
In summary we established the magnetic interactions in an effective magnetic Hamiltonian for Ca$_3$Co$_2$O$_6$. It is a spin-2 Hamiltonian, with antiferromagnetic interchain coupling, and ferromagnetic intrachain interactions. The latter is obtained from the evaluation of all spin exchange paths between two high-spin Co2 sites through an intermediary low-spin Co1 site. This mechanism is particular to the geometry of the system as is the microscopic mechanism which leads to magnetic frustration. We expect that the discussed microscopic mechanisms also apply to other isostructural compounds, such as Ca$_3$CoRhO$_6$ and Ca$_3$CoIrO$_6$.
We are grateful to A. Maignan, C. Martin, Ch. Simon, C. Michel, A. Guesdon, S. Boudin and V. Hardy for useful discussions. C. Laschinger is supported by a Marie Curie fellowship of the European Community program under number HPMT2000-141. The project is supported by DFG through SFB 484 and by BMBF (13N6918A).
F. Hida, J. Phys. Soc. Jpn. [**63**]{}, 2359 (1994).
M. Oshikawa [*et al.*]{}, Phys. Rev. Lett. [**78**]{}, 1984 (1997).
F. Mila, Eur. Phys. J. B [**6**]{}, 201 (1998).
M. Oshikawa, Phys. Rev. Lett. [**84**]{}, 1535 (2000).
Y. Narumi [*et al.*]{}, Physica B [**246-247**]{}, 509 (1998).
W. Shiramura [*et al.*]{}, J. Phys. Soc. Jpn. [**67**]{}, 1548 (1998).
S. Aasland [*et al.*]{}, Solid State Commun. [**101**]{}, 187 (1997).
H. Kageyama [*et al.*]{}, J. Phys. Soc. Jpn. [**66**]{}, 1607 (1997).
A. Maignan [*et al.*]{}, Eur. Phys. J. B [**15**]{}, 657 (2000).
H. Fjellv[å]{}g [*et al.*]{}, J. Sol. State Chem. [**124**]{}, 190 (1996).
V. Hardy [*et al.*]{}, Phys. Rev. B [**68**]{}, 014424 (2003).
J. B. Goodenough, [*Magnetism and the Chemical Bond*]{} (Interscience Publishers, John Wiley & Sons, New York, 1963)
M. H. Whangbo [*et al.*]{},
Solid State Commun. [**125**]{}, 413 (2003).
V. Eyert [*et al.*]{}, Preprint, unpublished (2003).
K. E. Stitzer [*et al.*]{}, Opin. Solid State Mater. Sci. [**5**]{}, 535 (2001).
J. C. Slater and G. F. Koster, Phys. Rev. [**94**]{}, 1498 (1954).
C. Laschinger [*et al.*]{}, J. Magn. and Magnetic Materials, (in press, 2004).
J. van Elp [*et al.*]{}, Phys. Rev. B [**44**]{}, 6090 (1991).
Given the strong magnetic anisotropy of Ca$_3$Co$_2$O$_6$,[@Maignan00] which is phenomenologically included in Eq. (\[Eq:Hamiltonian\]), we expect that one can capture some physics of the system in the Ising limit. In that case, for a typical chain length of the order of one hundred sites, the correlation length extends to the system size for $T\simeq 0.8 J$ which is to be interpreted as the crossover temperature to the ferromagnetic state. Correspondingly, the spin susceptibility peaks at about $T/J\simeq0.8-1.2$ which we verified in exact diagonalization. Even with chain lengths of 10,000 Co sites, the crossover temperature is still in the given range.
H. Kageyama [*et al.*]{}, J. Phys. Soc. Jpn. [**66**]{}, 3996 (1997).
The positions of the considered sites are: $T$ is an arbitrary Co2 site on any chain, $L=T+{\vec b}+{\vec d}$, $R=L+{\vec c}+{\vec d}$ and $T'=R+{\vec a}+{\vec d}$.
---
abstract: |
The magnetic permeability of a ferrite is an important factor in designing devices such as inductors, transformers, and microwave absorbing materials among others. Due to this, it is advisable to study the magnetic permeability of a ferrite as a function of frequency.
When an excitation corresponding to a harmonic magnetic field **H** is applied to the system, the system responds with a magnetic flux density **B**; the relation between these two vectors can be expressed as **B** = $\mu(\omega)$ **H**, where $\mu$ is the magnetic permeability.
In this paper, ferrites were considered linear, homogeneous, and isotropic materials. A magnetic permeability model was applied to NiZn ferrites doped with Yttrium.
The parameters of the model were adjusted using the Genetic Algorithm. In the computer science field of artificial intelligence, Genetic Algorithms and Machine Learning rely upon nature’s bounty for both inspiration and mechanisms. Genetic Algorithms are probabilistic search procedures which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover.
For the numerical fitting, a nonlinear least squares method is usually used; this algorithm is calculus-based and starts from an initial set of variable values. This approach is mathematically elegant compared to exhaustive or random searches but tends to get stuck easily in local minima. On the other hand, random methods use probabilistic calculations to find variable sets. They tend to be slower but have greater success at finding the global minimum regardless of the initial values of the variables.
author:
- 'Silvina Boggi, Adrian C. Razzitte, Gustavo Fano'
title: Numerical response of the magnetic permeability as a function of the frequency of NiZn ferrites using a Genetic Algorithm
---
Magnetic permeability model
===========================
Ferrite materials have been widely used in various electronic devices such as inductors, transformers, and electromagnetic wave absorbers in the relatively high-frequency region up to a few hundred MHz.
The electromagnetic theory can be used to describe the macroscopic properties of matter. The electromagnetic fields may be characterized by four vectors: electric field **E**, magnetic flux density **B** , electric flux density **D**, and magnetic field **H**, which at ordinary points satisfy Maxwell’s equations.
The ferrite media under study can be considered linear, homogeneous, and isotropic. The relation between the vectors **B** and **H** can be expressed as **B** = $\mu(\omega)$**H**, where $\mu$ is the magnetic permeability of the material.
Another important parameter for magnetic materials is magnetic susceptibility $\chi$ which relates the magnetization vector M to the magnetic field vector **H** by the relationship: **M** =$\chi(\omega)$ **H** .
Magnetic permeability $\mu$ and magnetic susceptibility $\chi$ are related by the formula: $\mu=1 + \chi $.
Magnetic materials in sinusoidal fields have, in fact, magnetic losses, and this can be expressed by taking $\mu$ as a complex parameter: $\mu=\mu' + j \mu"$ [@VonHippel].
In the frequency range from RF to microwaves, the complex permeability spectra of the ferrites can be characterized by two different magnetization mechanisms: domain wall motion and gyromagnetic spin rotation.
Domain wall motion contribution to susceptibility can be studied through an equation of motion in which pressure is proportional to the magnetic field [@greiner].
Assuming that the magnetic field has the harmonic excitation $H= H_{0} e^{j\omega t}$, the contribution of the domain wall motion to the susceptibility $\chi_{d}$ is:
$$\label{eq:chid}
\chi_{d}=\frac{\omega_{d}^{2}\;\chi_{d0}}{\omega_{d}^{2}-\omega^{2}-j\omega\beta}$$
Here, $\chi_{d}$ is the magnetic susceptibility for the domain wall contribution, $\omega_{d}$ is the resonance frequency of the domain wall contribution, $\chi_{d0}$ is the static magnetic susceptibility, $\beta$ is the damping factor, and $\omega$ is the frequency of the external magnetic field.
Gyromagnetic spin contribution to magnetic susceptibility can be studied through a magnetodynamic equation [@sohoo][@wohlfarth].
The magnetic susceptibility $\chi_{s}$ can be expressed as:
$$
\chi_{s}=\frac{\left(\omega_{s}-j\omega\alpha\right)\omega_{s}\chi_{s0}}{\left(\omega_{s}-j\omega\alpha\right)^{2}-\omega^{2}},$$
Here, $\chi_{s}$ is the magnetic susceptibility for the gyromagnetic spin contribution, $\omega_{s}$ is the resonance frequency of the spin contribution, $\chi_{s0}$ is the static magnetic susceptibility, $\alpha$ is the damping factor, and $\omega$ is the frequency of the external magnetic field.
The total magnetic permeability is then [@PhysicaB]:
$$\label{eq:modelo}
\mu=1+ \chi_{d}+\chi_{s}=1+\frac{\omega_{d}^{2}\;\chi_{d0}}{\omega_{d}^{2}-\omega^{2}-j\omega\beta}+\frac{\left(\omega_{s}-j\omega\alpha\right)\omega_{s}\chi_{s0}}{\left(\omega_{s}-j\omega\alpha\right)^{2}-\omega^{2}}$$
Separating the real and the imaginary parts of equation (\[eq:modelo\]) we get:
$$\label{mureal}
\mu'\left(\omega\right)=1+\frac{\omega_{d}^{2}\;\chi_{d0}\left(\omega_{d}^{2}-\omega^{2}\right)}{\left(\omega_{d}^{2}-\omega^{2}\right)^{2}+\omega^{2}\beta^{2}}+\frac{\omega_{s}^{2}\;\chi_{s0}\left(\omega_{s}^{2}-\omega^{2}+\omega^{2}\alpha^{2}\right)}{\left(\omega_{s}^{2}-\omega^{2}\left(1+\alpha^{2}\right)\right)^{2}+4\omega^{2}\omega_{s}^{2}\alpha^{2}}$$
$$\label{muimag}
\mu"\left(\omega\right)=\frac{\omega_{d}^{2}\;\chi_{d0}\;\omega\;\beta}{\left(\omega_{d}^{2}-\omega^{2}\right)^{2}+\omega^{2}\beta^{2}}+\frac{\omega_{s}\;\chi_{s0}\;\omega\;\alpha\left(\omega_{s}^{2}+\omega^{2}\left(1+\alpha^{2}\right)\right)}{\left(\omega_{s}^{2}-\omega^{2}\left(1+\alpha^{2}\right)\right)^{2}+4\omega^{2}\omega_{s}^{2}\alpha^{2}},$$
Magnetic losses, represented by the imaginary part of the magnetic permeability, can be extremely small; however, they are always present unless we consider vacuum [@landau]. From a physics point of view, the existing relationship between $\mu'$ and $\mu"$ reflects that the mechanisms of energy storage and dissipation are two aspects of the same phenomenon [@boggi].
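As an illustration, the model can be evaluated in complex form, so that $\mu'$ and $\mu''$ of equations (\[mureal\]) and (\[muimag\]) are obtained as the real and imaginary parts; the frequency grid and the sample parameters below (taken from the NiZnY 0.01 row of Table \[tab:tabla1\] further on) are only meant as an example, following the paper's convention that the fitted $\omega_{d}$, $\omega_{s}$ are quoted in Hz.

```python
# Sketch of the permeability model mu(omega) = 1 + chi_d(omega) + chi_s(omega).
import numpy as np

def permeability(omega, chi_d0, omega_d, beta, chi_s0, omega_s, alpha):
    """Complex permeability; mu.real and mu.imag give mu' and mu''."""
    chi_d = omega_d**2 * chi_d0 / (omega_d**2 - omega**2 - 1j * omega * beta)
    chi_s = ((omega_s - 1j * omega * alpha) * omega_s * chi_s0
             / ((omega_s - 1j * omega * alpha)**2 - omega**2))
    return 1.0 + chi_d + chi_s

# Example: NiZnY 0.01 parameters over the measured 1 MHz - 1 GHz range
w = np.logspace(6, 9, 400)
mu = permeability(w, chi_d0=22.05, omega_d=1262e6, beta=1966e7,
                  chi_s0=4.48, omega_s=1989e6, alpha=1.8967)
mu_real, mu_imag = mu.real, mu.imag
```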
Genetic Algorithms
==================
Genetic Algorithms (GA) are probabilistic search procedures which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover.
A GA allows a population composed of many individuals to evolve according to selection rules designed to maximize “fitness” or minimize a “cost function”.
A path through the components of the GA is shown as a flowchart in Figure (\[fig:diagramaflujo\]).
![Flowchart of a Genetic Algorithm.[]{data-label="fig:diagramaflujo"}](diagramaf.eps){width="70.00000%"}
Selecting the Variables and the Cost Function
---------------------------------------------
A cost function generates an output from a set of input variables (a chromosome). The objective is to modify the output in some desirable fashion by finding the appropriate values for the input variables. The cost function in this work is the difference between the experimental value of the permeability and the value calculated using the parameters obtained by the genetic algorithm.
To begin, the GA randomly generates an initial population of chromosomes. This population is represented by a matrix in which each row is a chromosome containing the variables to optimize, in this work the parameters of the permeability model. [@1]
Natural Selection
------------------
Survival of the fittest translates into discarding the chromosomes with the highest cost. First, the costs and associated chromosomes are ranked from lowest cost to highest cost. Then, only the best are selected to continue, while the rest are deleted. The selection rate is the fraction of chromosomes that survives for the next step of mating.
Select mates
-------------
Now two chromosomes are selected from the surviving set to produce two new offspring which contain traits from each parent. Chromosomes with lower cost are more likely to be selected from the chromosomes that survive natural selection. Offspring are created to replace the discarded chromosomes.
Mating
------
The simplest methods choose one or more points in the chromosome to mark as the crossover points. Then the variables between these points are merely swapped between the two parents. Crossover points are randomly selected.
Mutation
--------
If care is not taken, the GA can converge too quickly into one region of a local minimum of the cost function rather than a global minimum. To avoid this problem of overly fast convergence, we force the routine to explore other areas of the cost surface by randomly introducing changes, or mutations, in some of the variables.
The Next Generation
--------------------
The process described is iterated until an acceptable solution is found. The individuals of the new generation (selected, crossed and mutated) repeat the whole process until a termination criterion is reached. In this case, we consider a maximum number of iterations or a predefined acceptable solution (whichever comes first).
Results and discussion
=======================
$Ni_{0.5}Zn_{0.5}Fe_{2-x}Y_{x}O_{4}$ samples were prepared via the sol-gel method with x=0.01, 0.02, and 0.05. The complex permeability of the samples was measured with an HP4251 material analyzer in the range of 1 MHz to 1 GHz [@silvia].
The experimental data of the magnetic permeability have been used for fitting the parameters of the model [@PhysicaB]. Firstly, we fitted the magnetic losses based on equation (\[muimag\]) by the Genetic Algorithm method and obtained the six fitting parameters. We then substituted these six parameters into equation (\[mureal\]) to calculate the real part of the permeability.
The variables of the problem to adjust were the six parameters of the model: $\chi_{d0}$, $\chi_{s0}$, $\omega_{d}$, $\omega_{s}$, $\beta$ and $\alpha$.
The magnetic losses are given by the functional relationship: $$
\mu''_{\rm fitted}= f(\omega,\chi_{d0},\chi_{s0},\omega_{d},\omega_{s},\beta,\alpha)$$
where $\omega$ is the frequency of the external magnetic field, and $\chi_{d0}$, $\chi_{s0}$, $\omega_{d}$, $\omega_{s}$, $\beta$ and $\alpha$ are unknown parameters; the problem is to estimate these from a set of experimental pairs $(\omega_{i},\mu''_{i})$, $i=1,2,\dots,n$.
The cost function was the error made when calculating $\mu''_{\rm fitted}$ with expression (\[muimag\]), using the parameters obtained from the genetic algorithm, with respect to the experimental value of $\mu''$ at each frequency:
$$Cost\;function=\sum^{n}_{i=1}(f(\omega_{i},\chi_{d0},\chi_{s0},\omega_{d}, \omega_{s}, \beta,\alpha)-\mu''_{i})^{2}$$
2000 iterations were performed, with a population of 300 chromosomes (each with 6 variables). The fraction of the population that was replaced by children in each iteration was 0.5 and the fraction of mutations was 0.25.
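A minimal version of this fitting procedure is sketched below for illustration; it is not the code used for the reported results. The arrays `freqs` and `mu2_exp` are assumed to hold the measured frequencies and $\mu''$ values, and the parameter bounds (with $\omega_{d}$, $\omega_{s}$ searched in MHz and $\beta$ in units of $10^{7}$, as described below) are indicative choices only.

```python
# Sketch of a GA fit of mu''(omega): selection, single-point crossover, mutation.
import numpy as np

rng = np.random.default_rng(0)

def mu_imag(w, p):
    """mu''(w) of eq. (muimag); omega_d, omega_s in MHz, beta in 1e7 units."""
    chi_d0, wd, beta, chi_s0, ws, alpha = p
    wd, ws, beta = wd * 1e6, ws * 1e6, beta * 1e7
    wall = wd**2 * chi_d0 * w * beta / ((wd**2 - w**2)**2 + w**2 * beta**2)
    spin = (ws * chi_s0 * w * alpha * (ws**2 + w**2 * (1 + alpha**2))
            / ((ws**2 - w**2 * (1 + alpha**2))**2 + 4 * w**2 * ws**2 * alpha**2))
    return wall + spin

def cost(p, w, y):
    """Sum of squared errors between the model and the measured mu''."""
    return np.sum((mu_imag(w, p) - y)**2)

def ga_fit(w, y, bounds, pop_size=300, n_iter=2000, keep=0.5, mut_rate=0.25):
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((pop_size, 6)) * (hi - lo)        # initial population
    n_keep = int(keep * pop_size)
    for _ in range(n_iter):
        costs = np.array([cost(p, w, y) for p in pop])
        pop = pop[np.argsort(costs)][:n_keep]               # natural selection
        children = []                                       # mating: 1-point crossover
        while len(children) < pop_size - n_keep:
            a, b = pop[rng.integers(n_keep, size=2)]
            cut = rng.integers(1, 6)
            children.append(np.concatenate([a[:cut], b[cut:]]))
        pop = np.vstack([pop, children])
        mask = rng.random(pop.shape) < mut_rate             # mutation
        mask[0] = False                                     # keep the current best
        pop = np.where(mask, lo + rng.random(pop.shape) * (hi - lo), pop)
    return pop[np.argmin([cost(p, w, y) for p in pop])]

# Indicative bounds: chi_d0, omega_d [MHz], beta [1e7], chi_s0, omega_s [MHz], alpha
# bounds = np.array([[1, 100], [1, 2000], [1, 2000], [1, 100], [1, 2000], [0.1, 5]])
# best = ga_fit(freqs, mu2_exp, bounds)
```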
![ Evolution of the error](mincost.eps "fig:"){width="80.00000%"} \[fig:mincost\]
Figure \[fig:mincost\] shows the evolution of the error over successive iterations; we plot the minimum value of the cost function in the population at each iteration. It shows that the error converges to its minimum value quickly, and then this value remains stable.
Although in equations (\[muimag\]) and (\[mureal\]) $\omega_{d}$ and $\omega_{s}$ must be in Hz, we compute them in MHz and then multiply them in the equations by $1\cdot10^{6}$. The same treatment was applied to the parameter $\beta$: we compute its value in the range between 1 and 2000 and multiply it in the equations by $1\cdot10^{7}$. This scaling was necessary so that the GA searches for the variables within a limited range.
Table \[tab:tabla1\] shows the parameters of the model calculated for the three NiZn ferrite samples doped with different amounts of Yttrium. Figure \[fig:permeajuste\], graphs (a), (b) and (c), shows the permeability spectra for the three ferrites. Solid lines represent the curves constructed from the adjusted parameters, while dotted and dashed lines represent the curves from the experimental data.
The frequency of the $\mu''$ maximum for the spin component is calculated [@tsutaoka]:
$$\omega_{\mu''\!max}^{s}=\frac{\omega_{s}}{\sqrt{1+\alpha^{2}}}$$
and for the domain wall component [@tsutaoka]:
$$\omega_{\mu''\!max}^{d}= \frac{1} {6}\sqrt{12\omega_{d}^{2}-6\beta^{2}+6\sqrt{16\omega_{d}^{4}-4\omega_{d}^{2}\beta^{2}+\beta^{4}}}$$
In these ferrites the maxima are located at $\omega_{\mu''\!max}^{s}\cong\!80\: MHz$ and $\omega_{\mu''\!max}^{d}\cong\!1000\: MHz$.
$\chi_{d0}$ $\omega_{d}\;\left(Hz\right)$ $\beta$ $\chi_{s0}$ $\omega_{s}\;\left(Hz\right)$ $\alpha$
------------ ------------- ------------------------------- ------------------- ------------- ------------------------------- ----------
NiZnY 0.01 22.05 1262 $\cdot10^{6}$ 1966$\cdot10^{7}$ 4.48 1989$\cdot10^{6}$ 1.8967
NiZnY 0.02 24.79 1115$\cdot10^{6}$ 1581$\cdot10^{7}$ 5.50 1480$\cdot10^{6}$ 1.40
NiZnY 0.05 33.75 671$\cdot10^{6}$ 1021$\cdot10^{7}$ 10.75 1334$\cdot10^{6}$ 3.477
: Adjusted parameters in the permeability model for NiZn ferrites doped with Yttrium.[]{data-label="tab:tabla1"}
![(a), (b) and (c): Complex permeability spectra in NiZn ferrites[]{data-label="fig:permeajuste"}](permeajuste3.eps "fig:"){width="100.00000%"}
R. L. Haupt and S. E. Haupt, *Practical Genetic Algorithms*, Wiley-Interscience (1998).
A. Von Hippel, *Dielectrics and Waves*, J. Wiley & Sons (1954).
R. F. Sohoo, *Theory and Application of Ferrites*, Prentice Hall, NJ, USA (1960).
V. Trainotti and W. Fano, *Ingenieria Electromagnetica*, Nueva Libreria, 2004.
Landau and Lifchitz, *Electrodinamica de los medios continuos*, Reverté, 1981.
W. G. Fano, S. Boggi, A. C. Razzitte, “Causality study and numerical response of the magnetic permeability as a function of the frequency of ferrites using Kramers-Kronig relations”, Physica B 403, 526-530 (2008).
Greiner, *Classical Electrodynamics*, ch. 16, Springer (1998).
T. Tsutaoka, “Frequency dispersion of complex permeability in Mn-Zn and Ni-Zn spinel ferrites and their composite materials”, Journal of Applied Physics 93 (2003).
E. Wohlfarth, *Ferromagnetic Materials*, Vol. 2, North Holland, 1980.
S. E. Jacobo, S. Duhalde, H. R. Bertorello, Journal of Magnetism and Magnetic Materials 272-276 (2004) 2253-2254.
S. Boggi, A. C. Razzitte and W. G. Fano, *Non-equilibrium Thermodynamics and entropy production spectra: a tool for the characterization of ferrimagnetic materials*, Journal of Non-Equilibrium Thermodynamics 38(2), 175-183 (2013).
T. Tsutaoka, *Frequency dispersion of complex permeability in Mn-Zn and Ni-Zn spinel ferrites and their composite materials*, Journal of Applied Physics 93 (2003).
---
abstract: 'Explicit formulas involving a generalized Ramanujan sum are derived. An analogue of the prime number theorem is obtained and equivalences of the Riemann hypothesis are shown. Finally, explicit formulas of Bartz are generalized.'
address: 'Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland'
author:
- Patrick Kühn
- Nicolas Robles
title: Explicit formulas of a generalized Ramanujan sum
---
Introduction
============
In [@ramanujan] Ramanujan introduced the following trigonometrical sum.
The Ramanujan sum is defined by $$\begin{aligned}
\label{ramanujansum}
{c_q}(n) = \sum_{\substack{(h,q) = 1}} {{e^{2\pi inh/q}}},\end{aligned}$$ where $q$ and $n$ are in ${\mathbb{N}}$ and the summation is over a reduced residue system $\bmod \; q$.
Many properties were derived in [@ramanujan] and elaborated in [@hardy]. Cohen [@cohen] generalized this arithmetical function in the following way.
Let $\beta \in {\mathbb{N}}$. The $c_q^{(\beta)}(n)$ sum is defined by $$\begin{aligned}
\label{cohendef}
c_q^{(\beta )}(n) = \sum_{\substack{(h,{q^\beta })_\beta = 1}} {{e^{2\pi inh/{q^\beta }}}},\end{aligned}$$ where $h$ ranges over the non-negative integers less than $q^{\beta}$ such that $h$ and $q^{\beta}$ have no common $\beta$-th power divisors other than $1$.
It follows immediately that when $\beta = 1$, (\[cohendef\]) becomes the Ramanujan sum (\[ramanujansum\]). Among the most important properties of $c_q^{(\beta )}(n)$ we mention that it is a multiplicative function of $q$, i.e. $$c_{pq}^{(\beta )}(n) = c_p^{(\beta )}(n)c_q^{(\beta )}(n),\quad (p,q) = 1.$$ The purpose of this paper is to derive explicit formulas involving $c_q^{(\beta)}(n)$ in terms of the non-trivial zeros $\rho$ of the Riemann zeta-function and establish arithmetic theorems.
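As a quick illustration of the definition (\[cohendef\]) and of the multiplicativity in $q$, here is a minimal Python sketch; the function name `cohen_sum` and the test values are ours and are not part of the references.

```python
# c_q^(beta)(n) computed directly from the defining exponential sum (a sketch).
from cmath import exp
from math import pi

def cohen_sum(q, n, beta=1):
    Q = q ** beta
    def no_common_beta_power(h):
        # h and q^beta share no common beta-th power divisor d^beta with d > 1
        d = 2
        while d ** beta <= Q:
            if h % d ** beta == 0 and Q % d ** beta == 0:
                return False
            d += 1
        return True
    total = sum(exp(2j * pi * n * h / Q) for h in range(Q) if no_common_beta_power(h))
    return round(total.real)          # the sum is always a rational integer

# beta = 1 recovers the classical Ramanujan sum, e.g. c_4(2) = -2
assert cohen_sum(4, 2) == -2
# multiplicativity in q for coprime moduli, e.g. with beta = 2 and n = 36
assert cohen_sum(2, 36, 2) * cohen_sum(3, 36, 2) == cohen_sum(6, 36, 2)
```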
Let $z \in {\mathbb{C}}$. The generalized divisor function $\sigma_z^{(\beta)}(n)$ is the sum of the $z^{\operatorname{th}}$ powers of those divisors of $n$ which are $\beta^{\operatorname{th}}$ powers of integers, i.e. $$\sigma_z^{(\beta)}(n) = \sum_{{d^\beta }|n} {{d^{\beta z}}}.$$
The object of study is the following.
For $x \ge 1$, we define $$\mathfrak{C}^{(\beta )}(n,x) = \sum_{q \leqslant x} {c_q^{(\beta )}(n)}.$$
For technical reasons we set $$\mathfrak{C}^{\sharp,(\beta )}(n,x) =
\begin{cases}
\mathfrak{C}^{(\beta )}(n,x), & \mbox{ if } x \notin {\mathbb{N}},\\
\mathfrak{C}^{(\beta )}(n,x) - \tfrac{1}{2}c_x^{(\beta)}(n), & \mbox{ if } x \in {\mathbb{N}}.
\end{cases}$$ The explicit formula for $\mathfrak{C}^{\sharp,(\beta )}(n,x)$ is then as follows.
\[explicitcohenramanujan\] Let $\rho$ and $\rho_m$ denote non-trivial zeros of $\zeta(s)$ of multiplicity $1$ and $m \ge 2$ respectively. Fix integers $\beta$ and $n$. There exist an $\varepsilon$ with $0 < \varepsilon < 1$ and a $T_0 = T_0(\varepsilon)$ such that [(\[br-a\])]{.nodecor} and [(\[br-b\])]{.nodecor} hold for a sequence $T_{\nu}$ and $$\mathfrak{C}^{\sharp,(\beta )}(n,x) = - 2\sigma_1^{(\beta )}(n) + \sum_{\substack{|\gamma | < T_{\nu}}} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}\frac{{{x^\rho }}}{\rho }} + {\rm K}_{T_{\nu}}(x) - \sum_{k = 1}^\infty {\frac{{{{( - 1)}^{k}}(2\pi /{x})^{2k}}}{{(2k)!k\zeta (2k + 1)}}\sigma_{1 + 2k/\beta }^{(\beta )}(n)} + E_{T_{\nu}}(x) ,$$ where the error term satisfies $$E_{T_{\nu}}(x) \ll \frac{x \log x}{T_{\nu}^{1-\varepsilon}} ,$$ and where for the zeros of multiplicity $m \ge 2$ we have $${\rm K}_{T_{\nu}}(x) = \sum_{m \geqslant 2} {\sum_{{|\gamma_m|<T_{\nu}}} {\kappa ({\rho _m},x)} } ,\quad \kappa ({\rho _m},x) = \frac{1}{{(m - 1)!}}\mathop {\lim }\limits_{s \to {\rho _m}} \frac{{{d^{m - 1}}}}{{d{s^{m - 1}}}}\bigg( {{{(s - {\rho _m})}^m}\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}} \bigg).$$ Moreover, in the limit $\nu \to \infty$ we have $$\mathfrak{C}^{\sharp,(\beta )}(n,x) = - 2\sigma_1^{(\beta )}(n) + \lim_{\nu \to \infty} \sum_{\substack{|\gamma | < T_{\nu}}} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}\frac{{{x^\rho }}}{\rho }} + \lim_{\nu \to \infty} {\rm K}_{T_{\nu}}(x) - \sum_{k = 1}^\infty {\frac{{{{( - 1)}^{k}}(2\pi /{x})^{2k}}}{{(2k)!k\zeta (2k + 1)}}\sigma_{1 + 2k/\beta }^{(\beta )}(n)}.$$
The next result is a generalization of a well-known theorem of Ramanujan which is of the same depth as the prime number theorem.
\[line1theorem\] For fixed $\beta$ and $n$ in ${\mathbb{N}}$, we have $$\begin{aligned}
\label{line1cohen}
\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}} = \sum_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{{{q^s}}}}\end{aligned}$$ at all points on the line ${\operatorname{Re}}(s)=1$.
\[corollaryPNT1\] Let $\beta \in {\mathbb{N}}$. One has that $$\label{pnt_ramanujan}
\sum\limits_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{q}} = 0, \quad \beta \ge 1,
\quad \textnormal{and} \quad
\sum_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{{{q^\beta }}}} =
\begin{cases}
\tfrac{{\sigma_0^{(\beta )}(n)}}{{\zeta (\beta )}} & \mbox{ if } \beta > 1,\\
0 & \mbox{ if } \beta =1.
\end{cases}$$ In particular $$\begin{aligned}
\label{pnt_ramanujan2}
\sum_{q = 1}^\infty {\frac{c_q(n)}{q}} = 0 \quad \textnormal{and} \quad \sum_{q = 1}^\infty {\frac{\mu(q)}{q}} = 0.\end{aligned}$$
It is possible to extend the validity of (\[line1cohen\]) further into the critical strip; however, this can only be done at the cost of assuming the Riemann hypothesis.
\[equivalence1\] Let $\beta, n \in {\mathbb{N}}$. The Riemann hypothesis is true if and only if $$\begin{aligned}
\label{RH_equivalent}
\sum_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{{{q^s}}}}\end{aligned}$$ is convergent and its sum is $\sigma_{1-s/\beta}^{(\beta)}(n) / \zeta(s)$, for every $s$ with $\sigma > \tfrac{1}{2}$.
This is a generalization of a theorem proved by Littlewood (see [@littlewood] and $\mathsection$14.25 of [@titchmarsh]) for the special case where $n =1$.
\[equivalence2\] A necessary and sufficient condition for the Riemann hypothesis is $$\mathfrak{C}^{(\beta)}(n,x) \ll_{n,\beta} x^{\frac{1}{2} + \varepsilon} \label{RH-1}$$ for every $\varepsilon>0$.
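In the special case $n = 1$ only the divisor $d = 1$ contributes to the Möbius-sum representation of $c_q^{(\beta )}(n)$ given in Lemma \[lemma22\] below, so that $c_q^{(\beta )}(1) = \mu(q)$ and $$\mathfrak{C}^{(\beta )}(1,x) = \sum_{q \leqslant x} {\mu (q)} = M(x),$$ the Mertens function. Theorem \[equivalence2\] therefore contains the classical equivalence between the Riemann hypothesis and the bound $M(x) \ll x^{1/2+\varepsilon}$.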
We recall that the von Mangoldt function $\Lambda(n)$ may be defined by $$\Lambda (n) = \sum_{d\delta = n} {\mu (d)\log \delta}.$$ Since $c_q^{(\beta)}(n)$ is a generalization of the Möbius function, we wish to construct a new $\Lambda (n)$ that incorporates the arithmetic information encoded in the variable $q$ and the parameter $\beta$.
For $\beta, k , m \in {\mathbb{N}}$ the generalized von Mangoldt function is defined as $$\Lambda _{k,m}^{(\beta )}(n) = \sum_{d\delta = n} {c_d^{(\beta )}(m){{\log }^k}\delta }.$$
We note the special case $\Lambda _{1,1}^{(1)}(n) = \Lambda(n)$. We will, for the sake of simplicity, work with $k=1$. The generalization for $k > 1$ requires dealing with results involving (computable) polynomials of degree $k-1$, see for instance $\mathsection$12.4 of [@ivic] as well as [@ivic1] and [@ivic2].
The generalized Chebyshev functions $\psi _m^{(\beta )}(x)$ and $\psi _{m}^{\sharp,(\beta )}(x)$ are defined by $$\psi _m^{(\beta )}(x) = \sum_{n \leqslant x} {\Lambda _{1,m}^{(\beta )}(n)}, \quad \operatorname{and}\quad \psi _{m}^{\sharp,(\beta )}(x) = \frac{1}{2}(\psi _m^{(\beta )}({x^ + }) + \psi _m^{(\beta )}({x^ - }))$$ for $\beta, m \in {\mathbb{N}}$.
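The divisor sums above translate directly into a short computation. The following minimal Python sketch (all names are ours) evaluates $\Lambda _{1,m}^{(\beta )}(n)$ and $\psi _m^{(\beta )}(x)$, computing $c_d^{(\beta )}(m)$ through the Möbius identity recorded in Lemma \[lemma22\] below.

```python
# Generalized von Mangoldt and Chebyshev functions (a sketch).
from math import log

def mobius(k):
    res, p = 1, 2
    while p * p <= k:
        if k % p == 0:
            k //= p
            if k % p == 0:
                return 0              # k is not squarefree
            res = -res
        p += 1
    return -res if k > 1 else res

def cohen_sum(q, n, beta):
    # Lemma [lemma22]: c_q^(beta)(n) = sum_{d | q, d^beta | n} mu(q/d) d^beta
    return sum(mobius(q // d) * d ** beta
               for d in range(1, q + 1) if q % d == 0 and n % d ** beta == 0)

def gen_von_mangoldt(n, m, beta):
    # Lambda_{1,m}^(beta)(n) = sum_{d * delta = n} c_d^(beta)(m) log(delta)
    return sum(cohen_sum(d, m, beta) * log(n // d)
               for d in range(1, n + 1) if n % d == 0)

def gen_chebyshev(x, m, beta):
    # psi_m^(beta)(x) = sum_{n <= x} Lambda_{1,m}^(beta)(n)
    return sum(gen_von_mangoldt(n, m, beta) for n in range(1, int(x) + 1))

# Lambda_{1,1}^{(1)} is the classical von Mangoldt function, so
# gen_chebyshev(100, 1, 1) is the classical psi(100), approximately 94.05.
print(gen_chebyshev(100, 1, 1))
```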
The explicit formula for the generalized Chebyshev function is given by the following result.
\[explicitCohenChebyshev\] Let $c>1$, $\beta \in {\mathbb{N}}$, $x > m$, $T \ge 2$ and let $\left\langle x \right\rangle_{\beta}$ denote the distance from $x$ to the nearest integer $n$ (other than $x$ itself) such that $\Lambda_{1,m}^{(\beta)}(n)$ is not zero. Then $$\psi _{m}^{\sharp,(\beta )}(x) = \sigma_{1 - 1/\beta }^{(\beta )}(m)x - \sum_{|\gamma | \leqslant T} {\sigma_{1 - \rho /\beta }^{(\beta )}(m)\frac{{{x^\rho }}}{\rho }} - \sigma_1^{(\beta )}(m)\log (2\pi ) - \sum_{k = 1}^\infty {\sigma_{1 + 2k/\beta }^{(\beta )}(m)\frac{{{x^{ - 2k}}}}{{2k}}} + R(x,T),$$ where $$R(x,T) \ll x^{\varepsilon} \min \bigg( {1,\frac{x}{{T\left\langle x \right\rangle_{\beta} }}} \bigg) + \frac{x^{1+\varepsilon}\log x}{T} + \frac{{x{{\log }^2}T}}{T} ,$$ for all $\varepsilon > 0$.
Taking into account the standard zero-free region of the Riemann zeta-function, we obtain
\[PNTzerofree\] One has that $$\begin{aligned}
|\psi _m^{(\beta )}(x) - \sigma_{1 - 1/\beta }^{(\beta )}(m)x| \ll x^{1+\varepsilon} e^{- {c_2}{(\log x)^{1/2}}} , \end{aligned}$$ for $\beta \in {\mathbb{N}}$.
Moreover, on the Riemann hypothesis, one naturally obtains a better error term.
\[PNTRH\] Assume RH. For $\beta \in {\mathbb{N}}$ one has that $$\begin{aligned}
\psi _m^{(\beta )}(x) = \sigma_{1 - 1/\beta }^{(\beta )}(m)x + O({x^{1/2+\varepsilon}} )\end{aligned}$$ for each $\varepsilon > 0$.
Our next set of results is concerned with a generalization of a function introduced by Bartz [@bartz1; @bartz2]. The function introduced by Bartz was later used by Kaczorowski in [@kaczorowski] to study sums involving the Möbius function twisted by the cosine function. Let us set $\mathbb{H} = \{ x+iy, \; x \in {\mathbb{R}}, \; y >0 \}$.
Suppose that $z \in \mathbb{H}$, we define the $\varpi$ function by $$\begin{aligned}
\label{cohen_bartz_varpi}
\varpi _n^{(\beta )}(z) = \mathop {\lim }\limits_{m \to \infty } \sum_{\substack{\rho \\ 0 < \operatorname{Im} \rho < {T_m}}} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}{e^{\rho z}}}. \end{aligned}$$
The goal is to describe the analytic character of $\varpi _n^{(\beta )}(z)$. Specifically, we will construct its analytic continuation to a meromorphic function of $z$ on the whole complex plane and prove that it satisfies a functional equation. This functional equation takes into account values of $\varpi _n^{(\beta )}(z)$ at $z$ and at $\bar z$; therefore one may deduce the behavior of $\varpi _n^{(\beta )}(z)$ for ${\operatorname{Im}}(z)<0$. Finally, we will study the singularities and residues of $\varpi _n^{(\beta )}(z)$.
\[bartz11\] The function $\varpi _n^{(\beta )}(z)$ is holomorphic on the upper half-plane $\mathbb{H}$ and for $z \in \mathbb{H}$ we have $$2\pi i\varpi _n^{(\beta )}(z) = \varpi _{1,n}^{(\beta )}(z) + \varpi _{2,n}^{(\beta )}(z) - {e^{3z/2}}\sum_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{{{q^{3/2}}(z - \log q)}}}$$ where the last term on the right hand-side is a meromorphic function on the whole complex plane with poles at $z = \log q$ whenever $c_q^{(\beta)}(n)$ is not equal to zero. Moreover, $$\varpi _{1,n}^{(\beta )}(z) = \int_{-1/2 + i \infty}^{-1/2} \frac{\sigma_{1-s/\beta}^{\beta}(n)}{\zeta(s)}e^{s z}ds$$ is analytic on $\mathbb{H}$ and $$\varpi _{2,n}^{(\beta )}(z) = \int_{-1/2}^{3/2} \frac{\sigma_{1-s/\beta}^{\beta}(n)}{\zeta(s)}e^{s z}ds$$ is regular on ${\mathbb{C}}$.
This is done under the assumption that the non-trivial zeros are all simple, for the sake of clarity; only straightforward modifications are needed to relax this assumption. See $\mathsection$8 for further details.
\[bartz12\] The function $\varpi _n^{(\beta )}(z)$ can be continued analytically to a meromorphic function on ${\mathbb{C}}$ which satisfies the functional equation $$\begin{aligned}
\label{FE_cohen_bartz}
\varpi _n^{(\beta )}(z) + \overline{\varpi _n^{(\beta )}(\bar z)} = A^{(\beta)}_n(z) = - \sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{1}{{k!}}\bigg\{ {{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k} + {{\left( { - {e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}} \bigg\}\sigma_{1 + k/\beta }^{(\beta )}(n)} },\end{aligned}$$ where the function $A^{(\beta)}_n(z)$ is entire and satisfies $$A^{(\beta)}_n(z)=-2\sum_{k = 1}^\infty {\frac{{{{( - 1)}^k}{{(2\pi )}^{2k}}}}{{(2k)!}}\frac{{{e^{ - 2kz}}\sigma_{1 + 2k/\beta }^{(\beta )}(n)}}{{\zeta (2k + 1)}}}.$$
\[bartz13\] The only singularities of $\varpi _n^{(\beta )}(z)$ are simple poles at the points $z = \log q$ on the real axis, where $q$ is an integer such that $c_q^{(\beta )}(n) \ne 0$, with residue $$\mathop {\operatorname{res} }\limits_{z = \log q} \varpi _n^{(\beta )}(z) = - \frac{1}{{2\pi i}}c_q^{(\beta )}(n).$$
Proof of Theorem \[explicitcohenramanujan\]
===========================================
In order to obtain unconditional results we use an idea put forward by Bartz [@bartz2]. The key is to use the following result of Montgomery, see [@montextreme] and Theorem 9.4 of [@ivic].
For any given $\varepsilon > 0$ there exists a real $T_0 = T_0(\varepsilon)$ such that for $T \ge T_0$ the following holds: between $T$ and $2T$ there exists a value of $t$ for which $$|\zeta(\sigma \pm it)|^{-1} < c_1 t^{\varepsilon} \textrm{ for } -1 \le \sigma \le 2,$$ with an absolute constant $c_1 > 0$.
That is, for each $\varepsilon > 0$, there is a sequence $T_{\nu}$, where $$2^{\nu-1} T_0(\varepsilon) \le T_{\nu} \le 2^{\nu} T_0(\varepsilon), \ \nu = 1,2,3,\ldots \label{br-a}$$ such that $$|{\zeta(\sigma \pm iT_{\nu})}|^{-1} < c_1 T_{\nu}^{\varepsilon} \textrm{ for } -1 \le \sigma \le 2. \label{br-b}$$ Finally, towards the end we will need the following bracketing condition: $T_m$ ($m \le T_m \le m+1$) are chosen so that $$\left| {{{\zeta (\sigma + i{T_m})}}} \right|^{-1} < T_m^{{c_1}} \label{br-c}$$ for $-1 \le \sigma \le 2$ and $c_1$ is an absolute constant. The existence of such a sequence of $T_m$ is guaranteed by Theorem 9.7 of [@titchmarsh], which itself is a result of Valiron, [@valiron].\
We will use either the bracketing (\[br-a\])-(\[br-b\]) or the bracketing (\[br-c\]), depending on what is needed. These choices will lead to different bracketings of the sum over the zeros in the various explicit formulas appearing in the theorems of this note.
The first immediate result is as follows.
\[lemma\] The generalized divisor function $\sigma_z^{(\beta)}(n)$ satisfies the following bound for $z \in {\mathbb{C}}$, $n \in {\mathbb{N}}$ $$|\sigma_z^{(\beta)}(n)| \le \sigma_{{\operatorname{Re}}(z)}^{(\beta)}(n) \le n^{\beta \max(0,{\operatorname{Re}}(z))+1}.$$
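For completeness, the bound follows from the triangle inequality together with the crude estimates $d \leqslant n$ and $\#\{d \geqslant 1 : d^\beta \mid n\} \leqslant n$: $$|\sigma_z^{(\beta)}(n)| \leqslant \sum_{{d^\beta }|n} {{d^{\beta {\operatorname{Re}}(z)}}} = \sigma_{{\operatorname{Re}}(z)}^{(\beta)}(n) \leqslant n \cdot n^{\beta \max(0,{\operatorname{Re}}(z))} = n^{\beta \max(0,{\operatorname{Re}}(z))+1}.$$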
In [@cohen] the following two properties of $c_q^{(\beta)}(n)$ are derived.
\[lemma22\] For $\beta$ and $n$ integers one has $$\begin{aligned}
\label{cohenmoebius}
c_q^{(\beta )}(n) = \sum_{\substack{d|q \\ {d^\beta }|n}} {\mu \left( {\frac{q}{d}} \right){d^\beta }} \nonumber\end{aligned}$$ where $\mu$ denotes the Möbius function.
\[lemma23\] For ${\operatorname{Re}}(s)>1$ and $\beta \in {\mathbb{N}}$ one has $$\sum_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{{{q^{\beta s}}}}} = \frac{{\sigma _{1 - s }^{(\beta )}(n)}}{{\zeta (\beta s)}}.$$
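Lemma \[lemma22\] is easy to confirm numerically; the following minimal Python sketch (names are ours) compares it with the defining exponential sum (\[cohendef\]) for small values of $q$, $n$ and $\beta$.

```python
# Numerical check of Lemma [lemma22] against the defining exponential sum.
from cmath import exp
from math import pi

def mobius(k):
    res, p = 1, 2
    while p * p <= k:
        if k % p == 0:
            k //= p
            if k % p == 0:
                return 0
            res = -res
        p += 1
    return -res if k > 1 else res

def cohen_moebius(q, n, beta):
    # right-hand side of Lemma [lemma22]
    return sum(mobius(q // d) * d ** beta
               for d in range(1, q + 1) if q % d == 0 and n % d ** beta == 0)

def cohen_direct(q, n, beta):
    # defining exponential sum: h < q^beta with no common beta-th power divisor
    Q = q ** beta
    ok = lambda h: all(h % d ** beta or Q % d ** beta for d in range(2, q + 1))
    return round(sum(exp(2j * pi * n * h / Q) for h in range(Q) if ok(h)).real)

assert all(cohen_moebius(q, n, b) == cohen_direct(q, n, b)
           for q in range(1, 13) for n in range(1, 13) for b in (1, 2, 3))
```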
From Lemma \[lemma22\] one has the following bound $$|c_q^{(\beta )}(n)| \leqslant \sum_{\substack{d|q \\ {d^\beta }|n}} {{d^\beta }} \leqslant \sum_{{d^\beta }|n} {{d^\beta }} = \sigma_1^{(\beta )}(n).$$ Suppose $x$ is a fixed non-integer. Let us now consider the positively oriented path $\mathcal{C}$ made up of the line segments $[c-iT,c+iT,-2N-1+iT,-2N-1-iT]$ where $T$ is not the ordinate of a non-trivial zero. We set $a_q = c_q^{(\beta)}(n)$ and we use the lemma in $\mathsection$3.12 of [@titchmarsh] to see that we can take $\psi(q) = \sigma_1^{(\beta)}(n)$. We note that for $\sigma > 1$ we have $$\sum_{q = 1}^\infty {\frac{{|c_q^{(\beta )}(n)|}}{{{q^\sigma }}}} \leqslant \sigma_1^{(\beta)} (n)\sum_{q = 1}^\infty {\frac{1}{{{q^\sigma }}}} = \sigma_1^{(\beta)} (n)\zeta (\sigma ) \ll {\frac{1}{{\sigma - 1}}}$$ so that $\alpha=1$. Moreover, if in that lemma we put $s=0$, $c=1+1/\log x$ and $w$ replaced by $s$, then we obtain $$\mathfrak{C}_0^{(\beta )}(n,x) = \frac{1}{{2\pi i}}\int_{c - iT}^{c + iT} {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}ds} + E_{1,T}(x),$$ where $E_{1,T}(x)$ is an error term that will be evaluated later. If $x$ is an integer, then $\tfrac{1}{2}c_x^{(\beta)}(n)$ is to be substracted from the left-hand side. Then, by residue calculus we have $$\frac{1}{{2\pi i}}\oint\nolimits_\mathcal{C} {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}ds} = {R_0} + {R_\rho }(T) + {\rm K}(x,T) + {R_{ - 2k}}(N),$$ where each term is given by the residues inside $\mathcal{C}$ $${R_0} = \mathop {\operatorname{res} }\limits_{s = 0} \frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s} = - 2\sigma_1^{(\beta )}(n),$$ and for $k = 1,2,3,\cdots$ we have $${R_{ - 2k}}(N) = \sum_{k = 1}^N {\mathop {\operatorname{res} }\limits_{s = - 2k} \frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}} = \sum_{k = 1}^N {\frac{{\sigma_{1 + 2k/\beta }^{(\beta )}(n)}}{{\zeta '( - 2k)}}\frac{{{x^{ - 2k}}}}{{ - 2k}}} = \sum_{k = 1}^N {\frac{{{{( - 1)}^{k - 1}}(2\pi /{x})^{2k}}}{{(2k)!k\zeta (2k + 1)}}\sigma_{1 + 2k/\beta }^{(\beta )}(n)}.$$ For the non-trivial zeros we must distinguish two cases. For the simple zeros $\rho$ we have $${R_\rho }(T) = \sum_{|\gamma | < T} {\mathop {\operatorname{res} }\limits_{s = \rho } \frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}} = \sum_{|\gamma | < T} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}\frac{{{x^\rho }}}{\rho }},$$ and by the formula for the residues of order $m$ we see that ${\rm K}(x,T)$ is of the form indicated in the statement of the theorem. We now bound the vertical integral on the far left $$\begin{aligned}
\int_{ - 2N - 1 - iT}^{ - 2N - 1 + iT} {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}ds} &= \int_{2N + 2 - iT}^{2N + 2 + iT} {\frac{{\sigma_{1 - (1 - s)/\beta }^{(\beta )}(n)}}{{\zeta (1 - s)}}\frac{{{x^{1 - s}}}}{{1 - s}}ds} \nonumber \\
&= \int_{2N + 2 - iT}^{2N + 2 + iT} {\frac{{{x^{1 - s}}}}{{1 - s}}\frac{{\sigma_{1 - (1 - s)/\beta }^{(\beta )}(n){2^{s - 1}}{\pi ^s}}}{{\cos (\tfrac{{\pi s}}{2})\Gamma (s)}}\frac{1}{{\zeta (s)}}ds} \nonumber \\
&\ll {\int_{ - T}^T {\frac{1}{T}{{\left( {\frac{{2\pi }}{x}} \right)}^{2N + 2}} {e^{(2N+3) \log(n) + 2N + 2 - (2N + \tfrac{3}{2})\log (2N + 2)}}dt} }, \nonumber\end{aligned}$$ since by the use of Lemma \[lemma\] we have $\sigma_{1 - (1 - s)/\beta}^{(\beta)}(n) \ll \sigma_{1 - (1 - 2 - 2N)/\beta }^{(\beta)}(n) \ll n^{2N + 3}$. This tends to zero as $N \to \infty$, for a fixed $T$ and a fixed $n$. Hence we are left with $$\begin{aligned}
\mathfrak{C}_0^{(\beta )}(n,x) & = - 2\sigma_1^{(\beta )}(n) + \sum_{|\gamma | < T} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}\frac{{{x^\rho }}}{\rho }} + {\rm K}(x,T) \\
&+ \sum_{k = 1}^\infty {\frac{{{{( - 1)}^{k - 1}}(2\pi /{x})^{2k}}}{{(2k)!k\zeta (2k + 1)}}\sigma_{1 + 2k/\beta }^{(\beta )}(n)} + \frac{1}{{2\pi i}}\bigg( {\int_{c - iT}^{ - \infty - iT} + \int_{ - \infty + iT}^{c + iT} } \bigg)\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}ds - E_{1,T}(x) \\
& = - 2\sigma_1^{(\beta )}(n) + \sum_{|\gamma | < T} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}\frac{{{x^\rho }}}{\rho }} + {\rm K}(x,T) + \sum_{k = 1}^\infty {\frac{{{{( - 1)}^{k - 1}}(2\pi /{x})^{2k}}}{{(2k)!k\zeta (2k + 1)}}\sigma_{1 + 2k/\beta }^{(\beta )}(n)} \\
&+ E_{2,T}(x) - E_{1,T}(x),\end{aligned}$$ where the last two terms are to be bounded. For the second integral, we split the range of integration in $(-\infty + iT, -1 +iT) \cup (-1+iT,c+iT)$ and we write $$\begin{aligned}
\int_{ - \infty + iT}^{ - 1 + iT} {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}ds} &= \int_{2 + iT}^{\infty + iT} {\frac{{\sigma_{1 - (1 - s)/\beta }^{(\beta )}(n)}}{{\zeta (1 - s)}}\frac{{{x^{1 - s}}}}{{1 - s}}ds} \nonumber \\
&= \int_{2 + iT}^{\infty + iT} {\frac{{{x^{1 - s}}}}{{1 - s}}\frac{{\sigma_{1 - (1 - s)/\beta }^{(\beta )}(n){2^{s - 1}}{\pi ^s}}}{{\cos (\tfrac{{\pi s}}{2})\Gamma (s)}}\frac{1}{{\zeta (s)}}ds} \nonumber \\
&\ll {\int_2^\infty {\frac{1}{T}{{\left( {\frac{{2\pi }}{x}} \right)}^\sigma} {e^{((\beta + \sigma) \log n + \sigma - (\sigma - \tfrac{1}{2})\log \sigma }}d\sigma } } \ll {\frac{1}{T x^2}} . \nonumber \end{aligned}$$ We can now choose for each $\varepsilon > 0$, $T = T_{\nu}$ satisfying (\[br-a\]) and (\[br-b\]) such that $$\frac{1}{{\zeta (s)}} \ll {t^\varepsilon }, \quad \frac{1}{2} \leqslant \sigma \leqslant 2,\quad t = {T_\nu}.$$ Thus the other part of the integral is $$\int_{ - 1 + i{T_{\nu}}}^{c + i{T_{\nu}}} {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}ds} \ll {\int_{ - 1}^{\min (\beta,c)} T_\nu ^{\varepsilon - 1}{e^{(\beta - \sigma + 1) \log n}}{x^\sigma }d\sigma + \int_{\min (\beta,c)}^{c} T_\nu ^{\varepsilon - 1}{x^\sigma }d\sigma } \ll x T_\nu ^{\varepsilon - 1}.$$ The integral over $(2-iT_{\nu},-\infty-iT_{\nu})$ is dealt with similarly. It remains to bound $E_{1,T_\nu}(x)$, i.e. the three error terms on the right-hand side of (3.12.1) in [@titchmarsh] . We have $\psi(q) = \sigma_1^{(\beta)}(n)$, $s = 0$, $c = 1 + \frac{1}{\log x}$ and $\alpha=1$. Inserting these yields $$\begin{aligned}
E_{T_\nu}(x) & = E_{2,T_\nu}(x) - E_{1,T_\nu}(x) \\
&\ll x T_\nu ^{\varepsilon - 1} + \frac{x \log x}{T_\nu} + \frac{x \sigma_1^{(\beta)}(n) \log x}{T_\nu} + \frac{x \sigma_1^{(\beta)}(n)}{T_\nu} \ll \frac{x \log x}{T_\nu^{1-\varepsilon}} . \end{aligned}$$ If we assume that all non-trivial zeros are simple then the term ${\rm K}_{T_{\nu}}(x)$ disappears. Theorem \[explicitcohenramanujan\] can be illustrated by plotting the explicit formula as follows.
![In blue: $\mathfrak{C}^{\sharp,(1)}(12,x)$, in red: the main terms of Theorem \[explicitcohenramanujan\] with 5 and 25 pairs of zeros and $5 \le x \le 100$.](CR_gr1.eps "fig:") ![In blue: $\mathfrak{C}^{\sharp,(1)}(12,x)$, in red: the main terms of Theorem \[explicitcohenramanujan\] with 5 and 25 pairs of zeros and $5 \le x \le 100$.](CR_gr2.eps "fig:")
Increasing the value of $\beta$ does not affect the match. For $\beta=2$:
![In blue: $\mathfrak{C}^{\sharp,(2)}(24,x)$, in red: the main terms of Theorem \[explicitcohenramanujan\] with 5 and 25 pairs of zeros and $1 \le x \le 100$.](CR_gr3.eps "fig:") ![In blue: $\mathfrak{C}^{\sharp,(2)}(24,x)$, in red: the main terms of Theorem \[explicitcohenramanujan\] with 5 and 25 pairs of zeros and $1 \le x \le 100$.](CR_gr4.eps "fig:")
For $\beta = 3$ we observe the same effect:
![In blue: $\mathfrak{C}^{\sharp,(3)}(810,x)$, in red: the main terms of Theorem \[explicitcohenramanujan\] with 5 and 25 pairs of zeros and $1 \le x \le 100$.](CR_gr5.eps "fig:") ![In blue: $\mathfrak{C}^{\sharp,(3)}(810,x)$, in red: the main terms of Theorem \[explicitcohenramanujan\] with 5 and 25 pairs of zeros and $1 \le x \le 100$.](CR_gr6.eps "fig:")
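The plots above can be reproduced, at least approximately, with a few lines of code. The following Python sketch (it relies on the mpmath library; all function names are ours, and it is not the code used for the figures) evaluates $\mathfrak{C}^{\sharp,(\beta )}(n,x)$ and the main terms of Theorem \[explicitcohenramanujan\], assuming that the zeros used are simple (so that ${\rm K}_{T_{\nu}}(x)$ is omitted) and ignoring the error term $E_{T_{\nu}}(x)$.

```python
# Truncated explicit formula of Theorem [explicitcohenramanujan] vs. the
# partial sums C^{sharp,(beta)}(n,x) (a sketch).
from math import factorial
from mpmath import mp, zetazero, zeta, power, pi

mp.dps = 30

def mobius(k):
    res, p = 1, 2
    while p * p <= k:
        if k % p == 0:
            k //= p
            if k % p == 0:
                return 0
            res = -res
        p += 1
    return -res if k > 1 else res

def cohen_sum(q, n, beta):
    # Moebius identity of Lemma [lemma22]
    return sum(mobius(q // d) * d ** beta
               for d in range(1, q + 1) if q % d == 0 and n % d ** beta == 0)

def sigma(z, n, beta):
    # sigma_z^(beta)(n) = sum of d^{beta z} over d with d^beta | n
    return sum(power(d, beta * z) for d in range(1, n + 1) if n % d ** beta == 0)

def lhs(n, x, beta):
    # C^{sharp,(beta)}(n,x): subtract half of the last term when x is an integer
    s = sum(cohen_sum(q, n, beta) for q in range(1, int(x) + 1))
    return s - cohen_sum(int(x), n, beta) / 2 if float(x).is_integer() else s

def rhs(n, x, beta, pairs=25, kmax=10):
    total = -2 * sigma(1, n, beta)
    for j in range(1, pairs + 1):
        rho = zetazero(j)                       # zero with positive ordinate
        term = (sigma(1 - rho / beta, n, beta) / zeta(rho, derivative=1)
                * power(x, rho) / rho)
        total += 2 * term.real                  # the conjugate zero contributes the conjugate
    for k in range(1, kmax + 1):                # contribution of the trivial zeros
        total -= ((-1) ** k * (2 * pi / x) ** (2 * k) * sigma(1 + 2 * k / beta, n, beta)
                  / (factorial(2 * k) * k * zeta(2 * k + 1)))
    return total

print(lhs(12, 50, 1), rhs(12, 50, 1))           # the two values should be close
```

Taking more pairs of zeros should sharpen the agreement, in line with the figures above.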
Proof of Theorem \[line1theorem\] and Corollary \[corollaryPNT1\]
=================================================================
We shall use the lemma in $\mathsection$3.12 of [@titchmarsh]. Take $a_q = c_q^{(\beta)}(n)$, $\alpha = 1$ and let $x$ be half an odd integer. Let $s = 1+it$, then $$\begin{aligned}
\sum_{q < x} {\frac{{c_q^{(\beta )}(n)}}{{{q^s}}}} &= \frac{1}{{2\pi i}}\int_{c - iT}^{c + iT} {\frac{{\sigma_{1 - (s + w)/\beta }^{(\beta )}(n)}}{{\zeta (s + w)}}\frac{{{x^w}}}{w}dw} + O\left( {\frac{{{x^c}}}{{Tc}}} \right) + O\left( {\frac{1}{T}\sigma_1^{(\beta )}(n)\log x} \right) \\
&= \frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}} + \frac{1}{{2\pi i}}\bigg( {\int_{c - iT}^{ - \delta - iT} {} + \int_{ - \delta - iT}^{ - \delta + iT} {} + \int_{ - \delta + iT}^{c + iT} {} } \bigg)\frac{{\sigma_{1 - (s + w)/\beta }^{(\beta )}(n)}}{{\zeta (s + w)}}\frac{{{x^w}}}{w}dw, \end{aligned}$$ where $c>0$ and $\delta$ is small enough that $\zeta(s+w)$ has no zeros for $$\operatorname{Re} (w) \geqslant - \delta ,\quad |\operatorname{Im} (s + w)| = |t + \operatorname{Im} (w)| \leqslant |t| + T.$$ It is known from $\mathsection$3.6 of [@titchmarsh] that $\zeta(s)$ has no zeros in the region $\sigma > 1 - A \log^{-9}t$, where $A$ is a positive constant. Thus, we can take $\delta = A \log^{-9}T$. The contribution from the vertical integral is given by $$\begin{aligned}
\int_{ - \delta - iT}^{ - \delta + iT} {\frac{{\sigma_{1 - (s + w)/\beta }^{(\beta )}(n)}}{{\zeta (s + w)}}\frac{{{x^w}}}{w}dw} &\ll {{x^{ - \delta }}n^{\delta} {{\log }^7}T \int_{ - T}^T {\frac{{{dv}}}{{\sqrt {{\delta ^2} + {v^2}} }}} } \ll {x^{ - \delta }}{n^{ \delta }}{\log ^8}T. \end{aligned}$$ For the top horizontal integral we get $$\begin{aligned}
\int_{ - \delta + iT}^{c + iT} {\frac{{\sigma_{1 - (s + w)/\beta }^{(\beta )}(n)}}{{\zeta (s + w)}}\frac{{{x^w}}}{w}dw} &\ll {\frac{{{{\log }^7}T}}{T} \bigg(\int_{ - \delta }^{\min(c,\beta - 1)} {{n^{ \beta - u }}{x^u}du} } + \int_{\min(c,\beta - 1)}^{c} x^u du \bigg) \\
&\ll {\frac{{{{\log }^7}T}}{T}{x^c}\bigg( \int_{ - \delta }^{\min(c,\beta - 1)} {{n^{ \beta - u}}du} } + c \bigg) \ll \frac{\log^7 T}{T} x^c n^{\delta}, \end{aligned}$$ provided $x>1$. For the bottom horizontal integral we proceed the same way. Consequently, we have the following $$\begin{aligned}
\sum_{q < x} {\frac{{c_q^{(\beta )}(n)}}{{{q^s}}}} - \frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}} \ll {\frac{{{x^c}}}{{Tc}}} + {\frac{1}{T}\sigma_1^{(\beta )}(n)\log x} + {x^{ - \delta }}{n^{ \delta }}{\log ^8}T + {\frac{{{{\log }^7}T}}{T}{x^c}{n^{\delta }}} . \end{aligned}$$ Now, we choose $c = 1/ \log x$ so that $x^c =e$. We take $T = \exp\{ (\log x)^{1/10}\}$ so that $\log T = (\log x)^{1/10}$, $\delta = A (\log x)^{-9/10}$ and $x^{\delta} = T^A$. Then it is seen that the right-hand side tends to zero as $x \to \infty$ and the result follows.
Corollary \[corollaryPNT1\] follows from Lemma \[lemma23\] for $\beta \ge 1$ and from Theorem \[line1theorem\] with ${\operatorname{Re}}(s) \ge 1$. If we set $s=1$ in (\[line1cohen\]), then the first equation follows, since $1/\zeta(s)$ vanishes at $s=1$. If in Lemma \[lemma23\] we set $s=1$, then the second equation follows. Setting $s = \beta =1$ yields the third equation. Finally, putting $n=1$ in the third equation yields the fourth equation, since $c_q(1) = \mu(q)$.
The plots of Corollary \[corollaryPNT1\] are illustrated below.
![Plot of $\sum\nolimits_{q = 1}^x {c_q^{(1)}(24)/q}$ and of $\sum\nolimits_{q = 1}^x {c_q^{(2)}(24)/q}$ for $1 \le x \le 1000$.](CR_gr7.eps "fig:") ![Plot of $\sum\nolimits_{q = 1}^x {c_q^{(1)}(24)/q}$ and of $\sum\nolimits_{q = 1}^x {c_q^{(2)}(24)/q}$ for $1 \le x \le 1000$.](CR_gr8.eps "fig:")
![Plot of $\sum\nolimits_{q = 1}^x {c_q^{(2)}(24)/q^2} - \sigma_0^{(2)}(24)/\zeta (2)$ and of $\sum\nolimits_{q = 1}^x {c_q^{(3)}(24)/q^3} - \sigma_0^{(3)}(24)/\zeta (3)$ for $1 \le x \le 1000$.](CR_gr9.eps "fig:") ![Plot of $\sum\nolimits_{q = 1}^x {c_q^{(2)}(24)/q^2} - \sigma_0^{(2)}(24)/\zeta (2)$ and of $\sum\nolimits_{q = 1}^x {c_q^{(3)}(24)/q^3} - \sigma_0^{(3)}(24)/\zeta (3)$ for $1 \le x \le 1000$.](CR_gr10.eps "fig:")
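For the values used in the plots the limits can be checked by hand: the divisors of $24$ that are perfect squares are $1$ and $4$, and those that are perfect cubes are $1$ and $8$, so that $$\sigma_0^{(2)}(24) = \sigma_0^{(3)}(24) = 2, \qquad \frac{\sigma_0^{(2)}(24)}{\zeta(2)} = \frac{12}{\pi^2} \approx 1.216, \qquad \frac{\sigma_0^{(3)}(24)}{\zeta(3)} \approx 1.664,$$ which are the constants subtracted in the second pair of plots.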
Proof of Theorem \[equivalence1\]
=================================
In the lemma of $\mathsection$3.12 of [@titchmarsh], take $a_q = c_q^{(\beta)}(n)$, $f(s)=\sigma_{1-s/\beta}^{(\beta)}(n) / \zeta(s)$, $c=2$, $x$ half an odd integer. Then $$\begin{aligned}
\sum_{q < x} {\frac{{c_q^{(\beta )}(n)}}{{{q^s}}}} &= \frac{1}{{2\pi i}}\int_{2 - iT}^{2 + iT} {\frac{{\sigma_{1 - (s + w)/\beta }^{(\beta )}(n)}}{{\zeta (s + w)}}\frac{{{x^w}}}{w}dw} + O\left( {\frac{{{x^2}}}{T}} \right) \\
&= \frac{1}{{2\pi i}}\bigg( {\int_{2 - iT}^{\tfrac{1}{2} - \sigma + \delta - iT} {} + \int_{\tfrac{1}{2} - \sigma + \delta - iT}^{\tfrac{1}{2} - \sigma + \delta + iT} {} + \int_{\tfrac{1}{2} - \sigma + \delta + iT}^{2 + iT} {} } \bigg)\frac{{\sigma_{1 - (s + w)/\beta }^{(\beta )}(n)}}{{\zeta (s + w)}}\frac{{{x^w}}}{w}dw \\
&+ \frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}} + O\left( {\frac{{{x^2}}}{T}} \right), \end{aligned}$$ where $0 < \delta < \sigma - \tfrac{1}{2}$. If we assume RH, then $1/\zeta(s) \ll t^{\varepsilon}$ for $\sigma \ge \tfrac{1}{2} + \delta$ and every $\varepsilon >0$, so that the first and third integrals are $$\ll {{T^{ - 1 + \varepsilon }} \bigg( \int_{\tfrac{1}{2} - \sigma + \delta }^{\min(\beta - \sigma,2)} {{n^{\beta - \sigma + v}}{x^v}dv} } + \int_{\min(\beta - \sigma,2)}^{2} x^v dv \bigg) \ll {T^{ - 1 + \varepsilon }}{x^2},$$ provided $x > 1$. The second integral is $$\ll {{x^{\tfrac{1}{2} - \sigma + \delta }} n^{\beta + \delta + 1} \int_{ - T}^T {{{(1 + |t|)}^{ - 1 + \varepsilon }}dt} } \ll {{x^{\tfrac{1}{2} - \sigma + \delta }}{T^\varepsilon }} .$$ Thus we have $$\sum_{q < x} {\frac{{c_q^{(\beta )}(n)}}{{{q^s}}}} = \frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}} + O({x^{\tfrac{1}{2} - \sigma + \delta }}{T^\varepsilon }) + O({x^2}{T^{\varepsilon - 1}}).$$ Taking $T=x^3$, the $O$-terms tend to zero as $x \to \infty$, and the result follows. Conversely, if (\[RH_equivalent\]) is convergent for $\sigma > \tfrac{1}{2}$, then it is uniformly convergent for $\sigma \ge \sigma_0 > \tfrac{1}{2}$, and so in this region it represents an analytic function, which is $\sigma_{1-s/\beta}^{(\beta)}(n) / \zeta(s)$ for $\sigma > 1$ and so throughout the region. This means that the Riemann hypothesis is true and the proof is now complete.
Proof of Theorem \[equivalence2\]
=================================
In the lemma of $\mathsection$3.12 of [@titchmarsh], take $a_q = c_q^{(\beta)}(n)$, $f(w)=\sigma_{1-w/\beta}^{(\beta)}(n) / \zeta(w)$, $c=2$, $s = 0$, $\delta > 0$ and $x$ half an odd integer. Then $$\begin{aligned}
\mathfrak{C}^{(\beta)}(n,x) & = \sum_{q < x} c_q^{(\beta)}(n) = \frac{1}{2 \pi i} \int_{2 - iT}^{2 + iT} {\frac{{\sigma_{1 - w/\beta }^{(\beta )}(n)}}{{\zeta (w)}}\frac{{{x^w}}}{w}dw} + O\bigg( {\frac{{{x^2 \sigma_1^{(\beta)}(n)}}}{T}} \bigg) \\
& = \frac{1}{{2\pi i}}\bigg( {\int_{2 - iT}^{\tfrac{1}{2} + \delta - iT} {} + \int_{\tfrac{1}{2} + \delta - iT}^{\tfrac{1}{2} + \delta + iT} {} + \int_{\tfrac{1}{2} + \delta + iT}^{2 + iT} {} } \bigg) {\frac{{\sigma_{1 - w/\beta }^{(\beta )}(n)}}{{\zeta (w)}}\frac{{{x^w}}}{w}dw} + O\left( {\frac{{{x^2}}}{T}} \right) \\
&\ll \int_{-T}^{T} x^{1/2 + \delta} \sigma_{1 - (\frac{1}{2} + \delta)/\beta}^{(\beta )}(n) (1 + |t|)^{\varepsilon - 1} dt + T^{\varepsilon - 1} x^2 \sigma_{1 - (\frac{1}{2}+ \delta)/\beta}^{(\beta )}(n) + {\frac{{{x^2 }}}{T}} \\
&\ll T^{\varepsilon} x^{1/2 + \delta} + T^{\varepsilon - 1} x^2 + {\frac{{{x^2 }}}{T}} .\end{aligned}$$ If we take $T = x^2$, then $$\mathfrak{C}^{(\beta)}(n,x) \ll x^{1/2 + \varepsilon'} .$$ for $\varepsilon' = 2 \varepsilon + \delta > 0$ and (\[RH-1\]) follows. Conversely, if (\[RH-1\]) holds, then by Abel summation $$\begin{aligned}
\sum_{q \le x} \frac{c_q^{(\beta)}(n)}{q^s} \ll \mathfrak{C}^{(\beta)}(n,x) x^{-\sigma} + \int_1^x \mathfrak{C}^{(\beta)}(n,t) t^{-\sigma - 1} dt \ll x^{\frac{1}{2} - \sigma + \varepsilon} \end{aligned}$$ converges as $x \to \infty$ for $\sigma > \frac{1}{2}$, and thus the Riemann hypothesis follows.
Proof of Theorem \[explicitCohenChebyshev\]
===========================================
First, the Dirichlet series are given by the following result.
\[dirichletcohenmangoldt\] For ${\operatorname{Re}}(s)>1$ and $\beta, k \in {\mathbb{N}}$ one has $$\sum_{n = 1}^\infty {\frac{{ \Lambda _{k,m}^{(\beta )}(n)}}{{{n^s}}}} = {( - 1)^k}\sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{{\zeta ^{(k)}}(s)}}{{\zeta (s)}},$$ where $\zeta^{(k)}(s)$ is the $k^{\operatorname{th}}$ derivative of the Riemann zeta-function.
By Lemma \[lemma23\] and the identity $$\sum_{n=1}^{\infty} \frac{\log^k n}{n^s} = (-1)^k \zeta^{(k)}(s),$$ valid for ${\operatorname{Re}}(s) >1$, the result follows by Dirichlet convolution.
From Lemma \[dirichletcohenmangoldt\] we deduce that $$\Lambda _{1,m}^{(\beta )}(n) \ll n^{\varepsilon}$$ for each $\varepsilon > 0$, otherwise the sum would not be absolutely convergent for ${\operatorname{Re}}(s) > 1$. It is known, see for instance Lemma 12.2 of [@montvau], that for each real number $T \ge 2$ there is a $T_1$, $T \le T_1 \le T+1$, such that $$\frac{{\zeta '}}{\zeta }(\sigma + i{T_1}) \ll {(\log T)^2}$$ uniformly for $-1 \le \sigma \le 2$. By using Perron’s inversion formula with $\sigma_0 = 1 + 1/ \log x$ we obtain $$\psi _{0,m}^{(\beta )}(x) = - \frac{1}{{2\pi i}}\int_{{\sigma _0} - i{T_1}}^{{\sigma _0} + i{T_1}} {\sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(s)\frac{{{x^s}}}{s}ds} + {R_1},$$ where $${R_1} \ll \sum_{\substack{x/2 < n < 2x \\ n \ne x}} {\Lambda_{1,m}^{(\beta )}(n)\min \left( {1,\frac{x}{{T|x - n|}}} \right) + \frac{x}{T}} \sum_{n = 1}^\infty {\frac{{\Lambda_{1,m}^{(\beta )}(n)}}{{{n^{{\sigma _0}}}}}} .$$ The second sum is $$- \sigma_{1 - {\sigma _0}/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }({\sigma _0}) \asymp \frac{{\sigma_{1 - {\sigma _0}/\beta }^{(\beta )}(m)}}{{{\sigma _0} - 1}} = \sigma_{1 - (1 + 1/\log x)/\beta }^{(\beta )}(m)\log x.$$ The term involving the generalized divisor function can be bounded in the following way: $$\sigma_{1 - (1 + 1/\log x)/\beta }^{(\beta )}(m) \le m^{\beta + 1/\log x}$$ if $\frac{1}{\log x} \le \beta - 1$, and $\le m$ otherwise. In both cases, this is bounded in $x$. For the first sum we do as follows. The terms for which $x + 1 \le n < 2x$ contribute an amount which is $$\ll \sum_{x + 1 \leqslant n < 2x} {\frac{{ x^{1+\varepsilon}}}{{T(n - x)}}} \ll \frac{x^{1+\varepsilon}\log x}{T} .$$ The terms for which $x / 2 < n \le x-1$ are dealt with in a similar way. The remaining terms for which $x -1 < n < x+1$ contribute an amount which is $$\ll x^{\varepsilon} \min \bigg( {1,\frac{x}{{T\left\langle x \right\rangle_{\beta} }}} \bigg) ,$$ therefore, the final bound for $R_1$ is $${R_1} \ll x^{\varepsilon} \min \bigg( {1,\frac{x}{{T\left\langle x \right\rangle_{\beta} }}} \bigg) + \frac{x^{1+\varepsilon}\log x}{T} .$$ We denote by $N$ an odd positive integer and by $\mathcal{D}$ the contour consisting of line segments connecting $\sigma_0 - iT_1 , - N - iT_1, -N +iT_1, \sigma_0 + iT_1$. 
An application of Cauchy’s residue theorem yields $$\psi _{0,m}^{(\beta )}(x) = {M_0} + {M_1} + {M_\rho } + {M_{ - 2k}} + {R_1} + {R_2}$$ where the terms on the right-hand sides are the residues at $s=0$, $s=1$, the non-trivial zeros $\rho$ and at the trivial zeros $-2k$ for $k=1,2,3,\cdots$, respectively, and where $${R_2} = - \frac{1}{{2\pi i}}\oint\nolimits_\mathcal{D} {\sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(s)\frac{{{x^s}}}{s}ds}.$$ For the constant term we have $${M_0} = \mathop {\operatorname{res} }\limits_{s = 0} \sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(s)\frac{{{x^s}}}{s} = \sigma_1^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(0) = \sigma_1^{(\beta )}(m)\log (2\pi ),$$ and for the leading term $${M_1} = \mathop {\operatorname{res} }\limits_{s = 1} \sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(s)\frac{{{x^s}}}{s} = \sigma_{1 - 1/\beta }^{(\beta )}(m)x.$$ The fluctuaring term coming from the non-trivial zeros yields $${M_\rho } = \sum_\rho {\mathop {\operatorname{res} }\limits_{s = \rho } \sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(s)\frac{{{x^s}}}{s}} = \sum_\rho {\sigma_{1 - \rho /\beta }^{(\beta )}(m)\frac{{{x^\rho }}}{\rho }},$$ by the use of the logarithmic derivative of the Hadamdard product of the Riemann zeta-function, and finally for the trivial zeros $${M_{ - 2k}} = \sum_{k = 1}^\infty {\mathop {\operatorname{res} }\limits_{s = - 2k} \sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(s)\frac{{{x^s}}}{s}} = \sum_{k = 1}^\infty {\sigma_{1 + 2k/\beta }^{(\beta )}(m)\frac{{{x^{ - 2k}}}}{{ - 2k}}}.$$ Since $|\sigma \pm i{T_1}| \geqslant T$, we see, by our choice of $T_1$, that $$\begin{aligned}
\int_{ - 1 \pm i{T_1}}^{{\sigma _0} \pm i{T_1}} {\sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(s)\frac{{{x^s}}}{s}ds} &\ll \frac{{{{\log }^2}T}}{T} \bigg(\int_{ - 1}^{{\min(\beta,\sigma_0)}} \left(\frac{x}{m}\right)^{\sigma} d\sigma + \int_{\min(\beta,\sigma_0)}^{{\sigma _0}} x^{\sigma} d\sigma \bigg) \\
&\ll \frac{{x{{\log }^2}T}}{{T\log x}} \ll \frac{{x{{\log }^2}T}}{T} . \end{aligned}$$ Next, we invoke the following result, see Lemma 12.4 of [@montvau]: if $\mathcal{A}$ denotes the set of points $s \in \mathbb{C}$ such that $\sigma \le -1$ and $|s+2k| \ge 1/4$ for every positive integer $k$, then $$\frac{{\zeta '}}{\zeta }(s) \ll \log (|s| + 1)$$ uniformly for $s \in \mathcal{A}$. This, combined with the fact that $$\frac{\log |\sigma \pm i{T_1}|}{|\sigma \pm i{T_1}|} \ll \frac{\log T}{T} ,$$ gives us $$\begin{aligned}
\int_{ - N \pm i{T_1}}^{ - 1 \pm i{T_1}} {\sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(s)\frac{{{x^s}}}{s}ds} &\ll \frac{{\log T}}{T}\int_{ - \infty }^{ - 1} {{\left(\frac{x}{m}\right)^\sigma }d\sigma } \ll \frac{{\log T}}{{xT\log x}} \ll \frac{{\log T}}{T} .\end{aligned}$$ Thus this bounds the horizontal integrals. Finally, for the left vertical integral, we have that $|-N +iT| \ge N$ and by the above result regarding the bound of the logarithmic derivative we also see that $$\begin{aligned}
\int_{ - N - i{T_1}}^{ - N + i{T_1}} {\sigma_{1 - s/\beta }^{(\beta )}(m)\frac{{\zeta '}}{\zeta }(s)\frac{{{x^s}}}{s}ds} &\ll \frac{{\log NT}}{N}{x^{ - N}}\sigma_{1 + N/\beta }^{(\beta )}(m)\int_{ - {T_1}}^{{T_1}} {dt} \ll \frac{{T\log NT}}{{N}} \left(\frac{m}{x}\right)^N . \end{aligned}$$ This last term goes to $0$ as $N \to \infty$ since $x > m$.
Proof of Theorems \[PNTzerofree\] and \[PNTRH\]
===============================================
Let us denote by $\rho = \beta^* + i\gamma$ a non-trivial zero. For this, we will use the result that if $|\gamma| < T$, where $T$ is large, then $\beta^* < 1 - c_1 / \log T$, where $c_1$ is a positive absolute constant. This immediately yields $$|{x^\rho }| = {x^{{\beta ^*}}} < x e^{- {c_1}\log x/\log T}.$$ Moreover, $|\rho| \ge \gamma$, for $\gamma >0$. We recall that the number of zeros $N(t)$ up to height $t$ is (Chapter 18 of [@davenport].) $$N(t) = \frac{t}{{2\pi }}\log \frac{t}{{2\pi }} - \frac{t}{{2\pi }} + O(\log t) \ll t \log t .$$ We need to estimate the following sum $$\sum_{0 < \gamma < T} {\frac{{\sigma_{1 - \gamma /\beta }^{(\beta )}(m)}}{\gamma }} = \sum_{1 < \gamma < T} {\frac{{\sigma_{1 - \gamma /\beta }^{(\beta )}(m)}}{\gamma }}.$$ This is $$\ll \int_1^T {\frac{{\sigma_{1 - t/\beta }^{(\beta )}(m)}}{{{t^2}}}N(t)dt} \ll m \int_1^{\beta} {\frac{{\log t}}{t}dt} + m^{\beta + 1}\int_\beta^{T} {\frac{{ \log t}}{t m^t}dt} \ll {\log ^2}T .$$ Therefore, $$\sum_{|\gamma | < T} {\left| {\sigma_{1 - \rho /\beta }^{(\beta )}(m)\frac{{{x^\rho }}}{\rho }} \right|} \ll x{(\log T)^2} e^{- {c_1}\log x/\log T} .$$ Without loss of generality we take $x$ to be an integer in which case the error term of the explicit formula of Theorem \[explicitCohenChebyshev\] becomes $$R(x,T) \ll \frac{x^{1+\varepsilon}\log x}{T} + \frac{{x{{\log }^2}T}}{T} .$$ Finally, we can bound the sum $$\sum_{k = 1}^{\infty} \sigma_{1 + 2k/\beta}^{(\beta)} (m) \frac{x^{-2k}}{2k} \le m^{\beta + 1} \sum_{k = 1}^{\infty} m^{2k} \frac{x^{-2k}}{2k} = \frac{1}{2} m^{\beta + 1} \log \bigg(1 - \bigg(\frac{x}{m}\bigg)^{-2} \bigg) = o(1).$$ Thus, we have the following $$|\psi _m^{(\beta )}(x) - \sigma_{1 - 1/\beta }^{(\beta )}(m)x| \ll \frac{x^{1+\varepsilon}\log x}{T} + \frac{{x{{\log }^2}T}}{T} + x{(\log T)^2}e^{- {c_1}\log x/\log T},$$ for large $x$. Let us now take $T$ as a function of $x$ by setting ${(\log T)^2} = \log x$ so that $$\begin{aligned}
|\psi _m^{(\beta )}(x) - \sigma_{1 - 1/\beta }^{(\beta )}(m)x| \ll x^{1+\varepsilon} \log x e^{- {(\log x)^{1/2}}} + x(\log x)e^{- {c_1}{(\log x)^{1/2}}} \ll x^{1+\varepsilon} e^{- {c_2}{(\log x)^{1/2}}} , \end{aligned}$$ for all $\varepsilon >0$ provided that $c_2$ is a suitable constant that is less than both $1$ and $c_1$.
Next, if we assume the Riemann hypothesis, then $|{x^\rho }| = {x^{1/2}}$ and the other estimate regarding $\sum {\sigma_{1 - \gamma /\beta }^{(\beta )}(m){\gamma ^{ - 1}}}$ stays the same. Thus, the explicit formula yields $$|\psi _m^{(\beta )}(x) - \sigma_{1 - 1/\beta }^{(\beta )}(m)x| = O \bigg( x^{1/2} \log^2 T + \frac{x^{1+\varepsilon}\log x}{T} + \frac{{x{{\log }^2}T}}{T} \bigg)$$ provided that $x$ is an integer. Taking $T = x^{1/2}$ leads to $$\begin{aligned}
\psi _m^{(\beta )}(x) & = \sigma_{1 - 1/\beta }^{(\beta )}(m)x + O({x^{1/2}}{\log ^2}{x} + x^{1/2+\varepsilon}\log x ) = \sigma_{1 - 1/\beta }^{(\beta )}(m)x + O({x^{1/2+\varepsilon}} )\end{aligned}$$
Proof of Theorem \[bartz11\]
============================
We now look at the contour integral $${\Upsilon ^{(\beta )}}(n,z) = \oint\nolimits_\Omega {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}{e^{sz}}ds}$$ taken around the path $\Omega = [-1/2,3/2,3/2+iT_m,-1/2+iT_m]$.
For the upper horizontal integral we have $$\begin{aligned}
\bigg| {\int_{-1/2 + i{T_m}}^{3/2 + i{T_m}} {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}{e^{sz}}ds} } \bigg| &\leqslant \int_{-1/2}^{\min(\beta,3/2)} {\bigg| {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n){e^{sz}}}}{{\zeta (\sigma + i{T_m})}}} \bigg|d\sigma } + \int_{\min(\beta,3/2)}^{3/2} {\bigg| {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n){e^{sz}}}}{{\zeta (\sigma + i{T_m})}}} \bigg|d\sigma } \nonumber \\
&\ll T_m^{{c_1}}{n^{\beta + 1}}{e^{ - T_m y}}\int_{-1/2}^{\min(\beta,3/2)} {{n^{ - \sigma }}{e^{\sigma x}}d\sigma } + n{e^{ - T_m y}}\int_{\min(\beta,3/2)}^{3/2} {{e^{\sigma x}}d\sigma } \nonumber \\
&\to 0 \nonumber \end{aligned}$$ as $m \to \infty$. An application of Cauchy’s residue theorem yields $$\begin{aligned}
\label{cauchy_bartz_cohen}
\int_{ - 1/2 + i\infty }^{ - 1/2} {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}{e^{sz}}ds} + \int_{ - 1/2}^{3/2} {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}{e^{sz}}ds} + \int_{3/2}^{3/2 + i\infty } {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}{e^{sz}}ds} = 2\pi i\varpi _n^{(\beta )}(z),\end{aligned}$$ where for ${\operatorname{Im}}(z) > 0$ we have $$\varpi _n^{(\beta )}(z) = \mathop {\lim }\limits_{m \to \infty } \sum_{\substack{\rho \\ 0 < \operatorname{Im} \rho < {T_m}}} {\frac{1}{{({k_\rho } - 1)!}}\frac{{{d^{k\rho - 1}}}}{{d{s^{k\rho - 1}}}}{{\bigg[ {{{(s - \rho )}^{k\rho }}\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}{e^{sz}}} \bigg]}_{s = \rho }}}$$ with $k_\rho$ denoting the order of multiplicity of the non-trivial zero $\rho$ of the Riemann zeta-function. We denote by $\varpi _{1,n}^{(\beta )}(z)$ and by $\varpi _{2,n}^{(\beta )}(z)$ the first and second integrals on the left hand-side of respectively. If we operate under assumption that there are no multiple zeros, then the above can be simplified to . This is done for the sake of simplicity, since dealing with this extra term would relax this assumption.
If $z \in \mathbb{H}$ then by one has $$2\pi i\varpi _n^{(\beta )}(z) = \varpi _{1,n}^{(\beta )}(z) + \varpi _{2,n}^{(\beta )}(z) + \varpi _{3,n}^{(\beta )}(z),$$ where the last term is given by the vertical integral on the right of the $\Omega$ contour $$\varpi _{3,n}^{(\beta )}(z) = \int_{3/2}^{3/2 + i\infty } {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}{e^{sz}}ds}.$$ By the use of the Dirichlet series of $c_q^{(\beta)}(n)$ given in Lemma \[lemma23\] and since we are in the region of absolute convergence we see that $$\varpi _{3,n}^{(\beta )}(z) = \sum_{q = 1}^\infty {c_q^{(\beta )}(n)\int_{3/2}^{3/2 + i\infty } {{e^{sz - s\log q}}ds} } = - {e^{3z/2}}\sum_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{{{q^{3/2}}(z - \log q)}}} .$$ By standard bounds of Stirling and the functional equation of the Riemann zeta-function we have that $$|\zeta(-\tfrac{1}{2} + it)| \approx (1 +|t|)$$ as $|t| \to \infty$. Therefore, we see that $$|\varpi _{1,n}^{(\beta )}(z)| = \bigg| {\int_{ - 1/2 + i\infty }^{ - 1/2} {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}{e^{sz}}ds} } \bigg| = O \bigg( {n^{\beta + 3/2}}{e^{ - x/2}}\int_0^\infty {{e^{ - ty}}dt} \bigg) = O \bigg(\frac{{{n^{\beta + 3/2}}{e^{ - x/2}}}}{y} \bigg),$$ and $\varpi _{1,n}^{(\beta )}(z)$ is absolutely convergent for $y = {\operatorname{Im}}(z) > 0$. We know that $\varpi _{n}^{(\beta )}(z)$ is analytic for $y>0$ and the next step is to show that that it can be meromorphically continued for $y > - \pi$. To this end, we go back to the integral $$\varpi _{1,n}^{(\beta )}(z) = - \int_{ - 1/2}^{ - 1/2 + i\infty } {\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}{e^{sz}}ds}$$ with $y>0$. The functional equation of $\zeta(s)$ yields $$\begin{aligned}
\label{aux_1}
\varpi _{1,n}^{(\beta )}(z) &= - \int_{ - 1/2}^{ - 1/2 + i\infty } {\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}{e^{s(z - \log 2\pi - i\pi /2)}}ds} \nonumber \\
&- \int_{ - 1/2}^{ - 1/2 + i\infty } {\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}{e^{s(z - \log 2\pi + i\pi /2)}}ds} \nonumber \\
&= \varpi _{11,n}^{(\beta )}(z) + \varpi _{12,n}^{(\beta )}(z).\end{aligned}$$ Since one has by standard bounds that $$\frac{{\Gamma ( - \tfrac{1}{2} + it)}}{{\zeta (\tfrac{3}{2} - it)}} \ll {e^{ - \pi t/2}}$$ it then follows that $$\varpi _{11,n}^{(\beta )}(z) \ll {n^{\beta + 3/2}}\int_0^\infty {{e^{ - \pi t/2}}{e^{ - \tfrac{1}{2}x - ty + t\pi /2}}dt} \ll \frac{{{e^{ - x/2}}{n^{\beta + 3/2}}}}{y},$$ and hence $\varpi _{11,n}^{(\beta )}(z)$ is regular for $y>0$. Similarly, $$\varpi _{12,n}^{(\beta )}(z) \ll {n^{\beta + 3/2}}{e^{ - x/2}}\int_0^\infty {{e^{ - (\pi + y)t}}dt} \ll \frac{{{n^{\beta + 3/2}}{e^{ - x/2}}}}{{y + \pi }} ,$$ so that $\varpi _{12,n}^{(\beta )}(z)$ is regular for $y > -\pi$. Let us further split $\varpi _{11,n}^{(\beta )}(z)$ $$\varpi _{11,n}^{(\beta )}(z) = \bigg( { - \int_{ - 1/2 - i\infty }^{ - 1/2 + i\infty } {} + \int_{ - 1/2 - i\infty }^{ - 1/2} {} } \bigg){e^{s(z - \log 2\pi - i\pi /2)}}\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}ds = I_{1,n}^{(\beta )}(z) + I_{2,n}^{(\beta )}(z).$$ By the same technique as above, it follows that the integral $I_{2,n}^{(\beta )}(z)$ is convergent for $y < \pi$. Moreover, since $\varpi _{11,n}^{(\beta )}(z)$ is regular for $y>0$, then it must be that $I_{1,n}^{(\beta )}(z)$ is convergent for $0 < y < \pi$. Let $$f(n,q,s,z) = \sigma_{1 - s/\beta }^{(\beta )}(n){e^{s(z - \log 2\pi - i\pi /2 + \log q)}}\Gamma (s).$$ By the theorem of residues we see that $$\begin{aligned}
\label{aux_2}
- \int_{ - 1/2 - i\infty }^{ - 1/2 + i\infty } {f(n,q,s,z)ds} &= - \int_{1 - i\infty }^{1 + i\infty } {f(n,q,s,z)ds} + 2\pi i\mathop {\operatorname{res} }\limits_{s = 0} f(n,q,s,z) \nonumber \\
&= - \int_{1 - i\infty }^{1 + i\infty } {f(n,q,s,z)ds} + 2\pi i\sigma_1^{(\beta )}(n). \end{aligned}$$ This last integral is equal to $$\begin{aligned}
\label{aux_3}
\int_{1 - i\infty }^{1 + i\infty } {f(n,q,s,z)ds} = 2\pi i\sum_{k = 0}^\infty {\frac{{{{( - 1)}^k}}}{{k!}}{{\left(e^{-z} \frac{2 \pi i}{q} \right)}^k}\sigma_{1 + k/\beta }^{(\beta )}(n)}, \end{aligned}$$ where the last sum is absolutely convergent. To prove this, note that $$\operatorname{Re} ( {e^{ - (z - \log 2\pi - i\pi /2 + \log q)}} ) = (e^{-x} 2\pi/q)\sin y > 0$$ for $0 < y < \pi$. Next, consider the path of integration with vertices $[1 \pm iT]$ and $[ -N \pm iT]$, where $N$ is an odd positive integer. By Cauchy’s theorem $$\begin{aligned}
& \bigg( \int_{1 - iT }^{1 + iT } - \int_{-N + iT }^{1 + iT } - \int_{-N - iT }^{-N + iT } + \int_{-N - iT }^{1 - iT } \bigg) \left( e^{-z} \frac{2 \pi i}{q} \right)^{ - s}\sigma_{1 - s/\beta }^{(\beta )}(n)\Gamma (s)ds \\
& = 2\pi i\sum_{k = 0}^{N-1} \frac{( - 1)^k}{k!}{\left(e^{-z} \frac{2 \pi i}{q} \right)^k}\sigma_{1 + k/\beta }^{(\beta )}(n).
\end{aligned}$$ The third integral on the far left of the path can be bounded in the following way $$\begin{aligned}
I_3 &:= \int_{-N - iT }^{-N + iT } \left( e^{-z} \frac{2 \pi i}{q} \right)^{ - s}\sigma_{1 - s/\beta }^{(\beta )}(n)\Gamma (s)ds \ll \int_{-T}^{T} \bigg(e^{-x} \frac{2\pi}{q} \bigg)^{N} e^{-t (y - \frac{\pi}{2})} n^{\beta + N + 1} e^{- \frac{\pi}{2} |t|} dt \\
&\ll \left(e^{-x} \frac{2\pi n}{q}\right)^{N} \int_{-T}^{T} e^{-t (y - \frac{\pi}{2})} e^{- \frac{\pi}{2} |t|} dt \ll \left(e^{-x} \frac{2\pi n}{q}\right)^{N} ( e^{T(y - \pi)} + e^{- Ty} ) \\
&\ll e^{-Nx + N\log \frac{2 \pi n}{q}} e^{-T \min(y, \pi - y)} .\end{aligned}$$ We now bound the horizontal parts. For the top one $$\begin{aligned}
I_{+} &:= \int_{-N + iT }^{1 + iT } \left( e^{-z} \frac{2 \pi i}{q} \right)^{ - s}\sigma_{1 - s/\beta }^{(\beta )}(n)\Gamma (s)ds \ll \int_{-N}^{1} \left(e^{-x} \frac{2\pi}{q}\right)^{-\sigma} e^{-T (y - \frac{\pi}{2})} n^{\beta - \sigma + 1} T^{\frac{1}{2}} e^{- T \frac{\pi}{2}} d\sigma \\
&\ll T^{\frac{1}{2}} e^{-T y} \int_{-N}^{1} \left(e^{-x} \frac{2\pi n}{q}\right)^{-\sigma} d\sigma \ll T^{\frac{3}{2}} e^{T (y - \pi)} \left(e^{-x} \frac{2\pi n}{q}\right)^{N} \\
&\ll T^{\frac{1}{2}} e^{- Nx + N\log \frac{2 \pi n}{q}} e^{-T y} ,
\end{aligned}$$ and analogously for the bottom one $$\begin{aligned}
I_{-} &:= \int_{-N - iT }^{1 - iT } \left( e^{-z} \frac{2 \pi i}{q} \right)^{ - s}\sigma_{1 - s/\beta }^{(\beta )}(n)\Gamma (s)ds \ll \int_{-N}^{1} \left(e^{-x} \frac{2\pi}{q}\right)^{-\sigma} e^{T (y - \frac{\pi}{2})} n^{\beta - \sigma + 1} T^{\frac{1}{2}} e^{-T \frac{\pi}{2}} d\sigma \\
&\ll T^{\frac{1}{2}} e^{-T (\pi - y)} \left(e^{-x} \frac{2\pi n}{q}\right)^{N} \ll T^{\frac{1}{2}} e^{-Nx + N\log \frac{2 \pi n}{q}} e^{-T (\pi - y)} .
\end{aligned}$$ Let now $T = T(N)$ such that $$T > \frac{N(-x + \log \frac{2 \pi n}{q})}{\min(y, \pi - y)}.$$ It is now easy to see that all of the three parts tend to 0 as $N \to \infty$ through odd integers, and thus the result follows. Thus, putting together with and gives us $$\begin{aligned}
\label{general_series}
I_{1,n}^{(\beta )}(z) &= - \int_{ - 1/2 - i\infty }^{ - 1/2 + i\infty } {{e^{s(z - \log 2\pi - i\pi /2)}}\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}ds} \nonumber \\
&= - \sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\int_{ - 1/2 - i\infty }^{ - 1/2 + i\infty } {{e^{s(z - \log 2\pi - i\pi /2)}}{q^s}\sigma_{1 - s/\beta }^{(\beta )}(n)\Gamma (s)ds} } \nonumber \\
&= - \sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\bigg( { 2\pi i\sum_{k = 0}^\infty {\frac{{{{( - 1)}^k}}}{{k!}}{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}\sigma_{1 + k/\beta }^{(\beta )}(n)} - 2\pi i\sigma_1^{(\beta )}(n)} \bigg)} \nonumber \\
&= - 2\pi i\sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{{{{( - 1)}^k}}}{{k!}}{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}\sigma_{1 + k/\beta }^{(\beta )}(n)} } \end{aligned}$$ since $\sum\nolimits_{q = 1}^\infty {\mu (q)/q} = 0$. Moreover, $$\begin{aligned}
|{(2\pi i)^{ - 1}}I_{1,n}^{(\beta )}(z)| &= \bigg| {\sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\bigg( {\sum_{k = 0}^\infty {\frac{{{{( - 1)}^k}}}{{k!}}{{\left({{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}\sigma_{1 + k/\beta }^{(\beta )}(n)} - \sigma_{1}^{(\beta )}(n)} \bigg)} } \bigg| \nonumber \\
& = \bigg| {\sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\bigg( {\sum_{k = 1}^\infty {\frac{{{{( - 1)}^k}}}{{k!}}{{\left({{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}\sigma_{1 + k/\beta }^{(\beta )}(n)}} \bigg)} } \bigg| \nonumber \\
&\leqslant n^{\beta+1} \sum_{q = 1}^\infty {\frac{1}{q}\bigg( {\sum_{k = 1}^\infty {\frac{{{1}}}{{k!}}{{\left( {{e^{ - x}}\frac{{2\pi }}{q}} \right)}^k}{n^k}}} \bigg)} \nonumber \\
&= n^{\beta+1} \sum_{q = 1}^\infty {\frac{1}{q}\left( {\exp \left( {{e^{ - x}}\frac{{2\pi }}{q}n} \right)} - 1\right)} \nonumber \\
&\ll {n^{\beta + 1}}{e^{2\pi n /{e^x}}}\sum_{q \leqslant [2\pi n /{e^x}]} \frac{1}{q} + \frac{{2\pi n^{\beta + 2}}}{{{e^x}}} \sum_{q \geqslant [2\pi n/{e^x}] + 1} {\frac{1}{{{q^2}}}} \ll {c_2}(x) , \nonumber \end{aligned}$$ and the series on the right hand-side of is absolutely convergent for all $y$. Thus, this proves the analytic continuation of $\varpi _{1,n}^{(\beta )}(z)$ to $y > -\pi$. For $|y| < \pi$ one has $$\begin{aligned}
\varpi _{1,n}^{(\beta )}(z) &= I_{1,n}^{(\beta)}(z) + I_{2,n}^{(\beta)}(z) + \varpi_{12,n}^{(\beta)}(z) = - 2\pi i\sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{{{{( - 1)}^k}}}{{k!}}{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}\sigma_{1 + k/\beta }^{(\beta )}(n)} } \\
& + \int_{ - 1/2 - i\infty }^{ - 1/2} {{e^{s(z - \log 2\pi - i\pi /2)}}\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}ds} \\
& - \int_{ - 1/2}^{ - 1/2 + i\infty } {{e^{s(z - \log 2\pi + i\pi /2)}}\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}ds} \end{aligned}$$ where the first term is holomorphic for all $y$, the second one for $y < \pi$ and the third for $y > -\pi$. Hence, this last equation shows the continuation of $\varpi _{n}^{(\beta )}(z)$ to the region $y > -\pi$. To complete the proof of the theorem, one then considers the function $$\hat \varpi _n^{(\beta )}(z) = \mathop {\lim }\limits_{m \to \infty } \sum_{\substack{\rho \\ - {T_m} < \operatorname{Im} \rho < 0}} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}{e^{\rho z}}},$$ where the zeros are in the lower part of the critical strip and $z$ now belongs to the lower half-plane $\hat {\mathbb{H}} = \{z \in {\mathbb{C}}: {\operatorname{Im}}(z)<0 \}$. It then follows by repeating the above argument that $$\hat \varpi _{1,n}^{(\beta )}(z) = \hat \varpi _{11,n}^{(\beta )}(z) + \hat \varpi _{12,n}^{(\beta )}(z),$$ where $$\hat \varpi _{11,n}^{(\beta )}(z) = - \int_{ - 1/2 - i\infty }^{ - 1/2} {{e^{s(z - \log 2\pi - i\pi /2)}}\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}ds}$$ is absolutely convergent for $y < \pi$ and $$\hat \varpi _{12,n}^{(\beta )}(z) = - \int_{ - 1/2 - i\infty }^{ - 1/2} {{e^{s(z - \log 2\pi + i\pi /2)}}\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}ds}$$ is absolutely convergent for $y<0$. Spliting up the first integral just as before and using a similar analysis to the one we have just carried out, but using the fact that $\zeta (\bar s) = \overline {\zeta (s)}$ and choosing $T_m$ ($m \le T_m \le m+1$) such that $$\left| \frac{1}{{{\zeta (\sigma - i{T_n})}}} \right| < T_n^{{c_1}},\quad - 1 \leqslant \sigma \leqslant 2,$$ yields that $$\begin{aligned}
\hat \varpi _{1,n}^{(\beta )}(z) &= - 2\pi i\sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{1}{k!}{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}\sigma_{1 + k/\beta }^{(\beta )}(n)} } \nonumber \\
&- \int_{ - 1/2 - i\infty }^{ - 1/2} {{e^{s(z - \log 2\pi - i\pi /2)}}\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}ds} \nonumber \\
&+ \int_{ - 1/2}^{ - 1/2 + i\infty } {{e^{s(z - \log 2\pi + i\pi /2)}}\sigma_{1 - s/\beta }^{(\beta )}(n)\frac{{\Gamma (s)}}{{\zeta (1 - s)}}ds}. \nonumber \end{aligned}$$ Therefore, $\hat \varpi _n^{(\beta )}(z)$ admits an analytic continuation from $y<0$ to the half-plane $y < \pi$.
Proof of Theorem \[bartz12\]
============================
Adding up the two results of our previous section $$\varpi _{1,n}^{(\beta )}(z) + \hat \varpi _{1,n}^{(\beta )}( z) = - 2\pi i\sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{1}{{k!}}\bigg\{ {{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k} + {{\left( { - {e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}} \bigg\}\sigma_{1 + k/\beta }^{(\beta )}(n)} }.$$ The other terms do not contribute since $$\varpi _{2,n}^{(\beta )}(z) + \hat \varpi _{2,n}^{(\beta )}(z) = \bigg( {\int_{ - 1/2}^{3/2} {} + \int_{3/2}^{ - 1/2} {} } \bigg){e^{sz}}\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}ds = 0,$$ and by the Theorem \[bartz11\] we have $$\varpi _{3,n}^{(\beta )}(z) + \hat \varpi _{3,n}^{(\beta )}(z) = 0.$$ Consequently, we have $$\begin{aligned}
\label{pre_FE}
\varpi _n^{(\beta )}(z) + \hat \varpi _n^{(\beta )}(z) = - \sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{1}{{k!}}\bigg\{ {{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k} + {{\bigg( { - {e^{ - z}}\frac{{2\pi i}}{q}} \bigg)}^k}} \bigg\}\sigma_{1 + k/\beta }^{(\beta )}(n)} }\end{aligned}$$ for $|y| < \pi$. Thus, once again, by the previous theorem for all $y < \pi$ $$\varpi _n^{(\beta )}(z) = - \hat \varpi _n^{(\beta )}(z) - \sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{1}{{k!}}\bigg\{ {{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k} + {{\left( { - {e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}} \bigg\}\sigma_{1 + k/\beta }^{(\beta )}(n)} }$$ by analytic continuation, and for $y > -\pi$ $$\hat \varpi _n^{(\beta )}(z) = - \varpi _n^{(\beta )}(z) - \sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{1}{{k!}}\bigg\{ {{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k} + {{\left( { - {e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}} \bigg\}\sigma_{1 + k/\beta }^{(\beta )}(n)} }.$$ This shows that $\varpi _n^{(\beta )}(z)$ and $\hat \varpi _n^{(\beta )}(z)$ can be analytically continued over ${\mathbb{C}}$ as a meromorphic function and that holds for all $z$. To prove the functional equation, we look at the zeros. If $\rho$ is a non-trivial zero of $\zeta(s)$ then so is $\bar \rho$. For $z \in \mathbb{H}$ one has $$\varpi _n^{(\beta )}(z) = \mathop {\lim }\limits_{m \to \infty } \overline {\sum_{\substack{\rho \\ 0 < \operatorname{Im} \rho < {T_m}}} {\overline {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}{e^{\rho z}}} } } .$$ By using $\overline {\sigma_{1 - \rho /\beta }^{(\beta )}(n)} = \sigma_{1 - \bar \rho /\beta }^{(\beta )}(n)$ and since $\zeta (\bar s) = \overline {\zeta (s)}$ we get $$\begin{aligned}
\varpi _n^{(\beta )}(z) &= \overline {\sum_{\substack{\rho \\ 0 < \operatorname{Im} \rho < {T_m}}} {\overline {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}{e^{\rho z}}} } } = \sum_{\substack{\rho \\ 0 < \operatorname{Im} \rho < {T_m}}} {\frac{{\sigma_{1 - \bar \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\bar \rho )}}{e^{\overline {\rho z} }}} \nonumber \\
&= \sum_{\substack{\rho \\ - {T_m} < \operatorname{Im} \rho < 0}} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}{e^{\rho \bar z}}} = \overline {\hat{\varpi} _n^{(\beta )}(\bar z)} . \nonumber \end{aligned}$$ Invoking (\[pre_FE\]) with $z \in \mathbb{H}$ we see that $$\begin{aligned}
\varpi _n^{(\beta )}(z) &= \overline {\hat \varpi _n^{(\beta )}(\bar z)} = - \overline {\varpi _n^{(\beta )}(\bar z)} - \overline {\sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{1}{{k!}}\bigg\{ {{{\left( {{e^{ - \bar z}}\frac{{2\pi i}}{q}} \right)}^k} + {{\left( { - {e^{ - \bar z}}\frac{{2\pi i}}{q}} \right)}^k}} \bigg\}\sigma_{1 + k/\beta }^{(\beta )}(n)} } } \nonumber \\
&= - \overline {\varpi _n^{(\beta )}(\bar z)} - \sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 0}^\infty {\frac{1}{{k!}}\bigg\{ {{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k} + {{\left( { - {e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}} \bigg\}\sigma_{1 + k/\beta }^{(\beta )}(n)} } \nonumber \end{aligned}$$ The identity for $z \in \hat{\mathbb{H}}$ then follows by complex conjugation, and for $z$ with $y={\operatorname{Im}}(z)=0$ by analytic continuation. This proves the functional equation.
Another expression can be found which depends on the values of the Riemann zeta-function at odd integers $$\begin{aligned}
A_n^{(\beta )}(z) &= - \sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\bigg( {\sum_{k = 0}^\infty {\frac{1}{{k!}}\bigg( {{{\bigg( {{e^{ - z}}\frac{{2\pi i}}{q}} \bigg)}^k} + {{\left( { - {e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k}} \bigg)\sigma_{1 + k/\beta }^{(\beta )}(n)} - 2\sigma_1^{(\beta )}(n)} \bigg)} \nonumber \\
&= - \sum_{q = 1}^\infty {\frac{{\mu (q)}}{q}\sum_{k = 1}^\infty {\frac{1}{{k!}}\bigg( {{{\left( {{e^{ - z}}\frac{{2\pi i}}{q}} \right)}^k} + {{\bigg( { - {e^{ - z}}\frac{{2\pi i}}{q}} \bigg)}^k}} \bigg)\sigma_{1 + k/\beta }^{(\beta )}(n)} } \nonumber \\
&= - \sum_{k = 1}^\infty {\frac{1}{{k!}}{{({e^{ - z}}2\pi i)}^k}\sigma_{1 + k/\beta }^{(\beta )}(n)\sum_{q = 1}^\infty {\frac{{\mu (q)}}{{{q^{1 + k}}}}} } - \sum_{k = 1}^\infty {\frac{1}{{k!}}{{( - {e^{ - z}}2\pi i)}^k}\sigma_{1 + k/\beta }^{(\beta )}(n)\sum_{q = 1}^\infty {\frac{{\mu (q)}}{{{q^{1 + k}}}}} } \nonumber \\
&= - \sum_{k = 1}^\infty {\frac{1}{{k!}}({{({e^{ - z}}2\pi i)}^k} + {{( - {e^{ - z}}2\pi i)}^k})\sigma_{1 + k/\beta }^{(\beta )}(n)\frac{1}{{\zeta (1 + k)}}} \nonumber \\
&=- 2\sum_{k = 1}^\infty {\frac{{{{( - 1)}^k}{{(2\pi )}^{2k}}}}{{(2k)!}}\frac{{{e^{ - 2kz}}\sigma_{1 + 2k/\beta }^{(\beta )}(n)}}{{\zeta (2k+1)}}}, \nonumber \end{aligned}$$ since the odd terms vanish. Finally, if $z = x + iy$, then we are left with $$|A_n^{(\beta )}(z)| \le 2 n^{\beta+1} \sum_{k = 1}^{\infty} \frac{(2 \pi n e^{-x})^{2k}}{(2k)!},$$ which converges absolutely. Thus $A_n^{(\beta )}(z)$ defines an entire function.
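The collapse of the sum over $q$ above uses the classical Dirichlet series $\sum_{q \ge 1} \mu(q)q^{-(1+k)} = 1/\zeta(1+k)$, valid for $k \ge 1$. The following short Python sketch is only a numerical sanity check of this identity (it plays no role in the proof); the truncation point $Q$ and the test values of $k$ are arbitrary choices.

```python
def mobius(n):
    """Moebius function mu(n) computed by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:   # n is divisible by p^2, so mu(n) = 0
                return 0
            result = -result
        p += 1
    if n > 1:                # one prime factor left over
        result = -result
    return result

Q = 50000                    # truncation point (arbitrary)
for k in (1, 2, 3):
    lhs = sum(mobius(q) / q ** (1 + k) for q in range(1, Q + 1))
    zeta_1pk = sum(n ** -(1.0 + k) for n in range(1, Q + 1))   # truncated zeta(1 + k)
    print(k, lhs, 1.0 / zeta_1pk, abs(lhs - 1.0 / zeta_1pk))
```

The agreement improves rapidly with $k$, as expected since both tails decay like $Q^{-k}$.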
Proof of Theorem \[bartz13\]
============================
This now follows from Theorem \[bartz12\].
Acknowledgements
================
The authors acknowledge partial support from SNF grants PP00P2\_138906 as well as 200020\_149150$\backslash$1. They also wish to thank the referee for useful comments which greatly improved the quality of the paper.
[99]{}
K. M. Bartz, *On some complex explicit formulae connected with the Möbius function, [I]{.nodecor}*, Acta Arithmetica **57** (1991).
K. M. Bartz, *On some complex explicit formulae connected with the Möbius function, [II]{.nodecor}*, Acta Arithmetica **57** (1991).
E. Cohen, *An extension of Ramanujan’s sum*, Duke Math. J. Volume **16**, Number 1 (1949).
H. Davenport, *Multiplicative Number Theory*, Lectures in Advanced Mathematics, 1966.
A. Ivić, *The Riemann Zeta Function*, John Wiley & Sons, 1985.
A. Ivić, *On certain functions that generalize von Mangoldt’s function $\Lambda(n)$*, Mat. Vesnik (Belgrade) **12** (27), 361-366 (1975).
A. Ivić, *On the asymptotic formulas for a generalization on von Mangoldt’s function*, Rendiconti Mat. Roma **10** (1), Serie VI, 51-59 (1977).
G. H. Hardy, *Note on Ramanujan’s trigonometrical function $c_q(n)$, and certain series of arithmetical functions*, Proceedings of the Cambridge Philosophical Society, **20**, 263-271.
J. Kaczorowski, *Results on the Möbius function*, J. London Math. Soc. **(2)** 75 (2007), 509-521.
J. E. Littlewood, *Quelques conséquences de l’hypothèse que la fonction $\zeta(s)$ de Riemann n’a pas de zéros dans le demi-plan $\mathbf{R}(s) > \tfrac{1}{2}$*, C.R. 154 (1912), 263-6.
H. L. Montgomery, *Extreme values of the Riemann zeta-function*, Comment. Math. Helvetici **52** (1977), 511-518.
H. L. Montgomery and R. C. Vaughan, *Multiplicative Number Theory [I]{.nodecor}. Classical Theory*, Cambridge Univ. Press, 2007.
S. Ramanujan, *On certain trigonometrical sums and their applications in the theory of numbers*, Transactions of the Cambridge Philosophical Society, vol. **22**, (1918).
E. C. Titchmarsh, *The Theory of the Riemann Zeta-Function*, 2nd ed., Oxford Univ. Press, 1986.
G. Valiron, *Sur les fonctions entieres d’ordre nul et ordre fini*, Annales de Toulouse **(3)**, 5 (1914), 117-257
|
---
abstract: 'We present the results of a deep study of the isolated dwarf galaxies Andromeda XXVIII and Andromeda XXIX with Gemini/GMOS and Keck/DEIMOS. Both galaxies are shown to host old, metal-poor stellar populations with no detectable recent star formation, conclusively identifying both of them as dwarf spheroidal galaxies (dSphs). And XXVIII exhibits a complex horizontal branch morphology, which is suggestive of metallicity enrichment and thus an extended period of star formation in the past. Decomposing the horizontal branch into blue (metal poor, assumed to be older) and red (relatively more metal rich, assumed to be younger) populations shows that the metal rich are also more spatially concentrated in the center of the galaxy. We use spectroscopic measurements of the Calcium triplet, combined with the improved precision of the Gemini photometry, to measure the metallicity of the galaxies, confirming the metallicity spread and showing that they both lie on the luminosity-metallicity relation for dwarf satellites. Taken together, the galaxies exhibit largely typical properties for dSphs despite their significant distances from M31. These dwarfs thus place particularly significant constraints on models of dSph formation involving environmental processes such as tidal or ram pressure stripping. Such models must be able to completely transform the two galaxies into dSphs in no more than two pericentric passages around M31, while maintaining a significant stellar populations gradient. Reproducing these features is a prime requirement for models of dSph formation to demonstrate not just the plausibility of environmental transformation but the capability of accurately recreating real dSphs.'
author:
- 'Colin T. Slater and Eric F. Bell'
- 'Nicolas F. Martin'
- 'Erik J. Tollerud and Nhung Ho'
title: 'A Deep Study of the Dwarf Satellites Andromeda XXVIII & Andromeda XXIX'
---
Introduction
============
The unique physical properties and environments of dwarf galaxies make them excellent test cases for improving our understanding of the processes that affect the structure, stellar populations, and evolution of galaxies. Because of their shallow potential wells, dwarf galaxies are particularly sensitive to a wide range of processes that may only weakly affect larger galaxies. These processes range from cosmological scales, such as heating by the UV background radiation [@gnedin00], to interactions at galaxy scales such as tidal stripping and tidal stirring [@mayer01; @klimentowski09; @kravtsov04], resonant stripping [@donghia09], and ram pressure stripping [@mayer06], to the effects of feedback from from the dwarfs themselves [@dekel86; @maclow99; @gnedin02; @sawala10].
Many studies have focused on understanding the differences between the gas-rich, star-forming dwarf irregular galaxies (dIrrs) and the gas-poor, non-star-forming dwarf spheroidals. While a number of processes could suitably recreate the broad properties of this differentiation, finding observational evidence in support of any specific theory has been difficult. One of the main clues in this effort is the spatial distribution of dwarfs; while dIrrs can be found throughout the Local Group, dSphs are principally found only within 200-300 kpc of a larger host galaxy such as the Milky Way or Andromeda [@einasto74; @vandenbergh94; @grebel03]. This trend is also reflected in the gas content of Local Group dwarfs [@blitz00; @grcevich09]. This spatial dependence seems to indicate that environmental effects such as tides and ram pressure stripping are likely to be responsible for creating dSphs. However, there are outliers from this trend, such as Cetus, Tucana, and Andromeda XV, which are dSphs that lie more than 700 kpc from either the Milky Way or Andromeda. The existence of such distant dSphs may suggest that alternative channels for dSph formation exist [@kazantzidis11b], or it could be an incidental effect seen in galaxies that have passed through a larger host on very radial orbits [@teyssier12; @slater13].
The set of isolated dwarf galaxies was recently enlarged by the discovery of Andromeda XXVIII and XXIX, which by their position on the sky were known to be approximately 360 and 200 kpc from Andromeda, respectively [@slater11; @bell11]. While And XXIX was identified as a dSph by the images confirming it as a galaxy, there was no comparable data on And XXVIII (beyond the initial SDSS discovery data) with which to identify it as a dSph or dIrr. We thus sought to obtain deeper imaging of both galaxies down to the horizontal branch level which would enable a conclusive identification of the galaxies as dSphs or dIrrs by constraining any possible recent star formation. In addition, the deep photometry permits more precise determination of the spatial structure and enables the interpretation of the spectroscopic Calcium triplet data from @tollerud13 to obtain a metallicity measurement. As we will discuss, the information derived from these measurements along with dynamical considerations imposed by their position in the Local Group can together place significant constraints on plausible mechanisms for the origin of these two dSphs.
This work is organized as follows: we discuss the imaging data and the reduction process in Section \[data\], and illustrate the general features of the color-magnitude diagram in Section \[obsCMD\]. Spectroscopic metallicities are presented in Section \[spectra\], and the structure and stellar populations of the dwarfs are discussed in Section \[structure\]. We discuss the implications of these results for theories of dSph formation in Section \[discussion\].
Imaging Observations & Data Reduction {#data}
=====================================
Between 22 July 2012 and 13 August 2012 we obtained deep images of And XXVIII and XXIX with the GMOS instrument on Gemini-North (Gemini program GN-2012B-Q-40). The observations for each dwarf consisted of a total of 3150 seconds in the SDSS i band and 2925 seconds in r, centered on the dwarf. Because the dwarfs each nearly fill the field of view of the instrument, we also obtained a pair of flanking exposures for each dwarf to provide an “off-source” region for estimating the contamination from background sources. These exposures consisted of at least 1350 s in both r and i, though some fields received a small number of extra exposures. The images were all taken in 70th percentile image quality conditions or better, which yielded excellent results, with the point-source full width at half maximum ranging between $0.47\arcsec$ and $0.8\arcsec$.
All of the images were bias subtracted, flat fielded, and coadded using the standard bias frames and twilight flats provided by Gemini. The reduced images can be seen in Figure \[images\]. Residual flat fielding and/or background subtraction uncertainty exists at the 1% level (0.01 magnitudes, roughly peak to valley). PSF photometry was performed using DAOPHOT [@stetson87], which enabled accurate measurements even in the somewhat crowded centers of the dwarfs. In many cases the seeing in one filter was much better than the other, such as for the core of And XXVIII where the seeing was $0.47\arcsec$ in i and $0.68\arcsec$ in r. In these cases we chose to first detect and measure the position of stars in the image with the best seeing, and then require the photometry of the other band to reuse the positions of stars detected in the better band. This significantly extends our detection limit, which would otherwise be set by the shallower band, but with limited color information at these faint magnitudes.
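Purely to illustrate the forced-position idea (the measurements themselves use DAOPHOT PSF fitting, not the simple aperture sums below, and the function here is hypothetical), photometry in the shallower band can reuse the source positions found in the better-seeing band:

```python
import numpy as np

def forced_aperture_sums(image, positions, radius=3.0):
    """Sum pixel values in a circular aperture around fixed (x, y) positions,
    standing in for PSF photometry at positions detected in the better-seeing band."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    fluxes = []
    for x0, y0 in positions:
        mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
        fluxes.append(image[mask].sum())
    return np.array(fluxes)

# toy image with two fake point sources; positions come from the better-seeing band
rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (64, 64))
img[20, 20] += 50.0
img[40, 45] += 30.0
print(forced_aperture_sums(img, [(20, 20), (45, 40)]))   # roughly 50 and 30 plus noise
```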
The images were calibrated to measurements from the Sloan Digital Sky Survey (SDSS), Data Release 9 [@DR9]. For each stacked image we cross-matched all objects from the SDSS catalog that overlapped our fields, had colors in the range $-0.2 < (r-i)_0 < 0.6$, and were classified as stars by both SDSS and DAOPHOT. Star-galaxy separation was performed using the “sharp” parameter from DAOPHOT. From this we measured the weighted mean offset between the SDSS magnitudes and the instrumental magnitudes to determine the zeropoint for each field. Between the saturation limit of the Gemini data (mitigated by taking several exposures) and the faint limit of the SDSS data, corresponding to approximately $19 < i < 22.5$ and $19.5 < r < 22.5$, of order 100 stars were used for the calibration of each frame. Based on the calculated stellar measurement uncertainties, the formal uncertainty on the calibration is at the millimagnitude level, but unaccounted-for systematic effects (e.g., the precision of the reddening measurements) likely dominate the statistical uncertainty. All magnitudes were dereddened with the extinction values from @schlafly11.
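Schematically, the calibration step reduces to an inverse-variance-weighted mean of the catalog-minus-instrumental offsets of the matched stars. The sketch below only illustrates that bookkeeping; the arrays, the zeropoint value, and the scatter are made up, and the real pipeline operates on the DAOPHOT and SDSS catalogs.

```python
import numpy as np

def weighted_zeropoint(m_catalog, m_instr, sigma_catalog, sigma_instr):
    """Inverse-variance-weighted mean of (catalog - instrumental) magnitudes."""
    offset = m_catalog - m_instr
    w = 1.0 / (sigma_catalog ** 2 + sigma_instr ** 2)
    zp = np.sum(w * offset) / np.sum(w)
    zp_err = np.sqrt(1.0 / np.sum(w))        # formal (statistical) uncertainty only
    return zp, zp_err

# toy example with ~100 matched calibration stars and an assumed zeropoint of 28.3
rng = np.random.default_rng(0)
m_instr = rng.uniform(-9.0, -6.0, 100)
sig = np.full(100, 0.02)
m_cat = m_instr + 28.3 + rng.normal(0.0, sig)
print(weighted_zeropoint(m_cat, m_instr, sig, np.zeros(100)))
```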
The photometric completeness of each stacked image was estimated by artificial star tests. For each field we took the PSF used by DAOPHOT for that field and inserted a large grid of artificial stars, all at the same magnitude in a given realization and with Poisson noise added to the injected pixel values. This was performed for both r and i band images simultaneously, and the resulting pair of images was then run through the same automated DAOPHOT pipeline that was used on the original image. Artificial stars were inserted over a grid of i band magnitudes and r-i colors, producing measurements of the recovery rate that cover the entire CMD. The 50% completeness limit for both dwarfs is at least $r_0 = 25.5$, with slightly deeper data in the i-band for And XXVIII.
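The completeness bookkeeping itself is simple: in each cell of the (i-band magnitude, r-i colour) grid, divide the number of recovered artificial stars by the number injected. The toy version below uses a made-up logistic recovery probability in place of the actual DAOPHOT re-detection step.

```python
import numpy as np

def completeness_map(inj_mag, inj_col, recovered, mag_edges, col_edges):
    """Recovered fraction of artificial stars per (magnitude, colour) cell."""
    n_inj, _, _ = np.histogram2d(inj_mag, inj_col, bins=[mag_edges, col_edges])
    n_rec, _, _ = np.histogram2d(inj_mag[recovered], inj_col[recovered],
                                 bins=[mag_edges, col_edges])
    with np.errstate(divide="ignore", invalid="ignore"):
        return n_rec / n_inj

# toy inputs: a uniform injection grid and a fake recovery flag
rng = np.random.default_rng(1)
inj_mag = rng.uniform(22.0, 27.0, 20000)
inj_col = rng.uniform(-0.5, 1.0, 20000)
recovered = rng.random(20000) < 1.0 / (1.0 + np.exp((inj_mag - 25.5) / 0.3))
comp = completeness_map(inj_mag, inj_col, recovered,
                        np.arange(22.0, 27.5, 0.5), np.arange(-0.5, 1.25, 0.25))
print(np.round(comp.mean(axis=1), 2))        # completeness as a function of magnitude
```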
The observed CMDs suffer from both foreground and background contamination. Foreground dwarf stars in the Milky Way tend to contribute at the bright end of the CMD. At the faint end, distant galaxies that are too small to be resolved become the dominant source of contamination. This effect can quickly become significant at fainter magnitudes due to the rapid rise in the observed galaxy luminosity function. This effect was minimized by the superb seeing at the Gemini observatory, which allowed smaller galaxies to be resolved and excluded from our sample.
Observed CMDs {#obsCMD}
=============
The CMDs of And XXVIII and XXIX are shown in the left panels of Figures \[cmd\_28\] and \[cmd\_29\], respectively. A 12 Gyr old isochrone from @dotter08 is overlaid at the distances and spectroscopic metallicities determined later in this work. Both dwarfs show a well-populated giant branch with a very prominent red clump/red horizontal branch (RC/RHB) near $r_0 \sim 24.5$ - $25.0$. This feature is particularly clear as a large bump in the luminosity functions of each dwarf, shown by the thick black line in the right panels of Figure \[cmd\_28\] and \[cmd\_29\]. In addition to the RC/RHB, And XXVIII also shows a blue horizontal branch (BHB) slightly fainter than $r_0 \sim 25.0$ and spanning $-0.3 < (r-i)_0 < 0.0$ in color. The luminosity function for stars with $(r-i)_0 < 0.0$ is shown by the thin line on the right panel of Figure \[cmd\_28\]. The presence of a complex horizontal branch suggests that And XXVIII has had an extended star formation history (SFH), since the BHB is typically seen in the oldest globular clusters, while the RHB tends to appear in globular clusters roughly 2-4 Gyr younger than the oldest populations [@stetson89; @sarajedini95], although a few globular clusters do show both BHB and RHB [@an08]. The additional information from the spectroscopic metallicity spread, as will be discussed below, also confirms the extended star formation in both dwarfs. And XXIX does not show the same prominent BHB. There are 5-10 stars in a similar position as the BHB in And XXVIII, but this is almost negligible compared to the 100 or more stars in the BHB of And XXVIII and could be background contamination. This does not indicate that there is no ancient population in And XXIX, as, for example, the Draco dSph also contains very few BHB stars [@segall07].
There is a notable absence of any young main-sequence stars in the observed CMDs of both XXVIII and XXIX, which suggests that there has not been any recent star formation at appreciable rates in either galaxy. The handful of stars brighter than the HB and on the blue side of the RGB are consistent with foreground (or background) contamination. The CMD of And XXIX has an almost negligible number of stars bluewards of the RGB at any magnitude. The CMD of And XXVIII does show some blue detections below the BHB, but it is difficult to conclusively identify their origin. Since the precision of the colors degrades at faint magnitudes, these detections could be an (artificial) broadening of the RGB, possibly scattering more stars towards the blue due to the somewhat shallower depth of the r-band exposures. It is also possible that they are background sources or false detections from noise, both of which could be strongly weighted towards the faintest magnitudes. None of these origins are clearly favored and some combination could be at work, but there is not sufficient evidence to believe that these sources are main sequence stars.
The absence of observed young main sequence stars in And XXVIII is complemented by recent work that shows little to no cold gas in the galaxy. Observations with the Westerbork Synthesis Radio Telescope place a $5\sigma$ upper limit on the total HI mass of $2.8 \times 10^3$ $M_\odot$ (T. Oosterloo, private communication). For comparison, the similarly low-mass dwarf Leo T has had recent star formation and contains $\sim 2.8 \times 10^5$ $M_\odot$ of HI [@ryan-weber08], while most dSphs have upper limits at this level or less [@grcevich09]. This stringent limit on the gas in And XXVIII adds further evidence that it is a dSph.
Distance and Luminosity
-----------------------
The clear HB in both dwarfs enables an accurate measurement of the distance to the dwarfs, and hence their distance to M31. We fit a Gaussian plus a linear background model to the r-band luminosity function of each dwarf in the region of the HB, using only stars redder than $(r-i)_0 = 0$. The measured HB position is indicated in the right panels of Figures \[cmd\_28\] and \[cmd\_29\] by the horizontal arrow, and is $m_{r,0} = 24.81$ for And XXVIII and $m_{r,0} = 24.84$ for And XXIX. We use the RHB absolute magnitude calibration of @chen09, which is based on globular cluster RHBs measured directly in the SDSS filter set. In the r-band this calibration, using a linear metallicity dependence and without the age term, is $$M_r = 0.165 [\textrm{Fe/H}] + 0.569.$$ The resulting distances are $811 \pm 48$ kpc for And XXVIII and $829 \pm 42$ kpc for And XXIX, using the spectroscopic metallicities as determined in Section \[spectra\]. Both of these are slightly further than the measured distances from @slater11 and @bell11, but just within (And XXVIII) or just outside (And XXIX) the formal one-sigma uncertainties. The updated heliocentric distances do not substantially change the measured distances between the dwarfs and M31, since both are near the tangent point relative to M31[^1]. Based on these distances, both dwarfs lie well away from the plane of satellites from @conn13 and @ibata13. As seen from M31 the satellites are $80^\circ$ (And XXVIII) and $60^\circ$ (And XXIX) from the plane. The closest galaxy to And XXVIII is And XXXI at 164 kpc, while And XXIX’s closest neighbor is And XIX at 88 kpc, making both relatively isolated from other dwarfs.
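As a quick arithmetic cross-check of the scale of these numbers (using a round $[\mathrm{Fe/H}] = -2.0$ rather than the exact spectroscopic value, so the result differs slightly from the 811 kpc quoted above):

```python
# Distance check for And XXVIII from the RHB calibration M_r = 0.165*[Fe/H] + 0.569.
m_hb = 24.81                          # dereddened r-band HB apparent magnitude
M_r = 0.165 * (-2.0) + 0.569          # assumes [Fe/H] = -2.0 (rounded, for illustration)
mu = m_hb - M_r                       # distance modulus
d_kpc = 10 ** ((mu + 5.0) / 5.0) / 1e3
print(round(M_r, 3), round(mu, 3), round(d_kpc))   # 0.239, 24.571, ~821 kpc
```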
We measured the total luminosity of both dwarfs by comparing the portion of the LF brighter than the HB to the LF of the Draco dwarf. Using data from @segall07 we constructed a background-subtracted LF for Draco inside $r_h$, then scaled the LF of the dwarfs such that they best matched the Draco LF. The resulting luminosities are $M_V = -8.7 \pm 0.4$ for And XXVIII and $M_V
= -8.5 \pm 0.3$ for And XXIX, both of which are again in good agreement with values measured by previous works.
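A stripped-down version of this estimate is sketched below: find the single scale factor that best matches the dwarf luminosity function (in bins brighter than the HB) to Draco’s, and convert that factor into an offset from Draco’s absolute magnitude. The counts and the adopted $M_V \approx -8.8$ for Draco are placeholders, not values taken from this work.

```python
import numpy as np

def lf_scaled_luminosity(counts_dwarf, counts_draco, M_V_draco=-8.8):
    """Least-squares scale factor between two luminosity functions, converted to M_V.
    M_V_draco is an assumed reference value for Draco."""
    s = np.sum(counts_draco * counts_dwarf) / np.sum(counts_draco ** 2)
    return M_V_draco - 2.5 * np.log10(s)

# toy example: a dwarf with 70% of Draco's star counts in every bin
draco = np.array([40.0, 65.0, 90.0, 130.0, 180.0, 240.0])
dwarf = 0.7 * draco
print(round(lf_scaled_luminosity(dwarf, draco), 2))   # -8.8 - 2.5*log10(0.7) ~ -8.41
```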
Spectroscopic Metallicity {#spectra}
=========================
To complement the imaging data, we also make use of metallicities derived from Keck/DEIMOS spectroscopy of the brightest RGB stars. The source data and spectroscopic reductions are described in @tollerud13, and a sample spectrum is shown in Figure \[sample\_spectrum\]. We derive metallicities from the $\lambda \sim 8550$ Å Calcium triplet features, following the methodology described in @ho13. Briefly, this procedure fits Gaussian profiles to the two strongest CaT lines, and uses these fits to derive CaT equivalent widths. In combination with absolute magnitudes from the aforementioned photometric data (Section \[data\]), these data can be calibrated to act as effective proxies for $[{\rm Fe/H}]$ of these stars. For this purpose, we adopt the @carrera13 metallicity calibration to convert our photometry and equivalent widths to $[{\rm Fe/H}]$.
A table of the spectroscopic metallicity measurements of individual stars in each dwarf is presented in Table \[stars\_table\]. We determine the uncertainty in the galaxy mean \[Fe/H\] by performing 1000 Monte Carlo resamplings of the distribution. For each resampling, we add a random offset to the metallicity of each star drawn from a Gaussian with width of the per-star \[Fe/H\] uncertainty, and compute the mean of the resulting distribution. For measuring each galaxy’s metallicity spread $\sigma({\rm[Fe/H]})$, we report the second moment of the individual measurement distribution and derive uncertainties from a resampling procedure like that for the galaxy mean \[Fe/H\].
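The resampling described above amounts to only a few lines of code; the per-star values below are placeholders rather than the measurements listed in Table \[stars\_table\].

```python
import numpy as np

def feh_mean_and_spread(feh, feh_err, n_resample=1000, seed=0):
    """Mean [Fe/H] with its Monte Carlo uncertainty, plus the second-moment spread
    and a resampled uncertainty on that spread."""
    rng = np.random.default_rng(seed)
    means = [np.mean(feh + rng.normal(0.0, feh_err)) for _ in range(n_resample)]
    spreads = [np.std(feh + rng.normal(0.0, feh_err)) for _ in range(n_resample)]
    return np.mean(feh), np.std(means), np.std(feh), np.std(spreads)

# placeholder per-star metallicities and uncertainties
feh = np.array([-2.3, -1.9, -2.1, -1.7, -2.4, -2.0, -1.8, -2.2])
feh_err = np.full_like(feh, 0.25)
print(np.round(feh_mean_and_spread(feh, feh_err), 3))
```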
The resulting metallicity distributions for And XXVIII and XXIX are shown as cumulative distribution functions in Figure \[fehcdfs\]. From this it is immediately clear that, while the number of stars is relatively small, the median of the distribution lies at $[{\rm Fe/H}] \sim -2$ (see Table \[properties\_table\]). Motivated by this, in Figure \[lfeh\], we show the luminosity-metallicity relation for the brighter M31 satellites [@ho13] and the MW satellites [@kirby11; @kirby13], using luminosities from Martin et al. (2015, submitted). This figure shows that And XXVIII and XXIX are fully consistent with the metallicity-luminosity relation that holds for other Local Group satellites. Our measurement for And XXVIII is also consistent with the prior measurement by @collins13 of \[Fe/H\] $= -2.1 \pm 0.3$, but at higher precision.
Structure & Stellar Populations {#structure}
===============================
We determined the structural properties of the dwarfs using an updated version of the maximum likelihood method presented in @martin08. This method fits an exponential radial density profile to the galaxies without requiring the data to be binned, which enables more precise measurements of the structure in galaxies with only a small number of observed stars. The updated version samples the parameter space with a Markov Chain Monte Carlo process and can more easily account for missing data (Martin et al. 2015, submitted). This is necessary to account for the limited field of view of GMOS, which could cause a systematic size error [@munoz12], as well as for the very center of And XXIX, where an inconveniently located bright foreground star prevents reliable photometry in the surrounding region.
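To make the unbinned-likelihood idea concrete, here is a deliberately simplified sketch: a circular exponential profile with no ellipticity, no background component, and no treatment of masked regions or the field-of-view edge (all of which the real analysis handles), sampled with a basic Metropolis random walk rather than the actual code of Martin et al.

```python
import numpy as np

def log_like(r_e, radii):
    """Unbinned log-likelihood of projected radii for a circular exponential
    surface-density profile, p(r) = (r / r_e**2) * exp(-r / r_e)."""
    if r_e <= 0:
        return -np.inf
    return np.sum(np.log(radii) - 2.0 * np.log(r_e) - radii / r_e)

def metropolis(radii, n_steps=20000, step=0.02, r_e0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    chain, r_e, ll = [], r_e0, log_like(r_e0, radii)
    for _ in range(n_steps):
        prop = r_e + rng.normal(0.0, step)
        ll_prop = log_like(prop, radii)
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis acceptance
            r_e, ll = prop, ll_prop
        chain.append(r_e)
    return np.array(chain[n_steps // 2:])         # drop burn-in

# toy data drawn from the same profile with r_e = 0.15 (arbitrary units);
# the half-light radius of this profile is r_h ~ 1.68 * r_e
rng = np.random.default_rng(1)
radii = rng.gamma(shape=2.0, scale=0.15, size=500)
chain = metropolis(radii)
print(np.percentile(chain, [16, 50, 84]), 1.68 * np.median(chain))
```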
The resulting radial profiles and posterior probability distributions are shown in Figures \[profile\_28\] and \[profile\_29\]. The half-light radii and ellipticities all have fairly typical values for other dwarfs of similar luminosities [@brasseur11]. The results are also consistent with the parameters estimated from the much shallower SDSS data [@slater11; @bell11].
The separation between the red and blue horizontal branches in And XXVIII enables us to examine the spatial distribution of the metal-poor, older, and the more metal-rich, younger, stellar populations. Radial profiles of the two horizontal branches (separated at $(r-i)_0 = 0.0$) are shown in Figure \[bhb\_rhb\]. The difference in the radial profiles is easily seen in the right panel, and the posterior probability distributions for the half-light radius confirm the statistical significance of the difference. This behavior has been seen in other dwarf galaxies, such as Sculptor [@tolstoy04], Fornax [@battaglia06], Canes Venatici I [@ibata06], And II [@mcconnachie07], and Leo T [@dejong08]. In all of these cases the more metal-rich population is the more centrally concentrated one, consistent with And XXVIII. Measuring the spatial structure of the two components independently shows that they appear to be simply scaled versions of each other; the half-light radii are $370 \pm 60$ pc and $240 \pm 15$ pc (blue and red, respectively), while the ellipticities of $0.48 \pm 0.06$ and $0.43 \pm 0.03$, along with position angles of $45^\circ \pm 5^\circ$ and $34^\circ \pm 3^\circ$, agree well with each other. Taken together this implies that the process that transformed the dwarf into a pressure-supported system did so without randomizing the orbital energies of individual stars enough to completely redistribute the older and younger populations, but both populations did end up with the same general morphology.
Simulations of isolated dwarfs by @kawata06 are able to reproduce a radial metallicity gradient, but with some uncertainty over the number of stars at the lowest metallicity values and the total luminosity of the simulated dwarfs (see also @revaz12 for simulated dwarfs without gradients). In these simulations the metallicity gradient is produced by the continuous accretion of gas to the center of the galaxy, which tends to cause more metal enrichment and a younger population (weighted by mass) at small radii when compared to the outer regions of the galaxy. This explanation suggests that the “two populations” we infer from the RHB and BHB of And XXVIII are perhaps more properly interpreted as two distinct tracers of what is really a continuous range of ages and metallicities present in the dwarf. In this scenario, the lack of observed multiple populations in And XXIX could be the result of the dwarf lacking sufficient gas accretion and star formation activity to generate a strong metallicity gradient. If this is the case, then there may be a mass dependence to the presence of such gradients, which makes it particularly significant that And XXVIII is a relatively low-mass galaxy to host such behavior. Whether this reflects mere stochasticity, the influence of external forces, or the need for a more complex model of the enrichment process is an open question.
Discussion and Conclusions {#discussion}
==========================
The analysis of And XXVIII and XXIX shows that both galaxies are relatively typical dwarf spheroidals, with old, metal-poor stellar populations and no measurable ongoing or recent star formation. The significance of these galaxies in distinguishing models of dSph formation comes from their considerable distances from M31. If environment-independent processes such as supernova feedback or reionization are responsible for transforming dIrrs into dSphs, then finding dSphs at these distances is quite natural. However, such models are by themselves largely unable to reproduce the radial dependence of the dSph distribution around the Milky Way and M31. An environment-based transformation process, based on some combination of tidal or ram pressure forces, can potentially account for the radial distribution, but correctly reproducing the properties of dSphs at large radii is the critical test of such models. It is in this light that Andromeda XXVIII and Andromeda XXIX have the most power to discriminate between models.
Models of tidal transformation have been studied extensively and can account for many of the observed structural properties of dSphs [@mayer01; @lokas10; @lokas12]. However, a critical component of understanding whether these models can reproduce the entire population of Local Group dSphs is the dependence of the transformation process on orbital pericenter distances and the number of pericentric passages. At large radii the weaker tidal force may lose its ability to completely transform satellites into dSphs, potentially leaving observable signatures in satellites on the outskirts of host galaxies.
Observationally we cannot directly know the orbital history of individual satellites without proper motions (of which there are very few), and must test the radial distribution of dSphs in a statistical way. @slater13 used the Via Lactea simulations to show that a significant fraction of the dwarf galaxies located between 300 and 1000 kpc from their host galaxy have made at least one pericentric passage near a larger galaxy. However, the fraction of dwarfs that have undergone two or more pericentric passages decreases sharply near 300 kpc. This suggests that it is unlikely for And XXVIII to have undergone multiple pericentric passages.
This presents a clear question for theories of dSph formation based on tidal interactions: can a dwarf galaxy be completely transformed into a dSph with only a single pericenter passage? Simulations of tidal stirring originally seemed to indicate that the answer was no, and when dwarfs were placed on different orbits it was only the ones with several ($\sim4-5$) pericenter passages that were transformed into dSphs [@kazantzidis11a]. However, more recent simulations that used cored dark matter profiles for the dwarfs suggest that multiple pericenter passages might not be required. @kazantzidis13 show that dwarfs with very flat central dark matter profiles (inner power-law slopes of 0.2) can be transformed into pressure supported systems after only one or two pericenter passages. This result is encouraging, but it also comes with the consequence that cored dark matter profiles also tend to make the dwarfs susceptible to complete destruction by tidal forces. In the simulations of @kazantzidis13, five out of the seven dwarfs that were successfully transformed into dSphs after only one or two pericenter passages were subsequently destroyed. Taken together, these results indicate that rapid formation of a dSph is indeed plausible, but there may only be a narrow range of structural and orbital parameters compatible with such a process. Recent proper motion measurements of the dSph Leo I support this picture even further, as it appears to have had only one pericentric passage [@sohn13] yet is unambiguously a dSph.
The properties of And XXVIII add an additional constraint that any tidal transformation must not have been so strong as to completely mix the older and younger stellar populations. A simple test case of this problem has been explored by @lokas12, in which particles were divided into two populations by their initial position inside or outside of the half light radius. The dwarfs were then placed on reasonable orbits around a host galaxy, and evolved for 10 Gyr. The resulting radial profiles of the two populations are distinct in nearly all cases, with some variation depending on the initial conditions of the orbit. These tests may be overly optimistic, since initial differentiation into two populations is performed by such a sharp radius cut, but the simulations illustrate the plausibility of a dwarf retaining spatially distinct populations after tidal stirring.
An additional piece of the puzzle is provided by the metallicities. And XXVIII and XXIX are both consistent with the luminosity-metallicity relation shown by other Local Group satellites (see Section \[spectra\]). This implies that they could not have been subject to substantial tidal *stripping*, as this would drive them off this relation by lowering the luminosity without substantially altering their metallicities. This point is further reinforced by the similarity of the luminosity-metallicity relation of both dSph and dIrr galaxies in the Local Group [@kirby13], making it unlikely that the measured luminosity-metallicity relation itself is significantly altered by tidal stripping. Whether or not more gentle tidal effects can induce morphological transformation without altering the luminosity-metallicity relation remains to be seen.
Taken together, the properties of And XXVIII and XXIX present a range of challenges for detailed models of dwarf galaxy evolution to explain. Particularly for And XXVIII, the wide separation and low mass of the system add significant challenges to reproducing the gas-free spheroidal morphology with a stellar population gradient, while there may be similar challenges for explaining the apparent absence (or at least low-detectability) of such gradients in And XXIX. Though plausible explanations have been shown to exist for many of these features individually and under ideal conditions, whether the combination of these conditions can be accurately reproduced in a simulation is unknown. Further modeling of these types of systems is required before we can understand the physical drivers of these observed features.
We thank the anonymous referee for their helpful comments which improved the paper. This work was partially supported by NSF grant AST 1008342. Support for EJT was provided by NASA through Hubble Fellowship grant \#51316.01 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555.
Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).
Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
Ahn, C. P., Alexandroff, R., Allende Prieto, C., et al. 2012, , 203, 21
An, D., Johnson, J. A., Clem, J. L., et al. 2008, , 179, 326
Battaglia, G., Tolstoy, E., Helmi, A., et al. 2006, , 459, 423
Bell, E. F., Slater, C. T., & Martin, N. F. 2011, , 742, L15
Blitz, L., & Robishaw, T. 2000, , 541, 675
Brasseur, C. M., Martin, N. F., Macci[ò]{}, A. V., Rix, H.-W., & Kang, X. 2011, , 743, 179
Carrera, R., Pancino, E., Gallart, C., & del Pino, A. 2013, , 434, 1681
Chen, Y. Q., Zhao, G., & Zhao, J. K. 2009, , 702, 1336
Collins, M. L. M., Chapman, S. C., Rich, R. M., et al. 2013, , 768, 172
Conn, A. R., Lewis, G. F., Ibata, R. A., et al. 2013, , 766, 120
Dekel, A., & Silk, J. 1986, , 303, 39
D’Onghia, E., Besla, G., Cox, T. J., & Hernquist, L. 2009, , 460, 605
Dotter, A., Chaboyer, B., Jevremovi[ć]{}, D., et al. 2008, , 178, 89
Einasto, J., Saar, E., Kaasik, A., & Chernin, A. D. 1974, , 252, 111
Gnedin, N. Y. 2000, , 542, 535
Gnedin, O. Y., & Zhao, H. 2002, , 333, 299
Grcevich, J., & Putman, M. E. 2009, , 696, 385
Grebel, E. K., Gallagher, J. S., III, & Harbeck, D. 2003, , 125, 1926
Ho, N., Geha, M., Tollerud, E. J., et al. 2015, , 798, 77
Ibata, R., Chapman, S., Irwin, M., Lewis, G., & Martin, N. 2006, , 373, L70
Ibata, R. A., Lewis, G. F., Conn, A. R., et al. 2013, , 493, 62
de Jong, J. T. A., Harris, J., Coleman, M. G., et al. 2008, , 680, 1112
Kawata, D., Arimoto, N., Cen, R., & Gibson, B. K. 2006, , 641, 785
Kazantzidis, S., [Ł]{}okas, E. L., Callegari, S., Mayer, L., & Moustakas, L. A. 2011, , 726, 98
Kazantzidis, S., [Ł]{}okas, E. L., Mayer, L., Knebe, A., & Klimentowski, J. 2011, , 740, L24
Kazantzidis, S., [Ł]{}okas, E. L., & Mayer, L. 2013, , 764, L29
Kirby, E. N., Lanfranchi, G. A., Simon, J. D., Cohen, J. G., & Guhathakurta, P. 2011, , 727, 78
Kirby, E. N., Cohen, J. G., Guhathakurta, P., et al. 2013, , 779, 102
Klimentowski, J., [Ł]{}okas, E. L., Kazantzidis, S., Mayer, L., & Mamon, G. A. 2009, , 397, 2015
Kravtsov, A. V., Gnedin, O. Y., & Klypin, A. A. 2004, , 609, 482
Koch, A., Grebel, E. K., Kleyna, J. T., et al. 2007, , 133, 270
Koch, A., Wilkinson, M. I., Kleyna, J. T., et al. 2007, , 657, 241
[Ł]{}okas, E. L., Kazantzidis, S., Klimentowski, J., Mayer, L., & Callegari, S. 2010, , 708, 1032
[Ł]{}okas, E. L., Kowalczyk, K., & Kazantzidis, S. 2012, arXiv:1212.0682
Mac Low, M.-M., & Ferrara, A. 1999, , 513, 142
Martin, N. F., de Jong, J. T. A., & Rix, H.-W. 2008, , 684, 1075
Mayer, L., Governato, F., Colpi, M., et al. 2001, , 559, 754
Mayer, L., Mastropietro, C., Wadsley, J., Stadel, J., & Moore, B. 2006, , 369, 1021
McConnachie, A. W., Pe[ñ]{}arrubia, J., & Navarro, J. F. 2007, , 380, L75
Mu[ñ]{}oz, R. R., Padmanabhan, N., & Geha, M. 2012, , 745, 127
Revaz, Y., & Jablonka, P. 2012, , 538, AA82
Ryan-Weber, E. V., Begum, A., Oosterloo, T., et al. 2008, , 384, 535
Sarajedini, A., Lee, Y.-W., & Lee, D.-H. 1995, , 450, 712
Sawala, T., Scannapieco, C., Maio, U., & White, S. 2010, , 402, 1599
S[é]{}gall, M., Ibata, R. A., Irwin, M. J., Martin, N. F., & Chapman, S. 2007, , 375, 831
Schlafly, E. F., & Finkbeiner, D. P. 2011, , 737, 103
Slater, C. T., Bell, E. F., & Martin, N. F. 2011, , 742, L14
Slater, C. T., & Bell, E. F. 2013, , 773, 17
Sohn, S. T., Besla, G., van der Marel, R. P., et al. 2013, , 768, 139
Teyssier, M., Johnston, K. V., & Kuhlen, M. 2012, , 426, 1808
Stetson, P. B. 1987, , 99, 191
Stetson, P. B., Hesser, J. E., Smith, G. H., Vandenberg, D. A., & Bolte, M. 1989, , 97, 1360
Tollerud, E. J., Geha, M. C., Vargas, L. C., & Bullock, J. S. 2013, , 768, 50
Tolstoy, E., Irwin, M. J., Helmi, A., et al. 2004, , 617, L119
van den Bergh, S. 1994, , 428, 617
[^1]: The distance between And XXIX and M31 reported in @bell11 was incorrect due to a geometry error; it is fixed in this work.
|
[**The ALEPH Collaboration**]{}
[**Abstract**]{}
Searches for scalar top, scalar bottom and degenerate scalar quarks have been performed with data collected with the ALEPH detector at LEP. The data sample consists of 57 $\mathrm{pb}^{-1}$ taken at $\rts$ = 181–184 GeV. No evidence for scalar top, scalar bottom or degenerate scalar quarks was found in the channels $\stop \rightarrow \mathrm{c}\neu$, $\stop \rightarrow \mathrm{b}\ell\snu$, $\sbot \rightarrow
\mathrm{b}\neu$, and $\mathrm{\tilde{q}} \rightarrow \mathrm{q}\neu$. From the channel $\stop \rightarrow \mathrm{c}\neu$ a limit of 74 $\gev$ has been set on the scalar top quark mass, independent of the mixing angle. This limit assumes a mass difference between the $\stop$ and the $\neu$ in the range 10–40 $\gev$. From the channel $\stop \rightarrow \mathrm{b}\ell\snu$ the mixing-angle-independent scalar top limit is 82 $\gev$, assuming $m_{\mathrm{\tilde{t}}}-m_{\tilde{\nu}}$ $>$ 10 $\gev$. From the channel $\sbot \rightarrow \mathrm{b}\neu$, a limit of 79 $\gev$ has been set on the mass of the supersymmetric partner of the left-handed state of the bottom quark. This limit is valid for $m_{\mathrm{\tilde{b}}}-m_{\neu}$ $>$ 10 $\gev$. From the channel $\mathrm{\tilde{q}} \rightarrow \mathrm{q}\neu$, a limit of 87 $\gev$ has been set on the mass of supersymmetric partners of light quarks assuming five degenerate flavours and the production of both “left-handed” and “right-handed” squarks. This limit is valid for $m_{\mathrm{\tilde{q}}}-m_{\neu}$ $>$ 5 $\gev$.
Introduction
============
In the Minimal Supersymmetric extension of the Standard Model (MSSM) [@SUSY], each chirality state of the Standard Model fermions has a scalar supersymmetric partner. The scalar quarks (squarks) $\tilde{\rm{q}}_{\rm{R}}$ and $\tilde{\rm{q}}_{\rm{L}}$ are the supersymmetric partners of the right-handed and left-handed quarks, respectively. They are weak interaction eigenstates which can mix to form the mass eigenstates. Since the size of the mixing is proportional to the mass of the Standard Model partner, the lighter scalar top (stop) could be the lightest charged supersymmetric particle. The stop mass eigenstates are obtained by a unitary transformation of the $\stopr$ and $\stopl$ fields, parametrised by the mixing angle $\thetamix$. The lighter stop is given by $\stop = \mathrm{\tilde{t}_L \cos{\thetamix}}
+ \mathrm{\tilde{t}_R \sin{\thetamix}}$, while the heavier stop is the orthogonal combination. The stop could be produced at LEP in pairs, $\rm{e^+ e^-} \to \stop \bar{\stop}$, via [*s*]{}-channel exchange of a virtual photon or a Z.
The searches for stops described here assume that all supersymmetric particles except the lightest neutralino $\neu$ and possibly the sneutrino $\snu$ are heavier than the stop. The conservation of R-parity is also assumed; this implies that the lightest supersymmetric particle (LSP) is stable. Under these assumptions, the two dominant decay channels are $\stop \to \rm{c} \neu$ and $\stop \to\rm{b} \ell \tilde{\nu}$ [@Hikasa]. The first decay can only proceed via loops and thus has a very small width, of the order of 0.01–1 eV [@Hikasa]. The $\stop \to \rm{b} \ell \tilde{\nu}$ channel proceeds via a virtual chargino exchange and has a width of the order of 0.1–10 keV [@Hikasa]. The latter decay dominates when it is kinematically allowed. The phenomenology of the scalar bottom (sbottom), the supersymmetric partner of the bottom quark, is similar to the phenomenology of the stop. Assuming that the $\sbot$ is lighter than all supersymmetric particles except the $\neu$, the $\sbot$ will decay as $\mathrm{\tilde{b} \rightarrow
b \neu}$. Compared to the $\stop$ decays, the $\sbot$ decay has a large width of the order of 10–100 MeV. Direct searches for stops and sbottoms are performed in the stop decay channels $\stop \to \rm{c}\neu$ and $\stop \to\rm{b} \ell \tilde{\nu}$ and in the sbottom decay channel $\sbot \to \rm{b} \neu$. The results of these searches supersede the ALEPH results reported earlier for data collected at energies up to $\rts$ = 172 GeV [@ALEPH_stop]. The D0 experiment [@D0] has reported a lower limit on the stop mass of 85 ${\mathrm GeV}/c^2$ for the decay into $\rm{c} {\chi}$ and for a mass difference between the $\stop$ and the $\chi$ larger than about 40 $\gev$. Searches for $\stop \rightarrow \rm{c} {\chi}$, $\stop\rightarrow\mathrm{b} \ell \tilde{\nu}$ and $\sbot\rightarrow\rm{b}{\chi}$ using data collected at LEP at energies up to $\sqrt{s}$ = 172 $\mathrm{GeV}$ have also been performed by OPAL [@OPAL].
The supersymmetric partners of the light quarks are generally expected in the MSSM to be heavy, i.e., beyond the reach of LEP2, but their masses receive large negative corrections from gluino loops [@doni]. The dominant decay mode is assumed to be $\tilde{\rm{q}} \to \rm{q}\neu$. Limits are set on the production of the u, d, s, c, b squarks, under the assumption that they are mass degenerate. The D0 and CDF Collaborations have published limits on degenerate squarks [@d0ds; @cdfds]. These limits are outside the LEP2 kinematic range for the case of a light gluino; however limits from LEP2 are competitive with those from the Tevatron if the gluino is heavy.
The ALEPH detector
==================
A detailed description of the ALEPH detector can be found in Ref. [@Alnim], and an account of its performance as well as a description of the standard analysis algorithms can be found in Ref. [@Alperf]. Only a brief overview is given here.
Charged particles are detected in a magnetic spectrometer consisting of a silicon vertex detector (VDET), a drift chamber (ITC) and a time projection chamber (TPC), all immersed in a 1.5 T axial magnetic field provided by a superconducting solenoid. The VDET consists of two cylindrical layers of silicon microstrip detectors; it performs very precise measurements of the impact parameter in space thus allowing powerful short-lifetime particle tags, as described in Ref. [@Rb1]. Between the TPC and the coil, a highly granular electromagnetic calorimeter (ECAL) is used to identify electrons and photons and to measure their energies. Surrounding the ECAL is the return yoke for the magnet, which is instrumented with streamer tubes to form the hadron calorimeter (HCAL). Two layers of external streamer tubes are used together with the HCAL to identify muons. The region near the beam line is covered by two luminosity calorimeters, SICAL and LCAL, which provide coverage down to 34 mrad. The information obtained from the tracking system is combined with that from the calorimeters to form a list of “energy flow particles” [@Alperf]. These objects serve to calculate the variables that are used in the analyses described in Section 3.
The Analyses
============
Data collected at $\sqrt{s}$ = 181, 182, 183, and 184 GeV have been analysed, corresponding to integrated luminosities of 0.2, 3.9, 51.0, and 1.9 $\rm{pb}^{-1}$, respectively. Three separate analyses are used to search for the processes , , and $\mathrm \stop \rightarrow b \ell \snu$. All of these channels are characterised by missing momentum and energy. The experimental topology depends largely on $\deltm$, the mass difference between the $\tilde{\rm{q}}$ and the $\neu$ or $\snu$. When $\deltm$ is large, there is a substantial amount of energy available for the visible system and the signal events tend to look like $\ww$, $\ewnu$, $\zz$, and $\qqg$ events. These processes are characterised by high multiplicity and high visible mass $M_{\mathrm{vis}}$. When $\deltm$ is small, the energy available for the visible system is small and the signal events are therefore similar to $\ggqq$ events. The process $\ggqq$ is characterised by low multiplicity, low $M_{\mathrm{vis}}$, low total transverse momentum $\pt$ and the presence of energy near the beam axis. In order to cope with the different signal topologies and background situations, each analysis employs a low $\deltm$ selection and a high $\deltm$ selection. The values of the analysis cuts are set in an unbiased way following the $\bar{N}_{95}$ procedure [@N95]. The simulation of the $\ggqq$ background is difficult. As a consequence, a safer rejection of this background is ensured by applying tighter cuts than would result from the $\bar{N}_{95}$ procedure.
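For reference, a minimal sketch of the figure of merit involved is given below, assuming the usual reading of the $\bar{N}_{95}$ prescription (the exact definition is given in the reference cited above): the 95% C.L. Poisson upper limit obtained without background subtraction, averaged over the Poisson-distributed number of expected background events; the cuts are then chosen to minimise this quantity divided by the signal efficiency.

```python
from math import exp, factorial

def n95(n_obs):
    """95% C.L. Poisson upper limit on the signal mean for n_obs observed events,
    with no background subtraction: solve sum_{k<=n_obs} P(k | N) = 0.05 for N."""
    lo, hi = 0.0, 100.0
    for _ in range(100):                       # bisection
        mid = 0.5 * (lo + hi)
        p = sum(exp(-mid) * mid ** k / factorial(k) for k in range(n_obs + 1))
        if p > 0.05:
            lo = mid                           # tail probability still too large
        else:
            hi = mid
    return 0.5 * (lo + hi)

def nbar95(b, n_max=50):
    """Background-weighted average of n95(n), for an expected background b."""
    return sum(exp(-b) * b ** n / factorial(n) * n95(n) for n in range(n_max + 1))

print(round(n95(0), 2), round(n95(1), 2))      # ~3.00 and ~4.74
print(round(nbar95(1.5), 2))                   # expected limit for b = 1.5 events
```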
The analyses used to search for evidence of stop and sbottom production are quite similar to those used at $\rts$ = 172 GeV [@ALEPH_stop] with the addition of b tagging in the channel $\sbot \to \rm{b} \neu$ to further reject the $\ww$, $\ewnu$, and $\zz$ background. The differences between the cuts used at $\rts$ = 130–172 GeV and the cuts used at $\rts$ = 181–184 GeV are described in detail below.
Search for $\stop \to \rm{c} \neu$
----------------------------------
The process $\mathrm e^{+}e^{-} \rightarrow \stop\bar{\stop}$ ($\mathrm \stop \rightarrow c\neu$) is characterised by two acoplanar jets and missing mass and energy.
For the small $\deltm$ selection, only the thrust and the visible mass cuts needed adjustments. The thrust is required to be less than 0.915 to reduce further the low-multiplicity $\ggqq$ background. The lower cut on the visible mass $\mvis$, which is effective against $\ggqq$ background, depends upon the mass difference of the signal considered. Since signal events with small $\deltm$ tend to have smaller values of $\mvis$, the optimal value of this cut decreases as $\deltm$ decreases. The visible mass is required to be in excess of 4 $\gev$. This cut is raised to 7.5 $\gev$ for $\deltm$ $>$ 7 $\gev$.
For large $\deltm$, the selection is quite similar to the selection at $\rts$ = 130–172 GeV. However, a few changes have been made in order to confront the increased level of background from $\ww$, $\ewnu$, and $\zz$ that results from the increased luminosity. The $\thscat$ variable is used to reduce background from $\ewnu$. This variable was introduced in Ref. [@ALEPH_stop] as a means of eliminating $\ggqq$ background. Assuming that one of the incoming electrons is scattered while the other one continues undeflected, the polar angle of the scattered electron, $\thscat$, can then be calculated from the missing transverse momentum $\pt$. This variable can also be interpreted in the context of $\ewnu$ background. The final state of this background typically includes an electron which goes down the beampipe and a neutrino which is within the detector acceptance. In this case, $\thscat$ is an estimate of the neutrino polar angle which tends to be large for the $\ewnu$ background. For the signal process, $\thscat$ tends to be smaller, as long as $\deltm$ is not too large. The optimal value of this cut for a hypothesis of $\deltm$ $<$ 35 $\gev$ is $\thscat$ $<$ $60^{\circ}$, while for a hypothesis of $\deltm$ $>$ 35 $\gev$ the cut is not applied.
Background from $\ww$, $\ewnu$, and $\zz$ is also addressed by tightening the $\mvis$/$\rts$ cut. This cut also depends on the $\deltm$ of the signal being considered. A hypothesis of $\deltm$ = 15 $\gev$ gives an optimal cut of $\mvis$/$\rts$ $<$ 0.25, while a hypothesis of $\deltm$ = 35 $\gev$ gives an optimal cut of $\mvis$/$\rts$ $<$ 0.33. Finally, the optimal cut for all $\deltm$ greater than 50 $\gev$ is $\mvis$/$\rts$ $<$ 0.35.
### Selection efficiency and background {#selection-efficiency-and-background .unnumbered}
According to the $\bar{N}_{95}$ procedure, the low $\deltm$ selection is used for $\deltm$ $<$ 15 $\gev$, while for $\deltm$ $\geq$ 15 $\gev$, the high $\deltm$ selection is used. The changeover occurs at a larger $\deltm$ value than for $\rts$ = 130–172 GeV, where it was $\deltm$ = 10 $\gev$, due to the larger contamination in the high $\deltm$ selection.
The $\stop \to \rm{c} \neu$ efficiencies are shown in Figure \[steff\]a; the discontinuity at $\deltm$ = 15 $\gev$ is due to the switching between the low and high $\deltm$ selections.
The background to the low $\deltm$ selection is dominated by $\ggqq$ and $\ggtt$ and has a total expectation of 1.5 events for the looser value of the lower $\mvis$ cut. For the high $\deltm$ selection, the background is dominated by $\ww$, $\ewnu$, $\zz$, and $\qqg$. If the upper cut on $\thscat$ is not applied and the loosest value of the upper cut on $\mvis$ is applied, the total background expectation for the high $\deltm$ selection is 3.5 events.
Search for $\sbot \to \rm{b} \chi$
------------------------------------
The experimental topology of the $\mathrm e^{+}e^{-} \to \tilde{b}\bar{\tilde{b}}$ ($\sbot \to \rm{b} \chi$) process is characterised by two acoplanar b jets and missing mass and energy. Both the low and high $\Delta m$ selections use the same selection criteria against $\gamma\gamma\rightarrow \rm{q} \bar{\rm{q}}$ background as at lower energies [@ALEPH_stop], with cuts rescaled to the centre-of-mass energy when appropriate. Only the cuts against WW, ${\rm Z\gamma^*}$, and We$\nu$ have been reoptimised. Most of the cuts are similar to those used for the $\stop \to \rm{c} \chi$ process; here only the differences will be described.
For the low $\Delta m$ selection, the visible mass of the event is required to be greater than 7.5 GeV/c$^{2}$. In this channel the b quark in the final state produces a visible mass higher than in the $\stop \to \rm{c} \chi$ channel.
For the high $\Delta m$ selection, the level of WW, $\rm{ Z\gamma^* }$ and We$\nu$ background was reduced by taking advantage of the lifetime content of the $\sbot \rightarrow \rm{b} \chi$ topology. The b quark events were tagged by using the b-tag event probability $({\rm P_{uds}})$ described in [@Rb1]. Since this probability depends largely on the b jet boost, events are more b-like as the event visible mass increases. A two-dimensional cut is applied in the $\mvis$ vs $\log_{10}({\rm P_{uds}})$ plane (Figure \[steff\]b); starting from a loose value of the b-tag cut when the visible mass is low, the cut becomes tighter for larger values of visible mass.
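Purely to illustrate the shape of such a two-dimensional cut (the numerical values below are placeholders, not the published thresholds shown in Figure \[steff\]b), a b-tag requirement that tightens linearly with visible mass can be encoded as follows.

```python
def passes_btag(m_vis, log10_p_uds,
                loose_cut=-0.5, tight_cut=-2.5, m_lo=20.0, m_hi=80.0):
    """Illustrative Mvis-dependent b tag: accept an event if log10(P_uds) lies below
    a threshold that moves from loose_cut at m_lo to tight_cut at m_hi (GeV/c^2).
    All numerical values are placeholders."""
    frac = min(max((m_vis - m_lo) / (m_hi - m_lo), 0.0), 1.0)
    threshold = loose_cut + frac * (tight_cut - loose_cut)
    return log10_p_uds < threshold

print(passes_btag(25.0, -1.0), passes_btag(70.0, -1.0))   # True (loose), False (tight)
```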
### Selection efficiency and background {#selection-efficiency-and-background-1 .unnumbered}
According to the $\bar{N}_{95}$ procedure, for $\Delta m <$ 12 GeV/$c^{2}$ the low $\Delta m$ selection is used, while for $\Delta m >$ 12 GeV/$c^{2}$ the high $\Delta m$ selection is used. The efficiency is shown in Figure \[steff\]a as a function of $\Delta m$ for $M_{\sbot}$ = 80 GeV/$c^{2}$. The total background expectation for the low $\Delta m$ selection is 1.1 events (20 fb) and is dominated by $\gamma\gamma\rightarrow \rm{q\bar{q}}$. For the high $\Delta m$ selection the WW, ${\rm Z\gamma^*}$, and We$\nu$ background is highly suppressed by b tagging and the total background expectation is 0.6 events (10 fb).
Search for $\stop \to \rm{b} \ell \snu$
---------------------------------------
The experimental signature for $\mathrm e^{+}e^{-} \rightarrow \stop\bar{\stop}$ ($\stop \to \rm{b} \ell \snu$) is two acoplanar jets plus two leptons with missing momentum.
For the low $\deltm$ selection, the $\pt$ cut used at 172 GeV is reinforced by requiring that the $\pt$ calculated without neutral hadrons and the $\pt$ calculated with only charged tracks both be greater than 0.75%$\sqrt{s}$.
For the large $\deltm$ selection, the cuts are optimised in order to confront the increased rate of $\ww$ and $\zz$ backgrounds at 183 GeV. Only two cut values are changed with respect to their values at 172 GeV. First, the upper cut on the hadronic mass is tightened: the hadronic mass is required to be less than 25%$\rts$ if one lepton is identified and less than 20%$\rts$ if more than one lepton is identified. Additionally, the cut on the leading lepton isolation is reinforced: the energy in a $30^{\circ}$ cone around the direction of the electron or muon momentum must be smaller than 2.7 times the electron or muon energy.
### Selection efficiency and background {#selection-efficiency-and-background-2 .unnumbered}
The combination of the two selections is chosen according to the $\bar{N}_{95}$ procedure. For $\deltm$ $<$ 11 $\gev$, the logical OR of the two selections is used: the high $\deltm$ selection helps to recover efficiency while leaving the background level unchanged. For $\deltm$ $\geq$ 11 $\gev$, only the high $\deltm$ selection is used. This is in contrast to the situation for the 130–172 GeV data; because the background was low for both the low and the high $\deltm$ selections, the OR of the two selections was optimal for all $\deltm$.
The $\stop \to \rm{b} \ell \snu$ selection efficiencies are given in Figure \[steff\]a. The effect of switching from the OR of the two selections to the high $\deltm$ selection by itself can be seen at $\deltm$ = 11 $\gev$. The background to the low $\deltm$ selection is dominated by $\ggqq$ and has a total expectation of 0.8 events. For the high $\deltm$ selection, the very low expected background (0.1 events, or ${\sim} 2 \,{\rm fb}$) is dominated by $\ww$ events.
Systematic uncertainties
------------------------
The systematic uncertainties on the $\stop$ and $\sbot$ selection efficiencies originating from the physical processes in the Monte Carlo simulation as well as those related to detector effects are evaluated following the procedure described in Reference [@ALEPH_stop]. The relative uncertainty on the selection efficiency in the case of $\stop \to \rm{c} \chi$ is 13% for low $\Delta m$ and 6% for high $\Delta m$; in the case of $\stop \to \rm{b} \ell \snu$ it is 16% for low $\Delta m$ and 6% for high $\Delta m$; for the $\sbot \to \rm{b} \chi$ channel it is 12% for low $\Delta m$ and 6% for high $\Delta m$. These errors are dominated by the uncertainties on the simulation of $\stop$ and $\sbot$ production and decay. An additional source of systematic error for the high $\Delta m$ $\sbot \to \rm{b} \chi$ selection efficiency derives from the uncertainty on the b-tagging. This systematic uncertainty has been studied by measuring $ R_{\rm{b}}$ as a function of the b-tag cut in the calibration data collected at the Z peak during the 1997 run. The total uncertainty on the efficiency for the high $\Delta m$ $\sbot \to \rm{b} \chi$ selection is 7%. The systematic uncertainties are included in the final result following the method described in [@syse].
Results
=======
A total of five events are selected by the $\stop \to \rm{c} \neu$ analysis, four by the high $\deltm$ selection and one by the low $\deltm$ selection. This is consistent with the 3.5 events expected in the high $\deltm$ selection and the 1.5 events expected in the low $\deltm$ selection. The kinematic properties of the high $\deltm$ events are all consistent with $\zz$, $\ww$, or $\ewnu$, while the kinematic properties of the low $\deltm$ event suggest the process $\ggqq$.
A single event is selected by the $\sbot \to \rm{b} \neu$ analysis. This event, which is found by the high $\deltm$ selection, is also found by the $\stop \to \rm{c} \neu$ high $\deltm$ selection. The three high $\deltm$ $\stop$ candidates not selected by the sbottom analysis are all rejected by the b-tag. The number of events selected in the data is consistent with the expectation from background processes (1.1 from the low and 0.6 from the high $\deltm$ selections).
A single event is also selected by the $\stop \to \rm{b} \ell \snu$ analysis. The event is found by the low $\deltm$ selection, and is consistent with $\ggqq$ production. The total of one event selected is consistent with the 0.9 events that are expected from background processes (0.8 from the low and 0.1 from the high $\deltm$ selections).
Since no evidence for the production of $\stop$ or $\sbot$ is found, it is appropriate to set lower limits on their masses. The limits are extracted without background subtraction. Figures \[stchi\]a and \[stchi\]b give the 95% C.L. excluded regions for the channel $\mathrm \stop \rightarrow c\neu$. For this channel, the $\thetamix$-independent lower limit on $m_{\stop}$ is 74 $\gev$, assuming a mass difference between the $\stop$ and the $\neu$ of 10–40 $\gev$, corresponding to a large part of the region not excluded by the D0 search. Figures \[stblv\]a and \[stblv\]b give the excluded regions for the $\mathrm \stop \rightarrow b\ell\widetilde{\nu}$ channel, assuming equal branching ratios for the $\stop$ decay to $\mathrm{e}$, $\mu$ and $\tau$. In this case, the $\thetamix$-independent lower limit on $m_{\stop}$ is 82 $\gev$, assuming a mass difference between the $\stop$ and the $\snu$ of at least 10 $\gev$, and using also the LEP1 exclusion on the sneutrino mass.
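Schematically, the exclusion of a point in the mass plane proceeds as in the following sketch: the observed candidate count is converted, without background subtraction, into a 95% C.L. cross-section limit and compared with the predicted cross section at that point. The efficiency, luminosity and theory cross section below are placeholders, not the parametrisations used for the figures.

```python
# Hedged sketch of the limit-setting step (no background subtraction).
# All numerical inputs are placeholders.
from scipy.optimize import brentq
from scipy.special import gammaincc

def n95(n_obs):
    """95% C.L. Poisson upper limit on the signal for n_obs observed events."""
    return brentq(lambda mu: gammaincc(n_obs + 1, mu) - 0.05, 1e-6, 100.0)

def excluded(sigma_theory_pb, efficiency, n_obs, lumi_pb=60.0):
    sigma_limit_pb = n95(n_obs) / (efficiency * lumi_pb)
    return sigma_theory_pb > sigma_limit_pb

# e.g. 4 candidates in the high dm selection, 35% placeholder efficiency
print(n95(4))                                             # ~9.15 events
print(excluded(sigma_theory_pb=0.5, efficiency=0.35, n_obs=4))
```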
Figures \[sbotchi\]a and \[sbotchi\]b give the excluded regions for the $\sbot$ in the decay channel $\mathrm \sbot \rightarrow b\neu$. A lower limit of 79 $\gev$ is set on $m_{\sbot}$, assuming that $\thetab$ is $0^{\circ}$ and that the mass difference between the $\sbot$ and the $\neu$ is at least 10 $\gev$. Figure \[sbotchi\]b shows that only a restricted region is excluded when $\thetab$ =$68^{\circ}$. When decoupling from the Z occurs, sbottoms can only be produced through photon exchange and the cross section for the $\sbot$ (charge $-1/3$) is four times lower than the cross section for the $\stop$ (charge $+2/3$).
Limits on degenerate squarks {#limits-on-degenerate-squarks .unnumbered}
----------------------------
Here the decay $\tilde{\rm{q}} \to \rm{q}\neu$ is assumed to be dominant. It has a topology similar to that of $\stop \to \rm{c}\neu$. The $\stop \to \rm{c}\neu$ analysis can therefore be used to search for generic squark production. In order to check the efficiency of the $\stop \to \rm{c}\neu$ selection when it is applied to degenerate squark production, samples of the process $\tilde{\rm{q}} \to \rm{q}\neu$ were generated and run through the full ALEPH detector simulation. As expected, the selection efficiency for these samples is similar to the selection efficiency for the corresponding $\stop \to \rm{c}\neu$ samples. The $\tilde{\rm{q}} \to \rm{q}\neu$ efficiencies were then parametrised as a function of squark mass and $\deltm$, and this parametrization is used to set the limits on generic squark production. For q = u, d, s, or c, the mixing between $\rm{\tilde{q}_{R}}$ and $\rm{\tilde{q}_{L}}$ is expected to be negligible. The mixing between $\rm{\tilde{b}_{R}}$ and $\rm{\tilde{b}_{L}}$ is also assumed to be negligible in the case that the sbottoms are mass degenerate with the partners of the four lightest quarks.
Figure \[degsq\]a shows the exclusion curves assuming five degenerate squark flavours. In the more conservative curve, only $\rm{\tilde{q}_{R}\bar{\tilde{q}}_{R}}$ production is allowed, while in the other curve, both $\rm{\tilde{q}_{R}\bar{\tilde{q}}_{R}}$ and $\rm{\tilde{q}_{L}\bar{\tilde{q}}_{L}}$ production are allowed, assuming that $\rm{\tilde{q}_{L}}$ and $\rm{\tilde{q}_{R}}$ are mass degenerate. For these curves the efficiency parametrisation developed for the dedicated sbottom search has been applied to the processes $\mathrm e^{+}e^{-} \to \tilde{b}_{R}\bar{\tilde{b}}_{R}$ and $\mathrm e^{+}e^{-} \to \tilde{b}_{L}\bar{\tilde{b}}_{L}$.
Searches for degenerate squarks have been performed by CDF and D0 at the Tevatron; the resulting limits in the gluino-squark mass plane are shown in Figure \[degsq\]b. While these experiments can exclude quite an extensive region of this plane, there is an uncovered region in the exclusion for large gluino masses.
In the MSSM, when GUT relations are assumed, the neutralino mass can be related to the gluino mass once the values of $\mu$ and tan$\beta$ are fixed. Therefore, ALEPH limits in the squark-neutralino mass plane can be translated to the gluino-squark mass plane. The ALEPH results are shown in Figure \[degsq\]b assuming the values of $\mu$ and tan$\beta$ used by CDF (tan$\beta$ = 4 and $\mu$ = $-400~\gev$). This exclusion is affected only slightly if the D0 set of values (tan$\beta$ = 2 and $\mu$ = $-250~\gev$) is used instead.
The limit on $m_{\rm{\tilde{q}}}$ is at least 87 $\gev$ for gluino masses up to 545 $\gev$ (535 $\gev$ if the D0 values are used). At this point, $m_{\neu}$ = 82 $\gev$ and the mass difference between $\rm{\tilde{q}}$ and $\neu$ is only 5 $\gev$. Beyond these points, the $\stop \to \rm{c} \neu$ analysis is no longer sensitive to the production of degenerate squarks.
Conclusions
===========
Searches have been performed for scalar quarks at $\rts$ = 183 GeV. Five candidate events are observed in the $\mathrm \stop \rightarrow c \neu$ channel, one in the $\mathrm \stop \rightarrow b \ell \tilde{\nu}$ channel, and one in the $\mathrm \sbot \rightarrow b \neu$ channel. These totals are consistent with the expectation from background processes.
A 95% C.L. limit of $m_{\stop}$ $>$ 74 $\gev$ is obtained from the $\mathrm \stop \rightarrow c\neu$ search, independent of the mixing angle and for $10 < \Delta m <40~$GeV/$c^2$. From the $\mathrm \stop \rightarrow b\ell\widetilde{\nu}$ channel, the $\thetamix$-independent limit of $m_{\stop}$ $>$ 82 $\gev$ is established, if the mass difference between the $\stop$ and the $\snu$ is greater than 10 $\gev$ and for equal branching ratios of the $\stop$ into $\mathrm{e}$, $\mu$, and $\tau$.
A limit is also obtained for the $\sbot$ decaying as $\mathrm \sbot \rightarrow b\neu$. The limit is $m_{\sbot}$ $>$ 79 $\gev$ for the supersymmetric partner of the left-handed state of the bottom quark if the mass difference between the $\sbot$ and the $\neu$ is greater than 10 $\gev$.
Finally, limits are also derived for the supersymmetric partners of the light quarks. Assuming five degenerate flavours and the production of both “left-handed” and “right-handed” squarks, a limit of $m_{\rm{\tilde{q}}}$ $>$ 87 $\gev$ is set. This limit is valid for $\deltm$ $>$ 5 $\gev$. Using the GUT relations, for tan$\beta$ = 4 and $\mu$ = $-400$ $\gev$, the limit of $m_{\rm{\tilde{q}}}$ $>$ 87 $\gev$ is valid for gluino masses smaller than 545 $\gev$.
Acknowledgements
================
We wish to congratulate our colleagues from the accelerator divisions for the continued successful operation of LEP at high energies. We would also like to express our gratitude to the engineers and support people at our home institutes without whom this work would not have been possible. Those of us from non-member states wish to thank CERN for its hospitality and support.
[99]{}
For a review see: Ed. M. Jacob, [*Supersymmetry and Supergravity.*]{} North-Holland and World Scientific, 1986. K. Hikasa and M. Kobayashi, Phys. Rev. [**D 36**]{} (1987) 724;\
M. Drees and K. Hikasa, Phys. Lett. [**B 252**]{} (1990) 127.
ALEPH Collaboration, [*Search for Scalar Top and Scalar Bottom Quarks at LEP2.*]{} Phys. Lett. [**B 413**]{} (1997) 431.
D0 Collaboration, [*Search for Light Top Squarks in $p\bar{p}$ Collisions at $\sqrt{s}=1.8$ TeV.*]{} Phys. Rev. Lett. [**76**]{} (1996) 2222.
OPAL Collaboration, [*Search for Scalar Top and Scalar Bottom Quarks at $\sqrt{s}$= 170 GeV - 172 GeV in e$^+$ e$^-$ Collisions.*]{} Z. Phys. [**C 75**]{} (1997) 409.
A. Donini, Nucl. Phys. [**B 467**]{} (1996) 3.
D0 Collaboration, [*Search for Squarks and Gluinos in $p\bar{p}$ Collisions at $\sqrt{s}=1.8$ TeV.*]{} Phys. Rev. Lett. [**75**]{} (1995) 618.
CDF Collaboration, [*Search for gluinos and squarks at the Fermilab Tevatron collider.*]{} Phys. Rev. [**D 56**]{} (1997) 1357.
ALEPH Collaboration, [*ALEPH: A detector for electron-positron annihilation at LEP.* ]{} Nucl. Instrum. and Methods [**A 294** ]{} (1990) 121.
ALEPH Collaboration, [*Performance of the ALEPH detector at LEP.* ]{} Nucl. Instrum. and Methods [**A 360**]{} (1995) 481.
ALEPH Collaboration, [*A Precise Measurement of $\Gamma_{Z\to b\bar{b}}/\Gamma_{Z\to hadrons}$.*]{} Phys. Lett. [**B 313**]{} (1993) 535.
ALEPH Collaboration, Phys. Lett. [**B 384**]{} (1996) 427;\
G. F. Grivaz and F. Le Diberder, LAL [**92-37**]{} (1992).
R.D. Cousins and V.L. Highland, Nucl. Instrum. and Methods [**A 320**]{} (1992) 331.
Introduction {#sec:Introduction}
=============
Quantum theory (QT) may either be defined by a set of axioms or otherwise be ’derived’ from classical physics by using certain assumptions. Today, QT is frequently identified with a set of axioms defining a Hilbert space structure. This mathematical structure has been created (by von Neumann) by abstraction from the linear solution space of the central equation of QT, the Schr[ö]{}dinger equation. Thus, deriving Schr[ö]{}dinger’s equation is basically the same as deriving QT. To derive the most general version of the time-dependent Schr[ö]{}dinger equation, describing $N$ particles with spin in an external gauge field, means to derive essentially the whole of non-relativistic QT.
The second way of proceeding is sometimes called ’quantization’. In the standard (canonical) quantization method one starts from a classical Hamiltonian whose basic variables are then ’transformed’, by means of well-known correspondence rules, $$\label{eq:DTR43QUPR}
p\to\frac{\hbar}{\imath}\frac{\mathrm{d}}{\mathrm{d}x},\;\;\;
E\to-\frac{\hbar}{\imath}\frac{\mathrm{d}}{\mathrm{d}t}
\mbox{,}$$ into operators. Then, all relevant classical observables may be rewritten as operators acting on states of a Hilbert space etc.; the details of the ’derivation’ of Schr[ö]{}dinger’s equation along these lines may be found in many textbooks. There are formal problems with this approach which were identified many years ago and can be expressed e.g. in terms of Groenewold’s theorem, see [@groenewold:principles], [@gotay:groenewold]. Even more seriously, there is no satisfactory *explanation* for this ’metamorphosis’ of observables into operators. This quantization method (as well as several other mathematically more sophisticated versions of it) is just a *recipe* or, depending on one’s taste, “black magic”, [@hall:exact_uncertainty]. Note that the enormous success of this recipe in various contexts - including field quantization - is no substitute for an explanation.
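As a purely formal illustration of the rules (\[eq:DTR43QUPR\]) (a sketch, not part of the argument), one may verify with a computer-algebra system that, applied to a free plane wave, they return the classical momentum and energy as eigenvalues:

```python
# Trivial sympy check of the correspondence rules on a plane wave
# exp(i (p x - E t) / hbar): the operators (hbar/i) d/dx and -(hbar/i) d/dt
# reproduce p and E.  Purely illustrative.
import sympy as sp

x, t, p, E, hbar = sp.symbols('x t p E hbar', real=True)
psi = sp.exp(sp.I * (p * x - E * t) / hbar)

print(sp.simplify((hbar / sp.I) * sp.diff(psi, x) / psi))    # -> p
print(sp.simplify(-(hbar / sp.I) * sp.diff(psi, t) / psi))   # -> E
```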
The choice of a particular quantization procedure will be strongly influenced by the preferred interpretation of the quantum theoretical formalism. If QT is interpreted as a theory describing individual events, then the Hamiltonian of classical mechanics becomes a natural starting point. This ’individuality assumption’ is an essential part of the dominating ’conventional’, or ’Copenhagen’, interpretation (CI) of QT. It is well-known, that QT becomes a source of mysteries and paradoxes[^1] whenever it is interpreted in the sense of CI, as a (complete) theory for individual events. Thus, the canonical quantization method and the CI are similar in two respects: both rely heavily on the concept of individual particles and both are rather mysterious.
This situation confronts us with a fundamental alternative. Should we accept the mysteries and paradoxes as inherent attributes of reality or should we not, instead, critically reconsider our assumptions, in particular the ’individuality assumption’. As a matter of fact, the dynamical numerical output of quantum mechanics consists of *probabilities*. A probability is a “deterministic” prediction which can be verified in a statistical sense only, i.e. by performing experiments on a large number of identically prepared individual systems, see [@belinfante:individual], [@margenau:measurements]. Therefore, the very structure of QT tells us that it is a theory about statistical ensembles only, see [@ballentine:statistical]. If dogmatic or philosophical reasons ’force’ us to interpret QT as a theory about individual events, we have to create complicated intellectual constructs, which are not part of the physical formalism, and lead to unsolved problems and contradictions.
The present author believes, like several other physicists \[see e.g. [@kemble:principles1; @einstein:physics_reality; @margenau:quantum-mechanical; @blokhintsev:quantum; @ballentine:statistical; @belinfante:measurements; @ross-bonney:dice; @young:quantum; @newton:probability; @pippard:interpretation; @tschudi:statistical; @toyozawa:measurement; @krueger:epr_debate; @ali:ensemble]\] that QT is a purely statistical theory whose predictions can only be used to describe the behavior of statistical ensembles and not of individual particles. This statistical interpretation (SI) of QT eliminates all mysteries and paradoxes - and this shows that the mysteries and paradoxes are not part of QT itself but rather the result of a particular (mis)interpretation of QT. In view of the similarity discussed above, we adopt the statistical point of view, not only for the interpretation of QT itself, but also in our search for *quantization conditions*. The general strategy is to find a new set of (as) simple (as possible) statistical assumptions which can be understood in physical terms and imply QT. Such an approach would also provide an explanation for the correspondence rules (\[eq:DTR43QUPR\]).
The present paper belongs to a series of works aimed at such an explanation. Quite generally, the present work continues a long tradition of attempts, see [@schrodinger:quantisierung_I; @motz:quantization; @schiller:quasiclassical; @rosen:classical_quantum; @frieden:fisher_basis; @lee.zhu:principle; @hall.reginatto:quantum_heisenberg; @frieden:sciencefisher], to characterize QT by mathematical relations which can be understood in physical terms[^2] (in contrast to the axiomatic approach). More specifically, it continues previous attempts to derive Schr[ö]{}dinger’s equation with the help of statistical concepts, see [@hall.reginatto:schroedinger], [@reginatto:derivation; @syska:fisher], [@klein:schroedingers]. These works, being quite different in detail, share the common feature that a statistical ensemble and not a particle Hamiltonian is used as a starting point for quantization. Finally, in a previous work, [@klein:statistical], of the present author an attempt has been undertaken to construct a complete statistical approach to QT with the help of a small number of very simple (statistical) assumptions. This work will be referred to as I. The present paper is a continuation and extension of I.
The quantization method reported in I is based on the following general ideas:
- QT should be a probabilistic theory in configuration space (not in phase space).
- QT should fullfil abstract versions of (i) a conservation law for probability (continuity equation) , and (ii) Ehrenfest’s theorem. Such relations hold in all statistical theories no matter whether quantum or classical.
- There are no laws for particle trajectories in QT anymore. This arbitrariness, which represents a crucial difference between QT and classical statistics, should be handled by a statistical principle analogous to the principle of maximal entropy in classical statistics.
These general ideas lead to the mathematical assumptions which represent the basis for the treatment reported in I. This work was restricted to a one-dimensional configuration space (a single particle ensemble with a single spatial degree of freedom). The present work generalizes the treatment of I to a $3N-$dimensional configuration space (ensembles representing an arbitrary number $N$ of particles allowed to move in three-dimensional space), gauge-coupling, and spin. In a first step the generalization to three spatial dimensions is performed; the properly generalized basic relations are reported in section \[sec:calcwpt\]. This section also contains a review of the fundamental ideas.
In section \[sec:gaugecoupling\] we make use of a mathematical freedom, which is already contained in our basic assumptions, namely the multi-valuedness of the variable $S$. This leads to the appearance of potentials in statistical relations replacing the local forces of single-event (mechanical) theories. The mysterious non-local action of the vector potential (in effects of the Aharonov-Bohm type) is explained as a consequence of the statistical nature of QT. In section \[sec:stat-constr-macr\] we discuss a related question: Which constraints on admissible forces exist for the present class of statistical theories? The answer is that only macroscopic (elementary) forces of the form of the Lorentz force can occur in nature, because only these survive the transition to QT. These forces are statistically represented by potentials, i.e. by the familiar gauge coupling terms in matter field equations. The present statistical approach provides a natural explanation for the long-standing question why potentials play an indispensable role in the field equations of physics.
In section \[sec:fisher-information\] it is shown that among all statistical theories only the time-dependent Schrödinger equation follows the logical requirement of maximal disorder or minimal Fisher information. Spin one-half is introduced, in section \[sec:spin\], as the property of a statistical ensemble to respond to an external gauge field in two different ways. A generalized calculation, reported in sections \[sec:spin\] and \[sec:spin-fish-inform\], leads to Pauli’s (single-particle) equation. In section \[sec:spin-as-gauge\] an alternative derivation, following [@arunsalam:hamiltonians], and [@gould:intrinsic] is reported, which is particularly convenient for the generalization to arbitrary $N$. The latter is performed in section \[sec:final-step-to\], which completes our statistical derivation of non-relativistic QT.
In section \[sec:classical-limit\] the classical limit of QT is studied and it is stressed that the classical limit of QT is not classical mechanics but a classical statistical theory. In section \[sec:discussion\] various questions related to the present approach, including the role of potentials and the interpretation of QT, are discussed. The final section \[sec:concludingremarks\] contains a short summary and mentions a possible direction for future research.
Basic equations for a class of statistical theories {#sec:calcwpt}
===================================================
In I three different types of theories have been defined which differ from each other with regard to the role of probability. We give a short review of the defining properties and supply some additional comments characterizing these theories.
The dogma underlying *theories of type 1* is determinism with regard to single events; probability does not play any role. If nature behaves according to this dogma, then measurements on identically prepared individual systems yield identical results. Classical mechanics is obviously such a deterministic type 1 theory. We shall use below (as a ’template’ for the dynamics of our statistical theories) the following version of Newton’s law, where the particle momentum $p_k(t)$ plays the role of a second dynamic variable besides the spatial coordinate $x_k(t)$: $$\label{eq:TVONLIUSCHL}
\frac{\mathrm{d}}{\mathrm{d}t} x_k(t) = \frac{p_k(t)}{m},
\hspace{0.5cm}
\frac{\mathrm{d}}{\mathrm{d}t} p_k(t) = F_k(x,\,p,\,t)
\mbox{.}$$ In classical mechanics there is no restriction as regards the admissible forces. Thus, $F_k$ is an arbitrary function of $x_k,\,p_k,\,t$; it is, in particular, not required that it be derivable from a potential. Note that Eqs. (\[eq:TVONLIUSCHL\]) do *not* hold in the present theory; these relations are just used to establish a correspondence between classical mechanics and associated statistical theories.
Experimental data from atomic systems, recorded since the beginning of the last century, indicate that nature does not behave according to this single-event deterministic dogma. A simple but somewhat unfamiliar idea is, to construct a theory which is deterministic only in a statistical sense. This means that measurements on identically prepared individual systems do not yield identical results (no determinism with regard to single events) but repeated measurements on ensembles \[consisting each time of a large (infinite) number of measurements on individual systems\] yield identical results. In this case we have ’determinism’ with regard to ensembles (expectation values, or probabilities).
Note that such a theory is far from chaotic even if our macroscopic anticipation of (single-event) determinism is not satisfied. Note also that there is no reason to assume that such a statistical theory for microscopic events is incompatible with macroscopic determinism. It is a frequently observed (but not always completely understood) phenomenon in nature that systems with many (microscopic) degrees of freedom can be described by a much smaller number of variables. During this process of elimination of variables the details of the corresponding microscopic theory for the individual constituents are generally lost. In other words, there is no reason to assume that a fundamental statistical law for individual atoms and a deterministic law for a piece of matter consisting of, say, $10^{23}$ atoms should not be compatible with each other. This way of characterizing the relation between two physical theories is completely different from the common reductionistic point of view. Convincing arguments in favor of the former may, however, be found in [@anderson:more], [@laughlin:different].
As discussed in I two types (referred to as type 2 and type 3) of indeterministic theories may be identified. In *type 2 theories* laws for individual particles exist (roughly speaking the individuality of particles remains intact) but the initial values are unknown and are described by probabilities only. An example for such a (classical-statistical) type 2 theory is statistical thermodynamics. On the other hand, in *type 3 theories* the amount of uncertainty is still greater, insofar as no dynamic laws for individual particles exist any more. A possible candidate for this ’extreme’ type of indeterministic theory is quantum mechanics.
The method used in I to construct statistical theories was based on the following three assumptions,
- A local conservation law of probability with a particular form of the probability current.
- Two differential equations which are similar in structure to the canonical equations but with observables replaced by expectation values.
- A differential version (minimal Fisher information) of the statistical principle of maximal disorder.
These (properly generalized) assumptions also represent the formal basis of the present work. The first and second of these cover type 2 as well as type 3 theories, while it will be shown that the third - the requirement of maximal disorder - only holds for a single type 3 theory, namely quantum mechanics. In this sense quantum mechanics may be considered as the *most reasonable* theory among all statistical theories defined by the first two assumptions. There is obviously an analogy between quantum mechanics and the principle of minimal Fisher information on the one hand and classical statistical mechanics and the principle of maximal entropy on the other hand; both theories are realizations of the principle of maximal disorder.
Let us now generalize the basic equations of I (see section 3 of I) with respect to the number of spatial dimensions and with respect to gauge freedom. The continuity equation takes the form $$\label{eq:CONT3DSCHL}
\frac{\partial \rho(x,t)}{\partial t}+
\frac{\partial}{\partial x_k} \frac{\rho(x,t)}{m}\frac{\partial
\tilde{S}(x,t)}{\partial x_k}=0
\mbox{.}$$ We use the summation convention; indices $i,\,k,...$ run from $1$ to $3$ and are omitted if the corresponding variable occurs in the argument of a function. The existence of a local conservation law for the probability density $\rho(x,t)$ is a necessity for a probabilistic theory. The same is true for the fact that the probability current takes the form $j_k(x,t) = \rho(x,t) \tilde{p}_k(x,t)/m$, where $\tilde{p}_k(x,t)$ is the $k-$th component of the momentum probability density. The only non-trivial assumption contained in (\[eq:CONT3DSCHL\]) is the fact that $\tilde{p}_k(x,t)$ takes the form of the gradient, $$\label{eq:DEFMOMSCHL}
\tilde{p}_k(x,\,t)=\frac{\partial \tilde{S}(x,\,t)}{\partial x_k}
\mbox{,}$$ of a function $\tilde{S}(x,t)$. In order to gain a feeling for the physical meaning of (\[eq:DEFMOMSCHL\]) we could refer to the fact that a similar relation may be found in the Hamilton-Jacobi formulation of classical mechanics [@synge:classical]; alternatively we could also refer to the fact that this condition characterizes ’irrotational flow’ in fluid mechanics. Relation (\[eq:DEFMOMSCHL\]) could also be justified by means of the principle of simplicity; a gradient is the simplest way to represent a vector field, because it can be derived from a single scalar function.
In contrast to I we allow now for *multi-valued* functions $\tilde{S}(x,t)$. At first sight this seems strange since a multi-valued quantity cannot be an observable and should, consequently, not appear in equations bearing a physical meaning. However, only derivatives of $\tilde{S}(x,t)$ occur in our basic equations. Thus, this freedom is possible without any additional postulate; we just have to require that $$\label{eq:TGT7RMT}
\tilde{S}(x,t)\;\;\;\mbox{multi-valued},
\;\;\;\;\;
\frac{\partial \tilde{S}}{\partial t},
\frac{\partial \tilde{S}}{\partial x_{k}}
\;\;\;\mbox{single-valued}
\mbox{.}$$ (the quantity $\tilde{p}$ defined in (\[eq:DEFMOMSCHL\]) is not multi-valued; this notation is used to indicate that this quantity has been defined with the help of a multi-valued $\tilde{S}$). As discussed in more detail in section \[sec:gaugecoupling\], this new ’degree of freedom’ is intimately related to the existence of gauge fields. In contrast to $\tilde{S}$, the second dynamic variable $\rho$ is a physical observable (in the statistical sense) and is treated as a single-valued function.
The necessary and sufficient condition for single-valuedness of a function $\tilde{S}(x,t)$ (in a subspace $\mathcal{G} \subseteq \mathcal{R}^{4}$) is that all second order derivatives of $\tilde{S}(x,t)$ with respect to $x_{k}$ and $t$ commute with each other (in $\mathcal{G}$) \[see e.g. [@kaempfer:concepts]\]. As a consequence, the order of two derivatives of $\tilde{S}$ with respect to anyone of the variables $x_k,t$ must not be changed. We introduce the (single-valued) quantities $$\label{eq:INC35HEOS}
\tilde{S}_{[j,k]} =
\left[\frac{\partial^{2}\tilde{S}}{\partial x_{j}\partial x_{k}} -\frac{\partial^{2}\tilde{S}}{\partial x_{k}\partial x_{j}} \right], \;\;\;
\tilde{S}_{[0,k]} =
\left[\frac{\partial^{2}\tilde{S}}{\partial t \partial x_{k}} -\frac{\partial^{2}\tilde{S}}{\partial x_{k}\partial t } \right]
\mbox{}$$ in order to describe the non-commuting derivatives in the following calculations.
The second of the assumptions listed above has been referred to in I as ’statistical conditions’. For the present three-dimensional theory these are obtained in the same way as in I, by replacing the observables $x_k(t),\,p_k(t)$ and the force field $F_k(x(t),\,p(t),\,t)$ of the type 1 theory (\[eq:TVONLIUSCHL\]) by averages $\overline{x_k},\,\overline{p_k}$ and $\overline{F_k}$. This leads to the relations $$\begin{gathered}
\frac{\mathrm{d}}{\mathrm{d}t} \overline{x_k} = \frac{\overline{p_k}}{m}
\label{eq:FIRSTAETSCHL}\\
\frac{\mathrm{d}}{\mathrm{d}t} \overline{p_k} =
\overline{F_k(x,\,p,\,t)}
\label{eq:SECONAETSCHL}
\mbox{,}\end{gathered}$$ where the averages are given by the following integrals over the random variables $x_k,\,p_k$ (which should be clearly distinguished from the type I observables $x_k(t),\,p_k(t)$ which will not be used any more): $$\begin{gathered}
\overline{x_k} = \int_{-\infty}^{\infty} \mathrm{d^{3}} x\, \rho(x,t)\, x_k
\label{eq:ERWAETXKSCHL}\\
\overline{p_k} = \int_{-\infty}^{\infty} \mathrm{d^{3}} p\, w(p,t)\, p_k
\label{eq:ERWAETPKSCHL}\\
\overline{F_k(x,\,p,\,t)} =
\int_{-\infty}^{\infty} \mathrm{d^{3}} x \, \mathrm{d^{3}} p
\,W(x,\,p,\,t)F_k(x,\,p,\,t)
\label{eq:ERWAETFKXPSCHL}
\mbox{.}\end{gathered}$$ The time-dependent probability densities $W,\,\rho,\,w$ should be positive semidefinite and normalized to unity, i.e. they should fulfill the conditions $$\label{eq:NCFOPDSSCHL}
\int_{-\infty}^{\infty} \mathrm{d^{3}} x\, \rho(x,t)= \int_{-\infty}^{\infty} \mathrm{d^{3}} p\, w(p,t)=
\int_{-\infty}^{\infty} \mathrm{d^{3}} x \, \mathrm{d^{3}} p
\,W(x,\,p,\,t)=1
\mbox{}$$ The densities $\rho$ and $w$ may be derived from the fundamental probability density $W$ by means of the relations $$\label{eq:IDTHRAWFBWSCHL}
\rho(x,t)= \int_{-\infty}^{\infty} \mathrm{d^{3}} p\, W(x,\,p,\,t);\hspace{1cm}
w(p,t)= \int_{-\infty}^{\infty} \mathrm{d^{3}} x \,W(x,\,p,\,t)
\mbox{.}$$ The present construction of the statistical conditions (\[eq:FIRSTAETSCHL\]) and (\[eq:SECONAETSCHL\]) from the type 1 theory (\[eq:TVONLIUSCHL\]) shows two differences as compared to the treatment in I. The first is that we allow now for a $p-$dependent external force. This leads to a more complicated probability density $W(x,\,p,\,t)$ as compared to the two decoupled densities $\rho(x,t)$ and $ w(p,t)$ of I. The second difference, which is in fact related to the first, is the use of a multi-valued $\tilde{S}(x,t)$.
Note that the $p-$dependent probability densities $w(p,t)$ and $W(x,\,p,\,t)$ have been introduced in the above relations in a purely formal way. We defined an expectation value $\overline{p_k}$ \[via Eq. (\[eq:ERWAETPKSCHL\])\] and assumed that a random variable $p_k$ and a corresponding probability density $w(p,t)$ exist. But the validity of this assumption is not guaranteed. There is no compelling conceptual basis for the existence of these quantities in a pure configuration-space theory. If they exist, they must be defined with the help of additional considerations (see section 6 of I). The deeper reason for this problem is that the concept of measurement of momentum (which is proportional to the time derivative of position) is ill-defined in a theory whose observables are defined in terms of a large number of experiments at *one and the same* instant of time (measurement of a derivative requires measurements at different times). Fortunately, these considerations, which have been discussed in more detail in I, do not play a prominent role \[apart from the choice of $W(x,\,p,\,t)$ discussed in section \[sec:stat-constr-macr\]\] for the derivation of Schrödinger’s equation reported in the present paper[^3].
Using the continuity equation (\[eq:CONT3DSCHL\]) and the statistical conditions (\[eq:FIRSTAETSCHL\]) and (\[eq:SECONAETSCHL\]), the present generalization of the integral equation Eq. (24) of I may be derived. The steps leading to this result are very similar to the corresponding steps in I and may be skipped. The essential difference to the one-dimensional treatment is - apart from the number of space dimensions - the non-commutativity of the second order derivatives of $\tilde{S}(x,t)$ leading to non-vanishing quantities $\tilde{S}_{[j,k]},\,\tilde{S}_{[0,k]}$ defined in Eq. (\[eq:INC35HEOS\]). The result takes the form $$\label{eq:NID2S3TBSF}
\begin{split}
-&\int_{-\infty}^{\infty} \mathrm{d}^{3} x
\frac{\partial \rho}{\partial x_{k}}
\left[
\frac{\partial \tilde{S}}{\partial t}
+\frac{1}{2m}
\sum_{j}\left( \frac{\partial \tilde{S}}{\partial x_{j}} \right)^{2}
+V
\right]\\
+&\int_{-\infty}^{\infty} \mathrm{d}^{3} x
\rho
\left[
\frac{1}{m}
\frac{\partial \tilde{S}}{\partial x_{j}}
\tilde{S}_{[j,k]}+\tilde{S}_{[0,k]}
\right] = \overline{F^{(e)}_{k}(x,\,p,\,t)}
\end{split}
\mbox{,}$$ In the course of the calculation leading to it has been assumed that the macroscopic force $F_k(x,\,p,\,t)$ entering the second statistical condition (\[eq:SECONAETSCHL\]) may be written as a sum of two contributions, $F_k^{(m)}(x,t)$ and $F^{(e)}_k(x,\,p,\,t)$, $$\label{eq:AZ12EKI2T}
F_k(x,\,p,\,t)=F_k^{(m)}(x,t)+F^{(e)}_k(x,\,p,\,t)
\mbox{,}$$ where $F_k^{(m)}(x,t)$ takes the form of a negative gradient of a scalar function $V(x,t)$ (mechanical potential) and $F^{(e)}_k(x,\,p,\,t)$ is the remaining $p-$dependent part.
Comparing Eq. (\[eq:NID2S3TBSF\]) with the corresponding formula obtained in I \[see Eq. (24) of I\] we see that two new terms appear now, the expectation value of the $p-$dependent force on the r.h.s., and the second term on the l.h.s. of Eq. (\[eq:NID2S3TBSF\]). The latter is a direct consequence of our assumption of a multi-valued variable $\tilde{S}$. In section \[sec:stat-constr-macr\] it will be shown that for vanishing multi-valuedness Eq. (\[eq:NID2S3TBSF\]) has to agree with the three-dimensional generalization of the corresponding result \[Eq. (24) of I\] obtained in I. This means that the $p-$dependent term on the r.h.s. has to vanish too in this limit and indicates a relation between multi-valuedness of $\tilde{S}$ and $p-$dependence of the external force.
Gauge coupling as a consequence of a multi-valued phase {#sec:gaugecoupling}
=======================================================
In this section we study the consequences of the multi-valuedness \[[@london:gauge], [@weyl:elektron], [@dirac:quantised]\] of the quantity $\tilde{S}(x,t)$ in the present theory. We assume that $\tilde{S}(x,t)$ may be written as a sum of a single-valued part $S(x,t)$ and a multi-valued part $\tilde{N}$. Then, given that holds, the derivatives of $\tilde{S}(x,t)$ may be written in the form $$\label{eq:DOS4MBWIF}
\frac{\partial \tilde{S}}{\partial t}=
\frac{\partial S}{\partial t}+e \Phi,\;\;\;\;\;
\frac{\partial \tilde{S}}{\partial x_{k}}=
\frac{\partial S}{\partial x_{k}}-\frac{e}{c}A_{k}
\mbox{,}$$ where the four functions $\Phi$ and $A_{k}$ are proportional to the derivatives of $\tilde{N}$ with respect to $t$ and $x_{k}$ respectively (Note the change in sign of $\Phi$ and $A_{k}$ in comparison to [@klein:schroedingers]; this is due to the fact that the multi-valued phase is now denoted by $\tilde{S}$). The physical motivations for introducing the pre-factors $e$ and $c$ in Eq. (\[eq:DOS4MBWIF\]) have been extensively discussed elsewhere, see [@kaempfer:concepts], [@klein:schroedingers], in an electrodynamical context. In agreement with Eq. (\[eq:DOS4MBWIF\]), $\tilde{S}$ may be written in the form \[[@kaempfer:concepts], [@klein:schroedingers]\] $$\label{eq:AWIZ7MMVF}
\tilde{S}(x,t;\mathcal{C})=S(x,t)-
\frac{e}{c}
\int_{x_{0},t_{0};\mathcal{C}}^{x,t}
\left[\mathrm{d}x_{k}' A_k(x',t')-c\mathrm{d}t' \phi(x',t')\right]
\mbox{,}$$ as a path-integral performed along an arbitrary path $\mathcal{C}$ in four-dimensional space; the multi-valuedness of $\tilde{S}$ simply means that it depends not only on $x,t$ but also on the path $\mathcal{C}$ connecting the points $x_0,t_0$ and $x,t$.
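A concrete numerical illustration of this path dependence (with an arbitrarily chosen field configuration): for the vector potential of an idealized flux line along the $z$ axis and vanishing scalar potential, the line integral in (\[eq:AWIZ7MMVF\]) evaluated along two different paths between the same end points differs by the enclosed flux.

```python
# Numerical illustration of the path dependence of the line integral in the
# multi-valued phase: vector potential of an idealized flux line along the z
# axis, A = (Phi0 / 2 pi) (-y, x) / (x^2 + y^2), scalar potential zero.  The
# field configuration and end points are arbitrary choices for this sketch.
import numpy as np

Phi0 = 1.0

def A(x, y):
    r2 = x**2 + y**2
    return Phi0 / (2 * np.pi) * np.array([-y, x]) / r2

def line_integral(ts):
    """Integrate A . dx along the unit circle arc parametrised by the angles ts."""
    x, y = np.cos(ts), np.sin(ts)
    Ax, Ay = A(x, y)
    integrand = Ax * (-np.sin(ts)) + Ay * np.cos(ts)   # A . (dx/dt)
    # trapezoidal rule
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts))

upper = line_integral(np.linspace(0.0, np.pi, 2001))    # from (1,0) to (-1,0) via y > 0
lower = line_integral(np.linspace(0.0, -np.pi, 2001))   # from (1,0) to (-1,0) via y < 0
print(upper, lower, upper - lower)    # -> Phi0/2, -Phi0/2, Phi0 (the enclosed flux)
```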
The quantity $\tilde{S}$ cannot be a physical observable because of its multi-valuedness. The fundamental physical quantities to be determined by our (future) theory are the four derivatives of $\tilde{S}$ which will be rewritten here as two observable fields $-\tilde{E}(x,\,t)$, $\tilde{p}_k(x,\,t)$, $$\begin{gathered}
-\tilde{E}(x,\,t)=\frac{\partial S(x,t)}{\partial t}+e \Phi(x,t),
\label{eq:DOJUE8W11}\\
\tilde{p}_k(x,\,t)=\frac{\partial S(x,t)}{\partial x_{k}}-\frac{e}{c}A_{k}(x,t)
\label{eq:DOJUE8W22}
\mbox{,}\end{gathered}$$ with dimensions of energy and momentum respectively.
We encounter a somewhat unusual situation in Eqs. (\[eq:DOJUE8W11\]), (\[eq:DOJUE8W22\]): On the one hand the left hand sides are observables of our theory, on the other hand we cannot solve our (future) differential equations for these quantities because of the peculiar multi-valued structure of $\tilde{S}$. We have to use instead the decompositions as given by the right hand sides of (\[eq:DOJUE8W11\]) and (\[eq:DOJUE8W22\]). The latter eight terms (the four derivatives of $S$ and the four scalar functions $\Phi$ and $A_{k}$) are single-valued (in the mathematical sense) but need not be unique because only the left hand sides are uniquely determined by the physical situation. We tentatively assume that the fields $\Phi$ and $A_{k}$ are ’given’ quantities in the sense that they represent an external influence (of ’external forces’) on the considered statistical situation. An actual calculation has to be performed in such a way that fixed fields $\Phi$ and $A_{k}$ are chosen and then the differential equations are solved for $S$ (and $\rho$). However, as mentioned already, what is actually uniquely determined by the physical situation is the *sum* of the two terms on the right hand sides of (\[eq:DOJUE8W11\]) and (\[eq:DOJUE8W22\]). Consequently, a different set of fixed fields $\Phi^{'}$ and $A_{k}^{'}$ may lead to a physically equivalent, but mathematically different, solution $S^{'}$ in such a way that the sum of the new terms \[on the right hand sides of (\[eq:DOJUE8W11\]) and (\[eq:DOJUE8W22\])\] is the *same* as the sum of the old terms. We assume here, that the formalism restores the values of the physically relevant terms. This implies that the relation between the old and new terms is given by $$\begin{gathered}
S^{'}(x,t)=S(x,t)+\varphi(x,t) \label{eq:EEFFRRW11} \\
\Phi^{'}(x,t)=\Phi(x,t)-\frac{1}{e}\frac{\partial \varphi(x,t)}{\partial t}
\label{eq:EEFFRRW22} \\
A_{k}^{'}(x,t)=A_{k}(x,t)+\frac{c}{e}\frac{\partial \varphi(x,t)}{\partial x_k}
\label{eq:EEFFRRW33}
\mbox{,}\end{gathered}$$ where $\varphi(x,t)$ is an arbitrary, single-valued function of $x_k,\,t$. Consequently, all ’theories’ (differential equations for $S$ and $\rho$ defined by the assumptions listed in section \[sec:calcwpt\]) should be form-invariant under the transformations (\[eq:EEFFRRW11\])-(\[eq:EEFFRRW33\]). These invariance transformations, predicted here from general considerations, are (using an arbitrary function $\chi=c\varphi/e$ instead of $\varphi$) denoted as ’gauge transformations of the second kind’.
The fields $\Phi(x,t)$ and $A_{k}(x,t)$ describe an external influence but their numerical value is undefined; their value at $x,\,t$ may be changed according to (\[eq:EEFFRRW22\]) and (\[eq:EEFFRRW33\]) without changing their physical effect. Thus, these fields *cannot play a local role* in space and time like forces and fields in classical mechanics and electrodynamics. What, then, is the physical meaning of these fields ? An explanation which seems obvious in the present context is the following: They describe the *statistical effect* of an external influence on the considered system (ensemble of identically prepared individual particles). The statistical effect of a force field on an ensemble may obviously *differ* from the local effect of the same force field on individual particles; thus the very existence of fields $\Phi$ and $A_{k}$ different from $\vec{E}$ and $\vec{B}$ is no surprise. The second common problem with the interpretation of the ’potentials’ $\Phi$ and $A_{k}$ is their non-uniqueness. It is hard to understand that a quantity ruling the behavior of individual particles should not be uniquely defined. In contrast, this non-uniqueness is much easier to accept if $\Phi$ and $A_{k}$ rule the behavior of ensembles instead of individual particles. We have no problem to accept the fact that a function that represents a global (integral) effect may have many different local realizations.
It seems that this interpretation of the potentials $\Phi$ and $A_{k}$ is highly relevant for the interpretation of the effect found by [@aharonov.bohm:significance]. If QT is interpreted as a theory about individual particles, the Aharonov-Bohm effects imply that a charged particle may be influenced in a nonlocal way by electromagnetic fields in inaccessible regions. This paradoxical prediction, which is however in strict agreement with QT, led even to a discussion about the reality of these effects, see [@bocchieri.loinger:nonexistence], [@roy:condition], [@klein:nonexistence], [@peshkin.tonomura:aharonov]. A statistical interpretation of the potentials has apparently never been suggested, neither in the vast literature about the Aharonov-Bohm effect nor in papers promoting the statistical interpretation of QT; most physicists discuss this nonlocal ’paradox’ from the point of view of ’the wave function of a single electron’. Further comments on this point may be found in section \[sec:discussion\].
The expectation value $\overline{F^{(e)}_{k}(x,\,p,\,t)}$ on the right hand side of (\[eq:NID2S3TBSF\]) is to be calculated using local, macroscopic forces whose functional form is still unknown. Both the potentials and these local forces represent an external influence, and it is reasonable to assume that the (nonlocal) potentials are the statistical representatives of the local forces on the r.h.s. of Eq. (\[eq:NID2S3TBSF\]). The latter have to be determined by the potentials but must be uniquely defined at each space-time point. The gauge-invariant fields $$\label{eq:DFS32WUSS}
E_{k}=-\frac{1}{c}\frac{\partial A_{k}}{\partial t}-\frac{\partial\Phi}{\partial x_{k}},\;\;\;\;\;\;
B_{k}= \epsilon_{kij}\frac{\partial A_{j}}{\partial x_{i}}
\mbox{,}$$ fulfill these requirements. As a consequence of the defining relations (\[eq:DFS32WUSS\]) they obey automatically the homogeneous Maxwell equations.
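Both statements may be checked symbolically; the following sketch verifies, for generic smooth potentials, that the fields (\[eq:DFS32WUSS\]) are unchanged under the transformations (\[eq:EEFFRRW22\]), (\[eq:EEFFRRW33\]) and that they satisfy $\mathrm{div}\,\vec{B}=0$ and $\mathrm{rot}\,\vec{E}+\frac{1}{c}\frac{\partial \vec{B}}{\partial t}=0$ identically.

```python
# Symbolic check (sympy): E = -(1/c) dA/dt - grad(Phi) and B = curl(A) are
# invariant under Phi -> Phi - (1/e) d(phi)/dt, A_k -> A_k + (c/e) d(phi)/dx_k,
# and they obey div B = 0 and curl E + (1/c) dB/dt = 0 identically.
import sympy as sp

t, x1, x2, x3, c, e = sp.symbols('t x1 x2 x3 c e', positive=True)
X = (x1, x2, x3)
Phi = sp.Function('Phi')(x1, x2, x3, t)
A = [sp.Function(f'A{k}')(x1, x2, x3, t) for k in range(3)]
phi = sp.Function('varphi')(x1, x2, x3, t)      # arbitrary gauge function

def fields(Phi, A):
    E = [-sp.diff(A[k], t) / c - sp.diff(Phi, X[k]) for k in range(3)]
    B = [sp.diff(A[(k + 2) % 3], X[(k + 1) % 3])
         - sp.diff(A[(k + 1) % 3], X[(k + 2) % 3]) for k in range(3)]
    return E, B

E, B = fields(Phi, A)
Phi2 = Phi - sp.diff(phi, t) / e
A2 = [A[k] + c * sp.diff(phi, X[k]) / e for k in range(3)]
E2, B2 = fields(Phi2, A2)

print([sp.simplify(E2[k] - E[k]) for k in range(3)])            # [0, 0, 0]
print([sp.simplify(B2[k] - B[k]) for k in range(3)])            # [0, 0, 0]
print(sp.simplify(sum(sp.diff(B[k], X[k]) for k in range(3))))  # div B = 0
curlE = [sp.diff(E[(k + 2) % 3], X[(k + 1) % 3])
         - sp.diff(E[(k + 1) % 3], X[(k + 2) % 3]) for k in range(3)]
print([sp.simplify(curlE[k] + sp.diff(B[k], t) / c) for k in range(3)])  # zeros
```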
In a next step we rewrite the second term on the l.h.s. of Eq. (\[eq:NID2S3TBSF\]). The commutator terms (\[eq:INC35HEOS\]) take the form $$\label{eq:DCT88SMBE}
\tilde{S}_{[0,k]} = -e \left(
\frac{1}{c}
\frac{\partial A_{k}}{\partial t}+
\frac{\partial\Phi}{\partial x_{k}} \right),\;\;\;\;
\tilde{S}_{[j,k]} = \frac{e}{c} \left(
\frac{\partial A_{j}}{\partial x_{k}}-
\frac{\partial A_{k}}{\partial x_{j}} \right)
\mbox{.}$$ As a consequence, they may be expressed in terms of the local fields (\[eq:DFS32WUSS\]), which have been introduced above for reasons of gauge-invariance. Using (\[eq:DCT88SMBE\]), (\[eq:DFS32WUSS\]) and the relation (\[eq:DOJUE8W22\]) for the momentum field, Eq. (\[eq:NID2S3TBSF\]) takes the form $$\label{eq:NIJUR4TBSF}
\begin{split}
-&\int_{-\infty}^{\infty} \mathrm{d}^{3} x
\frac{\partial \rho}{\partial x_{k}}
\left[
\frac{\partial \tilde{S}}{\partial t}
+\frac{1}{2m}
\sum_{j}\left( \frac{\partial \tilde{S}}{\partial x_{j}} \right)^{2}
+V
\right]\\
+&\int_{-\infty}^{\infty} \mathrm{d}^{3} x \rho
\left[\frac{e}{c} \epsilon_{kij} \tilde{v}_{i} B_{j} + eE_{k}
\right] = \overline{F^{(e)}_{k}(x,\,p,\,t)}
\end{split}
\mbox{,}$$ with a velocity field defined by $\tilde{v}_{i}=\tilde{p}_{i}/m$. Thus, the new terms on the l.h.s. of (\[eq:NIJUR4TBSF\]) - stemming from the multi-valuedness of $\tilde{S}$ - take the form of an expectation value (with $\mathcal{R}^{3}$ as sample space) of the Lorentz force field $$\label{eq:LOZU5FF}
\vec{F}_{L}(x,t)=e\vec{E}(x,t)+\frac{e}{c}\vec{\tilde{v}}(x,t)
\times \vec{B}(x,t)
\mbox{,}$$ if the particle velocity is identified with the velocity field $\vec{\tilde{v}}(x,t)$.
The above steps imply a relation between potentials and local fields. From the present statistical (nonlocal) point of view the potentials are more fundamental than the local fields. In contrast, considered from the point of view of macroscopic physics, the local fields are the physical quantities of primary importance and the potentials may (or may not) be introduced for mathematical convenience.
A constraint for forces in statistical theories {#sec:stat-constr-macr}
===============================================
Let us discuss now the nature of the macroscopic forces $F_{k}^{(e)}(x,\,p,\,t)$ entering the expectation value on the r.h.s. of Eq. (\[eq:NIJUR4TBSF\]). In our type I parent theory, classical mechanics, there are no constraints for the possible functional form of $F_{k}^{(e)}(x,\,p,\,t)$. However, this need not be true in the present statistical framework. As a matter of fact, the way the mechanical potential $V(x,t)$ entered the differential equation for $S$ (in the previous work I) indicates already that such constraints do actually exist. Let us recall that in I we tacitly restricted the class of forces to those derivable from a potential $V(x,t)$. If we eliminate this restriction and admit arbitrary forces, with components $F_{k}(x,t)$, we obtain instead of the above relation (\[eq:NIJUR4TBSF\]) the simpler relation \[Eq. (24) of I, generalized to three dimensions and arbitrary forces of the form $F_{k}(x,t)$\] $$\label{eq:DZJU623F}
-\int_{-\infty}^{\infty} \mathrm{d}^{3} x\,\frac{\partial\rho}{\partial x_{k}}\,
\left[
\frac{1}{2m}\sum_{j} \left(\frac{\partial S}{\partial x_{j}} \right)^{2}+
\frac{\partial S}{\partial t}
\right]=
\int_{-\infty}^{\infty} \mathrm{d}^{3} x\, \rho F_{k}(x,t)
\mbox{.}$$ This is a rather complicated integro-differential equation for our variables $\rho(x,t)$ and $S(x,t)$. We assume now, using mathematical simplicity as a guideline, that Eq. (\[eq:DZJU623F\]) can be written in the common form of a local differential equation. This assumption is of course not evident; in principle the laws of physics could be integro-differential equations or delay differential equations or take an even more complicated mathematical form. Nevertheless, this assumption seems rather weak considering the fact that all fundamental laws of physics take this ’simple’ form. Thus, we postulate that Eq. (\[eq:DZJU623F\]) is equivalent to a differential equation $$\label{eq:DZ3MSV93F}
\frac{1}{2m}\sum_{j} \left(\frac{\partial S}{\partial x_{j}} \right)^{2}+
\frac{\partial S}{\partial t} + T = 0
\mbox{,}$$ where the unknown term $T$ describes the influence of the force $F_{k}$ but may also contain other contributions. Let us write $$\label{eq:JUE23PO2W}
T=-L_{0}+V
\mbox{,}$$ where $L_{0}$ does not depend on $F_{k}$, while $V$ depends on it and vanishes for $F_{k} \to 0$. Inserting (\[eq:DZ3MSV93F\]) and (\[eq:JUE23PO2W\]) in (\[eq:DZJU623F\]) yields $$\label{eq:DTWQ84O7F}
\int \mathrm{d}^{3}x \,\frac{\partial\rho}{\partial x_{k}}\,
\left( -L_{0}+V \right)= \int \mathrm{d}^{3}x\, \rho F_{k}(x,t)
\mbox{.}$$ For $F_{k} \to 0$ Eq. (\[eq:DTWQ84O7F\]) leads to the relation $$\label{eq:DQWEI977F}
\int \mathrm{d}^{3}x \,\frac{\partial\rho}{\partial x_{k}}\,L_{0}= 0
\mbox{,}$$ which remains true for finite forces because $L_{0}$ does not depend on $F_{k}$. Finally, performing a partial integration, we see that a relation $$\label{eq:ESZURVF5ZW}
F_{k}=-\frac{\partial V}{\partial x_{k}}+s_{k},\; \; \;
\int_{-\infty}^{\infty} \mathrm{d}^{3}x\,\rho s_{k}=0
\mbox{,}$$ exists between $F_{k}$ and $V$, with a vanishing expectation value of the (statistically irrelevant) functions $s_{k}$. This example shows that the restriction to gradient fields, made above and in I, is actually not necessary. We may *admit* force fields which are arbitrary functions of $x$ and $t$; the statistical conditions (which play now the role of a ’statistical constraint’) eliminate automatically all forces that cannot be written after statistical averaging as gradient fields.
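For completeness, the partial integration behind (\[eq:ESZURVF5ZW\]) may be spelled out; assuming, as usual, that $\rho$ vanishes at infinity, we have $$\int_{-\infty}^{\infty} \mathrm{d}^{3}x\,\frac{\partial\rho}{\partial x_{k}}\,V=-\int_{-\infty}^{\infty} \mathrm{d}^{3}x\,\rho\,\frac{\partial V}{\partial x_{k}}
\mbox{,}$$ so that (\[eq:DTWQ84O7F\]) combined with (\[eq:DQWEI977F\]) implies $\int \mathrm{d}^{3}x\,\rho\left(F_{k}+\partial V/\partial x_{k}\right)=0$; this is (\[eq:ESZURVF5ZW\]) with $s_{k}=F_{k}+\partial V/\partial x_{k}$.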
This is very interesting and indicates the possibility that the present statistical assumptions leading to Schrödinger’s equation may also be responsible, at least partly, for the structure of the real existing (gauge) interactions of nature.
Does this statistical constraint also work in the present $p-$dependent case? We assume that the force in (\[eq:NIJUR4TBSF\]) is a standard random variable with the configuration space as sample space (see the discussion in section 4 of I) and that the variable $p$ in $F_{k}^{(e)}(x,\,p,\,t)$ may consequently be replaced by the field $\tilde{p}(x,t)$ \[see (\[eq:DOJUE8W22\])\]. Then, the expectation value on the r.h.s. of (\[eq:NIJUR4TBSF\]) takes the form $$\label{eq:DEHDF5DIMW}
\overline{F^{(e)}_{k}(x,\,p,\,t)}=
\int_{-\infty}^{\infty} \mathrm{d}^{3} x \rho(x,t)
H_{k}(x,\frac{\partial \tilde{S}(x,t)}{\partial x},t)
\mbox{.}$$ The second term on the l.h.s. of (\[eq:NIJUR4TBSF\]) has the *same* form. Therefore, the latter may be eliminated by writing $$\label{eq:WWH45HDSTL}
H_{k}(x,\frac{\partial \tilde{S}}{\partial x},t )=
\frac{e}{c} \epsilon_{kij}
\frac{1}{m} \frac{\partial \tilde{S}}{\partial x_{i}}
B_{j} + eE_{k}
+h_{k}(x,\frac{\partial \tilde{S}}{\partial x},t )
\mbox{,}$$ with $h_{k}(x,p,t)$ as our new unknown functions. They obey the simpler relations $$\label{eq:NISSD4BSF}
-\int_{-\infty}^{\infty} \mathrm{d}^{3} x
\frac{\partial \rho}{\partial x_{k}}
\left[
\frac{\partial \tilde{S}}{\partial t}
+\frac{1}{2m}
\sum_{j}\left( \frac{\partial \tilde{S}}{\partial x_{j}} \right)^{2}
+V
\right] =
\int_{-\infty}^{\infty} \mathrm{d}^{3} x \rho
h_{k}(x,\frac{\partial \tilde{S}}{\partial x},t)
\mbox{.}$$ On a first look this condition for the allowed forces looks similar to the $p-$independent case \[see (\[eq:DZJU623F\])\]. But the dependence of $h_{k}$ on $x,t$ cannot be considered as ’given’ (externally controlled), as in the $p-$independent case, because it contains now the unknown $x,t$-dependence of the derivatives of $\tilde{S}$. We may nevertheless try to incorporate the r.h.s by adding a term $\tilde{T}$ to the bracket which depends on the derivatives of the multivalued quantity $\tilde{S}$. This leads to the condition $$\label{eq:ES2ZODF222}
h_{k}(x,\frac{\partial \tilde{S}}{\partial x},t)=
-\frac{\partial \tilde{T}(x,\frac{\partial \tilde{S}}{\partial x},t)}
{\partial x_{k}}+s_{k},\; \; \;
\int_{-\infty}^{\infty} \mathrm{d}^{3}x\,\rho s_{k}=0
\mbox{.}$$ But this relation cannot be fulfilled for nontrivial $h_{k},\,\tilde{T}$ because the derivatives of $\tilde{S}$ cannot be subject to further constraints beyond those given by the differential equation; on top of that the derivatives with regard to $x$ on the r.h.s. create higher order derivatives of $\tilde{S}$ which are not present on the l.h.s. of Eq. (\[eq:ES2ZODF222\]). The only possibility to fulfill this relation is for constant $\frac{\partial \tilde{S}}{\partial x}$, a special case which has in fact already been taken into account by adding the mechanical potential $V$. We conclude that the statistical constraint leads to $h_{k}=\tilde{T}=0$ and that the statistical condition (\[eq:NISSD4BSF\]) takes the form $$\label{eq:NIFF9OTFBSF}
-\int \mathrm{d}^{3} x
\frac{\partial \rho}{\partial x_{k}}
\left[
\frac{\partial \tilde{S}}{\partial t}
+\frac{1}{2m}
\sum_{j}\left( \frac{\partial \tilde{S}}{\partial x_{j}} \right)^{2}
+V
\right] =0
\mbox{.}$$
Thus, only a mechanical potential and the four electrodynamic potentials are compatible with the statistical constraint and will consequently - assuming that the present statistical approach reflects a fundamental principle of nature - be realized in nature. As is well known all existing interactions follow (sometimes in a generalized form) the gauge coupling scheme derived above. The statistical conditions imply not only Schrödinger’s equation but also the form of the (gauge) coupling to external influences and the form of the corresponding local force, the Lorentz force, $$\label{eq:LOFFUNW}
\vec{F}_{L}=e\vec{E}+\frac{e}{c}\vec{v} \times \vec{B}
\mbox{,}$$ if the particle velocity $\vec{v}$ is identified with the velocity field $\vec{\tilde{v}}(x,t)$.
In the present derivation the usual order of proceeding is just inverted. In the conventional deterministic treatment the form of the local forces (Lorentz force), as taken from experiment, is used as a starting point. The potentials are introduced afterwards, in the course of a transition to a different formal framework (Lagrange formalism). In the present approach the fundamental assumptions are the statistical conditions. Then, taking into account an existing mathematical freedom (multi-valuedness of a variable) leads to the introduction of potentials. From these, the shape of the macroscopic (Lorentz) force can be derived, using the validity of the statistical conditions as a constraint.
Fisher information as the hallmark of quantum theory {#sec:fisher-information}
====================================================
The remaining nontrivial task is the derivation of a local differential equation for $S$ and $\rho$ from the integral equation (\[eq:NIFF9OTFBSF\]). As our essential constraint we will use, besides general principles of simplicity (like homogeneity and isotropy of space), the principle of maximal disorder, as realized by the requirement of minimal Fisher information. Using the abbreviation $$\label{eq:DG4DGFL3NEU}
\bar{L}(x,t)=\frac{\partial \tilde{S}}{\partial t}+\frac{1}{2m}\left( \frac{\partial \tilde{S}(x,t)}{\partial x}\right)^{2}
+V(x,t)
\mbox{,}$$ the general solution of (\[eq:NIFF9OTFBSF\]) may be written in the form $$\label{eq:DAL2L9IWF}
\frac{\partial \rho}{\partial x_{k}}
\bar{L}(x,t) = G_{k}(x,t)
\mbox{,}$$ where the three functions $G_{k}(x,t)$ have to vanish upon integration over $\mathcal{R}^{3}$ and are otherwise arbitrary. If we restrict ourselves to an isotropic law, we may write $$\label{eq:WR23OTAIL}
G_{k}(x,t)=\frac{\partial \rho}{\partial x_{k}}
L_{0}
\mbox{.}$$ Then, our problem is to find a function $L_{0}$ which fulfills the differential equation $$\label{eq:ZUW78ERHD}
\bar{L}(x,t)-L_{0}=0
\mbox{,}$$ and condition (\[eq:DQWEI977F\]). The method used in I for a one-dimensional situation, to determine $L_{0}$ from the requirement of minimal Fisher information, remains essentially unchanged in the present three-dimensional case. The reader is referred to the detailed explanations reported in I.
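For orientation we recall the functional in question: up to an irrelevant constant factor, the Fisher information of the probability density $\rho$ reads $$I_{F}[\rho]=\int \mathrm{d}^{3}x\,\frac{1}{\rho}\sum_{k}\left(\frac{\partial \rho}{\partial x_{k}}\right)^{2}
\mbox{,}$$ and ’maximal disorder’ means the smallest value of $I_{F}$ compatible with the constraint (\[eq:ZUW78ERHD\]). We note already here that the solution $L_{0}$ obtained below \[see Eq. (\[eq:DES3SL5HLL\])\] yields, after a partial integration, $\int \mathrm{d}^{3}x\,\rho L_{0}=-\frac{B_{0}}{2}I_{F}[\rho]$, i.e. the average of $L_{0}$ is proportional to the Fisher functional.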
In I it has been shown that this principle of maximal disorder leads to an anomalous variational problem and to the following conditions for our unknown function $L_{0}$: $$\begin{gathered}
\bar{L}(x,t)-
L_{0}\left(\rho,\frac{\partial \rho}{\partial x},
\frac{\partial^{2} \rho}{\partial x \partial x} \right)
=0 \label{eq:EEFFRRWWW2} \\
\delta \int \mathrm{d}^{3} x\rho
\left[ \bar{L}(x,t)-
L_{0} \left( \rho,\frac{\partial \rho}{\partial x},
\frac{\partial^{2} \rho}{\partial x \partial x} \right)
\right] = 0
\label{eq:EEFFRRWXYZ2}
\mbox{,}\end{gathered}$$ where $L_{0}$ contains only derivatives of $\rho$ up to second order and does not explicitely depend on $x,\,t$. If Eq. (\[eq:EEFFRRWWW2\]) is taken into account, the Euler-Lagrange equations of the variational problem (\[eq:EEFFRRWXYZ2\]) lead to the following differential equation $$\label{eq:NK17DEDGLV}
-\frac{\partial}{\partial x_{k}} \frac{\partial}{\partial x_{i}}
\frac{\partial \beta}
{\partial\left(
\frac{\partial^{2} \rho}{\partial x_{k} \partial x_{i}}
\right)}
+\frac{\partial}{\partial x_{k}}
\frac{\partial \beta}
{\partial\left(
\frac{\partial \rho}{\partial x_{k} } \right)}
-\frac{\partial \beta}{\partial \rho}
+\frac{\beta}{\rho}=0
\mbox{}$$ for the variable $\beta=\rho L_{0}$. Eq. (\[eq:NK17DEDGLV\]) is a straightforward generalization of the corresponding one-dimensional relation \[equation (68) of I\] to three spatial dimensions.
Besides (\[eq:NK17DEDGLV\]) a further (consistency) condition exists, which leads to a simplification of the problem. The function $L_{0}$ may depend on second order derivatives of $\rho$ but this dependence must be of a special form not leading to any terms in the Euler-Lagrange equations \[according to (\[eq:EEFFRRWWW2\]) our final differential equation for $S$ and $\rho$ must not contain higher than second order derivatives of $\rho$\]. Consequently, the first term in Eq. (\[eq:NK17DEDGLV\]) (as well as the sum of the remaining terms) has to vanish separately and (\[eq:NK17DEDGLV\]) can be replaced by the two equations $$\begin{gathered}
\frac{\partial}{\partial x_{k}} \frac{\partial}{\partial x_{i}}
\frac{\partial \beta}
{\partial\left(
\frac{\partial^{2} \rho}{\partial x_{k} \partial x_{i}}
\right)}
=0 \label{eq:HHWQHW11} \\
\frac{\partial}{\partial x_{k}}
\frac{\partial \beta}
{\partial\left(
\frac{\partial \rho}{\partial x_{k} } \right)}
-\frac{\partial \beta}{\partial \rho}
+\frac{\beta}{\rho}=0
\label{eq:EHHWQRW22}
\mbox{.}\end{gathered}$$ In I a new derivation of Fisher’s functional has been obtained, using the general solution of the one-dimensional version of (\[eq:NK17DEDGLV\]), as well as the so-called composition law. In the present three-dimensional situation we set ourselves a less ambitious aim. We know that Fisher’s functional describes the maximal amount of disorder. If we are able to find a solution of (\[eq:HHWQHW11\]), (\[eq:EHHWQRW22\]) that agrees with this functional (besides ’null-terms’ giving no contribution to the Euler-Lagrange equations) then we will accept it as our correct solution. It is easy to see that this solution is given by $$\label{eq:DES3SL5HLL}
L_{0}=B_{0}\left[
-\frac{1}{2\rho^{2}}\sum_j \left(\frac{\partial \rho}
{\partial x_{j}} \right)^{2}
+\frac{1}{\rho}\sum_j \frac{\partial^{2} \rho}
{\partial x_{j}^{2}}
\right]
\mbox{,}$$ where $B_{0}$ is an arbitrary constant. Eq. (\[eq:DES3SL5HLL\]) presents again the three-dimensional (and isotropic) generalization of the one-dimensional result obtained in I. By means of the identity $$\label{eq:ZU23A98SI}
\frac{\partial}{\partial x_{i}}
\frac{\partial \sqrt{\rho} }{\partial x_{i}}
\frac{\partial \sqrt{\rho} }{\partial x_{k}}=
\frac{\partial \sqrt{\rho} }{\partial x_{k}}
\frac{\partial}{\partial x_{i}}
\frac{\partial \sqrt{\rho} }{\partial x_{i}}+
\frac{1}{2}
\frac{\partial}{\partial x_{k}}
\frac{\partial \sqrt{\rho} }{\partial x_{i}}
\frac{\partial \sqrt{\rho} }{\partial x_{i}}
\mbox{,}$$ it is easily verified that the solution (\[eq:DES3SL5HLL\]) obeys also condition (\[eq:DQWEI977F\]). Using the decomposition (\[eq:DOS4MBWIF\]) and renaming $B$ according to $B=\hbar^{2}/4m$, the continuity equation (\[eq:CONT3DSCHL\]) and the second differential equation (\[eq:EEFFRRWWW2\]) respectively, take the form $$\begin{gathered}
\frac{\partial \rho}{\partial t}+\frac{\partial}{\partial x_{k}}
\frac{\rho}{m} \left(
\frac{\partial S}{\partial x_{k}}-\frac{e}{c} A_{k}
\right)=0
\label{eq:CONH3TMF}
\mbox{,} \\
\frac{\partial S}{\partial t}+e\phi + \frac{1}{2m}
\sum_{k}
\left(
\frac{\partial S}{\partial x_{k}}-\frac{e}{c} A_{k}
\right)^{2} +V -
\frac{\hbar^2}{2m}\frac{\triangle\sqrt{\rho}}{\sqrt{\rho}}
=0
\label{eq:QHJH34MF}
\mbox{.}\end{gathered}$$ The function $S$ occurring in (\[eq:CONH3TMF\]), (\[eq:QHJH34MF\]) is single-valued but not unique (not gauge-invariant). If now the complex-valued variable $$\label{eq:HDIS3JWUI}
\psi = \sqrt{\rho}\mathrm{e}^{\imath \frac{S}{\hbar}}
\mbox{,}$$ is introduced, the two equations (\[eq:CONH3TMF\]), (\[eq:QHJH34MF\]) may be written in compact form as real and imaginary parts of the linear differential equation $$\label{eq:DGMAENE}
\big(\frac{\hbar}{\imath} \frac{\partial}{\partial t}
+e\phi \big)\psi
+ \frac{1}{2 m }
\big(\frac{\hbar}{\imath} \frac{\partial}{\partial \vec{x}} -
\frac{e}{c} \vec{A} \big)^{2}\psi
+V\psi=0
\mbox{,}$$ which completes our derivation of Schrödinger’s equation in the presence of a gauge field.
Eq. (\[eq:DGMAENE\]) is in manifest gauge-invariant form. The gauge-invariant derivatives of $\tilde{S}$ with respect to $t$ and $\vec{x}$ correspond to the two brackets in (\[eq:DGMAENE\]). In particular, the canonical momentum $\partial S/ \partial \vec{x}$ corresponds to the momentum operator proportional to $\partial / \partial \vec{x}$. Very frequently, Eq. (\[eq:DGMAENE\]) is written in the form $$\label{eq:SCH96RG2IFU}
-\frac{\hbar}{\imath} \frac{\partial}{\partial t} \psi=
H \psi,
\mbox{,}$$ with the Hamilton operator $$\label{eq:HA3UZM9OPR}
H =
\frac{1}{2 m }
\big(\frac{\hbar}{\imath} \frac{\partial}{\partial \vec{x}} -
\frac{e}{c} \vec{A} \big)^{2}
+V +e\phi
\mbox{, }$$
Our final result, Eqs. , , agrees with the result of the conventional quantization procedure. In its simplest form, the latter starts from the classical relation $H(x,p)=E$, where $H(x,p)$ is the Hamiltonian of a classical particle in a conservative force field, and $E$ is its energy. To perform a “canonical quantization” means to replace $p$ and $E$ by differential expressions according to and let then act both sides of the equation $H(x,p)=E$ on states $\psi$ of a function space. The ’black magic’ involved in this process has been eliminated, or at least dramatically reduced, in the present approach, where Eqs. , have been derived from a set of assumptions which can all be interpreted in physical terms.
The Hamiltonian depends on the potentials $\Phi$ and $\vec{A}$ and is consequently a non-unique (not gauge-invariant) mathematical object. The same is true for the time-development operator $U(H)$ which is an operator function of $H$, see e.g. [@kobe:time-evolution]. This non-uniqueness is a problem if $U(H)$ is interpreted as a quantity ruling the time-evolution of a single particle. It is no problem from the point of view of the SI where $H$ and $U(H)$ are primarily convenient mathematical objects which occur in a natural way if the time-dependence of statistically relevant (uniquely defined) quantities, like expectation values and transition probabilities, is to be calculated.
Spin as a statistical degree of freedom {#sec:spin}
=======================================
Spin is generally believed to be a phenomenon of quantum-theoretic origin. For a long period of time, following Dirac’s derivation of his relativistic equation, it was also believed to be essentially of relativistic origin. This has changed since the work of [@schiller:spinning], [@levy-leblond:nonrelativistic], [@arunsalam:hamiltonians], [@gould:intrinsic], [@reginatto:pauli] and others, who showed that spin may be derived entirely in the framework of non-relativistic QT without using any relativistic concepts. Thus, a new derivation of non-relativistic QT like the present one should also include a derivation of the phenomenon of spin. This will be done in this and the next two sections.
A simple idea to extend the present theory is to assume that sometimes - under certain external conditions to be identified later - a situation occurs where the behavior of our statistical ensemble of particles cannot longer be described by $\rho,\,S$ alone but requires, e.g., the double number of field variables; let us denote these by $\rho_1,\,S_1,\,\rho_2,\,S_2$ (we restrict ourselves here to spin one-half). The relations defining this generalized theory should be formulated in such a way that the previous relations are obtained in the appropriate limits. One could say that we undertake an attempt to introduce a new (discrete) degree of freedom for the ensemble. If we are able to derive a non-trivial set of differential equations - with coupling between $\rho_1,\,S_1$ and $ \rho_2,\,S_2$ - then such a degree of freedom could exist in nature.
Using these guidelines, the basic equations of the generalized theory can be easily formulated. The probability density and probability current take the form $\rho = \rho_1 + \rho_2$ and $\vec{j}=\vec{j}_1+\vec{j}_2$, with $\vec{j}_i\;$($i=1,2$) defined in terms of $\rho_i,\,S_i$ exactly as before (see section \[sec:calcwpt\]). Then, the continuity equation is given by $$\label{eq:contspin}
\frac{\partial (\rho_1+\rho_2)}{\partial t}+
\frac{\partial}{\partial x_l}
\left(
\frac{\rho_1}{m}\frac{\partial \tilde{S}_1}{\partial x_l}
+\frac{\rho_2}{m}\frac{\partial \tilde{S}_2}{\partial x_l}
\right)=0
\mbox{,}$$ where we took the possibility of multi-valuedness of the “phases“ already into account, as indicated by the notation $\tilde{S}_i$. The statistical conditions are given by the two relations $$\begin{gathered}
\frac{\mathrm{d}}{\mathrm{d}t} \overline{x_k} = \frac{\overline{p_k}}{m}
\label{eq:FIRSTAETSPIN}\\
\frac{\mathrm{d}}{\mathrm{d}t} \overline{p_k} =
\overline{F_k^{(T)}(x,\,p,\,t)}
\label{eq:SECONAETSPIN}
\mbox{,}\end{gathered}$$ which are similar to the relations used previously (in section \[sec:calcwpt\] and in I), and by an additional equation $$\label{eq:ANA35EQUT}
\frac{\mathrm{d}}{\mathrm{d}t} \overline{s_k} =
\overline{F_k^{(R)}(x,\,p,\,t)}
\mbox{,}$$ which is required as a consequence of our larger number of dynamic variables. Eq. (\[eq:ANA35EQUT\]) is best explained later; it is written down here for completeness. The forces $F_k^{(T)}(x,\,p,\,t)$ and $F_k^{(R)}(x,\,p,\,t)$ on the r.h.s. of (\[eq:SECONAETSPIN\]) and (\[eq:ANA35EQUT\]) are again subject to the “statistical constraint“, which has been defined in section \[sec:gaugecoupling\]. The expectation values are defined as in (\[eq:ERWAETXKSCHL\])-(\[eq:ERWAETFKXPSCHL\]).
Performing mathematical manipulations similar to the ones reported in section \[sec:calcwpt\], the l.h.s. of Eq. (\[eq:SECONAETSPIN\]) takes the form $$\label{eq:AHK1LUA9IB}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t} \overline{p_k} &=
\int \mathrm{d}^{3} x
\Big[
\frac{\partial \rho_1}{\partial t}
\frac{\partial \tilde{S}_1}{\partial x_{k}}
+\frac{\partial \rho_2}{\partial t}
\frac{\partial \tilde{S}_2}{\partial x_{k}}\\
& - \frac{\partial \rho_1}{\partial x_{k}}
\frac{\partial \tilde{S}_1}{\partial t}
- \frac{\partial \rho_2}{\partial x_{k}}
\frac{\partial \tilde{S}_2}{\partial t}
+\rho_1 \tilde{S}_{[0,k]}^{(1)}
+\rho_2 \tilde{S}_{[0,k]}^{(2)}
\Big] \mbox{,}
\end{split}$$ where the quantities $ \tilde{S}_{[j,k]}^{(i)},\,i=1,2$ are defined as above \[see Eq. (\[eq:INC35HEOS\])\] but with $\tilde{S}$ replaced by $\tilde{S}_i$.
Let us write now $\tilde{S}$ in analogy to section \[sec:calcwpt\] in the form $\tilde{S}_i=S_i+\tilde{N}_i$, as a sum of a single-valued part $S_i$ and a multi-valued part $\tilde{N}_i$. If $\tilde{N}_1$ and $\tilde{N}_2$ are to represent an external influence, they must be identical and a single multi-valued part $\tilde{N}=\tilde{N}_1=\tilde{N}_2$ may be used instead. The derivatives of $\tilde{N}$ with respect to $t$ and $x_k$ must be single-valued and we may write $$\label{eq:DHI9DD8HIN}
\frac{\partial \tilde{S}_i}{\partial t}=
\frac{\partial S_i}{\partial t}+e \Phi,\;\;\;\;\;
\frac{\partial \tilde{S}_i}{\partial x_{k}}=
\frac{\partial S_i}{\partial x_{k}}-\frac{e}{c}A_{k}
\mbox{,}$$ using the same familiar electrodynamic notation as in section \[sec:calcwpt\]. In this way we arrive at eight single-valued functions to describe the external conditions and the dynamical state of our system, namely $\Phi,\,A_{k}$ and $\rho_i,\,S_i$.
In a next step we replace $\rho_i,\,S_i$ by new dynamic variables $\rho,\,S,\,\vartheta,\,\varphi$ defined by $$\label{eq:DS12TRF8FU}
\begin{split}
&\rho_1=\rho \cos^{2}\frac{\vartheta}{2},
\hspace{1.6cm}S_1=S+\frac{\hbar}{2}\varphi, \\
&\rho_2=\rho \sin^{2}\frac{\vartheta}{2},
\hspace{1.6cm}S_2=S-\frac{\hbar}{2}\varphi.
\end{split}$$ A transformation similar to Eq. (\[eq:DS12TRF8FU\]) has been introduced by [@takabayasi:vector] in his reformulation of Pauli’s equation. Obviously, the variables $S,\,\rho$ describe ’center of mass’ properties (which are common to both states $1$ and $2$) while $\vartheta,\,\varphi$ describe relative (internal) properties of the system.
The dynamical variables $S,\,\rho$ and $\vartheta,\,\varphi$ are not decoupled from each other. It turns out (see below) that the influence of $\vartheta,\,\varphi$ on $S,\,\rho$ can be described in a (formally) similar way as the influence of an external electromagnetic field if a ’vector potential’ $\vec{A}^{(s)}$ and a ’scalar potential’ $\phi^{(s)}$, defined by $$\label{eq:HIDV32PESSF}
A_{l}^{(s)}= -\frac{\hbar c}{2e} \cos\vartheta \frac{\partial
\varphi}{\partial x_l},\hspace{1cm}
\phi^{(s)} = \frac{\hbar}{2e} \cos\vartheta \frac{\partial
\varphi}{\partial t}
\mbox{,}$$ are introduced. Denoting these fields as ’potentials’, we should bear in mind that they are not externally controlled but defined in terms of the internal dynamical variables. Using the abbreviations $$\label{eq:ZUWA94BRR}
\hat{A_l}=A_l+A_{l}^{(s)},\hspace{1cm}
\hat{\phi}=\phi+\phi^{(s)}
\mbox{,}$$ the second statistical condition (\[eq:SECONAETSPIN\]) can be written in the following compact form $$\label{eq:H7TRJS4DZ}
\begin{split}
&-\int \mathrm{d}^{3} x
\frac{\partial \rho}{\partial x_{l}}
\bigg[
\bigg( \frac{\partial S}{\partial t}+e \hat{\phi} \bigg)
+\frac{1}{2m}
\sum_{j}
\bigg(
\frac{\partial S}{\partial x_{j}} - \frac{e}{c} \hat{A_j}
\bigg)^{2}
\bigg]\\
&+\int \mathrm{d}^{3} x
\rho
\bigg[
-\frac{e}{c} v_j
\bigg(
\frac{\partial \hat{A}_l}{\partial x_{j}}
-\frac{\partial \hat{A}_j}{\partial x_{l}}
\bigg)
-\frac{e}{c} \frac{\partial \hat{A}_l}{\partial t}
-e \frac{\partial \hat{\phi}}{\partial x_{l}}
\bigg] \\
&= \overline{F^{(T)}_{l}(x,\,p,\,t)}
=\int \mathrm{d}^{3} x \rho F^{(T)}_{l}(x,\,p,\,t)
\mbox{,}
\end{split}$$ which shows a formal similarity to the spinless case \[see (\[eq:NID2S3TBSF\]) and (\[eq:DCT88SMBE\])\]. The components of the velocity field in (\[eq:H7TRJS4DZ\]) are given by $$\label{eq:DSS3DKDXVF}
v_j=\frac{1}{m} \bigg(
\frac{\partial S}{\partial x_{j}} - \frac{e}{c} \hat{A_j}
\bigg)
\mbox{.}$$ If now fields $E_l,\,B_l$ and $E_l^{(s)},\,B_l^{(s)}$ are introduced by relations analogous to (\[eq:DFS32WUSS\]), the second line of (\[eq:H7TRJS4DZ\]) may be written in the form $$\label{eq:DLH3KJ7GNG}
\int \mathrm{d}^{3} x \rho
\bigg[
\big(
e\vec{E}+\frac{e}{c}\vec{v} \times \vec{B}
\big)_l
+
\big(
e\vec{E^{(s)}}+\frac{e}{c}\vec{v} \times \vec{B^{(s)}}
\big)_l
\bigg]
\mbox{,}$$ which shows that both types of fields, the external fields as well as the internal fields due to $\vartheta,\,\varphi$, enter the theory in the same way, namely in the form of a Lorentz force.
The first, externally controlled Lorentz force in (\[eq:DLH3KJ7GNG\]) may be eliminated in exactly the same manner as in section \[sec:gaugecoupling\] by writing $$\label{eq:DLI3LDF9SNG}
\overline{F^{(T)}_{l}(x,\,p,\,t)}=
\int \mathrm{d}^{3} x \rho
\big(
e\vec{E}+\frac{e}{c}\vec{v} \times \vec{B}
\big)_l
+\int \mathrm{d}^{3} x \rho F^{(I)}_{l}(x,\,p,\,t)
\mbox{.}$$ This means that one of the forces acting on the system as a whole is again given by a Lorentz force; there may be other nontrivial forces $F^{(I)}$ which are still to be determined. The second ’internal’ Lorentz force in (\[eq:DLH3KJ7GNG\]) can, of course, not be eliminated in this way. In order to proceed, the third statistical condition (\[eq:ANA35EQUT\]) must be implemented. To do that it is useful to rewrite Eq. (\[eq:H7TRJS4DZ\]) in the form $$\label{eq:H23DASU8Z}
\begin{split}
&-\int \mathrm{d}^{3} x
\frac{\partial \rho}{\partial x_{l}}
\bigg[
\bigg( \frac{\partial S}{\partial t}+e \hat{\phi} \bigg)
+\frac{1}{2m}
\sum_{j}
\bigg(
\frac{\partial S}{\partial x_{j}} - \frac{e}{c} \hat{A_j}
\bigg)^{2}
\bigg]\\
&+\int \mathrm{d}^{3} x
\frac{\hbar}{2} \rho \sin \vartheta
\bigg(
\frac{\partial \vartheta}{\partial x_{l}}
\bigg[
\frac{\partial \varphi}{\partial t}
+ v_j\frac{\partial \varphi}{\partial x_{j}}
\bigg]
-\frac{\partial \varphi}{\partial x_{l}}
\bigg[
\frac{\partial \vartheta}{\partial t}
+ v_j\frac{\partial \vartheta}{\partial x_{j}}
\bigg]
\bigg) \\
&= \overline{F^{(I)}_{l}(x,\,p,\,t)}
=\int \mathrm{d}^{3} x \rho F^{(I)}_{l}(x,\,p,\,t)
\mbox{,}
\end{split}$$ using (\[eq:DLH3KJ7GNG\]), (\[eq:DLI3LDF9SNG\]) and the definition (\[eq:HIDV32PESSF\]) of the fields $A_{l}^{(s)}$ and $\phi^{(s)}$.
We interpret the fields $\varphi$ and $\vartheta$ as angles (with $\varphi$ measured from the $y-$axis of our coordinate system) determining the direction of a vector $$\label{eq:UF27VWN8B}
\vec{s}=\frac{\hbar}{2}
\big(
\sin \vartheta \sin \varphi \, \vec{e}_x
+ \sin \vartheta \cos \varphi \, \vec{e}_y
+ \cos \vartheta \, \vec{e}_z
\big)
\mbox{,}$$ of constant length $\frac{\hbar}{2}$. As a consequence, $\dot{\vec{s}}$ and $\vec{s}$ are perpendicular to each other and the classical force $\vec{F}^{(R)}$ in Eq. (\[eq:ANA35EQUT\]) should be of the form $\vec{D}\times \vec{s}$, where $\vec{D}$ is an unknown field. In contrast to the ’external force’, we are unable to determine the complete form of this ’internal’ force from the statistical constraint \[an alternative treatment will be reported in section \[sec:spin-as-gauge\]\] and set $$\label{eq:NG89NWG2EN}
\vec{F}^{(R)}=
-\frac{e}{mc}\vec{B} \times \vec{s}
\mbox{,}$$ where $\vec{B}$ is the external ’magnetic field’, as defined by Eq. (\[eq:DFS32WUSS\]), and the factor in front of $\vec{B}$ has been chosen to yield the correct $g-$factor of the electron.
The differential equation $$\label{eq:BBHA87EWUT}
\frac{\mathrm{d}}{\mathrm{d}t} \vec{s} =
-\frac{e}{mc}\vec{B} \times \vec{s}
\mbox{}$$ for particle variables $ \vartheta(t),\;\varphi(t)$ describes the rotational state of a classical magnetic dipole in a magnetic field, see [@schiller:spinning]. Recall that we do *not* require that is fulfilled in the present theory. The present variables are the fields $ \vartheta(x,t),\;\varphi(x,t)$ which may be thought of as describing a kind of ’rotational state’ of the statistical ensemble as a whole, and have to fulfill the ’averaged version’ (\[eq:ANA35EQUT\]) of (\[eq:BBHA87EWUT\]).
Performing steps similar to the ones described in I (see also section \[sec:calcwpt\]), the third statistical condition (\[eq:ANA35EQUT\]) implies the following differential relations, $$\begin{aligned}
\dot{\varphi}+v_j\frac{\partial \varphi}{\partial x_j} & = &
\frac{e}{mc} \frac{1}{\sin \vartheta}
\left(
B_z \sin \vartheta - B_y \cos \vartheta \cos\varphi
- B_z \cos \vartheta \sin\varphi
\right) \nonumber \\
& + & \frac{\cos \varphi }{\sin \vartheta} G_1-
\frac{\sin \varphi }{\sin \vartheta} G_2, \label{eq:Q29D3SJUE} \\
\dot{\vartheta}+v_j\frac{\partial \vartheta}{\partial x_j} & = &
\frac{e}{mc} \left( B_x \cos \varphi - B_y \sin\varphi \right)
-\frac{G_3}{\sin \vartheta}
\label{eq:Q29D3SJ22}
\mbox{,}\end{aligned}$$ for the dynamic variables $\vartheta$ and $\varphi$. These equations contain three fields $G_i(x,t),\;i=1,2,3$ which have to obey the conditions $$\label{eq:DR4ZGC98FG}
\int \mathrm{d}^{3} x \rho \, G_i =0,\hspace{0.5cm}
\vec{G}\vec{s}=0
\mbox{,}$$ and are otherwise arbitrary. The ’total derivatives’ of $\varphi$ and $\vartheta$ in (\[eq:H23DASU8Z\]) may now be eliminated with the help of (\[eq:Q29D3SJUE\]),(\[eq:Q29D3SJ22\]) and the second line of Eq. (\[eq:H23DASU8Z\]) takes the form $$\label{eq:AT2UZ5WE8Q}
\begin{split}
&\int \mathrm{d}^{3} x \frac{\partial \rho}{\partial x_{l}}
\frac{e}{mc}s_jB_j+
\int \mathrm{d}^{3} x \rho \,\frac{e}{mc}
s_j\frac{\partial}{\partial x_l} B_j \\
+&\int \mathrm{d}^{3} x \rho \,\frac{\hbar}{2}
\big(
\cos\varphi \frac{\partial \vartheta}{\partial x_l} G_1
- \sin\varphi \frac{\partial \vartheta}{\partial x_l} G_2
+ \frac{\partial \varphi}{\partial x_l} G_3
\big)
\mbox{.}
\end{split}$$
The second term in (\[eq:AT2UZ5WE8Q\]) presents an external macroscopic force. It may be eliminated from (\[eq:H23DASU8Z\]) by writing $$\label{eq:DSK1NW4DNG}
\overline{F^{(I)}_{l}(x,\,p,\,t)}=
\int \mathrm{d}^{3} x \rho
\big(-\mu_j \frac{\partial}{\partial x_l} B_j)
+\overline{F^{(V)}_{l}(x,\,p,\,t)}
\mbox{,}$$ where the magnetic moment of the electron $\mu_i=-(e/mc)s_i$ has been introduced. The first term on the r.h.s. of (\[eq:DSK1NW4DNG\]) is the expectation value of the well-known electrodynamical force exerted by an inhomogeneous magnetic field on the translational motion of a magnetic dipole; this classical force plays an important role in the standard interpretation of the quantum-mechanical Stern-Gerlach effect. It is satisfying that both translational forces, the Lorentz force as well as this dipole force, can be derived in the present approach. The remaining unknown force $\vec{F}^{(V)}$ in (\[eq:DSK1NW4DNG\]) leads (in the same way as in section \[sec:gaugecoupling\]) to a mechanical potential $V$, which will be omitted for brevity.
The integrand of the first term in (\[eq:AT2UZ5WE8Q\]) is linear in the derivative of $\rho$ with respect to $x_l$. It may consequently be added to the first line of (\[eq:H23DASU8Z\]) which has the same structure. Therefore, it represents (see below) a contribution to the generalized Hamilton-Jacobi differential equation. The third term in (\[eq:AT2UZ5WE8Q\]) has the mathematical structure of a force term, but does not contain any externally controlled fields. Thus, it must also represent a contribution to the generalized Hamilton-Jacobi equation. This implies that this third term can be written as $$\label{eq:SUM44HJE9W}
\int \mathrm{d}^{3} x \rho \,\frac{\hbar}{2}
\big(
\cos\varphi \frac{\partial \vartheta}{\partial x_l} G_1
- \sin\varphi \frac{\partial \vartheta}{\partial x_l} G_2
+ \frac{\partial \varphi}{\partial x_l} G_3
\big)=
\int \mathrm{d}^{3} x \frac{\partial \rho}{\partial x_{l}}
L_0^{\prime}
\mbox{,}$$ where $L_0^{\prime}$ is an unknown field depending on $G_1,\,G_2,\,G_3$.
Collecting terms and restricting ourselves, as in section \[sec:fisher-information\], to an isotropic law, the statistical condition takes the form of a generalized Hamilton-Jacobi equation: $$\label{eq:DZ3GANU7EHJ}
\bar{L} := \bigg( \frac{\partial S}{\partial t}+e \hat{\phi} \bigg)
+\frac{1}{2m}
\sum_{j}
\bigg(
\frac{\partial S}{\partial x_{j}} - \frac{e}{c} \hat{A_j}
\bigg)^{2}
+\mu_i B_i=L_0
\mbox{.}$$ The unknown function $L_0$ must contain $L_0^{\prime}$ but may also contain other terms, let us write $L_0=L_0^{\prime}+\Delta L_0$.
’Missing’ quantum spin terms from Fisher information {#sec:spin-fish-inform}
====================================================
Let us summarize at this point what has been achieved so far. We have four coupled differential equations for our dynamic field variables $\rho,\,S,\,\vartheta,\,\varphi$. The first of these is the continuity equation , which is given, in terms of the present variables, by $$\label{eq:CO3SPI4NVAR}
\frac{\partial \rho}{\partial t}+
\frac{\partial}{\partial x_l}
\bigg[
\frac{\rho}{m}
\big(
\frac{\partial S}{\partial x_{j}} - \frac{e}{c} \hat{A_j}
\big)
\bigg]=0
\mbox{.}$$ The three other differential equations, the evolution equations , and the generalized Hamilton-Jacobi equation , do not yet possess a definite mathematical form. They contain four unknown functions $G_i,\,L_0$ which are constrained, but not determined, by , .
The simplest choice, from a formal point of view, is $G_i=L_0=0$. In this limit the present theory agrees with Schiller’s field-theoretic (Hamilton-Jacobi) version, see [@schiller:spinning], of the equations of motion of a classical dipole. This is a classical (statistical) theory despite the fact that it contains \[see \] a number $\hbar$. But this classical theory is not realized in nature; at least not in the microscopic domain. The reason is that the simplest choice from a formal point of view is not the simplest choice from a physical point of view. The postulate of maximal simplicity (Ockham’s razor) implies equal probabilities and the principle of maximal entropy in classical statistical physics. A similar principle which is able to ’explain’ the nonexistence of classical physics (in the microscopic domain) is the principle of minimal Fisher information [@frieden:sciencefisher]. The relation between the two (classical and quantum-mechanical) principles has been discussed in detail in I.
The mathematical formulation of the principle of minimal Fisher information for the present problem requires a generalization, as compared to I, because we have now several fields with coupled time-evolution equations. As a consequence, the spatial integral (spatial average) over $\rho (\bar{L}-L_{0})$ in the variational problem (\[eq:EEFFRRWXYZ2\]) should be replaced by a space-time integral, and the variation should be performed with respect to all four variables. The problem can be written in the form $$\begin{gathered}
\delta \int \mathrm{d} t \int \mathrm{d}^{3} x \rho
\left( \bar{L} - L_{0} \right) = 0 \label{eq:HH43TRT9RQ} \\
E_a=0,\;\;\;a=S,\,\rho,\,\vartheta,\,\varphi \label{eq:HQ3TR4JWQ}
\mbox{,}\end{gathered}$$ where $E_a=0$ is a shorthand notation for the equations , , . Eqs. (\[eq:HH43TRT9RQ\]), (\[eq:HQ3TR4JWQ\]) require that the four Euler-Lagrange equations of the variational problem (\[eq:HH43TRT9RQ\]) agree with the differential equations (\[eq:HQ3TR4JWQ\]). This imposes conditions for the unknown functions $L_{0},\,G_i$. If the *solutions* of (\[eq:HH43TRT9RQ\]), (\[eq:HQ3TR4JWQ\]) for $L_{0},\,G_i$ are inserted in the variational problem (\[eq:HH43TRT9RQ\]), the four relations (\[eq:HQ3TR4JWQ\]) become redundant and $\rho ( \bar{L} - L_{0} )$ becomes the Lagrangian density of our problem. Thus, Eqs. (\[eq:HH43TRT9RQ\]) and (\[eq:HQ3TR4JWQ\]) represent a method to construct a Lagrangian.
We assume a functional form $L_{0}(\chi_{\alpha},\,\partial_{k}\chi_{\alpha},\,
\partial_{k}\partial_{l}\chi_{\alpha})$, where $\chi_{\alpha}=\rho,\,\vartheta,\,\varphi$. This means $L_{0}$ does not possess an explicit $x,t$-dependence and does not depend on $S$ (this would lead to a modification of the continuity equation). We further assume that $L_{0}$ does not depend on time-derivatives of $\chi_{\alpha}$ (the basic structure of the time-evolution equations should not be affected) and on spatial derivatives higher than second order. These second order derivatives must be taken into account but should not give contributions to the variational equations (a more detailed discussion of the last point has been given in I).
The variation with respect to $S$ reproduces the continuity equation which is unimportant for the determination of $L_{0},\,G_i$. Performing the variation with respect to $\rho,\,\vartheta,\,\varphi$ and taking the corresponding conditions , into account leads to the following differential equations for $L_{0},\,G_1\cos \varphi - G_2\sin \varphi $ and $G_3$, $$\begin{gathered}
-\frac{\partial}{\partial x_{k}} \frac{\partial}{\partial x_{i}}
\frac{\partial \rho L_0}
{\partial
\frac{\partial^{2} \rho}{\partial x_{k} \partial x_{i}}}
+\frac{\partial}{\partial x_{k}}
\frac{\partial \rho L_0}
{\partial
\frac{\partial \rho}{\partial x_{k} }}
-\rho \frac{\partial L_0}{\partial\rho}=0
\label{eq:DI2DS8RHO} \\
-\frac{\partial}{\partial x_{k}} \frac{\partial}{\partial x_{i}}
\frac{\partial \rho L_0}
{\partial\
\frac{\partial^{2} \vartheta}{\partial x_{k} \partial x_{i}}}
+\frac{\partial}{\partial x_{k}}
\frac{\partial \rho L_0}
{\partial
\frac{\partial \vartheta}{\partial x_{k} }}
-\frac{\partial \rho L_0}{\partial\vartheta}
-\frac{\hbar \rho}{2}
\left( G_1\cos \varphi - G_2\sin \varphi \right)
=0
\label{eq:D6DSV3ARTH}\\
-\frac{\partial}{\partial x_{k}} \frac{\partial}{\partial x_{i}}
\frac{\partial \rho L_0}
{\partial\
\frac{\partial^{2} \varphi}{\partial x_{k} \partial x_{i}}}
+\frac{\partial}{\partial x_{k}}
\frac{\partial \rho L_0}
{\partial
\frac{\partial \varphi}{\partial x_{k} }}
-\frac{\partial \rho L_0}{\partial\varphi}
-\frac{\hbar }{2} \rho G_3 =0
\label{eq:D9ZSG4PHI}
\mbox{.}\end{gathered}$$ The variable $S$ does not occur in (\[eq:DI2DS8RHO\])-(\[eq:D9ZSG4PHI\]) in agreement with our assumptions about the form of $L_0$. It is easy to see that a proper solution (with vanishing variational contributions from the second order derivatives) of (\[eq:DI2DS8RHO\])-(\[eq:D9ZSG4PHI\]) is given by $$\begin{gathered}
L_0 = \frac{\hbar^{2}}{2m} \bigg[
\frac{1}{\sqrt{\rho}}
\frac{\partial}{\partial \vec{x}} \frac{\partial}{\partial \vec{x}}
\sqrt{\rho}
-\frac{1}{4}\sin^{2} \vartheta
\left( \frac{\partial \varphi}{\partial \vec{x}} \right)^{2}
-\frac{1}{4}
\left( \frac{\partial \vartheta}{\partial \vec{x}} \right)^{2}
\bigg]
\label{eq:DI3SOLL0HO} \\
\hbar G_1\cos \varphi - \hbar G_2\sin \varphi=
\frac{\hbar^{2}}{2m} \bigg[
\frac{1}{2}\sin 2\vartheta
\left( \frac{\partial \varphi}{\partial \vec{x}} \right)^{2}
-\frac{1}{\rho}
\frac{\partial}{\partial \vec{x}} \rho
\frac{\partial \vartheta}{\partial \vec{x}}
\bigg]
\label{eq:D8SOLG1G2H}\\
\hbar G_3 = - \frac{\hbar^{2}}{2m} \frac{1}{\rho}
\frac{\partial}{\partial \vec{x}}
\big(
\rho \sin \vartheta^{2} \frac{\partial \varphi}{\partial \vec{x}}
\big)
\label{eq:D9SOLG3U2W}
\mbox{.}\end{gathered}$$ A new adjustable parameter appears on the r.h.s of (\[eq:DI3SOLL0HO\])- (\[eq:D9SOLG3U2W\]) which has been identified with $\hbar^{2}/2m$, where $\hbar$ is again Planck’s constant. This second $\hbar$ is related to the quantum-mechanical principle of maximal disorder. It is in the present approach not related in any obvious way to the previous “classical” $\hbar$ which denotes the amplitude of a rotation; compare, however, the alternative derivation of spin in section \[sec:spin-as-gauge\].
The solutions for $G_1,\,G_2$ may be obtained with the help of the second condition ($\vec{G} \vec{s} =0 $) listed in Eq. (\[eq:DR4ZGC98FG\]). The result may be written in the form $$\label{eq:HRG1G2U4Z}
\begin{split}
G_1 &= \frac{\hbar}{2m} \frac{1}{\rho}
\frac{\partial}{\partial \vec{x}} \rho
\bigg( \frac{1}{2} \sin 2\vartheta \sin \varphi
\frac{\partial \varphi}{\partial \vec{x}}
- \cos \varphi \frac{\partial
\vartheta}{\partial \vec{x}}
\bigg)\\
G_2 &= \frac{\hbar}{2m} \frac{1}{\rho}
\frac{\partial}{\partial \vec{x}} \rho
\bigg( \frac{1}{2} \sin 2\vartheta \cos \varphi
\frac{\partial \varphi}{\partial \vec{x}}
+ \sin \varphi \frac{\partial
\vartheta}{\partial \vec{x}}
\bigg)
\mbox{.}
\end{split}$$ Eqs. (\[eq:D9SOLG3U2W\]) and (\[eq:HRG1G2U4Z\]) show that the first condition listed in (\[eq:DR4ZGC98FG\]) is also satisfied. The last condition is also fulfilled: $L_0$ can be written as $L_0^{\prime}+\Delta L_0$, where $$\label{eq:AE98VZVDEF}
L_0^{\prime} =
-\frac{\hbar^{2}}{8m} \bigg[
\sin^{2} \vartheta
\left( \frac{\partial \varphi}{\partial \vec{x}} \right)^{2}
-\left( \frac{\partial \vartheta}{\partial \vec{x}} \right)^{2}
\bigg],\;\;\;
\Delta L_0 =
\frac{\hbar^{2}}{2m}
\frac{1}{\sqrt{\rho}}
\frac{\partial}{\partial \vec{x}} \frac{\partial}{\partial \vec{x}}
\sqrt{\rho}
\mbox{,}$$ and $L_0^{\prime}$ fulfills (\[eq:SUM44HJE9W\]). We see that $L_0^{\prime}$ is a quantum-mechanical contribution to the rotational motion while $\Delta L_0$ is related to the probability density of the ensemble (as could have been guessed considering the mathematical form of these terms). The last term is the same as in the spinless case \[see (\[eq:QHJH34MF\])\].
The remaining task is to show that the above solution for $L_0$ does indeed lead to a (appropriately generalized) Fisher functional. This can be done in several ways. The simplest is to use the following result due to [@reginatto:pauli]: $$\begin{gathered}
\int \mathrm{d}^{3} x \left( -\rho L_{0} \right) =
\frac{\hbar^{2}}{8m} \sum_{j=1}^{3}
\int \mathrm{d}^{3} x\,\sum_{k=1}^{3}
\frac{1}{\rho^{(j)}}
\left(\frac{\partial \rho^{(j)}}{\partial x_k} \right)^{2} \mbox{,}
\label{eq:RW2QI7U9Z} \\
\rho^{(1)} := \rho \sin^{2}\frac{\vartheta}{2}
\cos^{2}\frac{\varphi}{2},\;\;
\rho^{(2)} := \rho \sin^{2}\frac{\vartheta}{2}
\sin^{2}\frac{\varphi}{2},\;\;
\rho^{(3)} := \rho \cos^{2}\frac{\vartheta}{2}
\label{eq:HJLLH3LKL}
\mbox{.}\end{gathered}$$ The functions $\rho^{(j)}$ represent the probability that a particle is at space-time point $x,\,t$ and $\vec{s}$ points into direction $j$. Inserting (\[eq:DI3SOLL0HO\]) the validity of (\[eq:RW2QI7U9Z\]) may easily be verified. The r.h.s. of Eq. (\[eq:RW2QI7U9Z\]) shows that the averaged value of $L_{0}$ represents indeed a Fisher functional, which completes our calculation of the ’quantum terms’ $L_{0},\,G_i$.
Summarizing, our assumption, that under certain external conditions four state variables instead of two may be required, led to a nontrivial result, namely the four coupled differential equations , , , with $L_{0},\,G_i$ given by (\[eq:DI3SOLL0HO\]), (\[eq:HRG1G2U4Z\]), (\[eq:D9SOLG3U2W\]). The external condition which stimulates this splitting is given by a gauge field; the most important case is a magnetic field $\vec{B}$ but other possibilities do exist (see below). These four differential equations are equivalent to the much simpler differential equation $$\label{eq:DT26RPAULI}
\big(\frac{\hbar}{\imath} \frac{\partial}{\partial t}
+e\phi \big) \hat{\psi}
+ \frac{1}{2 m }
\big(\frac{\hbar}{\imath} \frac{\partial}{\partial \vec{x}} -
\frac{e}{c} \vec{A} \big)^{2} \hat{\psi}
+ \mu_{B}\vec{\sigma}\vec{B} \hat{\psi}=0
\mbox{,}$$ which is linear in the complex-valued two-component state variable $\hat{\psi}$ and is referred to as Pauli equation (the components of the vector $\vec{\sigma}$ are the three Pauli matrices and $\mu_{B}=-e\hbar/2mc$). To see the equivalence one writes, see [@takabayasi:vector], [@holland:quantum], $$\label{eq:ASF7UE1PSIHT}
\hat{\psi}=\sqrt{\rho}\, \mathrm{e}^{\frac{\imath}{\hbar}S}
\left(
\begin{array}{cc}
\cos \frac{\vartheta}{2} \mathrm{e}^{\imath \frac{\varphi}{2}} & \\
\\
\imath \sin \frac{\vartheta}{2} \mathrm{e}^{-\imath \frac{\varphi}{2}} &
\end{array}
\right)
\mbox{,}$$ and evaluates the real and imaginary parts of the two scalar equations (\[eq:DT26RPAULI\]). This leads to the four differential equations , , and completes the present spin theory.
In terms of the real-valued functions $\rho,\,S,\,\vartheta,\,\varphi$ the quantum-mechanical solutions (\[eq:DI3SOLL0HO\]), (\[eq:D9SOLG3U2W\]), (\[eq:HRG1G2U4Z\]) for $L_{0},\,G_i$ look complicated in comparison to the classical solutions $L_{0}=0,\,G_i=0$. In terms of the variable $\hat{\psi}$ the situation changes to the contrary: The quantum-mechanical equation becomes simple (linear) and the classical equation, which has been derived by [@schiller:spinning], becomes complicated (nonlinear). The simplicity of the underlying physical principle (principle of maximal disorder) leads to a simple mathematical representation of the final basic equation (if a complex-valued state function is introduced). One may also say that the linearity of the equations is a consequence of this principle of maximal disorder. This is the deeper reason why it has been possible, see [@klein:schroedingers], to derive Schrödinger’s equation from a set of assumptions including linearity.
Besides the Pauli equation we found, as a second important result of our spin calculation, that the following local force is compatible with the statistical constraint: $$\label{eq:LO3DAZI8FF}
\vec{F}^{L}+\vec{F}^{I}=
e \left( \vec{E} + \frac{1}{c} \vec{v} \times \vec{B} \right)
-\vec{\mu} \cdot \frac{\partial}{\partial\vec{x} } \vec{B}
\mbox{.}$$ Here, the velocity field $\tilde{\vec{v}}(x,t)$ and the magnetic moment field $\vec{\mu}(x,t)=-(e/mc)\vec{s}(x,t)$ have been replaced by corresponding particle quantities $\vec{v}(t)$ and $\vec{\mu}(t)$; the dot denotes the inner product between $\vec{\mu}$ and $\vec{B}$. The first force in (\[eq:LO3DAZI8FF\]), the Lorentz force, has been derived here from first principles without any additional assumptions. The same cannot be said about the second force which takes this particular form as a consequence of some additional assumptions concerning the form of the ’internal force’ $\vec{F}^{R}$ \[see (\[eq:NG89NWG2EN\])\]. In particular, the field appearing in $\vec{F}^{R}$ was arbitrary as well as the proportionality constant (g-factor of the electron) and had to be adjusted by hand. It is well-known that in a relativistic treatment the spin term appears automatically if the potentials are introduced. Interestingly, this unity is not restricted to the relativistic regime. Following [@arunsalam:hamiltonians] and [@gould:intrinsic] we report in the next section an alternative (non-relativistic) derivation of spin, which does not contain any arbitrary fields or constants - but is unable to yield the expression for the macroscopic electromagnetic forces.
In the present treatment spin has been introduced as a property of an ensemble and not of individual particles. Similar views may be found in the literature, see [@ohanian:spin]. Of course, it is difficult to imagine the properties of an ensemble as being completely independent from the properties of the particles it is made from. The question whether or not a property ’spin’ can be ascribed to single particles is a subtle one. Formally, we could assign a probability of being in a state $i$ ($i=1,2$) to a particle just as we assign a probability for being at a position $\vec{x} \in R^{3}$. But contrary to position, no classical meaning - and no classical measuring device - can be associated with the discrete degree of freedom $i$. Experimentally, the measurement of the ’spin of a single electron’ is - in contrast to the measurement of its position - a notoriously difficult task. Such experiments, and a number of other interesting questions related to spin, have been discussed by [@morrison.spin].
Spin as a consequence of a multi-valued phase {#sec:spin-as-gauge}
=============================================
As shown by [@arunsalam:hamiltonians], [@gould:intrinsic], and others, spin in non-relativistic QT may be introduced in exactly the same manner as the electrodynamic potentials. In this section we shall apply a slightly modified version of their method and try to derive spin in an alternative way - which avoids the shortcoming mentioned in the last section.
[@arunsalam:hamiltonians] and [@gould:intrinsic] introduce the potentials by applying the well-known minimal-coupling rule to the free Hamiltonian. In the present treatment this is achieved by making the quantity $S$ multi-valued. The latter approach seems intuitively preferable considering the physical meaning of the corresponding classical quantity. Let us first review the essential steps \[see [@klein:schroedingers] for more details\] in the process of creating potentials in the *scalar* Schr[ö]{}dinger equation:
- Chose a free Schr[ö]{}dinger equation with single-valued state function.
- ’Turn on’ the interaction by making the state function multi-valued (multiply it with a multi-valued phase factor)
- Shift the multi-valued phase factor to the left of all differential operators, creating new terms (potentials) in the differential equation.
- Skip the multi-valued phase. The final state function is again single-valued.
Let us adapt this method for the derivation of spin (considering spin one-half only). The first and most important step is the identification of the free Pauli equation. An obvious choice is $$\label{eq:FREEPAONE}
\left[ \frac{\hbar}{\imath} \frac{\partial}{\partial t}
+ \frac{1}{2 m }
\big(\frac{\hbar}{\imath} \frac{\partial}{\partial \vec{x}}\big)^{2}
+V \right] \bar{\psi} =0
\mbox{,}$$ where $\bar{\psi}$ is a single-valued *two-component* state function; (\[eq:FREEPAONE\]) is essentially a duplicate of Schr[ö]{}dinger’s equation. We may of course add arbitrary vanishing terms to the expression in brackets. This seems trivial, but some of these terms may vanish *only* if applied to a single-valued $\bar{\psi} $ and may lead to non-vanishing contributions if applied later (in the second of the above steps) to a multi-valued state function $\bar{\psi}^{multi} $.
In order to investigate this possibility, let us rewrite Eq. in the form $$\label{eq:FREPZONE2}
\left[ \hat{p}_{0} + \frac{1}{2 m }\vec{\hat{p}}\vec{\hat{p}}
+V \right] \bar{\psi} =0
\mbox{,}$$ where $\hat{p}_{0}$ is an abbreviation for the first term of and the spatial derivatives are given by $$\label{eq:FJU67LE2}
\vec{\hat{p}}= \hat{p}_{k}\vec{e}_{k},\hspace{1cm}\hat{p}_{k}=
\frac{\hbar}{\imath} \frac{\partial}{\partial x_{k}}
\mbox{.}$$ All terms in the bracket in (\[eq:FREPZONE2\]) are to be multiplied with a $2x2$ unit-matrix $E$ which has not be written down. Replace now the derivatives in (\[eq:FREPZONE2\]) according to $$\label{eq:REPLW344N}
\hat{p}_{0} \Rightarrow \hat{p}_{0}M_{0},\hspace{1cm}
\vec{\hat{p}} \Rightarrow \vec{\hat{p}}_{k} M_{k}
\mbox{,}$$ where $M_{0},\,M_{k}$ are hermitian $2x2$ matrices with constant coefficients, which should be constructed in such a way that the new equation agrees with (\[eq:FREPZONE2\]) for single-valued $\bar{\psi}$, i.e. assuming the validity of the condition $$\label{eq:ESBNN3HG}
\left( \hat{p}_{i}\hat{p}_{k} - \hat{p}_{k}\hat{p}_{i}\right) \bar{\psi} =0
\mbox{.}$$ This leads to the condition $$\label{eq:AN2CFO8M}
M_{0}^{-1}M_{i}M_{k}=E \delta_{ik}+T_{ik}
\mbox{,}$$ where $T_{ik}$ is a $2x2$ matrix with two cartesian indices $i,\,k$, which obeys $T_{ik}=-T_{ki}$. A solution of (\[eq:AN2CFO8M\]) is given by $M_{0}=\sigma_{0},\,M_{i}=\sigma_{i}$, where $\sigma_{0},\,\sigma_{i}$ are the four Pauli matrices. In terms of this solution, Eq. (\[eq:AN2CFO8M\]) takes the form $$\label{eq:HW2CF78M}
\sigma_{i}\sigma_{k}=\sigma_{0}\delta_{ik}+\imath \varepsilon_{ikl}\sigma_{l}
\mbox{.}$$ Thus, an alternative free Pauli-equation, besides (\[eq:FREEPAONE\]) is given by $$\label{eq:FEE7PAUDZW}
\left[ \frac{\hbar}{\imath} \frac{\partial}{\partial t}
+ \frac{1}{2 m }
\left(\frac{\hbar}{\imath}\right)^{2}
\sigma_{i} \frac{\partial}{\partial x_{i}}
\sigma_{k} \frac{\partial}{\partial x_{k}}
+V \right] \bar{\psi} =0
\mbox{.}$$ The quantity in the bracket is the generalized Hamiltonian constructed by [@arunsalam:hamiltonians] and [@gould:intrinsic]. In the present approach gauge fields are introduced by means of a multi-valued phase. This leads to the same formal consequences as the minimal coupling rule but allows us to conclude that the second free Pauli equation (\[eq:FEE7PAUDZW\]) is *more appropriate* than the first, Eq. (\[eq:FREEPAONE\]), because it is more general with regard to the consequences of multi-valuedness. This greater generality is due to the presence of the second term on the r.h.s. of (\[eq:HW2CF78M\]).
The second step is to turn on the multi-valuedness in Eq. (\[eq:FEE7PAUDZW\]), $\bar{\psi} \Rightarrow \bar{\psi}^{multi}$, by multiplying $\bar{\psi}$ with a multi-valued two-by-two matrix. This matrix must be chosen in such a way that the remaining steps listed above lead to Pauli’s equation (\[eq:DT26RPAULI\]) in presence of an gauge field. Since in our case the final result (\[eq:DT26RPAULI\]) is known, this matrix may be found by performing the inverse process, i.e. performing a singular gauge transformation $\hat{\psi}=\Gamma \bar{\psi}^{multi}$ of Pauli’s equation (\[eq:DT26RPAULI\]) from $\hat{\psi}$ to $\bar{\psi}^{multi}$, which *removes* all electrodynamic terms from (\[eq:DT26RPAULI\]) and creates Eq. (\[eq:FEE7PAUDZW\]). The final result for the matrix $\Gamma$ is given by $$\label{eq:TFRF34TMG}
\Gamma=E \exp
\bigg\{
\imath \frac{e}{\hbar c}
\int^{x,t} \left[\mathrm{d}x_{k}' A_k(x',t')-c\mathrm{d}t' \phi(x',t')\right]
\bigg\}
\mbox{,}$$ and agrees, apart from the unit matrix $E$, with the multi-valued factor introduced previously \[see and \] leading to the electrodynamic potentials. The inverse transition from (\[eq:FEE7PAUDZW\]) to (\[eq:DT26RPAULI\]), i.e. the creation of the potentials and the Zeeman term, can be performed by using the inverse of .
The Hamiltonian (\[eq:FEE7PAUDZW\]) derived by [@arunsalam:hamiltonians] and [@gould:intrinsic] shows that spin can be described by means of the same abelian gauge theory that leads to the standard quantum mechanical gauge coupling terms; no new adjustable fields or parameters appear. The only requirement is that the appropriate free Pauli equation is chosen as starting point. The theory of [@dartora_cabrera:magnetization], on the other hand, started from the alternative (from the present point of view inappropriate) free Pauli equation and leads to the conclusion that spin must be described by a non-abelian gauge theory.
As far as our derivation of non-relativistic QT is concerned we have now two alternative, and in a sense complementary, possibilities to introduce spin. The essential step in the second (Arunsalam-Gould) method is the transition from to the equivalent free Pauli equation . This step is a remarkable short-cut for the complicated calculations, performed in the last section, leading to the various terms required by the principle of minimal Fisher information. The Arunsalam-Gould method is unable to provide the shape of the corresponding macroscopic forces but is very powerful insofar as no adjustable quantities are required. It will be used in the next section to perform the transition to an arbitrary number of particles.
Transition to N particles as final step to non-relativistic quantum theory {#sec:final-step-to}
==========================================================================
In this section the present derivation of non-relativistic QT is completed by deriving Schrödinger’s equation for an arbitrary number $N$ of particles or, more precisely, for statistical ensembles of identically prepared experimental arrangements involving $N$ particles.
In order to generalize the results of sections \[sec:calcwpt\] and \[sec:fisher-information\], a convenient set of $n=3N$ coordinates $q_1,...q_n$ and masses $m_1,...m_n$ is defined by $$\label{eq:T38AN5OD}
\begin{split}
\left( q_{1},q_{2},q_{3},...q_{n-2},q_{n-1},q_{n}\right)&=
\left(x_1,y_1,z_1,...,x_N,y_N,z_N \right) \mbox{,}
\\
\left( m_{1},m_{2},m_{3},...m_{n-2},m_{n-1},m_{n}\right) &=
\left(m_1,m_1,m_1,...,m_N,m_N,m_N \right)\mbox{.}
\end{split}$$ The index $I=1,...N$ is used to distinguish particles, while indices $i,k,..$ are used here to distinguish the $3N$ coordinates $q_1,...q_n$. No new symbol has been introduced in to distinguish the masses $m_I$ and $m_i$ since there is no danger of confusion in anyone of the formulas below. However, the indices of masses will be frequently written in the form $m_{(i)}$ in order to avoid ambiguities with regard to the summation convention. The symbol $Q$ in arguments denotes dependence on all $q_1,...q_n$. In order to generalize the results of section \[sec:spin-as-gauge\] a notation $x_{I,k}$, $\vec{x}_{I}$, and $m_{I}$ (with $I=1,...,N$ and $k=1,2,3$) for coordinates, positions, and masses will be more convenient.
The basic relations of section \[sec:calcwpt\], generalized in an obvious way to $N$ particles, take the form $$\begin{gathered}
\frac{\partial \rho(Q,t)}{\partial t}+
\frac{\partial}{\partial q_k} \frac{\rho(Q,t)}{m_{(k)}}\frac{\partial
S(Q,t)}{\partial q_k}=0
\label{eq:CEQ34FNPT}\\
\frac{\mathrm{d}}{\mathrm{d}t} \overline{q_k} =
\frac{1}{m_{(k)}}\overline{p_k}
\label{eq:FIRNAT12HL}\\
\frac{\mathrm{d}}{\mathrm{d}t} \overline{p_k} =
\overline{F_k(Q,t)}
\label{eq:SE28UA5SHL}\\
\overline{q_k} = \int \mathrm{d} Q\, \rho(Q,\,t)\, q_k
\label{eq:UZ1K7ZTWT}\end{gathered}$$ Here, $S$ is a single-valued variable; the multi-valuedness will be added later, following the method of section \[sec:spin-as-gauge\].
The following calculations may be performed in complete analogy to the corresponding steps of section \[sec:calcwpt\]. For the present $N-$dimensional problem, the vanishing of the surface integrals, occurring in the course of various partial integrations, requires that $\rho$ vanishes exponentially in arbitrary directions of the configuration space. The final conclusion to be drawn from Eqs. - takes the form $$\label{eq:AUBGJJV9F}
\sum_{j=1}^{n} \frac{1}{2m_{(j)}} \left(\frac{\partial S}{\partial q_{j}} \right)^{2}+
\frac{\partial S}{\partial t} + V = L_0,\hspace{1cm}
\int \mathrm{d}Q\,\frac{\partial\rho}{\partial q_{k}}\,L_{0}= 0
\mbox{,}$$ The remaining problem is the determination of the unknown function $L_0$, whose form is constrained by the condition defined in Eq. .
$L_0$ can be determined using again the principle of minimal Fisher information, see I for details. Its implementation in the present framework takes the form $$\begin{gathered}
\delta \int \mathrm{d} t \int \mathrm{d} Q\, \rho
\left( L - L_{0} \right) = 0 \label{eq:HHWE8AAA} \\
E_a=0,\;\;\;a=S,\,\rho \label{eq:H2WQ45JHJ}
\mbox{,}\end{gathered}$$ where $E_S=0,\,E_{\rho}=0$ are shorthand notations for the two basic equations and . As before, Eqs. , represent a method to construct a Lagrangian. After determination of $L_0$ the three relations listed in , become redundant and become the fundamental equations of the $N-$particle theory.
The following calculation can be performed in complete analogy to the case $N=1$ reported in section \[sec:fisher-information\]. All relations remain valid if the upper summation limit $3$ is replaced by $3N$. This is also true for the final result, which takes the form $$\label{eq:DES3NR8LL}
L_{0}=\frac{\hbar^{2}}{4\rho}\left[
-\frac{1}{2\rho}
\frac{1}{m_{(j)}}
\frac{\partial \rho}{\partial q_{j}}
\frac{\partial \rho}{\partial q_{j}}
+\frac{1}{m_{(j)}}
\frac{\partial^{2} \rho}
{\partial q_{j}\partial q_{j}}
\right]
\mbox{.}$$ If a complex-valued variable $\psi$, defined as in (\[eq:HDIS3JWUI\]), is introduced, the two basic relations $E_a=0$ may be condensed into the single differential equation, $$\label{eq:DGSFF3R4E}
\Bigg[
\frac{\hbar}{\imath} \frac{\partial}{\partial t}
+\sum_{I=1}^{N}\frac{1}{2 m_{(I)} }
\left( \frac{\hbar}{\imath}\frac{\partial}{\partial x_{I,k}} \right)
\left( \frac{\hbar}{\imath}\frac{\partial}{\partial x_{I,k}} \right)
+V \Bigg] \psi=0
\mbox{,}$$ which is referred to as $N-$particle Schrödinger equation, rewritten here in the more familiar form using particle indices. As is well-known, only approximate solutions of this partial differential equation of order $3N+1$ exist for realistic systems. The inaccessible complexity of quantum-mechanical solutions for large $N$ is not reflected in the abstract Hilbert space structure (which is sometimes believed to characterize the whole of QT) but plays probably a decisive role for a proper description of the mysterious relation between QT and the macroscopic world.
Let us now generalize the Arunsalam-Gould method, discussed in section \[sec:spin-as-gauge\], to an arbitrary number of particles. We assume, that the considered $N-$particle statistical ensemble responds in $2^{N}$ ways to the external electromagnetic field. This means we restrict ourselves again, like in section \[sec:spin\], \[sec:spin-fish-inform\] to spin one-half. Then, the state function may be written as $\psi(x_1,s_1;....x_I,s_I;...x_N,s_N)$ where $s_I=1,\,2$. In the first of the steps listed at the beginning of section \[sec:spin-as-gauge\], a differential equation, which is equivalent to Eq. for single-valued $\psi$ but may give non-vanishing contributions for multi-valued $\psi$, has to be constructed. The proper generalization of Eq. (\[eq:FEE7PAUDZW\]) to arbitrary $N$ takes the form $$\label{eq:DSHZ9L7W4E}
\Bigg[
\frac{\hbar}{\imath} \frac{\partial}{\partial t}
+\sum_{I=1}^{N}\frac{1}{2 m_{(I)} }
\left( \frac{\hbar}{\imath} \sigma_{k}^{(I)} \frac{\partial}{\partial x_{I,k}} \right)
\left( \frac{\hbar}{\imath} \sigma_{l}^{(I)} \frac{\partial}{\partial x_{I,l}} \right)
+V \Bigg] \psi=0
\mbox{,}$$ where the Pauli matrices $\sigma_{k}^{(I)}$ operate by definition only on the two-dimensional subspace spanned by the variable $s_{I}$. In the second step we perform the replacement $$\label{eq:TF4SDP8HMG}
\psi \Rightarrow
\exp
\bigg\{
-\frac{\imath}{\hbar}
\sum_{I=1}^{N} \frac{e_{I}}{c}
\sum_{k=1}^{3}
\int^{\vec{x}_{I},t} \left[ \mathrm{d}x_{I,k}' A_k(x_{I}',t')-c\mathrm{d}t' \phi(x_{I}',t')\right]
\bigg\} \psi
\mbox{,}$$ using a multi-valued phase factor, which is an obvious generalization of Eq. . The remaining steps, in the listing of section \[sec:spin-as-gauge\], lead in a straightforward way to the final result $$\label{eq:DW2FPGF9NP}
\begin{split}
\Bigg[
\frac{\hbar}{\imath} \frac{\partial}{\partial t}
&+\sum_{I=1}^{N}e_{I}\phi(x_{I},t)
+\sum_{I=1}^{N} \sum_{k=1}^{3} \frac{1}{2 m_{(I)} }
\left(
\frac{\hbar}{\imath}\frac{\partial}{\partial x_{I,k}} -\frac{e_{I}}{c}A_{k}(x_{I},t)
\right)^{2}\\
&+\sum_{I=1}^{N} \mu_{B}^{(I)}\sigma_{k}^{(I)}B_{k}(x_{I},t)
+V(x_1,...,x_N,t) \Bigg] \psi=0
\end{split}
\mbox{,}$$ where $\mu_{B}^{(I)}=-\hbar e_{I}/2m_{I}c$ and $\vec{B}=\mathrm{rot}\vec{A}$. The mechanical potential $V(x_1,...,x_N,t)$ describes a general many-body force but contains, of course, the usual sum of two-body potentials as a special case. Eq. is the $N-$body version of Pauli’s equation and completes - in the sense discussed at the very beginning of this paper - the present derivation of non-relativistic QT.
The classical limit of quantum theory is a statistical theory {#sec:classical-limit}
=============================================================
The classical limit of Schrödinger’s equation plays an important role for two topics discussed in the next section, namely the interpretation of QT and the particular significance of potentials in QT; to study these questions it is sufficient to consider a single-particle ensemble described by a single state function. This ’classical limit theory’ is given by the two differential equations $$\begin{gathered}
\frac{\partial \rho}{\partial t}+\frac{\partial}{\partial x_{k}}
\frac{\rho}{m} \left(
\frac{\partial S}{\partial x_{k}}-\frac{e}{c} A_{k}
\right)=0
\label{eq:CONACL7TMF}
\mbox{,} \\
\frac{\partial S}{\partial t}+e\phi + \frac{1}{2m}
\sum_{k}
\left(
\frac{\partial S}{\partial x_{k}}-\frac{e}{c} A_{k}
\right)^{2} +V =0
\label{eq:QHJCL74MF}
\mbox{,}\end{gathered}$$ which are obtained from Eqs. (\[eq:CONH3TMF\]) and (\[eq:QHJH34MF\]) by performing the limit $\hbar \to 0$. The quantum mechanical theory (\[eq:CONH3TMF\]) and (\[eq:QHJH34MF\]) and the classical theory (\[eq:CONACL7TMF\]) and (\[eq:QHJCL74MF\]) show fundamentally the same mathematical structure; both are initial value problems for the variables $S$ and $\rho$ obeying two partial differential equations. The difference is the absence of the last term on the l.h.s. of (\[eq:QHJH34MF\]) in the corresponding classical equation (\[eq:QHJCL74MF\]). This leads to a *decoupling* of $S$ and $\rho$ in (\[eq:QHJCL74MF\]); the identity of the classical object described by $S$ is no longer affected by statistical aspects described by $\rho$.
The field theory (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]) for the two ’not decoupled’ fields $S$ and $\rho$ is obviously very different from classical mechanics which is formulated in terms of trajectories. The fact that one of these equations, namely (\[eq:QHJCL74MF\]), agrees with the Hamilton-Jacobi equation, does not change the situation since the presence of the continuity equation (\[eq:CONACL7TMF\]) cannot be neglected. On top of that, even if it could be neglected, Eq. (\[eq:QHJCL74MF\]) would still be totally different from classical mechanics: In order to construct particle trajectories from the partial differential equation (\[eq:QHJCL74MF\]) for the field $S(x,t)$, a number of clearly defined mathematical manipulations, which are part of the classical theory of canonical transformations, see [@greiner:mechanics_systems], must be performed. The crucial point is that the latter theory is *not* part of QT and cannot be added ’by hand’ in the limit $\hbar \to 0$. Thus, (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]) is, like QT, an *indeterministic* theory predicting not values of single event observables but only probabilities, which must be verified by ensemble measurements.
Given that we found a solution $S(x,t)$, $\rho(x,t)$ of , for given initial values, we may ask which experimental predictions can be made with the help of these quantities. Using the fields $\vec{\tilde{p}}(x,\,t)$, $\tilde{E}(x,\,t)$ defined by Eqs. , , the Hamilton-Jacobi equation takes the form $$\label{eq:DIE23TFADE}
\frac{\vec{\tilde{p}}^{2}(x,\,t)}{2m}+V(x,t)=\tilde{E}(x,\,t)
\mbox{,}$$ The l.h.s. of depends on the field $\vec{\tilde{p}}$ in the same way as a classical particle Hamiltonian on the (gauge-invariant) kinetic momentum $\vec{p}$. We conclude that the field $\vec{\tilde{p}}(x,\,t)$ describes a mapping from space-time points to particle momenta: If a particle (in an external electromagnetic field) is found at time $t$ at the point $x$, then its kinetic momentum is given by $\vec{\tilde{p}}(x,\,t)$. This is not a deterministic prediction since we can not predict if a single particle will be or will not be at time $t$ at point $x$; the present theory gives only a probability $\rho(x,t)$ for such an event. Combining our findings about $\vec{\tilde{p}}(x,\,t)$ and $x$ we conclude that the experimental prediction which can be made with the help of $S(x,t)$, $\rho(x,t)$ is given by the following phase space probability density: $$\label{eq:PPEE44W2S}
w(x,p,t)=\rho(x,t)\delta^{(3)}(p-\frac{\partial
\tilde{S}(x,t)}{\partial x})
\mbox{.}$$ Eq. confirms our claim that the classical limit theory is a statistical theory. The one-dimensional version of has been obtained before by means of a slightly different method in I. The deterministic element \[realized by the delta-function shaped probability in (\[eq:PPEE44W2S\])\] contained in the classical statistical theory (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]) is *absent* in QT, see I.
Eqs. (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]) constitute the mathematically well-defined limit $\hbar \to 0$ of Schrödinger’s equation. Insofar as there is general agreement with regard to two points, namely that (i) ’non-classicality’ (whatever this may mean precisely) is expressed by a nonzero $\hbar$, and that (ii) Schrödinger’s equation is the most important relation of quantum theory, one would also expect general agreement with regard to a further point, namely that Eqs. (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]) present essentially (for a three-dimensional configuration space) the *classical limit* of quantum mechanics. But this is, strangely enough, not the case. With a few exceptions, see [@vanvleck:correspondence], [@schiller:quasiclassical], [@ballentine:ehrenfest], [@shirai:reinterpretation], [@klein:schroedingers], most works (too many to be quoted) take it for granted that the classical limit of quantum theory is classical mechanics. The objective of papers like [@rowe:classical], [@werner:classical], [@landau_lj:macroscopic], [@allori_zanghi:classical] devoted to “..the classical limit of quantum mechanics..“ is very often not the problem: ”*what is* the classical limit of quantum mechanics ?” but rather: “*how to bridge the gap* between quantum mechanics and classical mechanics ?”. Thus, the fact that classical mechanics is the classical limit of quantum mechanics is considered as *evident* and any facts not compatible with it - like Eqs. (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]) - are denied.
What, then, is the reason for this widespread denial of reality ? One of the main reasons is the principle of reductionism which still rules the thinking of most physicists today. The reductionistic ideal is a hierarchy of physical theories; better theories have an enlarged domain of validity and contain ’inferior’ theories as special cases. This principle which has been extremely successful in the past *dictates* that classical mechanics is a special case of quantum theory. Successful as this idea might have been during a long period of time it is not necessarily universally true; quantum mechanics and classical mechanics describe different domains of reality, both may be true in their own domains of validity. Many phenomena in nature indicate that the principle of reductionism (alone) is insufficient to describe reality, see [@laughlin:everything]. Releasing ourselves from the metaphysical principle of reductionism, we accept that the classical limit of quantum mechanics for a three-dimensional configuration space is the statistical theory defined by Eqs. (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]). It is clear that this theory is not realized in nature (with the same physical meaning of the variables) because $\hbar$ is different from zero. But this is a different question and does not affect the conclusion.
Extended discussion {#sec:discussion}
===================
In this paper it has been shown, continuing the work of I, that the basic differential equation of non-relativistic QT may be derived from a number of clearly defined assumptions of a statistical nature. Although this does not exclude the possibility of other derivations, we consider this success as a strong argument in favor of the statistical interpretation of QT.
This result also explains, at least partly, the success of the canonical quantization rules . Strictly speaking, these rules have only been derived for a particular (though very important) special case, the Hamiltonian. However, one can expect that can be verified for all meaningful physical observables[^4]. On the other hand, it cannot be expected that the rules hold for arbitrary functions of $x,\,p$; each case has to be investigated separately. Thus, the breakdown of , as expressed by Groenewold’s theorem, is no surprise.
The fundamental Ehrenfest-like relations of the present theory establish \[like the formal rules \] a *correspondence* between particle mechanics and QT. Today, philosophical questions concerning, in particular, the ’reality’ of particles play an important role in the thinking of some physicists. So: ’What is this theory about.. ?’ While the present author is no expert in this field, the concept of *indeterminism*, as advocated by the philosopher [@popper:open_universe], seems to provide an appropriate philosophical basis for the present work.
The present method to introduce gauge fields by means of a multi-valued dynamic variable (’phase function’) was invented many years ago but leads, in the context of the present statistical theory, nevertheless to several new results. In particular, it has been shown in section \[sec:gaugecoupling\] that only the Lorentz force can exist as a fundamental macroscopic force if the statistical assumptions of section \[sec:calcwpt\] are valid. It is the only force (in the absence of spin effects, see the remarks below) that can be incorporated in a ’standard’ differential equation for the dynamical variables $\rho$, $S$. The corresponding terms in the statistical field equations, *representing* the Lorentz force, are given by the familiar gauge (minimal) coupling terms containing the potentials. The important fact that all forces in nature follow this ’principle of minimal coupling’ is commonly explained as a consequence of local gauge invariance. The present treatment offers an alternative explanation.
Let us use the following symbolic notation to represent the relation between the local force and the terms representing its action in a statistical context: $$\label{eq:IRD7SAI1NC}
\Phi,\,\vec{A} \Rightarrow
e\vec{E}+\frac{e}{c}\vec{v} \times \vec{B}
\mbox{.}$$ The fields $\vec{E}$ and $\vec{B}$ are uniquely defined in terms of the potentials $\phi$ and $\vec{A}$ \[see (\[eq:DFS32WUSS\])\] while the inverse is not true. Roughly speaking, the local fields are ’derivatives’ of the potentials - and the potentials are ’integrals’ of the local fields; this mathematical relation reflects the physical role of the potentials $\phi$ and $\vec{A}$ as statistical representatives of the local fields $\vec{E}$ and $\vec{B}$, as well as their non-uniqueness. It might seem that the logical chain displayed in (\[eq:IRD7SAI1NC\]) is already realized in the classical treatment of a particle-field system, where potentials have to be introduced in order to construct a Lagrangian, see e.g. [@landau.lifshitz:classical]. However, in this case, the form of the local force is not derived but postulated. The present treatment ’explains’ the form of the Lagrangian - as a consequence of the basic assumptions listed in section \[sec:calcwpt\].
The generalization of the present theory to spin, reported in sections \[sec:spin\] and \[sec:spin-fish-inform\], leads to a correspondence similar to Eq. (\[eq:IRD7SAI1NC\]), namely $$\label{eq:IRUD2AO9NC}
\vec{\mu}\vec{B}
\rightarrow
\vec{\mu} \cdot \frac{\partial}{\partial\vec{x} } \vec{B}
\mbox{.}$$ The term linear in $\vec{B}$, on the l.h.s. of (\[eq:IRUD2AO9NC\]), plays the role of a ’potential’ for the local force on the r.h.s. The points discussed after Eq. (\[eq:IRD7SAI1NC\]) apply here as well \[As a matter of fact we consider $\vec{B}$ as a unique physical quantity; it would not be unique if it would be defined in terms of the tensor on the r.h.s. of (\[eq:IRUD2AO9NC\])\]. We see here a certain analogy between gauge and spin interaction terms. Unfortunately, the derivation of the spin force on the r.h.s. of (\[eq:IRUD2AO9NC\]) requires - in contrast to the Lorentz force - additional assumptions (see the remarks in sections \[sec:spin-fish-inform\], \[sec:spin-as-gauge\]).
Our notation for potentials $\phi,\,\vec{A}$, fields $\vec{E},\,\vec{B}$, and parameters $e,\,c$ suggests that these quantities are electrodynamical in nature. However, this is not necessarily true. By definition, the fields $\vec{E},\,\vec{B}$ obey four equations (the homogeneous Maxwell equations), which means that additional conditions are required in order to determine these six fields. The most familiar possibility is, of course, the second pair of Maxwell’s equations. A second possible realization for the fields $\vec{E},\,\vec{B}$ is given by the *inertial* forces acting on a mass $m$ in an arbitrarily accelerated reference frame, see [@hughes:feynmans]. The inertial gauge field may also lead to a spin response of the ensemble; such experiments have been proposed by [@mashhoon_kaiser:inertia]. It is remarkable that the present theory establishes an (admittedly somewhat vague) link between the two extremely separated physical fields of inertia and QT.
It is generally assumed that the electrodynamic potentials have a particular significance in QT which they do not have in classical physics. Let us analyze this statement in detail. The first part of the statement, concerning the significance of the potentials, is of course true. The second part, asserting that in classical physics all external influences *can* be described solely in terms of field strengths, is wrong. More precisely, it is true for classical mechanics but not for classical physics in general. A counterexample - a theory belonging to classical physics but with potentials playing an indispensable role - is provided by the classical limit (\[eq:CONACL7TMF\]),(\[eq:QHJCL74MF\]) of Schrödinger’s equation. In this field theory the potentials play an indispensable role because (in contrast to particle theories, like the canonical equations) no further derivatives of the Hamiltonian, which could restore the fields, are to be performed. This means that the significance of the potentials is not restricted to quantum theory but rather holds for the whole class of *statistical* theories discussed above, which contains both quantum theory and its classical limit theory as special cases. This result is in agreement with the statistical interpretation of potentials proposed in section \[sec:gaugecoupling\].
The precise characterization of the role of the potentials is of particular importance for the interpretation of the Aharonov-Bohm effect. The ’typical quantum-mechanical features’ observed in these phase shift experiments should be identified by comparing the quantum mechanical results not with classical mechanics but with the predictions of the classical statistical theory (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]). The predictions of two statistical theories, both of which use potentials to describe the influence of the external field, have to be compared.
The limiting behavior of Schrödinger’s equation as $\hbar\rightarrow 0$, discussed in section \[sec:classical-limit\], is very important for the proper interpretation of QT. The erroneous belief (wish) that this limit can (must) be identified with classical mechanics is closely related to the erroneous belief that QT is able to describe the dynamics of individual particles. In this respect QT is obviously an *incomplete* theory, as has been pointed out many times before, during the last eighty years, see e.g. [@einstein:reply], [@margenau:quantum-mechanical] , [@ballentine:statistical], [@held:axiomatic]. Unfortunately, this erroneous opinion is historically grown and firmly established in our thinking as shown by the ubiquitous use of phrases like ’the wave function of the electron’. But it is clear that an erroneous identification of the domain of validity of a physical theory will automatically create all kinds of mysteries and unsolvable problems - and this is exactly what happens. Above, we have identified one of the (more subtle) problems of this kind, concerning the role of potentials in QT, but many more could be found. Generalizing the above argumentation concerning potentials, we claim that characteristic features of QT cannot be identified by comparison with classical mechanics. Instead, quantum theory should be compared with its classical limit, which is in the present $3D$-case given by (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]) - we note in this context that several ’typical’ quantum phenomena have been explained by [@kirkpatrick:quantal] in terms of classical probability theory. One has to compare the solutions of the classical, nonlinear equations (\[eq:CONACL7TMF\]), (\[eq:QHJCL74MF\]) with those of the quantum mechanical, linear equations, (\[eq:CONH3TMF\]), (\[eq:QHJH34MF\]), in order to find out which ’typical quantum-mechanical features’ are already given by statistical (nonlocal) correlations of the classical limit theory and which features are really quantum-theoretical in nature - related to the nonzero value of $\hbar$.
Summary {#sec:concludingremarks}
=======
In the present paper it has been shown that the method reported in I, for the derivation of Schrödinger’s equation, can be generalized in such a way that essentially all aspects of non-relativistic QT are taken into account. The success of this derivation from statistical origins is interpreted as an argument in favor of the SI. The treatment of gauge fields and spin in the present statistical framework led to several remarkable new insights. We understand now why potentials (and not local fields) occur in the field equations of QT. The non-uniqueness of the potentials and the related concept of gauge invariance are no longer a mystery. Spin is derived as a kind of two-valuedness of a statistical ensemble. The local forces associated with the gauge potentials, the Lorentz force and the force experienced by a particle with magnetic moment, can also be derived. Apart from some open questions in the area of non-relativistic physics, a major problem for future research is a relativistic generalization of the present theory.
[xx]{}
Aharonov, Y. Bohm, D. 1959. Significance of electromagnetic potentials in quantum theory, [ *Phys. Rev.*]{} 115(3): 485.
Ali, A. H. 2009. The ensemble quantum state of a single particle, [*Int. J. Theor. Phys.*]{} 48: 194–212.
Allori, V. Zanghi, N. 2009. On the classical limit of quantum mechanics, [*Foundations of Physics*]{} 39: 20–32.
Anderson, P. W. 1972. More is different, [*Science*]{} 177(4047): 393–396.
Arunsalam, V. 1970. Hamiltonians and wave equations for particles of spin $0$ and spin $\frac{1}{2}$ with nonzero mass, [*Am. J. Phys.*]{} 38: 1010–1022.
Ballentine, L. E. 1970. The statistical interpretation of quantum mechanics, [*Reviews of Modern Physics*]{} 42: 358–381.
Ballentine, L. E. 1994. Inadequacy of [E]{}hrenfest’s theorem to characterize the classical regime, [*Physical Review*]{} A 50: 2854–2859.
Belinfante, F. J. 1975. , Pergamon Press, Oxford.
Belinfante, F. J. 1978. Can individual elementary particles have individual properties?, [ *Am. J. Phys.*]{} 46(4): 329–336.
Blokhintsev, D. I. 1964. , Reidel, Dordrecht.
Bocchieri, P. Loinger, A. 1978 . Nonexistence of the [A]{}haronov-[B]{}ohm effect, [*Nuovo Cimento*]{} 47A: 475–482.
Dartora, C. A. Cabrera, G. G. 2008. Magnetization, spin current, and spin-transfer torque from [SU(2)]{} local gauge invariance of the nonrelativistic [Pauli-Schrödinger]{} theory, [*Physical Review*]{} B 78: 012403.
Dirac, P. A. M. 1931. Quantised singularities in the electromagnetic field, [*Proc. R. Soc. London, Ser. A*]{} 133: 60–72.
Einstein, A. 1936. Physics and reality, [*J. Franklin Inst.*]{} 221: 349.
Einstein, A. 1949. , Harper and Row, New York, p. 665.
Frieden, B. R. 1989. Fisher information as the basis for the [Schr[ö]{}dinger]{} wave equation, [*Am. J. Phys.*]{} 57(11): 1004–1008.
Frieden, B. R. 2004. , Cambridge University Press, Cambridge.
Gotay, M. J. 1999. On the [G]{}roenewold-[V]{}an [H]{}ove problem for [R]{}$^{2n}$, [*J. Math. Phys.*]{} 40: 2107–2116.
Gould, R. J. 1995. The intrinsic magnetic moment of elementary particles, [*Am. J. Phys.*]{} 64: 597–601.
Greiner, W. 1989. , Springer, New York.
Groenewold, H. J. 1946. On the principles of elementary quantum mechanics, [*Physica*]{} 12: 405–460.
Hall, M. J. Reginatto, M. 2002a. Quantum mechanics from a [H]{}eisenberg-type equality, [*Fortschr. Phys.*]{} 50: 5–7.

Hall, M. J. Reginatto, M. 2002b. Schr[ö]{}dinger equation from an exact uncertainty principle, [*J. Phys. A*]{} 35: 3289–3303.
Hall, M. J. W. 2005. Exact uncertainty approach in quantum mechanics and quantum gravity, [*Gen. Relativ. Grav.*]{} 37: 1505–1515.
Held, C. 2008. Axiomatic quantum mechanics and completeness, [*Foundations of Physics*]{} 38: 707–732.
Holland, P. R. 1995. , Cambridge University Press, Cambridge, U.K.
Hughes, R. J. 1992. On [F]{}eynman’s proof of the [M]{}axwell equations, [*Am. J. Phys.*]{} 60: 301–306.
Kaempfer, F. A. 1965. , Academic Press, New York.
Kemble, E. C. 1929. The general principles of quantum mechanics. part [I]{}, [*Rev. Mod. Phys.*]{} 1: 157–215.
Kirkpatrick, K. A. 2003. ’[Q]{}uantal’ behavior in classical probability, [*Foundations of Physics Letters*]{} 16: 199–224.
Klein, U. 1981. Comment on ’[C]{}ondition for nonexistence of [A]{}haronov-[B]{}ohm effect’, [*Physical Review D*]{} 23: 1463–1465.
Klein, U. 2009. Schr[ö]{}dinger’s equation with gauge coupling derived from a continuity equation, [*Foundations of Physics*]{} 39: 964.
Klein, U. 2011. The statistical origins of quantum mechanics, [*Physics Research International*]{} Article ID 808424.
Kobe, D. H. Yang, K. 1985. Gauge transformation of the time-evolution operator, [*Phys. Rev. A*]{} 32: 952–958.
Kr[ü]{}ger, T. 2004. An attempt to close the [Einstein-Podolsky-Rosen]{} debate, [*Can. J. Phys.*]{} 82: 53–65.
Landau, L. D. Lifshitz, E. M. 1967. , Vol. II of [*Course of theoretical physics*]{}, 5 edn, Pergamon Press, Oxford. Translation from Russian, Nauka, Moscow, 1973.
Landau, L. J. 1996. Macroscopic observation of a quantum particle in a slowly varying potential - on the classical limit of quantum theory, [*Annals of Physics*]{} 246: 190–227.
Laughlin, R. B. 2005. , Basic Books, Cambridge.
Laughlin, R. B. Pines, D. 2000 . The theory of everything, [*Proc. Natl. Acad. Sci. USA*]{} 97: 28–31.
Lee, Y. C. Zhu, W. 1999. The principle of minimal quantum fluctuations for the time-dependent [S]{}chr[ö]{}dinger equation, [*J. Phys. A*]{} 32: 3127–3131.
Levy-Leblond, J. M. 1967. Nonrelativistic particles and wave equations, [*Comm. Math. Physics*]{} 6: 286–311.
London, F. 1927. Quantenmechanische [D]{}eutung der [T]{}heorie von [W]{}eyl, [*Z. Phys.*]{} 42: 375.
Margenau, H. 1935. Quantum-mechanical description, [*Physical Review*]{} 49: 24–242.
Margenau, H. 1963. Measurements in quantum mechanics, [*Annals of Physics*]{} 23: 469–485.
Mashhoon, B. Kaiser, H. 2006. Inertia of intrinsic spin, [*Physica B*]{} 385-386: 1381–1383.
Morrison, M. 2007. Spin: [A]{}ll is not what it seems, [*Studies in history and philosophy of modern physics*]{} 38: 529–557.
Motz, L. 1962. Quantization and the classical [H]{}amilton-[J]{}acobi equation, [ *Phys. Rev.*]{} 126: 378–382.
Newman, R. G. 1980. Probability interpretation of quantum mechanics, [*Am. J. Phys.*]{} 48: 1029–1034.
Ohanian, H. C. 1986. What is spin ?, [*Am. J. Phys.*]{} 54: 500–505.
Peshkin, M. Tonomura, A. 1989. , Lecture Notes in Physics, Springer Verlag, Berlin.
Pippard, A. B. 1986. The interpretation of quantum mechanics, [*Eur. J. Phys.*]{} 7: 43–48.
Popper, K. R. 1982. , Rowman and Littlefield, Totowa.
Reginatto, M. 1998a. Derivation of the equations of nonrelativistic quantum mechanics using the principle of minimum [F]{}isher information, [*Phys. Rev. A*]{} 58: 1775–1778.
Reginatto, M. 1998b. Derivation of the [P]{}auli equation using the principle of minimum [F]{}isher information, [*Physics Letters*]{} A 249: 355–357.
Rosen, N. 1964. The relation between classical and quantum mechanics, [*Am. J. Phys.*]{} 32: 597–600.
Ross-Bonney, A. A. 1975. Does god play dice ? - a discussion of some interpretations of quantum mechanics, [*Nuovo Cimento*]{} 30 B: 55.
Rowe, E. G. P. 1991. Classical limit of quantum mechanics [(]{}electron in a magnetic field[)]{}, [*Am. J. Phys.*]{} 59: 1111–1117.
Roy, S. M. 1980. Condition for nonexistence of [A]{}haronov-[B]{}ohm effect, [*Physical Review Letters*]{} 44(3): 111–114.
Schiller, R. 1962a. Quasi-classical theory of the nonspinning electron, [*Phys. Rev.*]{} 125(3): 1100–1108.
Schiller, R. 1962b. Quasi-[C]{}lassical [T]{}heory of the [S]{}pinning [E]{}lectron, [*Phys. Rev.*]{} 125(3): 1116–1123.
Schr[ö]{}dinger, E. 1926. Quantisierung als [E]{}igenwertproblem, [E]{}rste [M]{}itteilung, [ *Annalen der Physik*]{} 79: 361.
Shirai, H. 1998. Reinterpretation of quantum mechanics based on the statistical interpretation, [*Foundations of Physics*]{} 28: 1633–1662.
Synge, J. L. 1960. , Springer, Berlin, pp. 1–223.
Syska, J. 2007. Fisher information and quantum-classical field theory: classical statistics similarity, [*phys. stat. sol.(b)*]{} 244: 2531–2537.
Takabayasi, T. 1955. The vector representation of spinning particles in the quantum theory [I]{}, [*Progress in Theoretical Physics.*]{} 14: 283–302.
Toyozawa, Y. 1992. Theory of measurement, [*Progress of Theoretical Physics*]{} 87: 293–305.
Tschudi, H. R. 1987. On the statistical interpretation of quantum mechanics, [ *Helvetica Physica Acta*]{} 60: 363–383.
Van Vleck, J. H. 1928. The correspondence principle in the statistical interpretation of quantum mechanics, [*Proc. Natl. Acad. Sci. U.S.*]{} 14: 178–188.
Werner, R. F. Wolff, M. P. H. 1995. Classical mechanics as quantum mechanics with infinitesimal $\hbar$, [*Physics Letters A*]{} 202: 155–159.
Weyl, H. 1929. Elektron und [G]{}ravitation [I]{}, [*Z. Phys.*]{} 56: 330–352.
Young, R. H. 1980. Quantum mechanics based on position, [*Foundations of Physics*]{} 10: 33–56.
[^1]: I cannot report here a list, all the less a description, of all the quantum mechanical paradoxes found in the last eighty years.
[^2]: The listing given here is far from complete.
[^3]: These considerations seem relevant for attempts to define phase-space densities, e.g. of the Wigner type, in QT
[^4]: As indicated by preliminary calculations of the angular momentum relation corresponding to Eq.
|
---
abstract: 'The longitudinal spin structure factor for the $XXZ$-chain at small wave-vector $q$ is obtained using Bethe Ansatz, field theory methods and the Density Matrix Renormalization Group. It consists of a peak with peculiar, non-Lorentzian shape and a high-frequency tail. We show that the width of the peak is proportional to $q^2$ for finite magnetic field compared to $q^3$ for zero field. For the tail we derive an analytic formula without any adjustable parameters and demonstrate that the integrability of the model directly affects the lineshape.'
author:
- 'R. G. Pereira'
- 'J. Sirker'
- 'J.-S. Caux'
- 'R. Hagemans'
- 'J. M. Maillet'
- 'S. R. White'
- 'I. Affleck'
title: 'The dynamical spin structure factor for the anisotropic spin-$1/2$ Heisenberg chain'
---
One of the seminal models in the field of strong correlation effects is the antiferromagnetic spin-$1/2$ $XXZ$-chain $$H=J\sum_{j=1}^{N}\left[S_{j}^{x}S_{j+1}^{x}+S_{j}^{y}S_{j+1}^{y}+\Delta
S_{j}^{z}S_{j+1}^{z}-hS_{j}^{z}\right]\; ,
\label{XXZ}$$ where $J>0$ is the coupling constant and $h$ a magnetic field. The parameter $\Delta$ describes an exchange anisotropy and the model is critical for $-1<\Delta\leq 1$. Recently, much interest has focused on understanding its dynamics, in particular, the spin [@Zotos] and the heat conductivity [@KluemperSakai], both at wave-vector $q=0$. A related important question refers to dynamical correlation functions at small but nonzero $q$, in particular the dynamical spin structure factors $S^{\mu\mu}(q,\omega)$, $\mu=x,y,z$ [@Muller]. These quantities are in principle directly accessible by inelastic neutron scattering. Furthermore, they are important to resolve the question of ballistic versus diffusive transport raised by recent experiments [@Thurber] and would also be useful for studying Coulomb drag for two quantum wires [@Glazman].
In this letter we study the lineshape of the longitudinal structure factor $S^{zz}(q,\omega)$ at zero temperature in the limit of small $q$. Our main results can be summarized as follows: By calculating the form factors $F(q,\omega)\equiv\left\langle 0\left|S_{q}^{z}\right|\alpha\right\rangle $ (here $|0\rangle$ is the ground state and $|\alpha\rangle$ an excited state) for finite chains based on a numerical evaluation of exact Bethe Ansatz (BA) expressions [@KitanineMaillet; @CauxMaillet] we establish that $S^{zz}(q,\omega)$ consists of a peak with peculiar, non-Lorentzian shape centered at $\omega\sim vq$, where $v$ is the spin-wave velocity, and a high-frequency tail. We find that $|F(q,\omega)|$ is a rapidly decreasing function of the number of particles involved in the excitation. In particular, we find for all $\Delta$ that the peak is completely dominated by two-particle (single particle-hole) and the tail by four-particle states (denoted by 2$p$ and 4$p$ states, respectively). Including up to eight-particle as well as bound states we verify using Density Matrix Renormalization Group (DMRG) that the sum rules are fulfilled with high accuracy corroborating our numerical results. By solving the BA equations for small $\Delta$ and infinite system size analytically we show that the width of the peak scales like $q^2$ for $h\neq 0$. Furthermore, we calculate the high-frequency tail analytically based on a parameter-free effective bosonic Hamiltonian. We demonstrate that our analytical results for the linewidth and the tail are in excellent agreement with our numerical data.
For a chain of length $N$ the longitudinal dynamical structure factor is defined by $$\begin{aligned}
S^{zz}\left(q,\omega\right)&=&\frac{1}{N}\sum_{j,j^{\prime}=1}^{N}e^{-iq\left(j-j^{\prime}\right)}\int_{-\infty}^{+\infty}\!\!\!\!\!\!\!\!
dt\,
e^{i\omega t}\left\langle
S_{j}^{z}\left(t\right)S_{j^{\prime}}^{z}\left(0\right)\right\rangle
\nonumber \\
&=& \frac{2\pi}{N}\sum_{\alpha}\left|\left\langle
0\left|S_{q}^{z}\right|\alpha\right\rangle
\right|^{2}\delta\left(\omega-E_{\alpha}\right) \; .
\label{strucFac}\end{aligned}$$ Here $S_{q}^{z}=\sum_{j}S_{j}^{z}e^{-iqj}$ and $\left|\alpha\right\rangle $ is an eigenstate with energy $E_{\alpha}$ above the ground state energy. For a finite system, $S^{zz}\left(q,\omega\right)$ at fixed $q$ is a sum of $\delta$-peaks at the energies of the eigenstates. In the thermodynamic limit $N\rightarrow\infty$, the spectrum is continuous and $S^{zz}\left(q,\omega\right)$ becomes a smooth function of $\omega$ and $q$. By linearizing the dispersion around the Fermi points and representing the fermionic operators in terms of bosonic ones the Hamiltonian (\[XXZ\]) at low energies becomes equivalent to the Luttinger model [@GiamarchiBook]. For this free boson model $S^{zz}(q,\omega)$ can be easily calculated and is given by $$S^{zz}\left(q,\omega\right)=K\left|q\right|\delta\left(\omega-v\left|q\right|\right)\, ,
\label{Szz_freeBoson}$$ where $K$ is the Luttinger parameter. This result is a consequence of Lorentz invariance: a single boson with momentum $\left|q\right|$ always carries energy $\omega=v|q|$, leading to a $\delta$-function peak at this level of approximation.
We expect the simple result (\[Szz\_freeBoson\]) to be modified in various ways. First of all, the peak at $\omega\sim vq$ should acquire a finite width $\gamma_q$. The latter can be easily calculated for the $XX$ point, $\Delta=0$, where the model is equivalent to non-interacting spinless fermions. In this case the only states that couple to the ground state via $S^z_q$ are those containing a single particle-hole excitation (2$p$ states). As a result, the exact $S^{zz}(q,\omega)$ is finite only within the boundaries of the 2$p$ continuum. For $h\neq 0$, one finds $\gamma_q\approx q^2/m$ for small $q$, where $m=(J\cos k_F)^{-1}$ is the effective mass at the Fermi momentum $k_F$. For $h=0$, $m^{-1}\to 0$ and the width becomes instead $\gamma_q\approx Jq^3/8$. In both cases the non-zero linewidth is associated with the band curvature at the Fermi level and sets a finite lifetime for the bosons in the Luttinger model. Different attempts to calculate $\gamma_q$ for $\Delta\neq0$ have focused on perturbation theory in the band curvature terms [@Samokhin] or in the interaction $\Delta$ [@Kopietz; @Teber] and contradictory results were found. All these approaches have to face the breakdown of perturbation theory near $\omega\sim vq$.
Since perturbative approaches show divergences on shell, our discussion about the broadening of the peak is based on the BA solution. The BA allows us to calculate the energy of an eigenstate exactly from a system of coupled non-linear equations [@tak99]. For $\Delta=0$ these equations decouple, the structure factor is determined by 2$p$ states only and one recovers the free fermion solution. For $|\Delta|\ll 1$ the most important excitations are still of the 2$p$ type and one can obtain the energies of these eigenstates analytically in the thermodynamic limit by expanding the BA equations in lowest order in $\Delta$. For $h\neq0$ (*i.e.*, finite magnetization $s\equiv \left\langle S_j^z\right\rangle$) this leads to $$\label{BA7}
\gamma_q = 4J \l(1+\frac{2\Delta}{\pi}\sin k_F\r)\cos k_F \sin^2\frac{q}{2}
\approx \frac{q^2}{m^*}$$ for the 2$p$-type excitations. We therefore conclude that the interaction does not change the scaling of $\gamma_q$ compared to the free fermion case but rather induces a renormalization of the mass given by $m\rightarrow m^* =
m/(1+2\Delta\sin k_F/\pi)$. We have verified our analytical small $\Delta$ result by calculating the form factors numerically [@CauxMaillet]. For all $\Delta$, we find that excitations involving more than two particles have negligible spectral weight in the peak region. In Fig. \[fig1\] we therefore show only the form factors for the 2$p$ states and a typical set of parameters.
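As a quick numerical cross-check of (\[BA7\]), the short sketch below (Python; we set $J=1$ and use $\Delta=0.25$ and $k_F=2\pi/5$, the values corresponding to Fig. \[fig1\]) reproduces the prefactor $0.356$ quoted in the inset and shows that the full $\sin^2(q/2)$ expression reduces to $q^2/m^*$ for small $q$:

``` python
import numpy as np

J, Delta = 1.0, 0.25            # anisotropy as in Fig. 1; energies in units of J
kF = 2.0 * np.pi / 5.0          # Fermi momentum corresponding to s = -0.1

# Renormalized inverse mass 1/m* = J cos(kF) * (1 + 2*Delta*sin(kF)/pi)
inv_mstar = J * np.cos(kF) * (1.0 + 2.0 * Delta * np.sin(kF) / np.pi)
print(f"1/m* = {inv_mstar:.3f}")        # -> 0.356, the prefactor in the inset

# Width of Eq. (BA7) versus its small-q expansion q^2/m*
for q in (2 * np.pi / 200, 2 * np.pi / 50, 2 * np.pi / 25):
    gamma_q = 4 * J * (1 + 2 * Delta * np.sin(kF) / np.pi) \
              * np.cos(kF) * np.sin(q / 2) ** 2
    print(f"q = {q:.4f}:  gamma_q = {gamma_q:.6f},  q^2/m* = {inv_mstar * q**2:.6f}")
```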
![(Color online) Form factors squared for the 2$p$ excitations and different $N$ at $\Delta=0.25$, $s=-0.1$ and $q=2\pi/25$. The inset shows the scaling of the width $\gamma_q$. The points are obtained by an extrapolation $N\to\infty$ of the numerical data. The solid line is the prediction (\[BA7\]), $\gamma_q=0.356\, q^2$.[]{data-label="fig1"}](Lineshape_D0.25_s0.1.eps){width="0.85\columnwidth"}
The form factors, for different chain lengths, $N$, collapse onto a single curve determining the lineshape of $S^{zz}(q,\omega)$ except for a high-frequency tail discussed later. The form factors are enhanced near the lower threshold $\omega_L(q)$ and suppressed near the upper threshold $\omega_U(q)$ in contrast to the almost flat distribution for $\Delta=0$. The lineshape agrees qualitatively with the recent result in [@Glazmannew] predicting a power-law singularity at $\omega_L(q)$ with a $q$- and $\Delta$-dependent exponent. The inset of Fig. \[fig1\] provides a numerical confirmation that $\gamma_q\sim q^2$, with the correct pre-factor as predicted in (\[BA7\]) for $k_F=2\pi/5$. For zero field, the bounds of the 2$p$ continuum are known analytically [@CloizeauxGaudin] and lead to a scaling $\gamma_q\sim q^3$ for $-1<\Delta\leq 1$. Furthermore, for $h=0$ and $\Delta=1$, an exact result for the 2$p$ contributions to the structure factor has been derived [@KarbachMueller]. Calculating a small number of form factors for finite chains poses two important questions: 1) For finite chains $S^{zz}(q,\omega)$ at small $q$ is dominated by 2$p$ excitations. Is this still true in the thermodynamic limit? 2) How much of the spectral weight does the relatively small number of form factors calculated account for? We can shed some light on these questions by considering the sum rule $I(q)=(2\pi)^{-1}\int d\omega\, S^{zz}(q,\omega) = N^{-1}\langle S^z_q
S^z_{-q}\rangle$ where the static correlation function can be obtained by DMRG. As an example, we consider again $\Delta=0.25$, $s=-0.1$, $q=2\pi/25$ with $N=200$. For this case we have calculated $2,220,000$ form factors including up to $8p$ excited states as well as bound states. Note, however, that this is still small compared to a total number of states of $2^{200}$. In the DMRG up to $2400$ states were kept and the ordinary two-site method was utilized but with corrections to the density matrix to ensure good convergence with periodic boundary conditions [@White]. The typical truncation error was then $\sim 10^{-10}$ and within the accuracy of the DMRG calculation (3 parts in $10^5$) the $2,220,000$ form factors account for 100% of $I(q)$. 99.97% of the spectral weight is concentrated in $I_2(q)$, the contribution caused by the $qN/2\pi=8$ single particle-hole type excitations at $\omega\sim vq$. With increasing $N$ we observe an extremely slow decrease in $I_2(q)$; however, even for a system of $2400$ sites, $I_2(q)$ is only reduced by 0.13% compared to the $N=200$ case. While this large-$N$ behavior definitely requires further investigation, it may not be very relevant to experiments, where effective chain lengths are limited by defects.
Another feature missed in (\[Szz\_freeBoson\]) is the small spectral weight extending up to high frequencies $\omega\sim J$. This is relevant in the context of drag resistance in quantum wires because of the equivalence of $S^{zz}(q,\omega)$ and the density-density correlation function for spinless fermions [@Glazman]. To calculate the high-frequency tail we start from the Luttinger model $$\mathcal{H}_{LL}=\frac{v}{2}\left[\Pi^2+\left(\partial_x\phi\right)^2\right] \, .
\label{LL}$$ Here, $\phi(x)$ is a bosonic field and $\Pi(x)$ its conjugated momentum satisfying $[\phi(x),\Pi(y)]=i\delta(x-y)$. The slowly varying part of the spin operator is expressed as $S^z_j\sim\sqrt{K/\pi}\, \partial_x\phi$. Note that both $v$ and $K$ depend on $\Delta$ and $h$. In the language of the Luttinger model, the spectral weight at high frequencies is made possible by boson-boson interactions. Therefore, we add to the model (\[LL\]) the following terms $$\begin{aligned}
\delta\mathcal{H}(x) &=&
\eta_{-}\left[\left(\partial_{x}\phi_{L}\right)^{3}-\left(\partial_{x}\phi_{R}\right)^{3}\right]
\nonumber \\
&+& \eta_{+}\left[\left(\partial_{x}\phi_{L}\right)^{2}\partial_{x}\phi_{R}-\left(\partial_{x}\phi_{R}\right)^{2}\partial_{x}\phi_{L}\right]
\nonumber \\
&+&\zeta_-\l[(\partial_x\phi_L)^4+(\partial_x\phi_R)^4\right]+\zeta_+(\partial_x\phi_L)^2(\partial_x\phi_R)^2
\nonumber \\
&+&
\zeta_3\l[\partial_x\phi_L(\partial_x\phi_R)^3+\partial_x\phi_R(\partial_x\phi_L)^3\r]
\nonumber \\
&+& \lambda \cos\l(4\sqrt{\pi K}\phi+4k_Fx\r)\; ,
\label{Ham}
\end{aligned}$$ where $\phi_{R,L}$ are the right- and left-moving components of the bosonic field with $\phi=(\phi_L-\phi_R)/\sqrt{2}$. They obey the commutation relations $[\partial_x\phi_{L,R}(x),\partial_x\phi_{L,R}(y)]=\mp i\partial_y\delta (x-y)$. These are the leading irrelevant operators stemming from band curvature and the interaction part. The amplitudes $\eta_\pm$, $\zeta_\pm$, $\zeta_3$ and $\lambda$ are functions of $\Delta$ and $h$. For $h\neq 0$ the $\lambda$-term (Umklapp term) is oscillating and can therefore be omitted at low energies. Besides, the $\zeta$-terms have a higher scaling dimension than the $\eta_\pm$-terms, so the latter yield the leading corrections. For $h=0$, on the other hand, particle-hole symmetry dictates that $\eta_\pm=0$ and we must consider the $\zeta$-terms as well as the Umklapp term. For $\gamma_{q}\ll\omega-v\left|q\right|\ll J$ it is safe to use finite order perturbation theory in these irrelevant terms.
In the finite field case the tail is due to the $\eta_{+}$-interaction. This allows for intermediate states with one right- and one left-moving boson, which together can carry small momentum but high energy $\omega\gg
v\left|q\right|$. It is convenient to write the structure factor defined in (\[strucFac\]) as $S^{zz}(q,\omega)=-2\,\mbox{Im}\,\chi^{ret}(q,\omega)$ where $\chi^{ret}=-(K/\pi)\langle\partial_x\,\phi\partial_x\phi\rangle$ is the retarded spin-spin correlation function. The correction at lowest order in $\eta_+$ to the free boson result then reads $$\begin{aligned}
&&\!\!\!\!\!\!\!\delta\chi\left(q,i\omega_{n}\right)=\left[D_{R}^{\left(0\right)}\left(q,i\omega_{n}\right)+D_{L}^{\left(0\right)}\left(q,i\omega_{n}\right)\right]^{2}\Pi_{RL}\left(q,i\omega_{n}\right)
\nonumber \\
&&\Pi_{RL}\left(q,i\omega_{n}\right)=-\frac{2K\eta_{+}^{2}}{\pi}\int\int dxd\tau
e^{-i(qx-\omega_{n}\tau)} \\
&&\quad\quad\quad\quad\quad\quad\times D_{R}^{\left(0\right)}\left(x,\tau\right)D_{L}^{\left(0\right)}\left(x,\tau\right). \nonumber
\label{fF_selfE}\end{aligned}$$ where $D_{R,L}^{(0)}(q,i\omega_n)=\langle\partial_x
\phi_{L,R}\,\partial_x\phi_{L,R}\rangle =\pm|q|/(i\omega_n\mp v|q|)$ are the free boson propagators for the right- and left-movers, respectively, and $\Pi_{RL}\left(q,i\omega_{n}\right)$ is the self-energy. The tail of $S^{zz}(q,\omega)$ for $h\neq 0$ is then given by $$\delta
S^{zz}_{\eta_+}\left(q,\omega\right)=\frac{K\eta_{+}^{2}q^{4}}{v\pi}\,\frac{\theta\left(\omega-v\left|q\right|\right)}{\omega^{2}-v^{2}q^{2}}.
\label{fF_tail}$$ For $h=0$ a connection between the integrability of the $XXZ$-model and the parameters in the corresponding low-energy effective theory exists [@Bazhanov]. The integrability is related to an infinite set of conserved quantities where the first nontrivial one is the energy current defined by $J_E=\int dx\, j_E(x)$ with $\partial_x\, j_E(x) = i [\mathcal{H}(x),\int dy
\mathcal{H}(y)]$ [@ZotosPrelovsek]. For the Hamiltonian (\[Ham\]) we find $$\begin{aligned}
j_E &=& -\frac{v}{2}\l[(\partial_x\phi_L)^2-(\partial_x\phi_R)^2\r]
- 4\zeta_-\l[(\partial_x\phi_L)^4-(\partial_x\phi_R)^4\r] \nonumber \\
&+& 2\zeta_3\l[\partial_x\phi_L(\partial_x\phi_R)^3-\partial_x\phi_R(\partial_x\phi_L)^3\r]
+\ldots
\; , \label{heat}\end{aligned}$$ where the neglected terms contain more than four derivatives. Now conservation of the energy current, $[J_E, H] = 0$, implies $\zeta_3 =
0$ [^1]. The spectral weight at high frequencies is therefore given by the $\zeta_{+}$ and $\lambda$-terms only. The perturbation theory for the $\zeta_+$-term is analogous to the one for the $\eta_+$-term. Now the incoming left (right) boson can decay into one left (right) and two right (left) bosons. This contribution is then given by $$\delta
S^{zz}_{\zeta_+}\left(q,\omega\right)=\frac{K\zeta_+^2}{48\pi^2v^3}q^2(\omega^2-v^2q^2)\theta(\omega-v|q|)\;
.
\label{integer_tail}$$ For the Umklapp term, we calculate the correlations following [@Schulz] and find $$\delta
S^{zz}_\lambda\left(q,\omega\right)=Aq^2(\omega^2-v^2q^2)^{4K-3}\theta(\omega-v|q|)\; ,
\label{frac_tail}$$ where $A=8\pi^2\lambda^2K^2(2v)^{3-8K}/\Gamma^2(4K)$. We remark that, in a more general non-integrable model, the $\zeta_3$ term in Eq. (\[Ham\]) leads to an additional contribution to the tail which decreases with energy and becomes large near $\omega\sim vq$. The increasing tail found in the integrable case implies a non-monotonic behavior of $S^{zz}(q,\omega )$. Eqs. (\[fF\_tail\]), (\[integer\_tail\]) and (\[frac\_tail\]) are valid in the thermodynamic limit. We can extend these results for finite systems and express them in terms of the form factors appearing in (\[strucFac\]). For a given momentum $q=2\pi n/N$, the form factors generated by integer dimension operators as in (\[fF\_tail\]) and (\[integer\_tail\]) will then be situated at the discrete energies $\omega_l
=2\pi v l/N$ with $l=n+2,n+4,\cdots$. The form factors belonging to (\[frac\_tail\]), on the other hand, will have energies $\omega_l =2\pi
v(l+4K)/N$ with $l=n,n+2,\cdots$.
To compare our field theory results for the tail with BA data for the form factors we have to determine the [*a priori*]{} unknown parameters in the effective Hamiltonian (\[Ham\]). In general, they can only be obtained in terms of a small-$\Delta$ expansion. To lowest order in $\Delta$, Eq. (\[fF\_tail\]) reduces to the weakly interacting result in [@Glazman; @Teber]. We also checked that in this limit $\zeta_3=0$ for the $XXZ$-model but $\zeta_3$ becomes finite if we introduce a next-nearest neighbor interaction that breaks integrability. For an integrable model the coupling constants can be determined by comparing thermodynamic quantities accessible by BA and field theory. Lukyanov [@Lukyanov] used this procedure to find a closed form for $\zeta_\pm$ and $\lambda$ in the case $h=0$. Similarly, the parameters $\eta_\pm$ can be related to the change in $v$ and $K$ when varying $h$ and we find $J\eta_-(h_0) = \sqrt{2\pi/K}v^2(a+b/2)/6$ and $J\eta_+(h_0) =
\sqrt{2\pi/K}v^2b/4$ where $a=v^{-1}\partial v/\partial h |_{h=h_0}$ and $b=K^{-1}\partial K/\partial h |_{h=h_0}$. A numerical solution of the BA integral equations for $v$, $K$ for infinite system size then allows us to fix $\eta_\pm$ accurately [*for all anisotropies and fields*]{} so that the formulas for the tail do not contain any free parameters. The comparison with the form factors computed by BA for finite and zero field is shown in Figs. \[fig2\] and \[fig3\], respectively. We note that the energies of the eigenstates are actually nondegenerate and spread around the energy levels predicted by field theory (see inset of Fig. \[fig2\]).
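Since the tail formulas contain no free parameters once $v(h)$ and $K(h)$ are known, their numerical evaluation is straightforward. The sketch below is only an illustration of this step: the callables `v_of_h` and `K_of_h` are placeholders for a numerical solution of the BA integral equations (not supplied here), and the finite-difference step `dh` is an arbitrary choice.

``` python
import numpy as np

def eta_couplings(v_of_h, K_of_h, h0, J=1.0, dh=1e-4):
    """Determine eta_-, eta_+ from the field dependence of v and K.

    v_of_h and K_of_h must be callables h -> v(h) and h -> K(h), obtained
    e.g. from the BA integral equations (not provided here).  Following the
    text:  a = v^{-1} dv/dh,  b = K^{-1} dK/dh  at h = h0, and
    J*eta_- = sqrt(2*pi/K) * v^2 * (a + b/2) / 6,
    J*eta_+ = sqrt(2*pi/K) * v^2 * b / 4.
    """
    v, K = v_of_h(h0), K_of_h(h0)
    a = (v_of_h(h0 + dh) - v_of_h(h0 - dh)) / (2 * dh * v)
    b = (K_of_h(h0 + dh) - K_of_h(h0 - dh)) / (2 * dh * K)
    pref = np.sqrt(2 * np.pi / K) * v**2 / J
    return pref * (a + b / 2) / 6, pref * b / 4, v, K

def tail_eta_plus(omega, q, eta_p, v, K):
    """High-frequency tail of S^zz(q, omega) for h != 0, Eq. (fF_tail);
    valid for omega > v|q|, zero is returned otherwise."""
    omega = np.asarray(omega, dtype=float)
    return np.where(omega > v * abs(q),
                    K * eta_p**2 * q**4 / (np.pi * v * (omega**2 - v**2 * q**2)),
                    0.0)
```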
![(Color online) Form factors (dots) obtained by BA compared to formula (\[fF\_tail\]) (line) for $\Delta=0.75$, $s=-0.1$, $N=600$ and $q=2\pi/50$. The form factors for the exact eigenstates at $\omega\approx 2\pi v l/N$ are added and represented as a single point. The inset shows each form factor separately. The number of states at each level agrees with a simple counting based on multiple particle-hole excitations created around the Fermi points.[]{data-label="fig2"}](finiteField_D0.75_ik4.eps){width="0.85\columnwidth"}
![(Color online) Sum of form factors at $\omega \approx 2\pi
v(l+4K)/N$ (dots) and at $\omega \approx 2\pi vl/N$ (squares) obtained by BA for $\Delta=0.25$, $s=0$, $N=600$ and $q=2\pi/50$. The solid line corresponds to (\[frac\_tail\]), the dotted line to (\[frac\_tail\]) with finite size corrections included [@Note], and the dashed line to (\[integer\_tail\]).[]{data-label="fig3"}](D0.25_ik4_h0.0.eps){width="0.85\columnwidth"}
In summary, we have presented results for the lineshape of $S^{zz}(q,\omega)$ for small $q$ based on a numerical evaluation of form factors for finite chains. We established a linewidth $\gamma_q\sim q^2$ for $h\neq 0$ by solving the BA equations analytically for small $\Delta$. In addition, we showed that the spectral weight for frequencies $\gamma_{q}\ll\omega-v\left|q\right|\ll J$ is well described by the effective bosonic Hamiltonian. We presented evidence that the lineshape of $S^{zz}(q,\omega)$ depends on the integrability of the model. This becomes manifest in the field theory approach by a fine tuning of coupling constants and the absence of certain irrelevant operators. We are grateful to L.I. Glazman and F.H.L. Essler for useful discussions. This research was supported by CNPq through Grant No. 200612/2004-2 (R.G.P), the DFG (J.S.), FOM (J.-S.C.), CNRS and the EUCLID network (J.M.M.), the NSF under DMR 0311843 (S.R.W.), and NSERC (J.S., I.A.) and the CIAR (I.A.).
[^1]: $\zeta_3$ is also absent in the effective Hamiltonian in Ref. [@Lukyanov].
|
---
abstract: 'We study the affine quasi-Einstein Equation for homogeneous surfaces. This gives rise through the modified Riemannian extension to new half conformally flat generalized quasi-Einstein neutral signature $(2,2)$ manifolds, to conformally Einstein manifolds and also to new Einstein manifolds through a warped product construction.'
address:
- 'MBV: Universidade da Coruña, Differential Geometry and its Applications Research Group, Escola Politécnica Superior, 15403 Ferrol, Spain'
- 'EGR-XVR: Faculty of Mathematics, University of Santiago de Compostela, 15782 Santiago de Compostela, Spain'
- 'PBG: Mathematics Department, University of Oregon, Eugene OR 97403-1222, USA'
author:
- 'M. Brozos-Vázquez E. García-Río P. Gilkey, and X. Valle-Regueiro'
title: 'The affine quasi-Einstein Equation for homogeneous surfaces'
---
[^1]
Introduction
============
The affine quasi-Einstein Equation (see Equation (\[E1.c\])) is a $0^{th}$ order perturbation of the Hessian. It is a natural linear differential equation in affine differential geometry. We showed (see [@BGGV17a]) that it gives rise to strong projective invariants of the affine structure. Moreover, this equation also appears in the classification of half conformally flat quasi-Einstein manifolds in signature $(2,2)$. In this paper, we will examine the solution space to the affine quasi-Einstein Equation in the context of homogeneous affine geometries.
A description of locally homogeneous affine surfaces has been given by Opozda [@Op04] (see Theorem \[T1.7\] below). They fall into 3 families. The first family is given by the Levi-Civita connection of a surface of constant curvature (Type $\mathcal{C}$). There are two other families. The first (Type $\mathcal{A}$) generalizes the Euclidean connection and the second (Type $\mathcal{B}$) is a generalization of the hyperbolic plane. As the Type $\mathcal{C}$ geometries are very rigid, we shall focus on the other two geometries. There are many non-trivial solutions of the affine quasi-Einstein Equation for Type $\mathcal{A}$ geometries (see Section \[ss1.5\]) and for Type $\mathcal{B}$ geometries (see Section \[ss1.6\]). This leads (see Theorem \[T1.1\] and Remark \[R1.2\]) to new examples of half conformally flat and conformally Einstein isotropic quasi-Einstein manifolds of signature $(2,2)$. We also use results of [@KimKim] to construct new higher dimensional Einstein manifolds. Our present discussion illustrates many of the results of [@BGGV17a] and focusses on the dimension of the eigenspaces of the solutions to the affine quasi-Einstein Equation for homogeneous surfaces.
Notational conventions
----------------------
Recall that a pair $\mathcal{M}=(M,\nabla)$ is said to be an [*affine manifold*]{} if $\nabla$ is a torsion free connection on the tangent bundle of a smooth manifold $M$ of dimension $m\ge2$. We shall be primarily interested in the case of affine surfaces ($m=2$) but it is convenient to work in greater generality for the moment. In a system of local coordinates, express $\nabla_{\partial_{x^i}}\partial_{x^j}=\Gamma_{ij}{}^k\partial_{x^k}$ where we adopt the Einstein convention and sum over repeated indices. The connection $\nabla$ is torsion free if and only if the Christoffel symbols $\Gamma=(\Gamma_{ij}{}^k)$ satisfy the symmetry $\Gamma_{ij}{}^k=\Gamma_{ji}{}^k$ or, equivalently, if given any point $P$ of $M$, there exists a coordinate system centered at $P$ so that in that coordinate system we have $\Gamma_{ij}{}^k(P)=0$.
Let $f$ be a smooth function on $M$. The Hessian $$\label{E1.a}
\mathcal{H}_\nabla f=\nabla^2f:=(\partial_{x^i}\partial_{x^j}f-\Gamma_{ij}{}^k\partial_{x^k}f)\,dx^i\otimes dx^j$$ is an invariantly defined symmetric $(0,2)$-tensor field; $\mathcal{H}_\nabla:C^\infty(M)\rightarrow C^\infty(S^2(M))$ is a second order partial differential operator which is natural in the context of affine geometry. The curvature operator $R_\nabla$ and the Ricci tensor $\rho_\nabla$ are defined by setting: $$R_\nabla(x,y):=\nabla_x\nabla_y-\nabla_y\nabla_x-\nabla_{[x,y]}\text{ and }
\rho_\nabla(x,y):=\operatorname{Tr}\{z\rightarrow R_\nabla(z,x)y\}\,.$$ The Ricci tensor carries the geometry if $m=2$; an affine surface is flat if and only if $\rho_\nabla=0$ because $$\rho_{11}=R_{211}{}^2,\quad\rho_{12}=R_{212}{}^2,\quad\rho_{21}=R_{121}{}^1,\quad
\rho_{22}=R_{122}{}^1\,.$$ In contrast to the situation in Riemannian geometry, $\rho_\nabla$ is not in general a symmetric $(0,2)$-tensor field. The symmetrization and anti-symmetrization of the Ricci tensor are defined by setting, respectively, $$\textstyle\rho_{s,\nabla}(x,y):=\frac12\{\rho_\nabla(x,y)+\rho_\nabla(y,x)\}\text{ and }
\textstyle\rho_{a,\nabla}(x,y):=\frac12\{\rho_\nabla(x,y)-\rho_\nabla(y,x)\}\,.$$ We use $\rho_{s,\nabla}$ to define a $0^{\operatorname{th}}$ order perturbation of the Hessian. The [*affine quasi-Einstein operator*]{} $ \mathfrak{Q}_{\mu,\nabla}:C^\infty(M)\rightarrow C^\infty(S^2(M))$ is defined by setting: $$\label{E1.b}
\mathfrak{Q}_{\mu,\nabla} f:=\mathcal{H}_\nabla f-\mu f\rho_{s,\nabla}
\,.$$ The eigenvalue $\mu$ is a parameter of the theory; again, this operator is natural in the category of affine manifolds. The [*affine quasi-Einstein Equation*]{} is the equation: $$\label{E1.c}
\mathfrak{Q}_{\mu,\nabla} f=0\text{ i.e. }\mathcal{H}_\nabla f=\mu f\rho_{s,\nabla}\,.$$ We introduce the associated eigenspaces by setting: $$E(\mu,\nabla):=\ker( \mathfrak{Q}_{\mu,\nabla})=\{f\in C^\infty(M):\mathcal{H}_\nabla f=\mu f\rho_{s,\nabla}\}\,.$$ Similarly, if $P$ is a point of $M$, we let $E(P,\mu,\nabla)$ be the space of germs of solutions to Equation (\[E1.c\]) which are defined near $P$. Note that $E(0,\nabla)=\ker(\mathcal{H}_\nabla)$ is the set of [*Yamabe solitons*]{}. Also note that $\rho_{s,\nabla}=0$ implies $E(\mu,\nabla)=E(0,\nabla)$ for any $\mu$. If $\mu\ne0$ and $f>0$, let $\hat f:=-2\mu^{-1}\log(f)$, i.e. $f=e^{-\frac12\mu\hat f}$. This transformation converts Equation (\[E1.c\]) into the equivalent non-linear equation: $$\label{E1.d}
\mathcal{H}_\nabla\hat f+2\rho_{s,\nabla}-\textstyle\frac12\mu d\hat f\otimes d\hat f=0\,.$$
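Since all of the objects above are defined by explicit coordinate formulas, they are easy to compute symbolically. The following sketch (Python/sympy; the helper names and the nested-list encoding `Gamma[i][j][k]` of the Christoffel symbols are our own conventions, not part of the text) implements Equation (\[E1.a\]), the symmetrized Ricci tensor via the curvature components displayed above, and the operator $\mathfrak{Q}_{\mu,\nabla}$ of Equation (\[E1.b\]):

``` python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)

def affine_hessian(f, Gamma):
    """(H_nabla f)_{ij} = d_i d_j f - Gamma_{ij}^k d_k f, Equation (E1.a)."""
    return sp.Matrix(2, 2, lambda i, j:
        sp.diff(f, X[i], X[j]) - sum(Gamma[i][j][k] * sp.diff(f, X[k])
                                     for k in range(2)))

def ricci_sym(Gamma):
    """Symmetrized Ricci tensor rho_s of a torsion free surface connection."""
    def R(a, b, c, e):   # component R_{abc}^e of R(d_a, d_b) d_c along d_e
        return (sp.diff(Gamma[b][c][e], X[a]) - sp.diff(Gamma[a][c][e], X[b])
                + sum(Gamma[a][d][e] * Gamma[b][c][d]
                      - Gamma[b][d][e] * Gamma[a][c][d] for d in range(2)))
    rho = sp.Matrix(2, 2, lambda i, j: sum(R(a, i, j, a) for a in range(2)))
    return ((rho + rho.T) / 2).applyfunc(sp.simplify)

def quasi_einstein(f, Gamma, mu):
    """Q_{mu,nabla} f = H_nabla f - mu * f * rho_s, Equation (E1.b)."""
    return (affine_hessian(f, Gamma) - mu * f * ricci_sym(Gamma)).applyfunc(sp.simplify)
```

With all Christoffel symbols set to zero one recovers the ordinary Hessian and $\rho_{s,\nabla}=0$; the same helper applies verbatim to the Type $\mathcal{A}$ and Type $\mathcal{B}$ models considered below.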
Half conformally flat 4-dimensional geometry
--------------------------------------------
Equation (\[E1.d\]) plays an important role in the study of the quasi-Einstein Equation in neutral signature geometry [@BGGV17]. Let $\mathcal{N}=(N,g,F,\mu_N)$ be a quadruple where $(N,g)$ is a pseudo-Riemannian manifold of dimension $n$, $F\in {C}^\infty(N)$, and $\mu_N\in \mathbb{R}$. Let $\nabla^g$ be the Levi-Civita connection of $g$; the associated Ricci tensor $\rho_g$ is a symmetric $(0,2)$-tensor field. We say that $\mathcal{N}$ is a *quasi-Einstein manifold* if $$\mathcal{H}_{\nabla^g}F+\rho_g-\mu_NdF\otimes dF=\lambda\, g\text{ for }\lambda\in\mathbb{R}\,.$$ We say $\mathcal{N}$ is [*isotropic*]{} if $\|dF\|=0$. We restrict to the 4-dimensional setting where Walker geometry (see [@DeR; @Walker]) enters by means of the deformed Riemannian extension. If $(x^1,x^2)$ are local coordinates on an affine surface $\mathcal{M}=(M,\nabla)$, let $(y_1,y_2)$ be the corresponding dual coordinates on the cotangent bundle $T^*M$; if $\omega$ is a 1-form, then we can express $\omega=y_1dx^1+y_2dx^2$. Let $\Phi$ be an auxiliary symmetric $(0,2)$-tensor field. The [*deformed Riemannian extension*]{} is defined [@CGGV09] by setting: $$\label{E1.e}
g_{\nabla,\Phi}= 2 dx^i \circ dy_i +\left\{-2y_k \Gamma_{ij}{}^k+\Phi_{ij}\right\}dx^i \circ dx^j\,.$$ These neutral signature metrics are invariantly defined. Let $\pi:T^*M\rightarrow M$ be the natural projection. One has the following useful intertwining relation: $$\mathcal{H}_{g_{\nabla,\Phi}}\pi^*\hat f=\pi^*\mathcal{H}_\nabla\hat f,
\quad\rho_{g_{\nabla,\Phi}}=2\pi^*\rho_{s,\nabla},\quad\|d\pi^*\hat f\|_{g_{\nabla,\Phi}}
^2=0\,,$$ for any $\hat f\in C^\infty(M)$. The following observation is now immediate; note the factor of $\frac12$ in passing from the eigenvalue $\mu$ on the base to the eigenvalue $\mu_{T^*M}$ on the total space:
\[T1.1\] Let $(M,\nabla)$ be an affine surface and let $\hat f\in C^\infty(M)$ satisfy Equation (\[E1.d\]) or, equivalently, $f=e^{-\frac12\mu\hat f}\in E(\mu,\nabla)$. Let $F=\pi^*\hat f$, and let $\Phi$ be an arbitrary symmetric $(0,2)$-tensor field on $M$. Then $(T^*M,g_{\nabla,\Phi},F,\mu_{T^*M})$ for $\mu_{T^*M}=\frac12\mu$ is a self-dual isotropic quasi-Einstein Walker manifold of signature $(2,2)$ with $\lambda=0$.
\[R1.2\]Starting with a quasi-Einstein manifold $(N, g, F, \mu_N)$ where $\mu_N=\frac{1}{r}$ for $r$ a positive integer, there exist appropriate Einstein fibers $E$ of dimension $r$ so that the warped product $N\times_\varphi E$ is Einstein where $\varphi=e^{-F/r}$ [@KimKim]. We can use Theorem \[T1.1\] to construct self-dual isotropic quasi-Einstein Walker manifolds of neutral signature $(2,2)$ from affine quasi-Einstein surfaces. Thus it is important to have solutions to the affine quasi-Einstein Equation for quite general $\mu$ and, in particular, for $\mu=\frac 2r$. This will be done quite explicitly in our subsequent analysis. The parameter $\mu_N=-\frac{1}{n-2}$ is a distinguished value which is often exceptional. For $n\geq 3$, $(N,g, f,\mu_N= -\frac{1}{n-2})$ is a quasi-Einstein manifold if and only if $e^{-\frac{2}{n-2}f}g$ is Einstein [@EGR-KR2]. Therefore, taking into the account the fact that $\mu_{T^*M}=\frac12\mu_M$, having solutions for the parameter $\mu_m=-\frac{1}{m-1}$ on $(M^m,\nabla)$ gives rise to conformally Einstein Riemannian extensions $(T^*M,g_{\nabla,\Phi})$ [@BGGV17].
The critical eigenvalue $\mu_m=-\frac1{m-1}$ is distinguished in this theory (see Theorem \[T1.5\] below). For affine surfaces, this corresponds to $\mu_2=-1$ or, equivalently, $\mu_{T^*M}=-\frac12$. Excluding this value, we have the following converse to Theorem \[T1.1\] (see [@BGGV17]).
Let $(N,g,F,\mu_N)$ be a self-dual quasi-Einstein manifold of signature $(2,2)$ which is not locally conformally flat with $\mu_N\neq -\frac{1}{2}$. Assume $(N,g)$ is not Ricci flat. Then $\lambda=0$ and $(N,g,F,\mu_N)$ is locally isometric to a manifold which has the form given in Theorem \[T1.1\].
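To make the construction of Equation (\[E1.e\]) concrete, the following sympy sketch builds $g_{\nabla,\Phi}$ in the coordinates $(x^1,x^2,y_1,y_2)$ from generic symbolic data (the symbols `G..`, `Phi..` and `fhat` are placeholders of ours), verifies the closed block form of the inverse metric, and checks the isotropy statement $\|d\pi^*\hat f\|^2_{g_{\nabla,\Phi}}=0$ used in Theorem \[T1.1\]:

``` python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
y = (y1, y2)

# Generic data on the base: torsion free Christoffel symbols Gamma_{ij}^k
# and a symmetric (0,2)-tensor Phi, all functions of (x1, x2).
def Gam(i, j, k):
    i, j = min(i, j), max(i, j)          # enforce Gamma_{ij}^k = Gamma_{ji}^k
    return sp.Function(f'G{i+1}{j+1}_{k+1}')(x1, x2)

Phi = sp.Matrix(2, 2, lambda i, j:
        sp.Function(f'Phi{min(i,j)+1}{max(i,j)+1}')(x1, x2))

# Deformed Riemannian extension, Eq. (E1.e), in coordinates (x1, x2, y1, y2).
B = sp.Matrix(2, 2, lambda i, j:
        -2 * sum(y[k] * Gam(i, j, k) for k in range(2)) + Phi[i, j])
g = sp.zeros(4, 4)
g[:2, :2] = B
g[:2, 2:] = sp.eye(2)
g[2:, :2] = sp.eye(2)

# Closed-form inverse with Walker block structure: g^{-1} = [[0, I], [I, -B]].
ginv = sp.zeros(4, 4)
ginv[:2, 2:] = sp.eye(2)
ginv[2:, :2] = sp.eye(2)
ginv[2:, 2:] = -B
assert g * ginv == sp.eye(4)

# The pullback F = pi^* fhat of any base function has null gradient,
# so the quasi-Einstein structure of Theorem 1.1 is isotropic.
fhat = sp.Function('fhat')(x1, x2)
dF = sp.Matrix([sp.diff(fhat, v) for v in (x1, x2, y1, y2)])
print((dF.T * ginv * dF)[0, 0])          # prints 0
```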
Foundational results concerning the affine quasi-Einstein Equation
------------------------------------------------------------------
We established the following result in [@BGGV17a]:
\[T1.4\] Let $\mathcal{M}=(M,\nabla)$ be an affine manifold. Let $f\in E(P,\mu,\nabla)$.
1. One has $f\in C^\infty(M)$. If $\mathcal{M}$ is real analytic, then $f$ is real analytic.
2. If $X$ is the germ of an affine Killing vector field based at $P$, then$Xf\in E(P,\mu,\nabla)$.
3. If $f(P)=0$ and $df(P)=0$, then $f\equiv0$. Thus $\dim\{E(P,\mu,\nabla)\}\le m+1$.
4. If $M$ is simply connected and if $\dim\{E(P,\mu,\nabla)\}$ is constant on $M$, then $f$ extends uniquely to an element of $E(\mu,\nabla)$.
We say that $\nabla$ and $\tilde\nabla$ are [*projectively equivalent*]{} if there exists a $1$-form $\omega$ so that $\nabla_XY= \tilde\nabla_XY+\omega(X)Y+\omega(Y)X$ for all $X$ and $Y$. The equivalence is said to be a [*strong projective equivalence*]{} if $\omega$ is closed. If two projectively equivalent connections have symmetric Ricci tensors, then the two connections are, in fact, strongly projectively equivalent [@E70; @NS; @S95]. A connection $\nabla$ is said to be [*projectively flat*]{} (resp. [*strongly projectively flat*]{}) if $\nabla$ is projectively equivalent (resp. strongly projectively equivalent) to a flat connection.
\[T1.5\] If $\mathcal{M}$ is an affine surface, then $\dim\{E(\mu_2,\nabla)\}\ne2$. Moreover
1. $\mathcal{M}$ is strongly projectively flat if and only if $\dim\{E(\mu_2,\nabla)\}=3$.
2. If $\mathcal{M}$ is strongly projectively flat and $\operatorname{rank}\rho_\nabla=2$, then for $\mu\neq \mu_2$:
1. $\dim \{E(\mu,\nabla)\}=0$ for $\mu\neq 0$, and
2. $\dim \{E(0,\nabla)\}=1$.
3. If $\dim\{E(\mu,\nabla)\}=3$ for $\mu\ne\mu_2$, then $\mathcal{M}$ is Ricci flat and also strongly projectively flat.
In Theorem \[T1.17\], we will exhibit Type $\mathcal{B}$ surfaces with $\operatorname{rank}\rho_{s,\nabla}=2$ and $\operatorname{dim}\{ E(\mu,\nabla)\}\neq 0$ for $\mu\ne0$. Consequently, Theorem \[T1.5\] (2) fails without the assumption that $\mathcal{M}$ is strongly projectively flat.
Locally homogeneous surfaces
----------------------------
Let $\Gamma_{ij}{}^k=\Gamma_{ji}{}^k\in\mathbb{R}$ define a connection $\nabla^\Gamma$ on $\mathbb{R}^2$. The translation subgroup $(x^1,x^2)\rightarrow(x^1+a^1,x^2+a^2)$ acts transitively on $\mathbb{R}^2$ and preserves $\nabla^\Gamma$ so $(\mathbb{R}^2,\Gamma)$ is a homogeneous geometry. In a similar fashion, let $\Gamma_{ij}{}^k=(x^1)^{-1}C_{ij}{}^k$ for $C_{ij}{}^k=C_{ji}{}^k\in\mathbb{R}$ define a connection $\nabla^C$ on $\mathbb{R}^+\times\mathbb{R}$. The non-Abelian group $(x^1,x^2)\rightarrow(ax^1,ax^2+b)$ for $a>0$ and $b\in\mathbb{R}$ acts transitively on the geometry $(\mathbb{R}^+\times\mathbb{R},\nabla^C)$ so this also is a homogeneous geometry. The following result was established by Opozda [@Op04].
\[T1.7\] Let $\mathcal{M}=(M,\nabla)$ be a locally homogeneous affine surface. Then at least one of the following three possibilities hold describing the local geometry:
- There exists a coordinate atlas so the Christoffel symbols $\Gamma_{ij}{}^k$ are constant.
- There exists a coordinate atlas so the Christoffel symbols have the form $\Gamma_{ij}{}^k=(x^1)^{-1}C_{ij}{}^k$ for $C_{ij}{}^k$ constant and $x^1>0$.
- $\nabla$ is the Levi-Civita connection of a metric of constant Gauss curvature.
Let $\mathcal{M}=(M,\nabla)$ be a locally homogeneous affine surface which is not flat, i.e. $\rho_\nabla$ does not vanish identically. One says that $\mathcal{M}$ is a [*Type $\mathcal{A}$*]{}, [*Type $\mathcal{B}$*]{} or [*Type $\mathcal{C}$*]{} surface depending on which possibility holds in Theorem \[T1.7\]. These are not exclusive possibilities.
Type $\mathcal{C}$ surfaces are strongly projectively flat with Ricci tensor of rank $2$ in the non-flat case. Hence Theorem \[T1.5\] shows that $\operatorname{dim}\{ E(0,\nabla)\}=1$, $\operatorname{dim}\{ E(-1,\nabla)\}=3$, and $\operatorname{dim}\{ E(\mu,\nabla)\}=0$ otherwise.
We shall say that two Type $\mathcal{A}$ surfaces are [*linearly equivalent*]{} if they are intertwined by the action of an element of the general linear group. Similarly we shall say that two Type $\mathcal{B}$ surfaces are [*linearly equivalent*]{} if they are intertwined by some action $(x^1,x^2)\rightarrow(x^1,ax^1+bx^2)$ for $b\ne0$. The Ricci tensor is symmetric for Type $\mathcal{A}$ and Type $\mathcal{C}$ geometries; there are Type $\mathcal{B}$ geometries where the Ricci tensor is not symmetric.
Let $\mu\in\mathbb{R}$. It is convenient to consider complex solutions to the affine quasi-Einstein Equation by taking $E_{\mathbb{C}}(\mu,\nabla):=E(\mu,\nabla)\otimes_{\mathbb{R}}\mathbb{C}$. Real solutions can then be obtained by taking the real and imaginary parts as both the underlying equation and the eigenvalue are real.
Type $\mathcal{A}$ surfaces {#ss1.5}
---------------------------
The following result will be proved in Section \[S3\]; it is the foundation of our later results concerning Type $\mathcal{A}$ geometry as it provides the ansatz for our computations. Let $\mathfrak{A}$ be the commutative unital algebra of affine Killing vector fields generated by $\{\partial_{x^1},\partial_{x^2}\}$.
\[T1.9\] Let $\mathcal{E}$ be a finite dimensional $\mathfrak{A}$ submodule of $C^\infty(\mathbb{R}^2)\otimes_{\mathbb{R}}\mathbb{C}$. Then there exists a basis for $\mathcal{E}$ of functions of the form $e^{\alpha_1x^1+\alpha_2x^2}p(x^1,x^2)$ for $p$ polynomial, where $\alpha_i\in\mathbb{C}$. Furthermore, $e^{\alpha_1x^1+\alpha_2x^2}\partial_{x^i}p\in\mathcal{E}$ for $i=1,2$.
Let $\mathcal{M}=(\mathbb{R}^2,\nabla)$ be a Type $\mathcal{A}$ surface. Any Type $\mathcal{A}$ surface is strongly projectively flat with symmetric Ricci tensor [@BGG16]. The following result will be established in Section \[S4\].
\[T1.10\] Let $\mathcal{M}$ be a Type $\mathcal{A}$ surface which is not flat.
1. Let $\mu=0$. Then $E(0,\nabla)=\operatorname{Span}\{1\}$ or, up to linear equivalence, one of the following holds:
1. $\Gamma_{11}{}^1=1$, $\Gamma_{12}{}^1=0$, $\Gamma_{22}{}^1=0$, and $E(0,\nabla)=\operatorname{Span}\{1,e^{x^1}\}$.
2. $\Gamma_{11}{}^1=\Gamma_{12}{}^1=\Gamma_{22}{}^1=0$, and $E(0,\nabla)=\operatorname{Span}\{1,x^1\}$.
2. Let $\mu=-1$. Then $\dim\{E(-1,\nabla)\}=3$.
3. Let $\mu\ne 0,-1$. Then $\dim\{E(\mu,\nabla)\}=\left\{\begin{array}{ll}
2\text{ if } \operatorname{rank}\rho_{\nabla}=1\\
0\text{ if } \operatorname{rank}\rho_{\nabla}=2
\end{array}\right\}$.
\[Re1.11\] Let $\mathcal{M}$ be a Type $\mathcal{A}$ surface which is not flat. By Remark \[R1.2\], since $\operatorname{dim}\{ E(-1,\nabla)\}=3$, the corresponding Riemannian extension $\mathcal{N}:=(T^*M,g_{\nabla,\Phi})$ is conformally Einstein for any $\Phi$. Although in this instance $\mathcal{N}$ is locally conformally flat if $\Phi=0$ [@Afifi], $\mathcal{N}$ is not locally conformally flat for generic $\Phi\neq 0$.
Type $\mathcal{B}$ surfaces {#ss1.6}
---------------------------
Let $\mathfrak{B}$ be the unital non-commutative algebra generated by the vector fields $\partial_{x^2}$ and $X:=x^1\partial_{x^1}+x^2\partial_{x^2}$. The representation theory of the algebra $\mathfrak{B}$ is crucial to our investigation:
\[T1.12\] Let $\mathcal{E}$ be a finite dimensional $\mathfrak{B}$ submodule of $C^\infty(\mathbb{R}^+\times\mathbb{R})\otimes_{\mathbb{R}}\mathbb{C}$.
1. If $f\in\mathcal{E}$, then $ f=\sum_{\alpha,i,j}c_{\alpha,i,j}(x^1)^\alpha(\log(x^1))^i(x^2)^j$ where in this finite sum $c_{\alpha,i,j}\in\mathbb{C}$, $\alpha\in\mathbb{C}$, and $i$ and $j$ are non-negative integers.
2. If $\dim\{\mathcal{E}\}=1$, then $\mathcal{E}=\operatorname{Span}_{\mathbb{C}}\{(x^1)^a\}$ for some $a$.
3. If $\dim\{\mathcal{E}\}=2$, then one of the following possibilities holds:
1. $\mathcal{E}=\operatorname{Span}_{\mathbb{C}}\{(x^1)^\alpha,(x^1)^\alpha(c_1x^1+x^2)\}$.
2. $\mathcal{E}=\operatorname{Span}_{\mathbb{C}}\{(x^1)^\alpha,(x^1)^\alpha\log(x^1)\}$.
3. $\mathcal{E}=\operatorname{Span}_{\mathbb{C}}\{(x^1)^\alpha,(x^1)^\beta\}$ for $\alpha\ne\beta$.
Let $\mathcal{M}$ be a Type $\mathcal{B}$ surface. Then $\partial_{x^2}$ and $X$ are Killing vector fields for $M$ and hence, by Theorem \[T1.4\], $E(\mu,\nabla)$ is a finite dimensional $\mathfrak{B}$ module. The functions of Assertion (1) are all real analytic; this is in accordance with Theorem \[T1.4\]. We will assume $\mathcal{M}$ is not Ricci flat and thus $\dim\{E(\mu,\nabla)\}\le2$ for $\mu\ne-1$, so Assertions (2,3) apply. An appropriate analogue of Assertion (1) holds in arbitrary dimensions.
Surfaces of Type $\mathcal{A}$ are strongly projectively flat. Thus any Type $\mathcal{B}$ surface which is also Type $\mathcal{A}$ is strongly projectively flat (see Theorem \[T6.1\]). There are, however, strongly projectively flat surfaces of Type $\mathcal{B}$ which are not of Type $\mathcal{A}$. We will establish the following result in Section \[S6.1\].
\[T1.13\] If $\mathcal{M}$ is a Type $\mathcal{B}$ surface, then $\mathcal{M}$ is strongly projectively flat if and only if $\mathcal{M}$ is linearly equivalent to one of the following surfaces:
1. $C_{12}{}^1=C_{22}{}^1=C_{22}{}^2=0$ (i.e. $\mathcal{M}$ is also of Type $\mathcal{A}$).
2. $C_{11}{}^1=1+2v$, $C_{11}{}^2=0$, $C _{12}{}^1=0$, $C_{12}{}^2=v$, $C_{22}{}^1=\pm1$, $C_{22}{}^2=0$.
We remark that the special choice of $v=-1$ in Assertion (2) corresponds to the hyperbolic plane and the Lorentzian analogue in Theorem \[T6.1\] (3).
We now turn to the study of the affine quasi-Einstein Equation. We first examine the Yamabe solitons, working modulo linear equivalence. We shall establish the following result in Section \[S6.2\].
\[T1.14\] Let $\mathcal{M}$ be a Type $\mathcal{B}$ surface. Then $E(0,\nabla)=\operatorname{Span}\{1\}$ except in the following cases where we also require $\rho_\nabla\ne0$.
1. $C_{11}{}^2=c\,C_{11}{}^1$, $C_{12}{}^2=c\,C_{12}{}^1$, $C_{22}{}^2=c\,C_{22}{}^{ 1}$, $E(0,\nabla)=\operatorname{Span}\{1,x^2-cx^1\}$.
2. $C_{11}{}^1=-1$, $C_{12}{}^1=0$, $C_{22}{}^1=0$, $E(0,\nabla)=\operatorname{Span}\{1,\log(x^1)\}$.
3. $C_{11}{}^1=\alpha-1$, $C_{12}{}^1=0$, $C_{22}{}^1=0$, $E(0,\nabla)=\operatorname{Span}\{1,(x^1)^\alpha\}$ for $\alpha\ne0$.
By Theorem \[T1.5\] (1), $\dim\{E(-1,\nabla)\}=3$ if and only if $\mathcal{M}$ is strongly projectively flat. Thus this case is covered by Theorem \[T1.13\]. Furthermore, $\dim\{E(-1,\nabla)\}\ne2$ by Theorem \[T1.5\]. We now examine the remaining case where $\dim\{E(-1,\nabla)\}=1$.
\[T1.15\] Let $\mathcal{M}$ be a Type $\mathcal{B}$ surface with $\rho_\nabla\ne0$ which is not strongly projectively flat. Then $\dim\{E(-1,\nabla)\}=0$ except in the following cases where $\dim\{E(-1,\nabla)\}=1$:
1. $C_{22}{}^1=0$, $C_{22}{}^2=C_{12}{}^1\neq 0$.
2. $C_{22}{}^1=\pm 1$, $C_{12}{}^1=0$, $C_{22}{}^2=\pm 2C_{11}{}^2\neq 0$, $C_{11}{}^1=1+2C_{12}{}^2\pm(C_{11}{}^2)^2$.
\[Re1.16\] Let $\mathcal{M}$ be a Type $\mathcal{B}$ surface with $\operatorname{dim}\{ E(-1,\nabla)\}=1$. By Remark \[R1.2\], $\mathcal{N}:=(T^*M,g_{\nabla,\Phi})$ is conformally Einstein for any $\Phi$. Moreover, since $\mathcal{M}$ is not strongly projectively flat, unlike in the Type $\mathcal{A}$ setting, $\mathcal{N}$ is not locally conformally flat for any $\Phi$ [@Afifi].
Let $\mu\ne0$ and $\mu\ne-1$. In the Type $\mathcal{A}$ setting, Theorem \[T1.10\] shows that $\dim\{E(\mu,\nabla)\}=0$ or $\dim\{E(\mu,\nabla)\}=2$. The situation is quite different in the Type $\mathcal{B}$ setting as there are examples where $\dim\{E(\mu,\nabla)\}=1$.
\[T1.17\] Let $\mathcal{M}$ be a Type $\mathcal{B}$ surface which is not of Type $\mathcal{A}$ with $\rho_{s,\nabla}\ne 0$. Let $\mu\notin\{0,-1\}$. Then $\dim\{E(\mu,\nabla)\}\geq 1$ if and only if $\mathcal{M}$ is linearly equivalent to a surface with $C_{12}{}^1=0$, $C_{22}{}^1=\pm 1$, $C_{22}{}^2=\pm 2C_{11}{}^2$, $C_{11}{}^1-C_{12}{}^2-1\ne0$, and $\mu=\frac{-(C_{11}{}^1)^2+2 C_{11}{}^1 C_{12}{}^2 \pm 2 (C_{11}{}^2)^2 -(C_{12}{}^2)^2+2 C_{12}{}^2+1}{(C_{11}{}^1-C_{12}{}^2-1)^2}$. Furthermore, $\dim\{E(\mu,\nabla)\}=2$ if and only if $\mathcal{M}$ is linearly equivalent to one of the following:

1. $C_{11}{}^1=-1+C_{12}{}^2$, $C_{11}{}^2=0$, $C_{12}{}^1=0$, $C_{22}{}^1=\pm 1$, $C_{22}{}^2=0$, $\mu=\frac{1}{2}C_{12}{}^2\neq 0$.

2. $C_{11}{}^1=-\frac{1}{2}(5\pm 16 (C_{11}{}^2)^2)$, $C_{12}{}^1=0$, $C_{12}{}^2=-\frac{1}{2}(3\pm 8(C_{11}{}^2)^2)$, $C_{22}{}^1=\pm 1$, $C_{22}{}^2=\pm 2 C_{11}{}^2$, $\mu=-\frac{3\pm 8(C_{11}{}^2)^2}{4\pm 8(C_{11}{}^2)^2}$, $C_{11}{}^2\notin\{ 0,\pm\frac{1}{\sqrt{2}}\}$.
The proof of Theorem \[T1.5\]
=============================
Assertions (1) and (3) of Theorem \[T1.5\] follow from more general results of [@BGGV17a]. The following result (see [@E70; @NS]) characterizes strongly projectively flat surfaces and will be used in the proof of Theorem \[T1.5\] (2).
\[L2.1\] If $\mathcal{M}=(M,\nabla)$ is an affine surface, then the following assertions are equivalent:
1. $\mathcal{M}$ is strongly projectively flat.
2. $\rho_\nabla$ and $\nabla\rho_\nabla$ are totally symmetric.
3. $\rho_\nabla$ is symmetric and $\mathcal{M}$ is projectively flat.
The proof of Theorem \[T1.5\] (2)
---------------------------------
Let $\mathcal{M}$ be strongly projectively flat with $\operatorname{rank}\rho_\nabla=2$. By Lemma \[L2.1\], $\rho_{\nabla}=\rho_{s,\nabla}$. We may covariantly differentiate the quasi-Einstein Equation with respect to $\partial_{x^i}$ in local coordinates to get $$\mathcal{H}_{\nabla,jk;i} f=\mu \{(\partial_{x^i} f) \rho_{\nabla,jk}+ f \rho_{\nabla,jk;i}\}.$$ We interchange the indices $i$ and $j$ and subtract to get $$\begin{array}{rcl}
R_{\nabla,ijk}{}^l (\partial_{x^l}f)&=& \mathcal{H}_{\nabla,ik;j} f-\mathcal{H}_{\nabla,jk;i} f\\
\noalign{\medskip}
&=&\mu \{(\partial_{x^j} f) \rho_{\nabla,ik}+ f \rho_{\nabla,ik;j}\}-\mu \{(\partial_{x^i} f) \rho_{\nabla,jk}+ f \rho_{\nabla,jk;i}\}.
\end{array}$$ By Lemma \[L2.1\], $\nabla\rho_\nabla$ is totally symmetric and the previous expression simplifies: $$R_{\nabla,ijk}{}^l (\partial_{x^l}f)=\mu \{(\partial_{x^j} f) \rho_{\nabla,ik}-(\partial_{x^i} f) \rho_{\nabla,jk}\}.$$ Since $M$ is 2-dimensional the only curvature term is $$R_{\nabla,12}:\partial_{x^i}\rightarrow\rho_{\nabla,2i} \partial_{x^1}-\rho_{\nabla,1i} \partial_{x^2} \,.$$ Consequently, $(R_{\nabla,12} \partial_{x^i})f=-\mu (R_{\nabla,12} \partial_{x^i})f$ and, hence, $$(\mu+1) (R_{\nabla,12} \partial_{x^i})f=0.$$ This is a homogeneous system of linear equations. Because $\rho_\nabla$ has rank 2, $R_{\nabla,12}$ has rank 2. If $\mu\neq -1,0$, then the only solutions are $\partial_{x^i}f=0$ and $f$ is necessarily constant. This shows that $\operatorname{dim}\{ E(\mu,\nabla)\}=0$ for all $\mu\neq 0,-1$. Moreover, if $\mu=0$ one has $R_{\nabla,12} \partial_{x^i}f=0$ and $f$ is constant, thus showing that $\operatorname{dim}\{ E(0,\nabla)\}=1$.
The proof of Theorem \[T1.9\] {#S3}
=============================
Let $\mathfrak{A}$ be the commutative unital algebra generated by $\{\partial_{x^1},\partial_{x^2}\}$. Let $\mathcal{E}$ be a finite dimensional $\mathfrak{A}$ submodule of $C^\infty(\mathbb{R}^2)\otimes_{\mathbb{R}}\mathbb{C}$. If $f\in\mathcal{E}$, set $$\mathcal{E}(f):=\operatorname{Span}\{f,\partial_{x^1}f,\dots,\partial^k_{x^1}f,\dots\}\subset\mathcal{E}\,.$$ As $\mathcal{E}$ is finite dimensional, there is a minimal dependence relation: $$\label{E3.a}
\textstyle\prod_i(\partial_{x^1}-\lambda_i)^{n_i}f=0\text{ for }\lambda_i\in\mathbb{C}\text{ distinct}\,.$$ Let $f_i:=\prod_{j\ne i}(\partial_{x^1}-\lambda_j)^{n_j}f\in\mathcal{E}(f)$. If we fix $x^2$, then $f_i(x^1,x^2)$ satisfies the constant coefficient ODE $(\partial_{x^1}-\lambda_i)^{n_i}f_i(x^1,x^2)=0$ with suitably chosen initial conditions determined by $\{f_i(0,x^2),\partial_{x^1}f_i(0,x^2),\dots\}$. Consequently, we may express $f_i(x^1,x^2)=\{(x^1)^{n_i-1} h _{n_i-1}(x^2)+\dots+ h _0(x^2)\}e^{\lambda_ix^1}$. Since Equation (\[E3.a\]) is to be a minimal dependence relation for $f$, $ h _{n_i-1}$ does not vanish identically. Consequently, the functions $\{(\partial_{x^1}-\lambda_i)^kf_i\}_{0\le k\le n_i-1}$ for fixed $i$ are linearly independent. The different exponential terms do not interact and thus for dimensional reasons, the collection of all these functions forms a basis for $\mathcal{E}(f)$. Thus, $f$ can be expressed in terms of these elements, i.e. any element of $\mathcal{E}$ can be expressed as a sum of functions of the form $e^{\alpha x^1} \sum_{i} (x^1)^{i} h_{i}(x^2)$ where $\alpha\in \mathbb{C}$. A similar analysis of the $x^2$ dependence shows that we can express $f$ in the given form. We complete the proof by noting that we can express $e^{\alpha_1x^1+\alpha_2x^2}\partial_{x^1}p=(\partial_{x^1}-\alpha_1)\{e^{\alpha_1x^1+\alpha_2x^2}p(\vec x)\}\in\mathcal{E}$.
Type $\mathcal{A}$ surfaces {#S4}
===========================
We shall assume that $\rho\ne0$. By Theorem \[T1.4\], $\dim\{E(\mu,\nabla)\}\le3$ and by Theorem \[T1.5\], $\dim\{E(\mu,\nabla)\}\ne3$ if $\mu\ne\mu_2=-1$. We use these facts implicitly in what follows.
The proof of Theorem \[T1.10\] (1) {#the-proof-of-theoremt1.101 .unnumbered}
----------------------------------
Let $\mathcal{M}$ be a Type $\mathcal{A}$ surface. It is immediate from the definition that $1\in E(0,\nabla)$. Suppose there exists a non-constant function $f\in E(0,\nabla)$, i.e. $\dim\{E(0,\nabla)\}=2$. By Lemma 4.1 of [@BGG16], $R_{\nabla,12}(df)=0$. This implies that $df$ belongs to the kernel of the curvature operator. Consequently after a suitable linear change of coordinates, we have $f=f(x^1)$ for any $f\in E(0,\nabla)$. We apply Theorem \[T1.9\] to see that we may assume $f(x^1,x^2)=p(x^1)e^{a_1x^1}$.
Case 1a. {#case-1a. .unnumbered}
--------
Suppose that $a_1\ne0$. By Theorem \[T1.9\], we may assume $f(x^1)=e^{a_1x^1}$. Since $\mathcal{H}_\nabla f=0$, Equation (\[E1.a\]) implies $\Gamma_{12}{}^1=0$, $\Gamma_{22}{}^1=0$, and $\Gamma_{11}{}^1=a_1$. Thus $a_1$ is real. Rescale the first coordinate to assume $\Gamma_{11}{}^1=1$. A direct computation shows $E(0,\nabla)=\operatorname{Span}\{1,e^{x^1}\}$, which is the possibility of Assertion (1a).
Case 2a. {#case-2a. .unnumbered}
--------
Suppose that $a_1=0$, so $f(x^1,x^2)=p(x^1)$ is a non-constant polynomial. We apply Theorem \[T1.9\] to assume $p$ is linear. Subtracting the constant term then permits us to assume $p(x^1)=x^1$. We then obtain $\Gamma_{11}{}^1=\Gamma_{12}{}^1=\Gamma_{22}{}^1=0$. A direct computation shows $E(0,\nabla)=\operatorname{Span}\{1,x^1\}$, which is the possibility of Assertion (1b).
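Both of the direct computations above are easily mechanized. The following sympy sketch uses the coordinate form $\mathcal{H}_{\nabla,ij}f=\partial_{x^i}\partial_{x^j}f-\Gamma_{ij}{}^k\partial_{x^k}f$ of the Hessian, leaves the unconstrained Christoffel symbols $\Gamma_{11}{}^2$, $\Gamma_{12}{}^2$, $\Gamma_{22}{}^2$ as free constants, and confirms that $\mathcal{H}_\nabla e^{x^1}=0$ in Case 1a and $\mathcal{H}_\nabla x^1=0$ in Case 2a (one inclusion in each of Assertions (1a) and (1b)); the variable names are of course our own.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
G11_2, G12_2, G22_2 = sp.symbols('G11_2 G12_2 G22_2')   # unconstrained Christoffel symbols

def hessian(f, Gamma):
    # Gamma[(i, j)] = (Gamma_ij^1, Gamma_ij^2) for a torsion-free connection with constant symbols
    x = (x1, x2)
    def H(i, j):
        a, b = min(i, j), max(i, j)
        return sp.diff(f, x[i], x[j]) - Gamma[(a, b)][0]*sp.diff(f, x1) - Gamma[(a, b)][1]*sp.diff(f, x2)
    return sp.Matrix(2, 2, lambda i, j: sp.simplify(H(i, j)))

# Case 1a: Gamma_11^1 = 1, Gamma_12^1 = Gamma_22^1 = 0
case1a = {(0, 0): (1, G11_2), (0, 1): (0, G12_2), (1, 1): (0, G22_2)}
print(hessian(sp.exp(x1), case1a))    # zero matrix, so e^{x^1} lies in E(0, nabla)

# Case 2a: Gamma_11^1 = Gamma_12^1 = Gamma_22^1 = 0
case2a = {(0, 0): (0, G11_2), (0, 1): (0, G12_2), (1, 1): (0, G22_2)}
print(hessian(x1, case2a))            # zero matrix, so x^1 lies in E(0, nabla)
```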
The proof of Theorem \[T1.10\] (2)
----------------------------------
Results of [@BGG16] show that any Type $\mathcal{A}$ geometry is strongly projectively flat. Theorem \[T1.5\] then shows $\dim\{E(-1,\nabla)\}=3$.
The proof of Theorem \[T1.10\] (3)
----------------------------------
Assume that $\mu\notin\{0,-1\}$.
Case 3a. {#case-3a. .unnumbered}
--------
Let $\mathcal{M}$ be a Type $\mathcal{A}$ surface with $\operatorname{rank}\rho_\nabla=1$. As $\dim\{E(\mu,\nabla)\}\le2$, it suffices to show $\dim\{E(\mu,\nabla)\}\ge2$. We make a linear change of coordinates to assume $\rho_{{ \nabla},11}=\rho_{\nabla,12}=0$. By Lemma 2.3 of [@BGG16], this implies $\Gamma_{11}{}^2=\Gamma_{12}{}^2=0$. The affine quasi-Einstein Equation for $f(x^1,x^2)=e^{a_2x^2}$ reduces to the single equation: $$a_2^2-a_2\Gamma_{22}{}^2-\mu\rho_{22}=0\,.$$ Generically, this has two distinct complex solutions which completes the proof in this special case. However, we must deal with the case in which the discriminant $(\Gamma_{22}{}^2)^2+4\mu\rho_{22}=0$. Since $\mu\ne0$ and $\rho_{22}\ne0$, $\Gamma_{22}{}^2\ne0$. Thus there is a single exceptional value $\mu=-(\Gamma_{22}{}^2)^2/(4\rho_{22})$. The affine quasi-Einstein Equation for $f(x^1,x^2)=x^2e^{a_2x^2}$ again reduces to a single equation $$(2a_2-\Gamma_{22}{}^2)(4+2a_2x^2-x^2\Gamma_{22}{}^2)=0\,.$$ Let $a_2=\frac12\Gamma_{22}{}^2$ to ensure $x^2e^{a_2x^2}\in E(\mu,\nabla)$ so $\dim\{E(\mu,\nabla)\}\ge2$ as desired.
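For the reader's convenience, here is the routine expansion behind the two reductions, using the coordinate form $\mathcal{H}_{\nabla,ij}f=\partial_{x^i}\partial_{x^j}f-\Gamma_{ij}{}^k\partial_{x^k}f$ of the Hessian. Since $\Gamma_{11}{}^2=\Gamma_{12}{}^2=0$, $\rho_{11}=\rho_{12}=0$, and $f=f(x^2)$, the components $\mathcal{H}_{\nabla,11}f$ and $\mathcal{H}_{\nabla,12}f$ vanish together with the corresponding right hand sides, and the $\Gamma_{22}{}^1\partial_{x^1}f$ term drops out of $\mathcal{H}_{\nabla,22}f$; the affine quasi-Einstein Equation thus reduces to $f''-\Gamma_{22}{}^2f'=\mu\rho_{22}f$. For $f=e^{a_2x^2}$ this is the displayed quadratic. For $f=x^2e^{a_2x^2}$ and $4\mu\rho_{22}=-(\Gamma_{22}{}^2)^2$ one computes $$f''-\Gamma_{22}{}^2f'-\mu\rho_{22}f=\left\{(2a_2-\Gamma_{22}{}^2)+\textstyle\frac14x^2(2a_2-\Gamma_{22}{}^2)^2\right\}e^{a_2x^2}
=\textstyle\frac14(2a_2-\Gamma_{22}{}^2)(4+2a_2x^2-x^2\Gamma_{22}{}^2)e^{a_2x^2}\,,$$ which is the second displayed equation up to the non-zero factor $\frac14e^{a_2x^2}$.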
Case 3b {#case-3b .unnumbered}
-------
Suppose $\mathcal{M}$ is a Type $\mathcal{A}$ surface with $\operatorname{rank}\rho_\nabla=2$. We apply Theorem \[T1.5\] (2) to show that $\dim\{E(\mu,\nabla)\}=0$.
The proof of Theorem \[T1.12\]
==============================
We prove the three assertions seriatim.
The proof of Theorem \[T1.12\] (1) {#the-proof-of-theoremt1.121 .unnumbered}
----------------------------------
Let $\mathfrak{B}$ be the unital non-commutative algebra generated by $X:=x^1\partial_{x^1}+x^2\partial_{x^2}$ and $\partial_{x^2}$. Let $\mathcal{E}$ be a finite dimensional $\mathfrak{B}$ submodule of $C^\infty(\mathbb{R}^+\times\mathbb{R})\otimes_{\mathbb{R}}\mathbb{C}$. Let $0\ne f\in\mathcal{E}$. Applying exactly the same argument as that used to prove Theorem \[T1.9\] permits us to expand $f$ in the form $$f=\sum_{ij}e^{\beta_jx^2}(x^2)^i h _{ij}(x^1)\text{ where }\beta_j\in\mathbb{C}\,.$$ Suppose that $h_{ij}\not\equiv0$ for some $\beta_j\ne0$. Choose $i_0$ maximal so $h_{i_0j}\not\equiv0$. We compute $$X^nf=e^{\beta_jx^2}\{\beta_j^n(x^2)^{i_0+n}h_{i_0j}(x^1)+O((x^2)^{i_0+n-1})\}+\dots$$ where we have omitted terms not involving the exponential $e^{\beta_jx^2}$. This constructs an infinite sequence of linearly independent functions in $\mathcal{E}$ which contradicts our assumption that $\mathcal{E}$ is finite dimensional. Consequently $f$ is polynomial in $x^2$. We wish to determine the form of the coefficient functions $h_i(x^1)$. Let $\tilde X:=x^1\partial_{x^1}$. We have $X^kf=\sum_i(x^2)^i(i+\tilde X)^kh_i$. Since $\mathcal{E}$ is finite dimensional, the collection $\{(i+\tilde X)^kh_i\}$, or equivalently $\{\tilde X^kh_i\}$, can not be an infinite sequence of linearly independent functions. If $h_i$ is constant, we do not need to proceed further in considering $h_i$. Otherwise, there must be a minimal dependence relation which we can factor in the form $$\label{E5.a}
\textstyle\prod_j(\tilde X-\lambda_j)^{n_j}h_i=0\,.$$ The solutions to Equation (\[E5.a\]) are spanned by the functions $ (\log(x^1))^k(x^1)^\lambda$ but apart from that, the analysis is the same as that performed in the proof of Theorem \[T1.9\] and we can expand each $h_i$ in terms of these functions. This completes the proof of Assertion (1).
The proof of Theorem \[T1.12\] (2) {#the-proof-of-theoremt1.122 .unnumbered}
----------------------------------
Let $\dim\{\mathcal{E}\}=1$ and let $0\ne f\in\mathcal{E}$. Expand $f$ in the form of Assertion (1) and choose $j_0$ maximal so $c_{a,i,j_0}\ne0$ for some $(a,i)$. If $j_0>0$, then $\partial_{x^2}f\ne0$. Consequently $\{f,\partial_{x^2}f\}$ are linearly independent elements of $\mathcal{E}$, which is false. Thus $j_0=0$; let $c_{a,i}:=c_{a,i,0}$ and choose $i_0$ maximal so $c_{a,i_0}\ne0$ for some $a$. We have $$\displaystyle Xf=\sum_{a,i}c_{a,i}(x^1)^a\{a(\log(x^1))^i+i(\log(x^1))^{i-1}\}\in\mathcal{E}\,.$$ Since $\dim\{\mathcal{E}\}=1$, $Xf$ must be a multiple of $f$. Thus, there is exactly one value of $a$ so that $c_{a,i_0}\ne0$. Furthermore, one has $i_0=0$. This implies $f=(x^1)^a$ as desired.
The proof of Theorem \[T1.12\] (3) {#the-proof-of-theoremt1.123 .unnumbered}
----------------------------------
Let $\dim\{\mathcal{E}\}=2$. If Assertion (3c) fails, we can choose $0\ne f\in\mathcal{E}$ so that $f\ne(x^1)^a$ for any $a$. Expand $f$ in the form of Assertion (1) and choose $j_0$ maximal so $c_{a,i,j_0}\ne0$. [**Step 3a.**]{} Suppose $j_0\ge1$. Then $\{f,\partial_{x^2}f\}$ are linearly independent elements of $\mathcal{E}$ and hence are a basis for $\mathcal{E}$. Consequently $\partial_{x^2}^2f=0$ and $f$ is linear in $x^2$. Let $\mathcal{E}_1:=\partial_{x^2}f\cdot\mathbb{C}$. This subspace is clearly invariant under $X$ and $\partial_{x^2}$. Thus, by Assertion (2), $\partial_{x^2}f=(x^1)^\alpha$ for some $\alpha$. Consequently $$f(x^1,x^2)= h (x^1)+(x^1)^\alpha x^2\text{ for } h (x^1)=\sum_{a,i}c_{a,i}(x^1)^a(\log(x^1))^i\,.$$ We have $(X-\alpha-1)f=(X-\alpha-1) h $ is independent of $x^2$ and thus belongs to $\mathcal{E}_1$. Consequently, this is a multiple of $(x^1)^\alpha$, i.e. $$\sum_{a,i}c_{a,i}(x^1)^a\{(a-\alpha-1)(\log(x^1))^i+i(\log(x^1))^{i-1}\}=c_0(x^1)^\alpha\,.$$ We must therefore have $c_{a,i}=0$ for $a\notin\{\alpha+1,\alpha\}$. If $a=\alpha+1$ or $a=\alpha$, we conclude $c_{\alpha,i}=0$ for $i>0$. We can eliminate the term involving $ (x^1)^\alpha $ by subtracting an appropriate multiple of $\partial_{x^2}f= (x^1)^\alpha $. Thus $\mathcal{E}$ has the form given in Assertion (3a).
Step 3b {#step-3b .unnumbered}
-------
Suppose $f=\sum_{a,i}c_{a,i}(x^1)^a(\log(x^1))^i$ is independent of $x^2$. We have $$(X-b)f=\sum_{a,i}c_{a,i}(x^1)^a\{(a-b)(\log(x^1))^i+i(\log(x^1))^{i-1}\}\,.$$ Choose $i_0$ maximal so $c_{a,i_0}\ne0$. If $i_0=0$, then $f=\sum_a c_a(x^1)^a$. Since $f\ne(x^1)^a$, there are at least two different exponents where $c_{a_i}\ne0$. Since $(X-a_i)f$ will be non-zero and not involve the exponent $a_i$, we conclude for dimensional reasons that there are exactly two such indices and that $\mathcal{E}=\operatorname{Span}\{(x^1)^{a_1},(x^1)^{a_2}\}$ contrary to our assumption. Thus $i_0>0$ and $c_{a_1,i_0}\ne0$. Suppose there is more than 1 exponent. Then $\{f,(X-a_1)f,(X-a_2)f\}$ would be linearly independent. Thus we could take $f=\sum_i(x^1)^a(\log(x^1))^i$. If $i_0\ge2$, we conclude $\{f,(X-a_1)f,(X-a_1)^2f\}$ are linearly independent. Thus $f=(x^1)^a\{c_0+c_1 \log(x^1)\}$. Again, applying $(X-a)$, we conclude $(x^1)^a\in \mathcal{E}$ and thus Assertion (3b) holds.
Type $\mathcal{B}$ surfaces {#typemathcalb-surfaces}
===========================
We refer to [@BGG16] for the proof of the following result.
\[T6.1\]
1. There are no surfaces which are both Type $\mathcal{A}$ and Type $\mathcal{C}$.
2. A type $\mathcal{B}$ surface is locally isomorphic to a Type $\mathcal{A}$ surface if and only if $(C_{12}{}^1,C_{22}{}^1,C_{22}{}^2)=(0,0,0)$; the Ricci tensor has rank $1$ in this instance.
3. A Type $\mathcal{B}$ surface is locally isomorphic to a Type $\mathcal{C}$ surface if and only if $C$ is linearly equivalent to one of the following two examples whose non-zero Christoffel symbols are $C_{11}{}^1=-1$, $C_{12}{}^2=-1$, $C_{22}{}^1=\pm1$. The associated Type $\mathcal{C}$ geometry is given by $ds^2=(x^1)^{-2}\{(dx^1)^2\pm(dx^2)^2\}$.
Throughout this section, we will let $\mathcal{M}$ be a Type $\mathcal{B}$ affine surface; we assume $\rho\ne0$ to ensure the geometry is not flat. In Section \[S6.1\], we examine when a Type $\mathcal{B}$ surface is strongly projectively flat. In Section \[S6.2\], we determine the dimension of the space of Yamabe solitons $E(0,\nabla)$. In Section \[S6.3\], we examine $\dim\{E(-1,\nabla)\}$. In Section \[S6.4\], we examine the general case where $\mu\notin\{0,-1\}$.
The proof of Theorem \[T1.13\] {#S6.1}
------------------------------
By Lemma \[L2.1\], $\mathcal{M}$ is strongly projectively flat if and only if $\rho_\nabla$ is symmetric and $\nabla\rho_\nabla$ is totally symmetric. A direct calculation shows the surfaces in Assertions (1) and (2) satisfy this condition. Conversely, $\rho_\nabla$ is symmetric if and only if $C_{22}{}^2=-C_{12}{}^1$. Impose this condition. It is then immediate that $\nabla\rho_\nabla(1,2;1)=\nabla\rho_\nabla(2,1;1)$ and $\nabla\rho_\nabla(1,2;2)=\nabla\rho_\nabla(2,1;2)$. The remaining two equations yield: $$\begin{aligned}
0&=&\nabla\rho_\nabla(1,2;1)-\nabla\rho_\nabla(1,1;2)\\
&=&C_{11}{}^1 C_{12}{}^1+3 C_{11}{}^2 C_{22}{}^1-2 C_{12}{}^1 C_{12}{}^2+2 C_{12}{}^1,\\
0&=&\nabla\rho_\nabla(1,2;2)-\nabla\rho_\nabla(2,2;1)\\
=&2 C_{11}{}^1 C_{22}{}^1-6 (C_{12}{}^1)^2-4 C_{12}{}^2 C_{22}{}^1-2 C_{22}{}^1.\end{aligned}$$ Suppose first $C_{22}{}^1=0$. The second constraint yields $-6(C_{12}{}^1)^2=0$. Thus $C_{12}{}^1=0$ and $C_{22}{}^2=0$. This yields the surfaces of Assertion (1). We therefore assume $C_{22}{}^1\ne0$. We can rescale so $C_{22}{}^1=\varepsilon=\pm1$. Let $\tilde x^1=x^1$ and $\tilde x^2=c x^1+x^2$ define a shear. We then obtain $\tilde C_{12}{}^1=C_{12}{}^1-cC_{22}{}^1$. Thus by choosing $c$ appropriately, we assume that $C_{12}{}^1=0$. We impose these constraints. Our equations become, after dividing by $\varepsilon$, $3C_{11}{}^2=0$ and $-2+2C_{11}{}^1-4C_{12}{}^2=0$. We set $C_{12}{}^2=v$, so $C_{11}{}^1=1+2v$, and obtain the surfaces of Assertion (2).
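The symmetry criterion $\rho_\nabla$ symmetric $\Leftrightarrow$ $C_{22}{}^2=-C_{12}{}^1$ used above is also easy to confirm by machine. The following sympy sketch computes the Ricci tensor of a general Type $\mathcal{B}$ connection with the convention $\rho_\nabla(X,Y)=\operatorname{tr}\{Z\mapsto R(Z,X)Y\}$ (a convention choice on our part; the symmetry criterion does not depend on it) and prints the antisymmetric part.

```python
import sympy as sp

x1 = sp.symbols('x1', positive=True)
Cs = sp.symbols('C11_1 C11_2 C12_1 C12_2 C22_1 C22_2')        # constants C_ij^k = C_ji^k
C = {(0, 0, 0): Cs[0], (0, 0, 1): Cs[1], (0, 1, 0): Cs[2],
     (0, 1, 1): Cs[3], (1, 1, 0): Cs[4], (1, 1, 1): Cs[5]}

def Gamma(i, j, k):                 # Type B connection: Gamma_ij^k = C_ij^k / x^1
    return C[(min(i, j), max(i, j), k)] / x1

def dGamma(i, j, k, m):             # derivative with respect to x^m; x^2-derivatives vanish
    return sp.diff(Gamma(i, j, k), x1) if m == 0 else sp.Integer(0)

def R(i, j, k, l):                  # curvature operator: R(e_i, e_j) e_k = R_ijk^l e_l
    val = dGamma(j, k, l, i) - dGamma(i, k, l, j)
    val += sum(Gamma(i, m, l)*Gamma(j, k, m) - Gamma(j, m, l)*Gamma(i, k, m) for m in range(2))
    return val

rho = sp.Matrix(2, 2, lambda j, k: sum(R(i, j, k, i) for i in range(2)))   # rho_jk = R_ijk^i
print(sp.simplify(rho[0, 1] - rho[1, 0]))   # (C12_1 + C22_2)/x1**2
```

The printed antisymmetric part $(C_{12}{}^1+C_{22}{}^2)/(x^1)^2$ vanishes precisely when $C_{22}{}^2=-C_{12}{}^1$.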
The proof of Theorem \[T1.14\]: $\mu=0$ {#S6.2}
---------------------------------------
Let $\mathcal{M}$ be a Type $\mathcal{B}$ surface which is not flat, or equivalently, not Ricci flat. Consequently, $\dim\{E(0,\nabla)\}<3$ by Theorem \[T1.5\]. We have $1\in E(0,\nabla)$. Suppose, exceptionally, $\dim\{E(0,\nabla)\}=2$. We examine the three cases of Theorem \[T1.12\] seriatim.
Case 1 {#case-1 .unnumbered}
------
Suppose $E(0,\nabla)=\operatorname{Span}\{(x^1)^\alpha,(x^1)^\alpha(-cx^1+x^2)\}$. Since $1\in E(0,\nabla)$, we have $\alpha=0$ so $f=-cx^1+x^2$. The following equations yield Assertion (1): $$(Q_{11}):\ 0=cC_{11}{}^1-C_{11}{}^2,\quad (Q_{12}):\ 0=cC_{12}{}^1-C_{12}{}^2,\quad (Q_{22}):\ 0=cC_{22}{}^1-C_{22}{}^2\,.$$
Case 2 {#case-2 .unnumbered}
------
Suppose $E(0,\nabla)=\operatorname{Span}\{(x^1)^\alpha,(x^1)^\alpha\log(x^1)\}$. Since $1\in E(0,\nabla)$, we have $\alpha=0$ so $f=\log(x^1)$. The following equations yield Assertion (2): $$(Q_{11}):\ 0=-1-C_{11}{}^1,\quad(Q_{12}):\ 0=-C_{12}{}^1,\quad(Q_{22}):\ 0=-C_{22}{}^1\,.$$
[**Case 3.**]{} Suppose $E(0,\nabla)=\operatorname{Span}\{(x^1)^\alpha,(x^1)^\beta\}$ for $\alpha\ne\beta$. Since $1\in E(0,\nabla)$, we can take $\beta=0$ and $\alpha\ne0$. The following equations yield Assertion (3): $(Q_{11}):\ 0=\alpha(-1+\alpha-C_{11}{}^1),\ (Q_{12}):\ 0=-\alpha C_{12}{}^1,\ (Q_{22}):\ 0=-\alpha C_{22}{}^1$.
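As a cross-check, one can verify symbolically that the stated functions in the three cases are indeed Yamabe solitons, i.e. that $\mathcal{H}_\nabla f=0$, using the coordinate Hessian $\mathcal{H}_{\nabla,ij}f=\partial_{x^i}\partial_{x^j}f-\Gamma_{ij}{}^k\partial_{x^k}f$ with $\Gamma_{ij}{}^k=(x^1)^{-1}C_{ij}{}^k$; in the sketch below the positivity assumptions only ease simplification.

```python
import sympy as sp

x1, x2, c, alpha = sp.symbols('x1 x2 c alpha', positive=True)
C11_1, C11_2, C12_1, C12_2, C22_1, C22_2 = sp.symbols('C11_1 C11_2 C12_1 C12_2 C22_1 C22_2')

def hessian(f, C):
    # C[(i, j)] = (C_ij^1, C_ij^2); the connection is Gamma_ij^k = C_ij^k / x^1
    x = (x1, x2)
    def H(i, j):
        a, b = min(i, j), max(i, j)
        return sp.diff(f, x[i], x[j]) - (C[(a, b)][0]*sp.diff(f, x1) + C[(a, b)][1]*sp.diff(f, x2))/x1
    return sp.Matrix(2, 2, lambda i, j: sp.simplify(H(i, j)))

# Assertion (1): C_ij^2 = c C_ij^1 and f = x^2 - c x^1
print(hessian(x2 - c*x1, {(0, 0): (C11_1, c*C11_1), (0, 1): (C12_1, c*C12_1), (1, 1): (C22_1, c*C22_1)}))

# Assertion (2): C_11^1 = -1, C_12^1 = C_22^1 = 0 and f = log(x^1)
print(hessian(sp.log(x1), {(0, 0): (-1, C11_2), (0, 1): (0, C12_2), (1, 1): (0, C22_2)}))

# Assertion (3): C_11^1 = alpha - 1, C_12^1 = C_22^1 = 0 and f = (x^1)^alpha
print(hessian(x1**alpha, {(0, 0): (alpha - 1, C11_2), (0, 1): (0, C12_2), (1, 1): (0, C22_2)}))
```

Each of the three calls prints the zero matrix.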
The proof of Theorem \[T1.15\]: $\mu=-1$ {#S6.3}
----------------------------------------
Let $\mathcal{M}$ be a Type $\mathcal{B}$ surface. By Theorem \[T1.5\], $\dim\{E(-1,\nabla)\}\neq 2,3$. Suppose $\dim\{E(-1,\nabla)\}=1$. By Theorem \[T1.12\], $f(x^1,x^2)=(x^1)^\alpha$ for some $\alpha$. As in the proof of Theorem \[T1.13\], we distinguish cases. We clear denominators. [**Case 1.**]{} Suppose $C_{22}{}^1=0$. $$\begin{aligned}
&&(Q_{11}):\ 0=\alpha ^2-\alpha (C_{11}{}^1+1)+C_{12}{}^2 (C_{11}{}^1-C_{12}{}^2+1)+C_{11}{}^2 (C_{22}{}^2-C_{12}{}^1),\\
&&(Q_{12}):\ 0=C_{12}{}^1 (-2 \alpha +2 C_{12}{}^2-1)+C_{22}{}^2,\ \
(Q_{22}):\ 0=C_{12}{}^1 (C_{22}{}^2-C_{12}{}^1)\,.\end{aligned}$$ If $C_{12}{}^1=0$, then Equation $(Q_{12})$ implies $C_{22}{}^2=0$. Thus $C_{12}{}^1=C_{22}{}^1=C_{22}{}^2=0$ so by Theorem \[T1.13\], $\mathcal{M}$ is strongly projectively flat. This is false. Consequently, by Equation $(Q_{22})$, we have $C_{12}{}^1=C_{22}{}^2\ne0$. We set $\alpha=C_{12}{}^2\ne0$ to satisfy the equations and thereby obtain the structure of Assertion (1). [**Case 2.**]{} Suppose $C_{22}{}^1\ne0$. We can rescale so $C_{22}{}^1=\varepsilon=\pm1$ and change coordinates to ensure $C_{12}{}^1=0$. We obtain $(Q_{11}):\ 0=\alpha ^2-(C_{11}{}^1+1) \alpha -(C_{12}{}^2)^2+C_{11}{}^1 C_{12}{}^2+C_{12}{}^2+C_{11}{}^2 C_{22}{}^2$, $(Q_{12}):\ 0=C_{22}{}^2-2 C_{11}{}^2 \epsilon,\qquad
(Q_{22}):\ 0=\alpha -C_{11}{}^1+C_{12}{}^2+1$. We set $C_{22}{}^2=2C_{11}{}^2\varepsilon$ and obtain $0=1+\alpha-C_{11}{}^1+C_{12}{}^2$. We set $\alpha=C_{11}{}^1-C_{12}{}^2-1$. The final Equation then yields $C_{11}{}^1=1+2C_{12}{}^2+\varepsilon(C_{11}{}^2)^2$. This is the structure of Assertion (2).
The proof of Theorem \[T1.17\] {#S6.4}
------------------------------
Let $\mu\notin\{0,-1\}$. Suppose $\mathcal{M}$ is a Type $\mathcal{B}$ surface with $\rho_{s,\nabla}\ne0$ and which is not also of Type $\mathcal{A}$. Suppose $\dim\{E(\mu,\nabla)\}\ge1$; $\dim\{E(\mu,\nabla)\}\le2$ by Theorem \[T1.5\]. By Theorem \[T1.12\], $(x^1)^\alpha\in E(\mu,\nabla)$. As in the proof of Theorem \[T1.15\], we distinguish cases.
Suppose first that $C_{22}{}^1=0$. Equation $(Q_{22})$ shows $0=C_{12}{}^1(C_{12}{}^1-C_{22}{}^2)\mu$. Since $\mu\ne0$, either $C_{12}{}^1=0$ or $C_{12}{}^1=C_{22}{}^2$. If $C_{12}{}^1=0$, we use Equation $(Q_{12})$ to see $0=-C_{22}{}^2\mu$ so $C_{22}{}^2=0$. This gives a Type $\mathcal{A}$ structure contrary to our assumption. We therefore obtain $C_{12}{}^1=C_{22}{}^2$. Equation $(Q_{12})$ shows $0=-C_{12}{}^1(\alpha+C_{12}{}^2\mu)$. Thus, $\alpha=-C_{12}{}^2\mu$. Equation $(Q_{11})$ shows $ 0=(C_{12}{}^2)^2\mu(1+\mu)$. Since $\mu\notin\{0,-1\}$, $C_{12}{}^2=0$. This implies $\alpha=0$ so $1\in E(\mu,\nabla)$ and $\rho_s=0$ contrary to our assumption.
We therefore have $C_{22}{}^1\ne0$. We can rescale the coordinates so $C_{22}{}^1=\varepsilon=\pm1$. We can then make a shear so $C_{12}{}^1=0$. We substitute these relations to obtain: $(Q_{11}):\ 0=\alpha ^2-\alpha (C_{11}{}^1+1)-\mu \left(C_{11}{}^1 C_{12}{}^2+C_{11}{}^2 C_{22}{}^2-(C_{12}{}^2)^2+C_{12}{}^2\right)$, $(Q_{12}):\ 0=C_{11}{}^2 \mu \epsilon -\frac{C_{22}{}^2 \mu }{2}$,$(Q_{22}):\ 0=\epsilon (\mu (-C_{11}{}^1+C_{12}{}^2+1)-\alpha )$. Since $\mu\ne0$, we have $C_{22}{}^2=2\varepsilon C_{11}{}^2$. We have: $$\label{E6.a}
C_{22}{}^1=\varepsilon,\quad C_{12}{}^1=0,\quad \alpha=\mu(1+C_{12}{}^2-C_{11}{}^1)\,.$$ The only remaining equation is $$(Q_{11}):\ 0=\mu\{(C_{11}{}^1)^2 (\mu +1)-2 C_{11}{}^1 (C_{12}{}^2 \mu +C_{12}{}^2+\mu )-2 (C_{11}{}^2)^2 \epsilon+(C_{12}{}^2)^2 (\mu +1)+2 C_{12}{}^2 (\mu -1)+\mu -1\}\,.$$ Since $\mu\ne0$, we can solve Equation ($Q_{11}$) for $\mu$ to complete the proof of the first portion of Theorem \[T1.17\].
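Explicitly, grouping the expression in braces by powers of $\mu$ gives $$(Q_{11}):\quad 0=\mu\left\{(C_{11}{}^1-C_{12}{}^2-1)^2\,\mu+(C_{11}{}^1)^2-2C_{11}{}^1C_{12}{}^2-2\epsilon(C_{11}{}^2)^2+(C_{12}{}^2)^2-2C_{12}{}^2-1\right\}\,.$$ If $C_{11}{}^1-C_{12}{}^2-1=0$, then $\alpha=0$ by Equation (\[E6.a\]), so $1\in E(\mu,\nabla)$ and hence $\rho_{s,\nabla}=0$, contrary to assumption. Thus $C_{11}{}^1-C_{12}{}^2-1\ne0$ and, since $\mu\ne0$, $$\mu=\frac{-(C_{11}{}^1)^2+2C_{11}{}^1C_{12}{}^2+2\epsilon(C_{11}{}^2)^2-(C_{12}{}^2)^2+2C_{12}{}^2+1}{(C_{11}{}^1-C_{12}{}^2-1)^2}\,,$$ which is the value of $\mu$ in the statement of Theorem \[T1.17\] (with $\epsilon=\pm1$).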
We now suppose $\dim\{E(\mu,\nabla)\}=2$. We examine the possibilities of Theorem \[T1.13\] seriatim.
Case 1 {#case-1-1 .unnumbered}
------
$E(\mu,\nabla)=\operatorname{Span}\{(x^1)^\alpha,(x^1)^\alpha(cx^1+x^2)\}$. Let $f=(x^1)^\alpha(cx^1+x^2)$. We have $\alpha=(1-C_{11}{}^1+C_{12}{}^2)\mu$. Equation ($Q_{22}$) shows $0=-(c+2C_{11}{}^2)$ so $c=-2 C_{11}{}^2$. After clearing denominators, we have $(Q_{11}):\ 0=C_{11}{}^2 \{-2 (C_{11}{}^1)^2+C_{11}{}^1 (6 C_{12}{}^2-3)+8 \varepsilon (C_{11}{}^2)^2-4 (C_{12}{}^2)^2+9 C_{12}{}^2+5\}$, $(Q_{12}):\ 0=(C_{11}{}^1)^2-3 C_{11}{}^1 C_{12}{}^2-2\varepsilon (C_{11}{}^2)^2+2 (C_{12}{}^2)^2-C_{12}{}^2-1$.
Case 1a {#case-1a .unnumbered}
-------
If $C_{11}{}^2=0$, Equation ($Q_{11}$) is trivial and we obtain $(Q_{12}):\ 0=(C_{11}{}^1)^2-3 C_{11}{}^1 C_{12}{}^2+2 (C_{12}{}^2)^2-C_{12}{}^2-1=(C_{11}{}^1-2 C_{12}{}^2-1) (C_{11}{}^1-C_{12}{}^2+1)$. If $C_{11}{}^1=1+2C_{12}{}^2$, then $\mu=-1$, which is false. If $C_{11}{}^1=C_{12}{}^2-1$, then we obtain the structure in Assertion (1).
Case 1b {#case-1b .unnumbered}
-------
Suppose $C_{11}{}^2\ne0$. We may divide the first equation by $C_{11}{}^2$ to see $(\tilde Q_{11}):\ 0=-2 (C_{11}{}^1)^2+C_{11}{}^1 (6 C_{12}{}^2-3)+8\varepsilon (C_{11}{}^2)^2-4 (C_{12}{}^2)^2+9 C_{12}{}^2+5$. We compute that $(\tilde Q_{11})+4(Q_{12}):\ 0=2 (C_{11}{}^1)^2-3 C_{11}{}^1 (2 C_{12}{}^2+1)+4 (C_{12}{}^2)^2+5 C_{12}{}^2+1=(2 C_{11}{}^1-4 C_{12}{}^2-1) (C_{11}{}^1-C_{12}{}^2-1)$. Since $(C_{11}{}^1-C_{12}{}^2-1)\ne0$, we obtain $2 C_{11}{}^1-4 C_{12}{}^2-1=0$. There is then a single remaining relation: $0=8 (C_{11}{}^2)^2 \epsilon +2 C_{12}{}^2+3$. We solve this for $C_{12}{}^2$ to obtain the structure of Assertion (2).
Case 2 {#case-2-1 .unnumbered}
------
$E(\mu,\nabla)=\operatorname{Span}\{(x^1)^\alpha,(x^1)^\alpha\log(x^1)\}$. Evaluating Equation ($Q_{22}$) at $x^1=1$ yields $\varepsilon=0$ which is impossible. Therefore this case does not arise.
Case 3 {#case-3 .unnumbered}
------
$E(\mu,\nabla)=\operatorname{Span}\{(x^1)^\alpha,(x^1)^\beta\}$ for $\alpha\ne\beta$. Since $\alpha$ is determined by Equation (\[E6.a\]), this case does not arise.
[99]{}
Z. Afifi, Riemann extensions of affine connected spaces, *Quart. J. Math., Oxford Ser. (2)* **5** (1954), 312–320.
M. Brozos-Vázquez, E. García-Río, and P. Gilkey, Homogeneous affine surfaces: Killing vector fields and gradient Ricci solitons, arXiv:1512.05515. To appear [*J. Math. Soc. Japan.*]{}
M. Brozos-Vázquez, E. García-Río, P. Gilkey, and X. Valle-Regueiro, Half conformally flat generalized quasi-Einstein manifolds, arXiv:1702.06714.
M. Brozos-Vázquez, E. García-Río, P. Gilkey, and X. Valle-Regueiro, A natural linear Equation in affine geometry: the affine quasi-Einstein equation, arXiv:1705.08352.
E. Calviño-Louzao, E. García–Río, P. Gilkey, and R. Vázquez-Lorenzo, The geometry of modified Riemannian extensions, [*Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci.*]{} [**465**]{} (2009), 2023-2040.
A. Derdzinski and W. Roter, Walker’s theorem without coordinates, [*J. Math. Phys.*]{} [**47**]{} (6) (2006) 062504 8 pp.
L. P. Eisenhart, *Non-Riemannian geometry* (Reprint of the 1927 original), Amer. Math. Soc. Colloq. Publ. **8**, American Mathematical Society, Providence, RI, 1990.
D.-S. Kim and Y. H. Kim, Compact Einstein warped product spaces with nonpositive scalar curvature, [*Proc. Amer. Math. Soc.*]{} [**131**]{} (2003), 2573–2576.
W. Kühnel and H.B. Rademacher, Conformal transformations of pseudo-Riemannian manifolds, *Recent developments in pseudo-Riemannian geometry*, 261–298, ESI Lect. Math. Phys., Eur. Math. Soc., Zürich, 2008.
K. Nomizu and T. Sasaki, *Affine differential geometry*. Cambridge Tracts in Mathematics, **111**, Cambridge University Press, Cambridge, 1994.
B. Opozda, A classification of locally homogeneous connections on 2-dimensional manifolds, [*Differential Geom. Appl.*]{} [**21**]{} (2004), 173–198.
Ch. Steglich, Invariants of Conformal and Projective Structures, *Results Math.* **27** (1995), 188–193.
E. M. Patterson and A. G. Walker, Riemann extensions, [*Quart. J. Math. Oxford Ser. (2)*]{} [**3**]{} (1952), 19–28.
[^1]: Supported by Project MTM2016-75897-P (AEI/FEDER, UE)
---
author:
- 'Lillian J. Ratliff and Tanner Fiez'
bibliography:
- '2017TACLTrefs.bib'
title: Adaptive Incentive Design
---
Introduction {#sec:intro}
============
Problem Formulation {#sec:problemformulation}
===================
Utility Learning Formulation {#sec:incent_util}
============================
Incentive Design Formulation {#sec:incent_incent}
============================
Convergence in the Noise Free Case {#sec:main}
==================================
Convergence in the Presence of Noise {#sec:noise}
====================================
Numerical Examples {#sec:examples}
==================
Conclusion {#sec:discussion}
==========
We present a new method for adaptive incentive design when a planner faces competing agents of unknown type. Specifically, we provide an algorithm for learning the agents’ decision-making process and updating incentives. We provide convergence guarantees on the algorithm. We show that under reasonable assumptions, the agents’ true response is driven to the desired response and, under slightly more restrictive assumptions, the true preferences can be learned asymptotically. We provide several numerical examples that both verify the theory as well as demonstrate the performance when we relax the theoretical assumptions.
---
abstract: 'Location-based services (LBSs) provide valuable services, with convenient features for mobile users. However, the location and other information disclosed through each query to the LBS erodes user privacy. This is a concern especially because LBS providers can be *honest-but-curious*, collecting queries and tracking users’ whereabouts and inferring sensitive user data. This motivated both *centralized* and *decentralized* location privacy protection schemes for LBSs: anonymizing and obfuscating queries to not disclose exact information, while still getting useful responses. Decentralized schemes overcome disadvantages of centralized schemes, eliminating anonymizers, and enhancing users’ control over sensitive information. However, an insecure decentralized system could create serious risks beyond private information leakage. More so, attacking an improperly designed decentralized privacy protection scheme could be an effective and low-cost step to breach user privacy. We address exactly this problem, by proposing security enhancements for mobile data sharing systems. We protect user privacy while preserving accountability of user activities, leveraging pseudonymous authentication with mainstream cryptography. We show our scheme can be deployed with off-the-shelf devices based on an experimental evaluation of an implementation in a static automotive testbed.'
author:
- Hongyu Jin
- Panos Papadimitratos
bibliography:
- 'references.bib'
title: 'Resilient Privacy Protection for Location-based Services through Decentralization'
---
This work has been supported by the Swedish Foundation for Strategic Research (SSF) SURPRISE project and the KAW Academy Fellow Trustworthy IoT project.
---
abstract: 'We demonstrate that the origin of so called quantum probabilistic rule (which differs from the classical Bayes’ formula by the presence of $\cos \theta$-factor) might be explained in the framework of ensemble fluctuations which are induced by preparation procedures. In particular, quantum rule for probabilities (with nontrivial $\cos \theta$-factor) could be simulated for macroscopic physical systems via preparation procedures producing ensemble fluctuations of a special form. We discuss preparation and measurement procedures which may produce probabilistic rules which are neither classical nor quantum; in particular, hyperbolic ‘quantum theory.’'
author:
- |
Andrei Khrennikov\
Department of Mathematics, Statistics and Computer Sciences\
University of Växjö, S-35195, Sweden
title: 'On the mystery of quantum probabilistic rule: trigonometric and hyperbolic probabilistic behaviours'
---
Introduction
============
It is well known that the classical probabilistic rule based on the Bayes’ formula for conditional probabilities cannot be applied to quantum formalism, see, for example, \[1\]-\[3\] for extended discussions. In fact, all special properties of quantum systems are just consequences of violations of the classical probability rule, Bayes’ theorem \[1\]. In this paper we restrict our investigations to the two dimensional case. Here Bayes’ formula has the form $(i=1,2):$ $$\label{*1}
{\bf p}(A=a_i)={\bf p}(C=c_1){\bf p}(A=a_i/C=c_1)+
{\bf p}(C=c_2){\bf p}(A=a_i/C=c_2),$$ where $A$ and $C$ are physical variables which take, respectively, values $a_1, a_2$ and $c_1, c_2.$ Symbols ${\bf p}(A=a_i/C=c_j)$ denote conditional probabilities. There is a large diversity of opinions on the origin of violations of (\[\*1\]) in quantum mechanics. The common opinion is that violations of (\[\*1\]) are induced by special properties of quantum systems.
Let $\phi$ be a quantum state. Let $\{\phi_i\}_{i=1}^2$ be an orthogonal basis consisting of eigenvectors of the operator $\hat{C}$ corresponding to the physical observable $C.$ The quantum theoretical rule has the form $(i=1,2):$ $$\label{*2}
q_i = {\bf p}_1 {\bf p}_{1i} + {\bf p}_2 {\bf p}_{2i}\pm
2 \sqrt{{\bf p}_1{\bf p}_{1i} {\bf p}_2 {\bf p}_{2i}}\cos\theta,$$ where $q_i={\bf p}_\phi(A=a_i), {\bf p}_j={\bf p}_\phi(C=c_j), {\bf p}_{ij}={\bf p}_{\phi_i}(A=a_j), i,j=1,2.$ Here probabilities have indexes corresponding to quantum states. The common opinion is that this quantum probabilistic rule must be considered as a peculiarity of nature. However, there exists an opposition to this general opinion, namely the probabilistic opposition. The main domain of activity of this probabilistic opposition is Bell’s inequality and the EPR paradox \[4\] , see, for example, \[1\], \[5\]-\[11\]. The general idea supported by the probabilistic opposition is that special quantum behaviour can be understood on the basis of local realism, if we be careful with the probabilistic description of physical phenomena. It seems that the origin of all ‘quantum troubles’ is probabilistic rule (\[\*2\]). It seems that the violation of Bell’s inequality is just a new representation of the old contradiction between rules (\[\*1\]) and (\[\*2\]) (the papers of Accardi \[1\] and De Muynck, De Baere and Martens \[7\] contain extended discussions on this problem). Therefore, the main problem of the probabilistic justification of quantum mechanics is to find the clear probabilistic explanation of the origin of quantum probabilistic rule (\[\*2\]) and the violation of classical probabilistic rule (\[\*1\]) and explain why (\[\*2\]) is sometimes reduced to (\[\*1\]).
L. Accardi \[5\] introduced a notion of the [*statistical invariant*]{} to investigate the relation between classical Kolmogorovean and quantum probabilistic models, see also Gudder and Zanghi in \[6\]. He was also the first who mentioned that Bayes’ postulate is a “hidden axiom of the Kolmogorovean model... which limits its applicability to the statistical description of the natural phenomena ”, \[5\]. In fact, this investigation plays a crucial role in our analysis of classical and quantum probabilistic rules.
An interesting investigation on this problem is contained in the paper of J. Shummhammer \[11\]. He supports the idea that quantum probabilistic rule (\[\*2\]) is not a peculiarity of nature, but just a consequence of one special method of the probabilistic description of nature, so called method of [*[maximum predictive power]{}*]{}. We do not directly support the idea of Shummhammer. It seems that the origin of (\[\*2\]) is not only a consequence of the use of one special method for the description of nature, but merely a consequence of our manipulations with nature, ensembles of physical systems, in quantum preparation/measurement procedures.
In this paper we provide a probabilistic analysis of quantum rule (\[\*2\]). In our analysis ‘probability’ has the meaning of the [*frequency probability,*]{} namely the limit of frequencies in a long sequence of trials (or for a large statistical ensemble). Hence, in fact, we follow R. von Mises’ approach to probability \[12\]. It seems that it would be impossible to find the roots of quantum rule (\[\*2\]) in the conventional probability framework, A. N. Kolmogorov, 1933, \[13\]. In the conventional measure-theoretical framework probabilities are defined as sets of real numbers having some special mathematical properties. Classical rule (\[\*1\]) is merely a consequence of the definition of conditional probabilities. In the Kolmogorov framework to analyse the transition from (\[\*1\]) to (\[\*2\]) is to analyse the transition from one definition to another. In the frequency framework we can analyse behaviour of trials which induce one or another property of probability. Our analysis shows that quantum probabilistic rule (\[\*2\]) can be explained on the basis of ensemble fluctuations (one possible source of ensemble fluctuations is so called ensemble nonreproducibility, see De Baere \[7\]; see also \[10\] for the statistical variant of nonreproducibility). Such fluctuations can generate (under special conditions) the cos $\theta$-factor in (\[\*2\]). Thus trigonometric fluctuations of quantum probabilities can be explained without using wave arguments.
An unexpected consequence of our analysis is that quantum probability rule (\[\*2\]) is just one of possible perturbations (by ensemble fluctuations) of classical probability rule (\[\*1\]). In principle, there might exist experiments which would produce perturbations of classical probabilistic rule (\[\*1\]) which differ from quantum probabilistic rule (\[\*2\]).
Quantum formalism and ensemble fluctuations
===========================================
[**1. Frequency probability theory.**]{} The frequency definition of probability is more or less standard in quantum theory; especially in the approach based on preparation and measurement procedures, \[14\], \[3\].
Let us consider a sequence of physical systems $
\pi= (\pi_1,\pi_2,..., \pi_N,... )\;.
$ Suppose that elements of $\pi$ have some property, for example, position, and this property can be described by natural numbers: $ L=\{1,2,...,m \},$ the set of labels. Thus, for each $\pi_j\in \pi,$ we have a number $x_j\in L.$ So $\pi$ induces a sequence $$\label{la1}
x=(x_1,x_2,..., x_N,...), \; \; x_j \in L.$$ For each fixed $\alpha \in L,$ we have the relative frequency $\nu_N(\alpha)= n_N(\alpha)/N$ of the appearance of $\alpha$ in $(x_1,x_2,..., x_N).$ Here $n_N(\alpha)$ is the number of elements in $(x_1,x_2,..., x_N)$ with $x_j=\alpha.$ R. von Mises \[12\] said that $x$ satisfies to the principle of the [*statistical stabilization*]{} of relative frequencies, if, for each fixed $\alpha \in L,$ there exists the limit $$\label{l0}
{\bf p} (\alpha)=\lim_{N\to \infty} \nu_N(\alpha) .$$ This limit is said to be a probability of $\alpha.$
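As a toy illustration of this statistical stabilization (the label set and probabilities below are arbitrary choices of ours), one may simulate a long run of trials and watch the relative frequencies settle down:

```python
import numpy as np

rng = np.random.default_rng(0)
labels, probs = [1, 2, 3], [0.5, 0.3, 0.2]        # L = {1, 2, 3} with a fixed preparation
x = rng.choice(labels, size=100_000, p=probs)      # the sequence x_1, x_2, ..., x_N, ...
for N in (100, 1_000, 10_000, 100_000):
    nu = [(x[:N] == a).mean() for a in labels]     # relative frequencies nu_N(alpha)
    print(N, [round(v, 3) for v in nu])
```

The printed frequencies approach $(0.5,0.3,0.2)$, the probabilities ${\bf p}(\alpha)$ of this simulated preparation.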
We shall not consider so called principle of [*randomness,*]{} see \[12\] for the details. This principle, despite its importance for the foundations of probability theory, is not related to our frequency analysis. We shall be interested only in the statistical stabilization of relative frequencies.
[**2. Preparation and measurement procedures and quantum formalism.**]{} We consider a statistical ensemble S of quantum particles described by a quantum state $\phi.$ This ensemble is produced by some preparation procedure ${\cal E},$ see, for example, \[14\], \[3\] for details. There are two discrete physical observables $C=c_1, c_2$ and $A=a_1, a_2.$
The total number of particles in S is equal to N. Suppose that $n_{i}^{c}, i=1,2,$ particles in $S$ would give the result $C=c_i$ and $n_{i}^a, i=1,2,$ particles in $S$ would give the result $A=a_i.$ Suppose that, among those particles which would produce $C=c_i,$ there are $n_{ij}, i,j, =1,2,$ particles which would give the result $A=a_j$ (see (R) and (C) below to specify the meaning of ‘would give’). So
$n_i^c=n_{i1}+n_{i2}, n_j^a=n_{1j}+n_{2j}, i, j=1,2.$
\(R) We can use an objective realist model in that both $C$ and $A$ are [*objective properties*]{} of a quantum particle, see \[2\], \[3\], \[10\] for the details. In such a model we can consider in $S$ sub-ensembles ${\rm{S_j(C)}}$ and ${\rm{S_j(A), j=1, 2, }}$ of particles having properties ${\rm{C=c_j}}$ and $A=a_j,$ respectively. Set ${\rm{S_{ij}(A,C)=S_i(C)\cap S_j(A).}}$ Then $n_{ij}$ is the number of elements in the ensemble ${\rm{S_{ij}(A,C).}}$ We remark that the ‘existence’ of the objective property $(C=c_i$ and $A=a_j)$ need not imply the possibility to measure this property. For example, such a measurement is impossible in the case of incompatible observables. So in general $(C=c_i$ and $A=a_j)$ is a kind of hidden property.
\(C) We can use so called [*contextualist*]{} realism, see, for example, \[3\] and De Muynck, De Baere and Martens in \[7\]. Here we cannot assume that a quantum system determines uniquely the result of a measurement. This result depends not only on properties of a quantum particle, but also on the experimental arrangement. Here $n_{ij}$ is the number of particles which would produce $C=c_i$ and $A=a_j.$ We remark that the latter statement, in fact, contains [*counterfactuals*]{}: $C=c_i$ and $A=a_j$ could not be measured simultaneously, see, for example, \[3\] for the use of counterfactuals in quantum theory.
The quantum experience says that the following frequency probabilities are well defined for all observables $C,A :$ $$\label{*3C}
{\bf p}_i={\bf p}_\phi(C=c_i)=\lim_{N\rightarrow\infty}{\bf p}_i^{(N)}, {\bf p}_i^{(N)}=
{\frac{n_i^c}{N}};$$ $$\label{*3A}
q_i={\bf p}_\phi(A=a_i)=\lim_{N\rightarrow\infty} q_i^{(N)}, q_i^{(N)}={\frac{n_i^a}{N}}.$$ Can we say something about behaviour of frequencies $\rm{\tilde{{\bf p}}_{ij}^{(N)}={\frac{n_{ij}}{N}}, N\rightarrow\infty}$? Suppose that they stabilize, when ${\rm{N\rightarrow \infty.}}$ This implies that probabilities $\tilde{{\bf p}}_{{\rm{ij}}}={\bf p}_\phi({\rm{C=c_i,A=a_j}})=\lim_{N\rightarrow\infty}
\tilde{{\bf p}}_{{\rm{ij}}}^{({\rm{N}})}$ would be well defined. The quantum experience says that (in general) this is not the case. Thus, in general, the frequencies $\tilde{{\bf p}}_{ij}^{(N)}$ fluctuate, when ${\rm{N\rightarrow\infty.}}$ Such fluctuations can, nevertheless, produce the statistical stabilization (\[\*3C\]), (\[\*3A\]), see \[10\] for the details.
[**Remark 2.1.**]{} [The common interpretation of experimental violations of Bell’s inequality is that realism and even contextualist realism cannot be used in quantum theory (at least in the local framework). However, Bell’s considerations only imply that we cannot use realist models under the assumption that $\rm{\tilde{{\bf p}}_{ij}^{(N)}}$ stabilize. The realist models with fluctuating frequencies $\rm{\tilde{{\bf p}}_{ij}^{(N)}}$ can coexist with violations of Bell’s inequality, see \[10\].]{}
Let us now consider statistical ensembles $T_i, i=1,2,$ of quantum particles described by the quantum states $\phi_{i}$ which are eigenstates of the operator $\hat{C}:$ $\hat{C} \phi_i =c_i \phi_i.$ These ensembles are produced by some preparation procedures ${\cal E}_i.$ For instance, we can suppose that particles produced by a preparation procedure ${\cal E}$ for the quantum state $\phi$ pass through additional filters $F_i, i=1,2.$ In quantum formalism we have $$\label{*4}
\phi=\sqrt{{\bf p}_1} \; \phi_1+\sqrt{{\bf p}_2} e^{i \theta} \; \phi_2 \;.$$ In the objective realist model (R) this representation may induce the illusion that ensembles $T_i, i=1,2,$ for states $\phi_i$ must be identified with sub-ensembles ${\rm{S_i(C)}}$ of the ensemble $S$ for the state $\phi.$ However, there are no physical reasons for such an identification. There are two main sources of troubles with this identification:
(a). The additional filter ${\rm{F}}_1$ (and ${\rm{F}}_2)$ changes the properties of quantum particles. The probability distribution of the property $A$ for the ensemble ${\rm{S_1(C)=\{{\pi\in S:C(\pi)=c_1}}}\}$ (and ${\rm{S_2(C))}}$ may differ from the corresponding probability distribution for the ensemble ${\rm{T_1}} ($and ${\rm{T_2).}} $ So different preparation procedures produce different distributions of properties. The same conclusion can be done for the contextualist realism: an additional filter changes possible reactions of quantum particles to measurement devices.
(b). As we have already mentioned, frequencies ${\rm{\tilde{{\bf p}}_{ij}^{(N)}={\frac{n_{ij}}{N}}}}$ must fluctuate (in the case of incompatible observables). Therefore, even if additional filters do not change properties of quantum particles, nonreproducibility implies that the distribution of the property $A$ may be essentially different for statistical ensembles ${\rm{S_1(C) }}$ and ${\rm{S_2(C) }}$ (sub-ensembles of $S)$ and ${\rm{T}}_1$ and ${\rm{T_2}}.$ Moreover, distributions may be different even for sub-ensembles ${\rm{S_1(C)}}$ and ${\rm{S}_1^\prime(C)}$ (or ${\rm{S_2(C)}}$ and ${\rm{S_2^\prime(C)),}} $ of two different ensembles S and ${\rm{S^\prime}}$ of quantum particles prepared in the same quantum state $\phi,$ see \[10\].
Fluctuations of physical properties which could be induced by (a) or (b) will be called [*[ensemble fluctuations.]{}*]{}
Suppose that ${\rm{m_{ij}}}$ particles in the ensemble ${\rm{T_i}}$ would produce the result $A=a_j, j=1,2.$ We can use the objective realist model, (R). Then $m_{ij}$ is just the number of particles in the ensemble $T_i$ having the objective property $A=a_j.$ We can also use the contextualist model, (C). Then $m_{ij}$ is the number of particles in the ensemble $T_i$ which in the process of an interaction with a measurement device for the physical observable $A$ would give the result $A=a_j.$
The quantum experience says that the following frequency probabilities are well defined:
${\rm{{\bf p}_{ij}={\bf p}_{\phi_i}(A=a_j)=lim_{N\rightarrow\infty}
{\bf p}_{ij}^{(N)}, {\bf p}_{ij}^{(N)}={\frac{m_{ij}}{n_i^c}}.}}$
Here it is assumed that an ensemble ${\rm{T_i}}$ consists of ${\rm{n_i^c}}$ particles, $i=1,2.$ It is also assumed that ${\rm{n_i^c=
n_i^c (N) \rightarrow \infty, N\rightarrow\infty.}}$ In fact, the latter assumption holds true if both probabilities ${\bf p}_i, i=1,2,$ are nonzero.
We remark that probabilities ${\bf p}_{ij}={\bf p}_{\phi_i}(A=a_j)$ cannot be (in general) identified with conditional probabilities ${\bf p}_\phi(A=a_j/C=c_i)={\frac{\tilde{{\bf p}}_{ij}}{{\bf p}_i}}.$ As we have remarked, these probabilities are related to statistical ensembles prepared by different preparation procedures, namely by ${\cal E}_i, i=1,2,$ and ${\cal E}.$
Let $\{ \psi_j \}_{j=1}^2$ be an orthonormal basis consisting of eigenvectors of the operator $A.$ We can restrict our considerations to the case: $$\label{*4a}
\phi_1 = \sqrt{{\bf p}_{11}} \;\psi_1 + e^{i\gamma_1} \sqrt{{\bf p}_{12}}\; \psi_2, \;\;
\phi_2= \sqrt{{\bf p}_{21}} \;\psi_1 + e^{i\gamma_2} \sqrt{{\bf p}_{22}} \;\psi_2\;.$$ As $(\phi_1, \phi_2)=0,$ we obtain:
$\rm{{\sqrt{{\bf p}_{11}{\bf p}_{21}}} + e^{i(\gamma_1 - \gamma_2)} {\sqrt{{\bf p}_{12} {\bf p}_{22}}} =0}.$
Hence, $\sin(\gamma_1-\gamma_2)=0$ (we suppose that all probabilities ${\bf p}_{ij} >0)$ and $\gamma_2=\gamma_1+\pi k. $ We also have
${\sqrt{{\bf p}_{11}{\bf p}_{21}}}
+\cos(\gamma_1-\gamma_2){\sqrt{{\bf p}_{12}{\bf p}_{22}}}=0.$
This implies that $k=2 l+1$ and ${\sqrt{{\bf p}_{11} {\bf p}_{21}}}={\sqrt{{\bf p}_{12} {\bf p}_{22}}}.$ As ${\bf p}_{12}=1-{\bf p}_{11} $ and $ {\bf p}_{21}=1-{\bf p}_{22},$ we obtain that $$\label{*4b}
{\bf p}_{11}={\bf p}_{22}, \; \; {\bf p}_{12}={\bf p}_{21}.$$ This equalities are equivalent to the condition: ${\bf p}_{11}+{\bf p}_{21}=1,
{\bf p}_{12}+{\bf p}_{22}=1.$ So the matrix of probabilities $({\bf p}_{ij})_{i,j=1}^2$ is so called [*double stochastic matrix,*]{} see, for example, \[3\] for general considerations.
Thus, in fact, $$\label{*q}
\phi_1={\sqrt{{\bf p}_{11}}}\;\psi_1+e^{i\gamma_1}{\sqrt{{\bf p}_{12}}}\;\psi_2,\;
\phi_2={\sqrt{{\bf p}_{21}}}\;\psi_1-e^{i\gamma_1}{\sqrt{{\bf p}_{22}}}\;\psi_2.$$ So $\rm{\varphi=d_1\psi_1+d_2\psi_2,}$ where
$d_1=\sqrt{{\bf p}_{1} {\bf p}_{11}} + e^{i\theta} \sqrt{{\bf p}_{2} {\bf p}_{21}}, \;
d_2=e^{i\gamma_1} \sqrt{{\bf p}_{1} {\bf p}_{12}} - e^{i(\gamma_1+\theta)}
\sqrt{{\bf p}_{2} {\bf p}_{22}}.$
Thus $$\label{*}
q_1={\bf p}_\phi (A=a_1)=|d_1|^2={\bf p}_1{\bf p}_{11}+{\bf p}_2{\bf p}_{21}+ 2 \sqrt{{\bf p}_1 {\bf p}_{11}
{\bf p}_2 {\bf p}_{21}}\cos \theta ;$$ $$\label{**}
q_2= {\bf p}_\phi(A=a_2) = |d_2|^2= {\bf p}_1 {\bf p}_{12}
+{\bf p}_2 {\bf p}_{22} -2 \sqrt{{\bf p}_1 {\bf p}_{12} {\bf p}_2 {\bf p}_{22}} \cos\theta .$$
[**3. Probability relations connecting preparation procedures.**]{} Let us forget for the moment about the quantum theory. We consider an arbitrary preparation procedure ${\cal E}$ for microsystems or macrosystems. Suppose that ${\cal E}$ produced an ensemble $S$ of physical systems. Let $C (=c_1, c_2)$ and $A (= a_1, a_2)$ be physical quantities which can be measured for elements $\pi \in S.$ Let ${\cal E}_1$ and ${\cal E}_2$ be preparation procedures which are based on filters $F_1$ and $F_2$ corresponding, respectively, to values $c_1$ and $c_2$ of $C.$ Denote statistical ensembles produced by these preparation procedures by symbols $T_1$ and $T_2,$ respectively. Symbols $N, n_i^c, n_i^a, n_{ij}, m_{ij}$ have the same meaning as in the previous considerations. Probabilities ${\bf p}_i, {\bf p}_{ij}, q_i$ are defined in the same way as in the previous considerations. The only difference is that, instead of indexes corresponding to quantum states, we use indexes corresponding to statistical ensembles: ${\bf p}_i= {\bf P}_S (C=c_i), q_i= {\bf P}_S (A=a_i), {\bf p}_{ij}= {\bf P}_{T_i} (A=a_j).$
In the classical frequency framework we obtain: $$q_1^{(N)} = \frac{n_1^a}{N} = \frac{n_{11}}{N} + \frac{n_{21}}{N} =
\frac{m_{11}}{N} + \frac{m_{21}}{N} + \frac{(n_{11} - m_{11})}{N} + \frac{(n_{21} - m_{21})}{N}.$$
But, for $i=1,2,$ we have
$\frac{m_{1i}}{N} = \frac{m_{1i}}{n_{1}^c}\cdot \frac{n_{1}^c}{N}
={\bf p}_{1i}^{(N)} {\bf p}_1^{(N)},\; \;
\frac{m_{2i}}{N} =\frac{m_{2i}}{n_2^c}\cdot \frac{n_{2}^c}{N}= {\bf p}_{2i}^{(N)}{\bf p}_2^{(N)}\;.
$
Hence $$\label{a*}
q_i^{(N)} = {\bf p}_{1}^{(N)} {\bf p}_{1i}^{(N)} + {\bf p}_{2}^{(N)} {\bf p}_{2i}^{(N)} + \delta_i^{(N)},$$ where $$\delta_i^{(N)} = \frac{1}{N} [(n_{1i} - m_{1i}) + (n_{2i} - m_{2i})], \; i= 1,2.$$ In fact, this rest term depends on the statistical ensembles $S, T_1, T_2,$ $\delta_i^{(N)} = \delta_i^{(N)}(S,T_1,T_2).$
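The decomposition (\[a\*\]) is a purely combinatorial identity and can be checked on any bookkeeping table of counts; in the sketch below the numbers are invented and only constrained by $n_{i1}+n_{i2}=m_{i1}+m_{i2}=n_i^c$.

```python
N = 1000
n_c = [600, 400]                    # n_i^c : elements of S that would give C = c_i
n = [[350, 250], [180, 220]]        # n[i][j] ~ n_{ij} : would give C = c_i and A = a_j
m = [[500, 100], [120, 280]]        # m[i][j] ~ m_{ij} : would give A = a_j within the ensemble T_i

p = [n_c[i] / N for i in range(2)]                         # p_i^(N)
for j in range(2):
    q = (n[0][j] + n[1][j]) / N                            # q_j^(N)
    p_ij = [m[i][j] / n_c[i] for i in range(2)]            # p_{ij}^(N)
    delta = ((n[0][j] - m[0][j]) + (n[1][j] - m[1][j])) / N
    assert abs(q - (p[0]*p_ij[0] + p[1]*p_ij[1] + delta)) < 1e-12
    print(j + 1, q, delta)
```

The identity holds exactly; all the physics is contained in how $\delta_i^{(N)}$ behaves as $N\rightarrow\infty$.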
[**4. Behaviour of fluctuations.**]{} First we remark that $\lim_{N\rightarrow\infty} \delta_i^{(N)}$ exists for all physical measurements. This is a consequence of the property of statistical stabilization of relative frequencies for physical observables (in classical as well as in quantum physics). It may be that this property is a peculiarity of nature. It may be that this is just a property of our measurement and preparation procedures, see \[10\] for an extended discussion. In any case we always observe that
${\rm{q_i^{(N)}\rightarrow q_i, {\bf p}_i^{(N)}\rightarrow {\bf p}_i, {\bf p}_{ij}^{(N)}\rightarrow {\bf p}_{ij}, N\rightarrow\infty.}}$
Thus there exist limits
$\delta_i =\lim_{N\rightarrow\infty}\delta_i^{(N)}=q_i-{\bf p}_1{\bf p}_{1i}-{\bf p}_2{\bf p}_{2i}.$
Suppose that ensemble fluctuations produce negligibly small (with respect to N) changes in properties of particles. Then $$\label{cl}
\delta_i^{(N)}\rightarrow 0, N\rightarrow \infty.$$ This asymptotic implies classical probabilistic rule (\[\*1\]). In particular, this rule appears in all experiments of classical physics. Hence, preparation and measurement procedures of classical physics produce ensemble fluctuations with asymptotic (\[cl\]). We also have such a behaviour in the case of compatible observables in quantum physics. Moreover, we can obtain the same classical probabilistic rule for incompatible observables $C$ and $A$ if the phase factor $\theta= \frac{\pi}{2} + \pi k.$ Therefore classical probabilistic rule (\[\*1\]) is not directly related to commutativity of corresponding operators in quantum theory. It is a consequence of asymptotic (\[cl\]) for ensemble fluctuations.
Suppose now that filters $F_i, i =1,2,$ produce relatively large (with respect to N) changes in properties of particles. Then $$\label{*c}
\lim_{N\rightarrow\infty} \delta_i^{(N)}= \delta_i \not= 0.$$ Here we obtain probabilistic rules which differ from the classical one, (\[\*1\]). In particular, this implies that behaviour of ensemble fluctuations (\[\*c\]) cannot be produced in experiments of classical physics. A rather special class of ensemble fluctuations (\[\*c\]) is produced in experiments of quantum physics. However, ensemble fluctuations of form (\[\*c\]) are not reduced to quantum fluctuations (see further considerations).
To study carefully the behaviour of the fluctuations $\delta_i^{(N)},$ we represent them as:
$\delta_i^{(N)}=2 \sqrt{{\bf p}_1^{(N)} {\bf p}_{1i}^{(N)} {\bf p}_2^{(N)} {\bf p}_{2i}^{(N)}}
\lambda_i^{(N)},
$
where
$\lambda_i^{(N)} = \frac{1}{2 \sqrt{m_{1i} m_{2i}}}
[(n_{1i} - m_{1i}) + (n_{2i} - m_{2i})]\;.
$
We have used the fact:
${\bf p}_1^{(N)} {\bf p}_{1i}^{(N)} {\bf p}_2^{(N)} {\bf p}_{2i}^{(N)}
=
\frac{n_1^c}{N} \cdot \frac{m_{1i}}{n_1^c} \cdot\frac{n_2^c}{N} \cdot
\frac{m_{2i}}{n_{2}^c}
=\frac{m_{1i} m_{2i}}
{N^2}.
$
We have: $\delta_i
=2 \sqrt{{\bf p}_1 {\bf p}_{1i} {\bf p}_2 {\bf p}_{2i}} \;\lambda_{i},$ where the coefficients $\lambda_i=\lim_{N\rightarrow\infty} \lambda_i^{(N)}, i=1,2.$
In classical physics the coefficients $\rm{\lambda_i=0.}$ The same situation occurs in quantum physics for all compatible observables as well as for some incompatible observables. In the general case in quantum physics we can only say that $$\label{*q1}
|\lambda_i|\leq 1 .$$
Hence, for quantum fluctuations, we always have: $$|{\frac{(n_{1i}-m_{1i})+(n_{2i}-m_{2i})}{2{\sqrt{m_{1i}m_{2i}}}}}|\leq 1, N \rightarrow \infty.$$ Thus quantum ensemble fluctuations induce relatively small (but in general nonzero!) variations of properties.
[**4. Fluctuations which induce the quantum probabilistic rule.**]{} Let us consider preparation procedures ${\cal E}, {\cal E}_j, j=1,2,$ which have the deviations, when $N\rightarrow\infty$, of the following form $(i=1,2):$ $$\label{*7}
\epsilon_{1i}^{(N)}= n_{1i}-m_{1i}= 2 \xi_{1i}^{(N)} \sqrt{ m_{1i} m_{2i}},$$ $$\label{*8}
\epsilon_{2i}^{(N)}= n_{2i}-m_{2i} = 2 \xi_{2i}^{(N)} \sqrt{m_{1i}m_{2i}},$$
where the coefficients $\xi_{ij}^{(N)}$ satisfy the inequality $$\label{*8a}
|\xi_{1i}^{(N)}+\xi_{2i}^{(N)}|\leq1, N\rightarrow\infty.$$
Suppose that ${\rm{\lambda_i^{(N)}=\xi_{1i}^{(N)}+\xi_{2i}^{(N)}\rightarrow\lambda_i, N\rightarrow\infty,}}$ where ${\rm{|\lambda_i|\leq1.}}$ We can represent ${\rm{\lambda_i^{(N)}=\cos \theta_i^{(N)}.}}$ Then $\theta_i^{(N)}\rightarrow\theta_i, \rm{mod} 2 \pi,$ when $N\rightarrow\infty.$ Thus ${\rm{\lambda_i=\cos\theta_i.}}$
We obtained that: $$\label{*N}
\delta_i=2 \sqrt{ {\bf p}_1 {\bf p}_{1i} {\bf p}_2 {\bf p}_{2i}}
\cos\theta_i, i=1,2.$$ Thus fluctuations of the form (\[\*7\]), (\[\*8\]) produce the probability rule $(i=1,2):$ $$\label{*n}
q_i={\bf p}_1{\bf p}_{1i}+{\bf p}_2{\bf p}_{2i}+2{\sqrt{{\bf p}_1{\bf p}_2{\bf p}_{1i}{\bf p}_{2i}}}\cos\theta_i.$$
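As a small numerical sanity check (the counts and coefficients below are arbitrary, chosen only to respect $|\xi_{1i}^{(N)}+\xi_{2i}^{(N)}|\leq 1$ and the consistency of the counts within each ensemble), rule (\[\*n\]) can be reproduced directly from the definitions:

```python
# Finite-N check that deviations of the form (*7), (*8) give the cos-rule (*n).
import numpy as np

N = 10_000
n_c = np.array([6000.0, 4000.0])                  # n_1^c, n_2^c
m = np.array([[3000.0, 3000.0],                   # m_1i (ensemble T_1)
              [2000.0, 2000.0]])                  # m_2i (ensemble T_2)
xi1 = np.array([0.3, -0.3])                       # xi_{1i}; xi_{11} = -xi_{12}
xi2 = np.array([0.4, -0.4])                       # xi_{2i}; xi_{21} = -xi_{22}

root = np.sqrt(m[0] * m[1])                       # sqrt(m_{1i} m_{2i})
eps1, eps2 = 2 * xi1 * root, 2 * xi2 * root       # deviations (*7), (*8)
n = m + np.vstack([eps1, eps2])                   # n_ij (kept real-valued for simplicity)

p = n_c / N                                       # p_i^{(N)}
P = m / n_c[:, None]                              # p_ij^{(N)}
q_direct = n.sum(axis=0) / N                      # q_i^{(N)} from the counts
lam = xi1 + xi2                                   # lambda_i = cos(theta_i)
q_rule = p[0] * P[0] + p[1] * P[1] + 2 * np.sqrt(p[0] * p[1] * P[0] * P[1]) * lam
print(q_direct, q_rule)                           # agree up to rounding
```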
The usual probabilistic calculations give us
$
1 = q_1 + q_2 = {\bf p}_1 {\bf p}_{11} +
{\bf p}_2{\bf p}_{21} + {\bf p}_1{\bf p}_{12}+{\bf p}_2{\bf p}_{22}+
$
$
2\; \sqrt{{\bf p}_1 {\bf p}_2 {\bf p}_{11} {\bf p}_{21}}
\cos\theta_1+
2\; \sqrt{{\bf p}_1{\bf p}_2{\bf p}_{12}{\bf p}_{22}} \cos \theta_2
$
$
=1+2 \sqrt{{\bf p}_1 {\bf p}_2}
[\sqrt{{\bf p}_{11}{\bf p}_{21}} \cos\theta_1+ \sqrt{{\bf p}_{12}{\bf p}_{22}}\cos\theta_2]\;.
$
Thus we obtain the relation: $$\label{*N2}
\sqrt{{\bf p}_{11} {\bf p}_{21}} \cos\theta_1 + \sqrt{{\bf p}_{12} {\bf p}_{22}} \cos\theta_2 =0 \;.$$ Suppose that ensemble fluctuations (\[\*7\]), (\[\*8\]), satisfy the additional condition $$\label{*N1}
\lim_{N\rightarrow\infty}{\bf p}_{11}^{(N)}=
\lim_{N\rightarrow\infty}{\bf p}_{22}^{(N)}.$$ This condition implies that the matrix of probabilities is a doubly stochastic matrix. Hence, we get $$\label{*N3}
\cos\theta_1= -\cos\theta_2\;.$$ So we have demonstrated that ensemble fluctuations (\[\*7\]), (\[\*8\]) in combination with the double stochasticity condition (\[\*N1\]) produce the quantum probabilistic relations (\[\*\]), (\[\*\*\]).
It must be noticed that the existence of the limits $\lambda_i=\lim_{N\rightarrow\infty}\lambda_i^{(N)}$ does not imply the existence of limits $\xi_{1i}=\lim_{N\rightarrow\infty}\xi_{1i}^{(N)}$ and $\xi_{2i}=\lim_{N\rightarrow\infty}\xi_{2i}
^{(N)}.$ For example, let $\xi_{1i}^{(N)}
=\lambda_i \cos^2\alpha_i^{(N)}$ and $\xi_{2i}^{(N)}=
\lambda_i\sin^2\alpha_i^{(N)}$, where ‘phases’ $\alpha_i^{(N)}$ fluctuate $\rm{mod}\;\; 2\pi.$ Then numbers $\rm{\xi_{1i}}$ and $\rm{\xi_{2i}}$ are not defined, but $\rm{\lim_{N\rightarrow\infty} [\xi_{1i}^{(N)}
+\xi_{2i}^{(N)}]=\lambda_i, i=1,2,}$ exist.
If the $\xi_{ij}^{(N)}$ stabilized, then the probabilities for the simultaneous measurement of incompatible observables would be well defined:
${\bf p} (A=a_1, C=c_1)
=\lim_{N\rightarrow\infty}
\frac{n_{11}}{N}=
{\bf p}_1 {\bf p}_{11} + 2 \sqrt{{\bf p}_1 {\bf p}_2 {\bf p}_{11} {\bf p}_{21}}
\xi_{11}, \ldots.
$
The quantum formalism implies that in general such probabilities do not exist.
[**Remark 2.1.**]{} The magnitude of fluctuations can be found experimentally. Let $C$ and $A$ be two physical observables. We prepare three statistical ensembles $\rm{S, T_1, T_2}$ corresponding to states $\phi, \phi_1, \phi_2.$ By measurements of $C$ and $A$ for $\rm{\pi\in S}$ we obtain frequencies $\rm{{\bf p}_1^{(N)}, {\bf p}_2^{(N)}, q_1^{(N)}, q_2^{(N)};}$ by measurements of $A$ for $\rm{\pi\in T_1}$ and for $\rm{\pi\in T_2}$ we obtain frequencies $\rm{{\bf p}_{1i}^{(N)}}$ and $\rm{{\bf p}_{2i}^{(N)}.}$ We have
$\rm{f_i(N)=\lambda_i^{(N)}={\frac{q_i^{(N)}-{\bf p}_1^{(N)}
{\bf p}_{1i}^{(N)}-{\bf p}_2^{(N)}{\bf p}_{2i}^{(N)}}{2\;\sqrt{{\bf p}_1^{(N)}{\bf p}_{1i}^{(N)}{\bf p}_2^{(N)}
{\bf p}_{2i}^{(N)}}}}}$
It would be interesting to obtain graphs of functions $\rm{f_i(N)}$ for different pairs of physical observables. Of course, we know that $\lim_{N\to \infty} f_i(N)= \pm \cos\theta.$ However, it may be that such graphs can present a finer structure of quantum states.
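A toy simulation (with the probabilities fixed by hand via rule (\[\*n\]), not experimental data; the values of $\theta$ and of the probabilities are arbitrary) illustrates how such graphs of $f_i(N)$ could be produced:

```python
# Estimate f_i(N) from simulated finite-sample frequencies.
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.6, 0.4])                    # P_S(C = c_i)
P = np.array([[0.5, 0.5], [0.5, 0.5]])      # P_{T_i}(A = a_j)
theta = 1.0
q = p[0] * P[0] + p[1] * P[1] + 2 * np.sqrt(p[0] * p[1] * P[0] * P[1]) \
    * np.array([np.cos(theta), -np.cos(theta)])

for N in (10**3, 10**4, 10**5, 10**6):
    pN = np.bincount(rng.choice(2, N, p=p), minlength=2) / N        # C on S
    qN = np.bincount(rng.choice(2, N, p=q), minlength=2) / N        # A on S
    PN = np.vstack([np.bincount(rng.choice(2, N, p=P[i]), minlength=2) / N
                    for i in (0, 1)])                               # A on T_i
    f = (qN - pN[0] * PN[0] - pN[1] * PN[1]) / \
        (2 * np.sqrt(pN[0] * PN[0] * pN[1] * PN[1]))
    print(N, f)                             # approaches (cos(theta), -cos(theta))
```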
On the magnitude of fluctuations which produce the classical probabilistic rule
===============================================================================
We remark that the classical probabilistic rule (which is induced by ensemble fluctuations with $\lambda_i^{(N)}\rightarrow 0)$ can be observed for fluctuations having relatively large absolute magnitudes. For instance, let $$\label{*9}
\epsilon_{1i}^{(N)}=2\xi_{1i}^{(N)}
\sqrt{m_{1i}},\; \; \epsilon_{2i}^{(N)}=2\xi_{2i}^{(N)}
\sqrt{m_{2i}}, i=1,2,$$ where sequences of coefficients $\{\xi_{1i}^{(N)}\}$ and $\{\xi_{2i}^{(N)}\}$ are bounded $(N\rightarrow\infty).$ Here
$\lambda_i^{(N)}=
\frac{\xi_{1i}^{(N)}}
{\sqrt{m_{2i}}} + \frac{\xi_{2i}^{(N)}}{\sqrt{m_{1i}}}
\rightarrow 0, N\rightarrow\infty
$
(as usual, we assume that ${\bf p}_{ij} >0).$
[**Example 3.1.**]{} Let $N\approx 10^6, n_1^c \approx n_2^c\approx 5\cdot 10^5,
m_{11}\approx m_{12}\approx m_{21}\approx m_{22}\approx 25\cdot 10^4.$ So ${\bf p}_1 = {\bf p}_2= 1/2; {\bf p}_{11}= {\bf p}_{12}= {\bf p}_{21}= {\bf p}_{22}= 1/2$ (symmetric state). Suppose we have fluctuations (\[\*9\]) with $\xi_{1i}^{(N)} \approx \xi_{2i}^{(N)} \approx 1/2.$ Then $\epsilon_{1i}^{(N)}\approx \epsilon_{2i}^{(N)} \approx 500.$ So $n_{ij}=25\cdot 10^4 \pm 500.$ Hence, the relative deviation $\frac{\epsilon_{ji}^{(N)}}{m_{ji}}
=\frac{500}{25\cdot 10^4} \approx 0.002.$ Thus fluctuations of the relative magnitude $\approx 0.002$ produce the classical probabilistic rule.
It is evident that fluctuations of essentially larger magnitude $$\label{*9a}
\epsilon_{1i}^{(N)}=2 \xi_{1i}^{(N)} (m_{1i})^{1/2} (m_{2i})^{1/\alpha},\;
\epsilon_{2i}^{(N)}= 2 \xi_{2i}^{(N)}(m_{2i})^{1/2}(m_{1i})^{1/\beta}, \alpha, \beta > 2,$$ where $\{ \xi_{1i}^{(N)}\}$ and $\{\xi_{2i}^{(N)}\}$ are bounded sequences $(N\rightarrow\infty),$ also produce (for ${\bf p}_{ij} \not= 0)$ the classical probability rule.
[**Example 3.2.**]{} Let all numbers $N, \ldots, m_{ij}$ be the same as in Example 3.1 and let the deviations have behaviour (\[\*9a\]) with $\alpha=\beta=4.$ Here the relative deviation $\frac{\epsilon_{ij}^{(N)}}{m_{ij}} \approx 0.045.$
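A few lines of arithmetic (illustrative only, using the round numbers of the two examples) confirm the quoted magnitudes and the vanishing of $\lambda_i^{(N)}$:

```python
# Quick arithmetic check of Examples 3.1 and 3.2.
import numpy as np

m = 25e4                                     # m_11 = ... = m_22
xi = 0.5

eps_31 = 2 * xi * np.sqrt(m)                 # fluctuations (*9)
eps_32 = 2 * xi * np.sqrt(m) * m ** 0.25     # fluctuations (*9a), alpha = beta = 4
print(eps_31, eps_31 / m)                    # 500.0 and relative deviation 0.002
print(eps_32, eps_32 / m)                    # ~11180 and relative deviation ~0.045

# in both cases lambda_i^{(N)} -> 0, so the classical rule (*1) is recovered
lam_31 = 2 * eps_31 / (2 * np.sqrt(m * m))   # (eps_1i + eps_2i) / (2 sqrt(m_1i m_2i))
print(lam_31)                                # 0.002, i.e. negligible for large N
```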
Classical, quantum and ‘superquantum’ physics
=============================================
In this section we find relations between different classes of physical experiments. First we consider the so-called classical and quantum experiments. Classical experiments produce the classical probabilistic rule (Bayes’ formula). Therefore the corresponding ensemble fluctuations have the asymptotic $\delta_i^{(N)}\rightarrow 0, N\rightarrow\infty.$
Nevertheless, we cannot say that classical measurements give just a subclass of quantum measurements. In the classical domain we do not have the symmetry relations ${\bf p}_{11}={\bf p}_{22}$ and ${\bf p}_{12}={\bf p}_{21}$. These are special conditions which connect the preparation procedures ${\cal E}_1$ and ${\cal E}_2;$ such a relation is a peculiarity of quantum preparation/measurement procedures.
Experiments with nonclassical probabilistic rules are characterized by the condition $\rm{\delta_i^{(N)}\not \rightarrow 0, N\rightarrow\infty.}$ Quantum experiments give only a particular class of nonclassical experiments. Quantum experiments produce ensemble fluctuations of the form (\[\*7\]), (\[\*8\]), where the coefficients $\xi_{1i}^{(N)}$ and $\xi_{2i}^{(N)}$ satisfy (\[\*8a\]) and the orthogonality relation $$\label{*h}
\lim_{N\rightarrow\infty}
(\xi_{11}^{(N)}+\xi_{21}^{(N)})+\lim_{N\rightarrow\infty}(\xi_{12}^{(N)}+\xi_{22}^{(N)})=0\;.$$
In particular, the nonclassical domain contains (nonquantum) experiments which satisfy the boundedness condition (\[\*8a\]), but do not satisfy the orthogonality relation (\[\*h\]). Here we have only the quasi-orthogonality relation (\[\*N2\]). In this case the matrix of probabilities is not doubly stochastic. The corresponding probabilistic rule has the form: $$\label{*n7}
q_i={\bf p}_1{\bf p}_{1i}+{\bf p}_2{\bf p}_{2i}+2{\sqrt{{\bf p}_1{\bf p}_2{\bf p}_{1i}{\bf p}_{2i}}}\cos\theta_i.$$ Here in general ${\bf p}_{11} + {\bf p}_{21}\not= 1, {\bf p}_{12} + {\bf p}_{22}\not= 1.$
We remark that, in fact, (\[\*n7\]) and (\[\*N2\]) imply that
$q_1={\bf p}_1 {\bf p}_{11}+{\bf p}_2 {\bf p}_{21} +2 \;\sqrt{{\bf p}_1 {\bf p}_2 {\bf p}_{11}{\bf p}_{21}}
\cos\theta_1;$
$q_2={\bf p}_1{\bf p}_{12}+{\bf p}_2{\bf p}_{22}- 2 \sqrt{{\bf p}_1{\bf p}_2{\bf p}_{11}{\bf p}_{21}}
\cos\theta_1.$
Hyperbolic ‘quantum’ formalism
==============================
Let us consider ensembles $\rm{S, T_1, T_2}$ such that ensemble fluctuations have magnitudes (\[\*7\]), (\[\*8\]) where $$\label{*8b}
|\xi_{1i}^{(N)}+\xi_{2i}^{(N)}|\geq 1+c, c>0,
N\rightarrow\infty.$$ Here the coefficients $\lambda_i= \lim_{N\rightarrow\infty} (\xi_{1i}^{(N)}+\xi_{2i}^{(N)})$ can be represented in the form $\lambda_i= \rm{ch} \;\theta_i, i=1,2.$ The corresponding probability rule is the following
$q_i={\bf p}_1{\bf p}_{1i}+{\bf p}_2{\bf p}_{2i}+2
\sqrt{{\bf p}_1{\bf p}_2{\bf p}_{1i}{\bf p}_{2i}} \;\rm{ch} \; \theta_i, i=1,2.$
The normalization $q_1+q_2=1$ gives the orthogonality relation: $$\label{*N5}
\sqrt{{\bf p}_{11}
{\bf p}_{21}}\; \rm{ch} \;\theta_1+\sqrt{{\bf p}_{12}{\bf p}_{22}} \;\rm{ch} \;\theta_2 = 0\;.$$ Thus $\rm{ch} \theta_2=-\rm{ch} \theta_1 \sqrt{\frac{{\bf p}_{11} {\bf p}_{21}}{{\bf p}_{12} {\bf p}_{22}}}$ and, hence,
$q_2={\bf p}_1 {\bf p}_{12} + {\bf p}_2 {\bf p}_{22} - 2\; \sqrt{{\bf p}_1 {\bf p}_2{\bf p}_{11} {\bf p}_{21}}\;
\rm{ch}\theta_1.$
Such a formalism can be called a [*hyperbolic quantum formalism.*]{} It describes a part of nonclassical reality which is not described by the ‘trigonometric quantum formalism’. Experiments (and preparation procedures ${\cal E}, {\cal E}_1, {\cal E}_2$) which produce hyperbolic quantum behaviour could be simulated on a computer. On the other hand, at the moment we have no ‘natural’ physical phenomena which are described by the hyperbolic quantum formalism. ‘Trigonometric quantum behaviour’ corresponds to an essentially better control of properties in the process of preparation than ‘hyperbolic quantum behaviour’. Of course, the aim of any experimenter is to approach ‘trigonometric behaviour’. However, in principle there might exist natural phenomena for which ‘trigonometric quantum behaviour’ cannot be achieved. In any case, even the possibility of computer simulation demonstrates that quantum mechanics (trigonometric) is not complete (in the sense that not all physical reality is described by the standard quantum formalism). [^1]
[**Example 6.1.**]{} Let ${\bf p}_1=\alpha, {\bf p}_2=1-\alpha, {\bf p}_{11}=\ldots={\bf p}_{22}=1/2.$ Then
$q_1= \frac{1}{2} +\sqrt{\alpha(1-\alpha)} \lambda_1,
q_2={\frac{1}{2}}-\sqrt{\alpha(1-\alpha)}
\lambda_1.$
If $\alpha$ is sufficiently small, then $\lambda_1$ can be, in principle, larger than 1: $\lambda_1= \rm{ch}\,\theta$.
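A direct numerical rendering of this example (with an arbitrarily chosen $\alpha$ and $\lambda_1$; the numbers have no physical significance) shows that such ‘hyperbolic’ probabilities are perfectly consistent:

```python
# Toy computation in the spirit of Example 6.1: |xi_{1i} + xi_{2i}| > 1 leads to
# a 'hyperbolic' rule with cosh instead of cos, yet q_1 + q_2 = 1 still holds.
import numpy as np

alpha = 0.01
p = np.array([alpha, 1 - alpha])            # p_1, p_2
P = np.full((2, 2), 0.5)                    # p_ij = 1/2 (symmetric case)
lam1 = 2.0                                  # lambda_1 = cosh(theta_1) > 1
lam = np.array([lam1, -lam1])               # normalization forces lambda_2 = -lambda_1

q = p[0] * P[0] + p[1] * P[1] + 2 * np.sqrt(p[0] * p[1] * P[0] * P[1]) * lam
print(q, q.sum())                           # valid probabilities summing to 1
print(np.arccosh(lam1))                     # the hyperbolic 'phase' theta_1
```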
Quantum behaviour for macroscopic systems
=========================================
Our analysis shows that ‘quantum statistical behaviour’ can be demonstrated by ensembles consisting of macroscopic systems; for example, balls having colours $C=c_1,$ red, or $c_2,$ blue, and weights $A=a_1=1$ or $a_2=2.$ Suppose that additional filters $F_i, i=1,2,$ produce fluctuations (\[\*7\]), (\[\*8\]), (\[\*h\]). Then, instead of classical Bayes’ formula (\[\*1\]), we obtain quantum probability rule (\[\*2\]).
In the context of the statistical simulation of quantum statistical behaviour via fluctuations (\[\*7\]), (\[\*8\]) (with (\[\*h\])) it would be useful to note that, in fact, we can choose constant coefficients $\rm{\xi_{ij}^{(N)}=\xi_{ij}.}$ Moreover, we have $\xi_{11}=-\xi_{12}$ and $\xi_{21}=-\xi_{22}.$ The latter is a consequence of the general relations: $$\label{*R}
\frac{\xi_{11}^{(N)}}{\xi_{12}^{(N)}}
\rightarrow-1,\; \;
\frac{\xi_{22}^{(N)}}{\xi_{21}^{(N)}} \rightarrow -1, N\rightarrow\infty.$$ Asymptotic (\[\*R\]) can be obtained from (\[\*7\]), (\[\*8\]):
[**Proof.**]{} By (\[\*7\]) we have $$\label{*q7}
(n_{11}-m_{11})+(n_{12}-m_{12})=
2 \xi_{11} \sqrt{m_{11}m_{21}}
+2 \xi_{12}
\sqrt{m_{12}m_{22}}.$$ The left hand side is equal to zero: $(n_{11}+n_{12})-(m_{11}+ m_{12})=n_1^c
-n_1^c=0$ (as the ensemble $T_1$ has $n_1^c$ elements). Hence, by (\[\*N1\]) we get $\xi_{11}=- \xi_{12}\; \sqrt{\frac{m_{12}}
{m_{21}} \frac{m_{22}}{m_{11}}} \rightarrow -\xi_{12}, N\rightarrow\infty\;$ (as ${\bf p}_{11}={\bf p}_{22}$ and ${\bf p}_{12}= {\bf p}_{21}).$ In the same way we obtain that $\xi_{21}= - \xi_{22} \; \sqrt{\frac{m_{12}}
{m_{21}} \frac{m_{22}}{m_{11}}} \rightarrow -\xi_{22}, N\rightarrow\infty\;.$
[**Conclusion.**]{} [*We demonstrated that the so-called quantum probabilistic rule has a natural explanation in the framework of ensemble fluctuations induced by preparation procedures. In particular, the quantum rule for probabilities (with a nontrivial $\cos \theta$-factor) could be simulated for macroscopic physical systems via preparation procedures producing the special ensemble fluctuations.*]{}
Appendix: correlations between preparation procedures
=====================================================
In this section we study the frequency meaning of the fact that in the quantum formalism the matrix of probabilities is doubly stochastic. We remark that this is a consequence of the orthogonality of the quantum states $\phi_1$ and $\phi_2$ corresponding to distinct values of a physical observable $C.$ We have $$\label{*q8}
\frac{{\bf p}_{11}}{{\bf p}_{12}}
=\frac{{\bf p}_{22}}{{\bf p}_{21}} \;.$$
Suppose that (a), see section 2, is the origin of quantum behaviour. Hence, all quantum features are induced by the impossibility of creating new ensembles $T_1$ and $T_2$ without changing the properties of quantum particles. Suppose that, for example, the preparation procedure ${\cal E}_1$ practically destroys the property $A=a_1$ (transforms this property into the property $A=a_2)$. So ${\bf p}_{11}\approx 0.$ As a consequence, ${\cal E}_1$ makes the property $A = a_2$ dominating. So ${\bf p}_{12}\approx 1.$ Then the preparation procedure ${\cal E}_2$ [*must*]{} practically destroy the property $A=a_2$ (transforming this property into the property $A=a_1)$. So ${\bf p}_{22} \approx 0.$ As a consequence, ${\cal E}_2$ makes the property $A=a_1$ dominating. So ${\bf p}_{21}\approx 1.$
Frequency relation (\[\*N1\]) can be represented in the following form: $$\label{*M}
\frac{m_{11}}{n_1^c} - \frac{m_{22}}{n_2^c} \approx 0,
N\rightarrow\infty\;.$$ We recall that the number of elements in the ensemble $T_i$ is equal to $n_i^c.$
Thus $$\label{*M1}
(\frac{n_{11} - m_{11}}{n_1^c}) -
(\frac{n_{22} - m_{22}}{n_2^c})
\approx \frac{n_{11}}
{n_1^c} - \frac{n_{22}}{n_2^c}.$$ This is nothing other than the relation between the fluctuations of the property $A$ under the transition from the ensemble $S$ to the ensembles $T_1, T_2$ and the distribution of this property in the ensemble $S$.
[**References**]{}
\[1\] L. Accardi, The probabilistic roots of the quantum mechanical paradoxes. [*The wave–particle dualism. A tribute to Louis de Broglie on his 90th Birthday*]{}, (Perugia, 1982). Edited by S. Diner, D. Fargue, G. Lochak and F. Selleri. D. Reidel Publ. Company, Dordrecht, 297–330(1984).
\[2\] B. d’Espagnat, [*Veiled Reality. An anlysis of present-day quantum mechanical concepts.*]{} (Addison-Wesley, 1995).
\[3\] A. Peres, [*Quantum Theory: Concepts and Methods.*]{} (Kluwer Academic Publishers, 1994).
\[4\] J. S. Bell, Rev. Mod. Phys., [**38**]{}, 447–452 (1966); J. F. Clauser , M.A. Horne, A. Shimony, R. A. Holt, Phys. Rev. Letters, [**49**]{}, 1804-1806 (1969); J. S. Bell, [*Speakable and unspeakable in quantum mechanics.*]{} (Cambridge Univ. Press, 1987); J.F. Clauser , A. Shimony, Rep. Progr.Phys., [**41**]{} 1881-1901 (1978).
\[5\] L. Accardi, Phys. Rep., [**77**]{}, 169-192 (1981). L. Accardi, A. Fedullo, Lettere al Nuovo Cimento [**34**]{} 161-172 (1982). L. Accardi, Quantum theory and non–kolmogorovian probability. In: Stochastic processes in quantum theory and statistical physics, ed. S. Albeverio et al., Springer LNP [**173**]{} 1-18 (1982).
\[6\] I. Pitowsky, Phys. Rev. Lett, [**48**]{}, N.10, 1299-1302 (1982); S. P. Gudder, J. Math Phys., [**25**]{}, 2397- 2401 (1984); S. P. Gudder, N. Zanghi, Nuovo Cimento B [**79**]{}, 291–301 (1984).
\[7\] A. Fine, Phys. Rev. Letters, [**48**]{}, 291–295 (1982); P. Rastal, Found. Phys., [**13**]{}, 555 (1983). W. De Baere, Lett. Nuovo Cimento, [**39**]{}, 234-238 (1984); [**25**]{}, 2397- 2401 (1984); W. De Muynck, W. De Baere, H. Martens, Found. of Physics, [**24**]{}, 1589–1663 (1994); W. De Muynck, J.T. Stekelenborg, Annalen der Physik, [**45**]{}, N.7, 222-234 (1988).
\[8\] L. Accardi, M. Regoli, Experimental violation of Bell’s inequality by local classical variables. To appear in: proceedings of the Towa Statphys conference, Fukuoka 8–11 November (1999), published by American Physical Society.
\[9\] L. Accardi, [*Urne e Camaleoni: Dialogo sulla realta, le leggi del caso e la teoria quantistica.*]{} (Il Saggiatore, Rome, 1997).
\[10\] A.Yu. Khrennikov, [*Non-Archimedean analysis: quantum paradoxes, dynamical systems and biological models.*]{} (Kluwer Acad.Publ., Dordreht, 1997); [*Interpretations of probability.*]{} (VSP Int. Publ., Utrecht, 1999); J. Math. Phys., [**41**]{}, 1768-1777 (2000).
\[11\] J. Summhammer, Int. J. Theor. Physics, [**33**]{}, 171-178 (1994); Found. Phys. Lett. [**1**]{}, 113 (1988); Phys.Lett., [**A136,**]{} 183 (1989).
\[12\] R. von Mises, [*The mathematical theory of probability and statistics*]{}. (Academic, London, 1964).
\[13\] A. N. Kolmogoroff, [*Grundbegriffe der Wahrscheinlichkeitsrechnung.*]{} (Springer Verlag, Berlin, 1933); reprinted: [*Foundations of the Probability Theory*]{}. (Chelsea Publ. Comp., New York, 1956).
\[14\] L. E. Ballentine, [*Quantum mechanics.*]{} (Englewood Cliffs, New Jersey, 1989).
[^1]: We can compare the hyperbolic quantum formalism with the hyperbolic geometry.
|
---
abstract: |
In a software project, esp. in open-source, a contribution is a valuable piece of work made to the project: writing code, reporting bugs, translating, improving documentation, creating graphics, etc. We are now at the beginning of an exciting era where software bots will make contributions of a similar nature to those made by humans.
Dry contributions, with no explanation, are often ignored or rejected: because they are not understandable per se, because they are not put into a larger context, and because they are not grounded in idioms shared by the core community of developers.
We have been operating a program repair bot called Repairnator for 2 years and noticed the problem of “dry patches”: a patch that does not say which bug it fixes, or that does not explain the effects of the patch on the system. We envision program repair systems that produce an “explainable bug fix”: an integrated package of at least 1) a patch, 2) its explanation in natural or controlled language, and 3) a highlight of the behavioral difference with examples.
In this paper, we generalize and suggest that software bot contributions must be explainable, and that they must be put into the context of the global software development conversation.
author:
- |
Martin Monperrus\
KTH Royal Institute of Technology\
`martin.monperrus@csc.kth.se`
bibliography:
- 'biblio-erc.bib'
- 'biblio-software-repair.bib'
title: |
Explainable Software Bot Contributions:\
Case Study of Automated Bug Fixes
---
To appear in “2019 IEEE/ACM 1st International Workshop on Bots in Software Engineering (BotSE)”
Introduction
============
The landscape of software bots is immense [@lebeuf:18], and will slowly be explored far and wide by software engineering research. In this paper, we focus on software bots that contribute to software projects, with the most noble sense of contribution: an act with an outcome that is considered concrete and valuable by the community.
The open-source world is deeply rooted in this notion of “contribution”: developers are called “contributors”. Indeed, “contributor” is both a better and more general term than developer for the following reasons. First, it emphasizes the role within the project (bringing something) as opposed to the nature of the task (programming). Second, it covers the wide range of activities required for a successful software project, way beyond programming: reporting bugs, translating, improving documentation, and creating graphics are all essential, and all fall under the word “contribution”.
Recently, we have explored one specific kind of contribution: bug fixes [@urli:hal-01691496; @arXiv-1810.05806]. A bug fix is a small change to the code so that a specific case that was not well-handled becomes correctly handled. Technically, it is a patch, a modification of a handful of source code lines in the program. The research area on automated program repair [@Monperrus2015] devises systems that automatically synthesize such patches. In the Repairnator project [@urli:hal-01691496; @arXiv-1810.05806], we went to the point of suggesting synthesized patches to real developers. Those suggestions were standard code changes on the collaborative development platform Github. In the rest of this paper, Repairnator is the name given to the program repair bot making those automated bug fixes.
A bug fixing suggestion on Github basically contains three parts: the source code patch itself, a title, and a textual message explaining the patch. The bug fixing proposal is called a “pull-request”. From a purely technical perspective, only the code matters. However, there are plenty of human activities happening around pull requests: project developers triage them, integrators perform code review, impacted users comment on them. For all those activities, the title and message of the pull requests are of utmost importance. Their clarity directly impacts the speed of merging into the main code base.
In the first phase of the Repairnator project [@urli:hal-01691496; @arXiv-1810.05806], we exclusively focused on the code part of the pull-request: Repairnator only created a source code patch, without a tailored pull-request title or explanation; we simply used a generic title like “Fix failing build” and a short human-written message. Now, we realize that bot-generated patches must be put into context, so as to smoothly integrate into the software development conversation. A program repair bot must not only synthesize a patch but also synthesize the explanation coming with it: Repairnator must create explainable patches.
This is related to the research on explainable artificial intelligence, or “explainable AI” for short [@gunning2017explainable]. Explainable AI refers to decision systems, stating that all decisions made by an algorithm must come with a rationale, an explanation of the reasons behind the decision. Explainable AI is a reaction to purely black-box decisions made, for instance, by a neural network.
In this paper, we claim that contributions made by software bots must be explainable, contextualized. This is required for software bots to be successful, but more importantly, this is required to achieve a long-term smooth collaboration between humans and bots on software development.
To sum up, we argue in this paper that:
- Software bot contributions must be explainable.
- Software bot contributions must be put in the context of a global development conversation.
- Explainable contributions involve generation of natural language explanations and conversational features.
- Program repair bots should produce explainable patches.
![image](fig-overview-erc.pdf){width=".905\textwidth"}
Section \[sec:converstion\] presents the software development conversation, Section \[sec:bots-as-communicating-agents\] discusses why and how software bots must communicate. Section \[sec:explainable-patch-suggestion\] instantiates the concept in the realm of program repair bots.
The Software Development Conversation {#sec:converstion}
=====================================
Software developers work together on so-called “code repositories”, and software development is a highly collaborative activity. In small projects, 5-10 software engineers interact together on the same code base. In large projects, 1000+ engineers are working in a coordinated manner to write new features, to fix software bugs, to ensure security and performance, etc. In an active source code repository, changes are committed to the code base every hour or minute, if not every second, for some very hot and large software packages.
*All the interactions between developers form the “software development conversation”.*
Nature of the Conversation
--------------------------
The software development conversation involves exchanging source code of course, but not only. When a developer proposes a change to the code, she has to explain to the other developers the intent and content of the change. Indeed, in mature projects with disciplined code review, *all code modifications come with a proper explanation of the change in natural language*. This concept of developers working and interacting together on the code repository is shown at the left-hand side of Figure \[fig-overview-vision\].
What is also depicted in Figure \[fig-overview-vision\] is the variety of focus in the software development conversation. The developers may discuss new features, bug fixes, etc. Depending on expertise and job title, developers may take part in only one specific conversation. In Figure \[fig-overview-vision\], developer Madeleine is the most senior engineer, taking part in all discussions in the project. Junior developer Sylvester tends to discuss only bug reports and the corresponding fixing pull requests.
Scale of the Conversation
-------------------------
In a typical software repository of a standard project in industry, 50+ developers work together. In big open-source projects as well as in giant repositories from big tech companies, the number of developers involved in the same repository goes into the thousands and more. For instance, the main repository of the Mozilla Firefox browser, [gecko-dev](https://github.com/mozilla/gecko-dev), has contributions from 3800+ persons. Table \[tab-extraordinary-repos\] shows the scale of this massive collaboration for some of the biggest open-source repositories ever.
Notably, the software development conversation is able to transcend traditional organization boundaries: it works even when developers work from different companies, or even when they are only loosely coupled individuals as in the case of open-source.
Channels
--------
The software development conversation happens in different channels.
*Oral channels* Historically, the software development conversation happens in meetings, office chats, coffee breaks, phone calls. This remains largely true in traditional organizations.
*Online channels* We have witnessed in the past decades the rise of decentralized development, with engineers scattered across offices, organizations and countries. In those contexts, a significant part of the software development conversation now takes place in online written channels: mailing-lists, collaborative development platforms (Github, Gitlab, etc), synchronous chats (IRC, Slack), online forums and Q&A sites (Stackoverflow), etc.
Source code contributions only represent a small part of the software development conversation. Most of the exchanges between developers are interactive, involving natural language. In the case of collaborative platforms such as Github, the bulk of the software development conversation happens as comments on issues and pull-requests. Software bots will become new voices in this conversation.
Software Bots as Communicating Agents {#sec:bots-as-communicating-agents}
=====================================
The software bot community now works on a different software development model, which is sketched at the right-hand side of Figure \[fig-overview-vision\]. Instead of only having human software developers working on a given code base, we will have code repositories on which both humans and bots would smoothly and effectively cooperate. Here, cooperation means two things. First, that robots would be able to perform software development tasks traditionally done by humans: for instance, a robot could be specialized in fixing bugs. Second, that robots and humans would communicate with each other to exchange information and make informed decisions together.
| Software                                     | Commits | Contributors |
|----------------------------------------------|---------|--------------|
| <https://github.com/torvalds/linux>          | 798710  | n/a          |
| <https://github.com/chromium/chromium>       | 744581  | n/a          |
| <https://github.com/mozilla/gecko-dev>       | 631802  | 3851         |
| <https://github.com/LibreOffice/core>        | 433945  | 853          |
| <https://github.com/WebKit/webkit>           | 208041  | n/a          |
| <https://github.com/Homebrew/homebrew-core>  | 135248  | 7310         |
| <https://github.com/NixOS/nixpkgs>           | 166699  | 1935         |
| <https://github.com/odoo/odoo>               | 122698  | 873          |
: Some of the biggest code repositories ever in the open-source world (data from Jan 2019)[]{data-label="tab-extraordinary-repos"}
Human Developers as Target
--------------------------
Now, let us stress the communication aspect of software bots. Software bots will not work alone; they will work together with human developers. As such, software bots must be able to communicate with human developers, using their language and taking human cognitive skills into account. Software bots will have to explain to human developers what they are doing in each contribution; this is what can be called an *“explainable contribution”*. Developers would likely ask bots clarification questions, bots would answer, and this would form a proper engineering conversation.
Software Bot Identity
---------------------
A good conversation is never anonymous. This holds for the conversation between human developers and software bots.
*Name:* We believe it is important that software bots have a name and even their own character. In Figure \[fig-overview-vision\] all bots are named. Moreover, they are all named after a positive robot from popular culture. Positive names and characters encourage developers to have a welcoming, forgiving attitude towards software bots. We envision that software bots will be engineered with different characters: for example, some would be very polite, others would be more direct à la Torvalds.
*Adaptation:* We envision that sophisticated software bots will be able to adapt the way they communicate to developers: for instance, they would not explain their contributions in the same way depending on whether they target a newcomer in the project or an expert guru developer. The tailoring of communication style may require project-specific training in a data-driven manner, based on the past recorded interactions (or even developer-specific training).
*Diversity:* In all cases, we think that it is good that all bots are different, this difference is what makes a development conversation rich and fun. Biodiversity is good, and similarly, we think that *software bot diversity may be crucial for a bot-human community to thrive*.
Contribution Format
-------------------
When human developers submit patches for review to other humans, it is known that the quality of the explanation coming with the patch is very important. The format depends on the practices and idiosyncrasies of each project. The patch explanation may be given in an email, as in the case of Linux, where the Linux Kernel Mailing List (aka LKML) plays a key role. The explanation may also be given as a comment associated with a suggested modification, such as a pull request comment on the Github platform. A patch may be illustrated with a figure or a visualization of the change. A software bot must take into account the specificities of the targeted platform and community.
Many software bots will primarily produce contributions for humans. As such their contributions must match the human cognitive skills and must be tailored according to the culture of the targeted developer community. Humans prefer to interact with somebody, rather than with an anonymous thing, hence fun software bots will have a name and a specific character.
Explainable Patch Suggestion {#sec:explainable-patch-suggestion}
============================
Now, we put the concept of “explainable contribution” in the context of a program repair bot [@Monperrus2015]. In this section, we explain that program repair bots such as Repairnator must explain the patches they propose.
We envision research on synthesizing explanations in order to accompany patches synthesized by program repair tools. The synthesized explanation would first be an elaborated formulation of the patch in natural language, or in controlled natural language. Beyond that, the patch explanation would describe what the local and global effects on the computation are, with specific examples. Finally, as done by human developers, the explanation could come with a justification of the design choices made among other viable alternatives.
From Commit Summarization to Patch Explanation
----------------------------------------------
The goal of commit summarization is to reformulate a code change into natural language [@cortes2014automatically; @JiangAM17; @liu2018neural]. Commit summarization can be seen as both a broader task than patch explanation (all changes can be summarized, not only patches) and a narrower one (only a few sentences, or even a single line in the case of extreme summarization, are produced). We envision experiments on applying the state-of-the-art of commit summarization [@liu2018neural] to patches. It will be very interesting to see the quality of the synthesized summaries on patches, including patches from program repair tools: do they capture the essence of the patch? Is the patch explanation clear?
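As a strawman baseline (in no way representative of the learned summarization models cited above; the diff content and the Java file name are invented for illustration), even a naive template-based generator clarifies what input and output such a component would have:

```python
def explain_patch(unified_diff: str) -> str:
    """Very naive template-based explanation of a unified diff (illustrative only)."""
    files, added, removed = set(), 0, 0
    for line in unified_diff.splitlines():
        if line.startswith("+++ "):
            path = line[4:]
            files.add(path[2:] if path.startswith("b/") else path)
        elif line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            removed += 1
    return (f"This patch touches {len(files)} file(s) ({', '.join(sorted(files))}), "
            f"adding {added} line(s) and removing {removed} line(s).")

diff = """--- a/src/Parser.java
+++ b/src/Parser.java
@@ -10,1 +10,2 @@
-        return value;
+        if (value == null) return DEFAULT;
+        return value;
"""
print(explain_patch(diff))
```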
*Visualization* A picture is worth 1000 words. Instead of text, the synthesized patch can be explained with a generated visualization. This idea can be explored based on research on commit, diff and pull-request visualization (e.g. [@DAmbros2010]).
Automatic Highlighting of Behavioral Differences
------------------------------------------------
For humans to understand a behavioral difference, a good strategy is to give them an actual input and highlight the key difference in the output. There are works that try to identify valuable input points on which the behavior of two program versions differ [@marinescu2013katch; @Shriver2017ESN]. In a patch explanation, the selected input must satisfy two requirements: 1) that input must be representative of the core behavioral difference introduced by the patch and 2) it must be easily understandable by a human developer (simple literals, simple values).
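A minimal sketch of such an input-selection step is given below; the two function versions are hypothetical, and the cited tools are far more sophisticated (handling path constraints, crashes, and non-trivial input types).

```python
def buggy_abs(x):    # hypothetical pre-patch version: wrong for negative inputs
    return x if x > 0 else x

def fixed_abs(x):    # hypothetical post-patch version
    return x if x >= 0 else -x

def behavioral_witness(old, new, candidates):
    """Return the first simple input on which the two versions disagree."""
    for value in candidates:
        before, after = old(value), new(value)
        if before != after:
            return value, before, after
    return None

# simple literals first, so that the highlighted example stays human-readable
simple_inputs = [0, 1, -1, 2, -2, 10, -10]
print(behavioral_witness(buggy_abs, fixed_abs, simple_inputs))
# -> (-1, -1, 1): "for input -1, the patch changes the result from -1 to 1"
```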
The format of this behavioral difference explanation is open. It may be a sentence, a code snippet, or even a graphic. What is important is that it is both understandable and appealing to the developer. Importantly, the format must be set according to the best practices on communicating in code repositories (e.g., communicating on Github).
Conversational Program Repair Bots
----------------------------------
Finally, an initial explanation of a patch may not be sufficient for the human developers to perfectly understand the patch. We imagine conversational systems for patch explanation: developers would be able to ask questions about the patch behavior, and the program repair bot would answer those questions. Such a system can be data-driven, based on the analysis of the millions of similar conversations that have happened in open-source repositories.
In the context of a program repair bot that produces bug fixes, an explainable bug fix means an integrated package: 1) a patch, 2) its explanation in natural language, and 3) a highlight of the behavioral difference with examples. The explanation might require a series of questions from the developers and answers from the bot, which requires advanced conversational features in the bot.
Conclusion
==========
In this paper, we have claimed and argued that software bots must produce explainable contributions. In order for them to seamlessly join the software development conversation, they have to be able to present and discuss their own contributions: the intent, the design rationales, etc.
In the context of a program repair bot such as Repairnator [@urli:hal-01691496; @arXiv-1810.05806], it means that the bot would be able to reformulate the patch in natural language, to highlight the behavioral change with specific, well-chosen input values, to discuss why a particular patch is better than another one.
Beyond explainable contributions, we have hypothesized that software bots must have their own identity and their own character, so as to bring diversity into the development conversation. It may even be that the diversity of participants in a software development conversation is what makes it creative and fun.
Acknowledgments: {#acknowledgments .unnumbered}
=================
I would like to thank my research group for the fertile discussions on this topic, and esp. Matias Martinez and Khashayar Etemadi for feedback on a draft. This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP).
|
---
abstract: |
We discuss the information entropy for a general open pointer-based simultaneous measurement and show how it is bounded from below. This entropic uncertainty bound is a direct consequence of the structure of the entropy and can be obtained from the formal solution of the measurement dynamics. Furthermore, the structural properties of the entropy allow us to give an intuitive interpretation of the noisy influence of the pointers and the environmental heat bath on the measurement results.
[*Keywords*]{}: simultaneous pointer-based measurement, noisy measurement, conjugate observables, entropy, uncertainty relation, quantum mechanics
address: 'Institut f[ü]{}r Quantenphysik and Center for Integrated Quantum Science and Technology (IQ^ST^), Universit[ä]{}t Ulm, D-89069 Ulm, Germany'
author:
- Raoul Heese and Matthias Freyberger
bibliography:
- 'concept.bib'
title: 'Entropic uncertainty bound for open pointer-based simultaneous measurements of conjugate observables'
---
Introduction {#sec:Introduction}
============
The concept of pointer-based simultaneous measurements of conjugate observables is an indirect measurement model, which allows one to dynamically describe the properties of a simultaneous quantum mechanical measurement process. In addition to the system to be measured (hereafter just called *system*), the model introduces two further systems called *pointers*, which are coupled to the system and act as commuting meters from which the initial system observables can be read out after a certain interaction time. In this sense, the pointers represent the measurement devices used to simultaneously determine the system observables. Pointer-based simultaneous measurements date back to Arthurs and Kelly [@arthurs1965] and are based on von Neumann’s idea of indirect observation [@vonneumann1932].
In principle, any pair of conjugate observables like position and momentum or quadratures of the electromagnetic field [@schleich2001], whose commutator is well-defined and proportional to the identity operator, can straightforwardly be measured within the scope of pointer-based simultaneous measurements. We limit ourselves to the measurement of position and momentum in the following. An *open* pointer-based simultaneous measurement [@heese2014] also takes environmental effects into consideration by utilizing an environmental heat bath in the sense of the Caldeira-Leggett model [@caldeira1981; @caldeira1983a; @caldeira1983ae; @caldeira1983b], which leads to a quantum Brownian motion [@chou2008; @fleming2011a; @fleming2011b; @martinez2013] of the system and the pointers, whereas a *closed* pointer-based simultaneous measurement [@arthurs1965; @wodkiewicz1984; @stenholm1992; @buzek1995; @appleby1998a; @appleby1998b; @appleby1998c; @busch2007; @busshardt2010; @busshardt2011; @heese2013] does not involve any environmental effects. A schematic open pointer-based simultaneous measurement procedure is shown in Fig. \[fig:model\].
In this contribution we calculate the information entropy of an open pointer-based simultaneous measurement and discuss its properties as a measurement uncertainty. In particular, we make use of recent results [@heese2013; @heese2014], which we extend and generalize. In Sec. \[sec:generalopenpointer-basedsimultaneousmeasurements\], we present the formal dynamics of open pointer-based simultaneous measurements and then use these results to discuss the entropic uncertainty in Sec. \[sec:entropy\]. In the end, we arrive at a generic lower bound of this entropic uncertainty. Note that we solely use rescaled dimensionless variables [@heese2014] so that $\hbar = 1$.
![Principles of an open pointer-based simultaneous measurement of two conjugate observables, e.g., the simultaneous measurement of position and momentum. The measurement apparatus consists of two quantum mechanical systems, called pointers, which are bilinearly coupled to the quantum mechanical system to be measured. Additionally, an environmental heat bath in the sense of the Caldeira-Leggett model can disturb both the system and the pointers. After the interaction process, one observable of each pointer is directly measured (e.g., the position of each pointer) while the system itself is not subject to any direct measurement. However, from these measurement results, information about the initial system observables can then be inferred. In other words, the final pointer observables act as commuting meters from which the initial non-commuting system observables can be simultaneously read out. The price to be paid for this simultaneity comes in form of fundamental noise terms, which affect the inferred values. The corresponding uncertainties can be described by information entropies, which are bound from below.[]{data-label="fig:model"}](fig1.pdf)
Open pointer-based simultaneous measurements {#sec:generalopenpointer-basedsimultaneousmeasurements}
============================================
As indicated in the introduction, our model of open pointer-based simultaneous measurements consists of a system particle to be measured with mass $M_{\mathrm{S}}$, position observable $\hat{X}_{\mathrm{S}}$ and momentum observable $\hat{P}_{\mathrm{S}}$, which is coupled bilinearly to two pointer particles with masses $M_1$ and $M_2$, position observables $\hat{X}_1$ and $\hat{X}_2$, and momentum observables $\hat{P}_1$ and $\hat{P}_2$, respectively. Both the system and the pointers are bilinearly coupled to an environmental heat bath, which consists of a collection of $N$ harmonic oscillators with masses $m_1,\dots,m_N$, position observables $\hat{q}_1,\dots,\hat{q}_N$, and momentum observables $\hat{k}_1,\dots,\hat{k}_N$. In this section, we first present the general Hamiltonian for this model and then briefly discuss the resulting dynamics.
Hamiltonian {#sec:generalopenpointer-basedsimultaneousmeasurements:hamiltonian}
-----------
The general Hamiltonian for our model reads $$\begin{aligned}
\label{eq:H}
\hat{\mathscr{H}}(t) \equiv \hat{H}_{\mathrm{free}} + \hat{H}_{\mathrm{int}}(t) + \hat{H}_{\mathrm{bath}}(t)\end{aligned}$$ and therefore consists of three parts. First, the free evolution Hamiltonian $$\begin{aligned}
\label{eq:H:free}
\hat{H}_{\mathrm{free}} \equiv \frac{\hat{P}_{\mathrm{S}}^2}{2 M_{\mathrm{S}}} + \frac{\hat{P}_1^2}{2 M_1} + \frac{\hat{P}_2^2}{2 M_2},\end{aligned}$$ which simply describes the dynamics of the undisturbed system and pointers. Second, the interaction Hamiltonian $$\begin{aligned}
\label{eq:H:int}
\hat{H}_{\mathrm{int}}(t) \equiv C_{\mathrm{S}}(t) \hat{X}_{\mathrm{S}}^2 + C_1(t) \hat{X}_1^2 + C_2(t) \hat{X}_2^2 + ( \hat{X}_{\mathrm{S}}, \hat{P}_{\mathrm{S}} ) \mathbf{C}(t) ( \hat{X}_1, \hat{X}_2, \hat{P}_1, \hat{P}_2 )^T,\end{aligned}$$ which describes possible quadratic potentials with the coupling strengths $C_{\mathrm{S}}(t)$, $C_1(t)$, and $C_2(t)$, respectively, as well as bilinear interactions between the system observables and the pointer observables via the $2\times4$ coupling matrix $\mathbf{C}(t)$. These interactions are necessary for an information transfer between system and pointers and are therefore a prerequisite of pointer-based simultaneous measurements. The existence of quadratic potentials is on the other hand not essential, but may be reasonable from a physical point of view when regarding confined particles. One possible interaction Hamiltonian would be the interaction Hamiltonian of the classic Arthurs and Kelly model [@arthurs1965], which can be written as $\hat{H}_{\mathrm{int}} = \kappa ( \hat{X}_{\mathrm{S}} \hat{P}_1 + \hat{P}_{\mathrm{S}} \hat{P}_2 )$ with an arbitrary coupling strength $\kappa \neq 0$.
Lastly, Eq. [(\[eq:H\])]{} contains the bath Hamiltonian [@caldeira1981; @caldeira1983a; @caldeira1983ae; @caldeira1983b] $$\begin{aligned}
\label{eq:H:bath}
\hat{H}_{\mathrm{bath}}(t) \equiv \frac{1}{2} \hat{\mathbf{k}}^{T} \mathbf{m}^{-1} \hat{\mathbf{k}}+ \frac{1}{2} \hat{\mathbf{q}}^{T} \mathbf{c} \hat{\mathbf{q}} + \hat{\mathbf{q}}^{T} \mathbf{g}(t) ( \hat{X}_{\mathrm{S}}, \hat{X}_1, \hat{X}_2 )^T,\end{aligned}$$ which describes the independent dynamics of the bath particles with the $N \times N$ diagonal mass matrix $\mathbf{m}$ containing $m_1,\dots,m_N$, and the $N \times N$ symmetric and positive definite bath-internal coupling matrix $\mathbf{c}$; as well as the coupling of system and pointer positions to the bath positions with the coupling strength $$\begin{aligned}
\label{eq:g}
\mathbf{g}(t) \equiv g(t) \mathbf{g},\end{aligned}$$ which consists of the time-dependent scalar $g(t)$ and the time-independent $N\times 3$ matrix $\mathbf{g}$. To simplify our notation, we make use of the vectorial bath positions $\hat{\mathbf{q}}^{T} \equiv (\hat{q}_1, \dots, \hat{q}_N)$ and the vectorial bath momenta $\hat{\mathbf{k}}^{T} \equiv (\hat{k}_1, \dots, \hat{k}_N)$.
In particular, since all of the following calculations only rely on the bilinear structure of the Hamiltonian, Eq. [(\[eq:H\])]{}, we do not further specify the coupling strengths and therefore consider quite general measurement configurations. Note that the time-dependencies of the coupling strengths in Eqs. [(\[eq:H:int\])]{} and [(\[eq:H:bath\])]{} allow us to design specific coupling pulses for system-pointer interactions [@busshardt2010] or switch-on functions for the bath [@fleming2011a; @fleming2011c].
Dynamics
--------
To determine the complete system and pointer dynamics, it is necessary to solve the coupled Heisenberg equations of motion $$\begin{aligned}
\label{eq:Heisenberg}
& \phantom{=.} \frac{\partial}{\partial t} \left( \hat{X}_{\mathrm{S}}(t), \hat{X}_1(t), \hat{X}_2(t), \hat{P}_{\mathrm{S}}(t), \hat{P}_1(t), \hat{P}_2(t) \right) \nonumber \\
& = i \left[ \hat{\mathscr{H}}(t), \left( \hat{X}_{\mathrm{S}}(t), \hat{X}_1(t), \hat{X}_2(t), \hat{P}_{\mathrm{S}}(t), \hat{P}_1(t), \hat{P}_2(t) \right) \right],\end{aligned}$$ which take on the form of a Volterra integro-differential equation [@burton2005] after explicitly solving the dynamics of the bath particles [@ford1988; @fleming2011a; @fleming2011b]. The general solution of Eq. [(\[eq:Heisenberg\])]{} can formally be expressed by means of a resolvent [@grossmann1968; @becker2006] and leads to system and pointer positions and momenta, which are linearly propagated from their initial values (i.e., the homogeneous solution) and are affected by an additive noise term (i.e., the inhomogeneous solution). Explicit analytical solutions, which are naturally of the same structure as the formal solution, can only be found for specific choices of the Hamiltonian, Eq. [(\[eq:H\])]{}. Otherwise, numerical approaches [@chen1998; @baker2000] might be necessary. Interestingly, since we strive after a general discussion of the dynamics, the ensured existence of a solution and its formal structure is sufficient for all further considerations. We assume in this context that possible unphysical artifacts of the modeling [@weiss1999; @fleming2011a], e.g., renormalization problems, have been treated adequately [@heese2014]. Without loss of generality, we choose $t = 0$ as the initial time.
For the description of a pointer-based simultaneous measurement, we do in fact not need knowledge about the complete system and pointer dynamics. As outlined in Fig. \[fig:model\], the measurement process provides that the two pointers are being measured after an interaction time $t$ in such a way, that we either read out the position or the momentum of each pointer. Consequently, we have knowledge about either $\hat{X}_1(t)$ or $\hat{P}_1(t)$ of the first pointer, and, additionally, about either $\hat{X}_2(t)$ or $\hat{P}_2(t)$ of the second pointer, which is a total of four different possible measurement combinations. To summarize these measurement combinations, we define the measurement vector $\hat{\mathbf{w}}(t)$, which consists of the two pointer observables chosen to be read out, e.g., $\hat{\mathbf{w}}(t) = (\hat{X}_1(t), \hat{X}_2(t))^T$ when measuring both pointer positions. As also outlined in Fig. \[fig:model\], the information gained from measuring the observables in the measurement vector allows us to infer the initial system observables $\hat{X}_{\mathrm{S}}(0)$ and $\hat{P}_{\mathrm{S}}(0)$. For this reason, the connection between the measured observables after a certain interaction time $t$ and the initial system observables builds the framework for a description of pointer-based simultaneous measurements.
With this purpose in mind, we can extract $$\begin{aligned}
\label{eq:inferredsystem}
\begin{pmatrix} \hat{\mathcal{X}}(t) \\ \hat{\mathcal{P}}(t) \end{pmatrix} = \begin{pmatrix} \hat{X}_{\mathrm{S}}(0) \\ \hat{P}_{\mathrm{S}}(0) \end{pmatrix} + \mathbf{B}(t) \hat{\mathbf{J}} + (\mathbf{\Lambda} \star \hat{\boldsymbol{\xi}})(t)\end{aligned}$$ from any (formal) solution of the Heisenberg equations of motion, Eq. [(\[eq:Heisenberg\])]{}. Here we have introduced the so-called generalized inferred observables $$\begin{aligned}
\label{eq:inferredobservables}
\begin{pmatrix} \hat{\mathcal{X}}(t) \\ \hat{\mathcal{P}}(t) \end{pmatrix} \equiv \mathbf{A}(t) \hat{\mathbf{w}}(t),\end{aligned}$$ which are given by a rescaled measurement vector $\hat{\mathbf{w}}(t)$ and can on the other hand be understood as the effectively measured observables from which the system observables can be directly read out. Since the two measured pointer observables in $\hat{\mathbf{w}}(t)$ initially commute and are subject to a unitary evolution, one has $$\begin{aligned}
\label{eq:XPcommute}
[\hat{\mathcal{X}}(t),\hat{\mathcal{P}}(t)] = 0,\end{aligned}$$ which means that the inferred observables can be determined simultaneously.
Furthermore, Eqs. [(\[eq:inferredsystem\])]{} and [(\[eq:inferredobservables\])]{} contain the coefficient matrices $\mathbf{A}(t)$, $\mathbf{B}(t)$, and $\mathbf{\Lambda}(t,s)$ with $0 \leq s \leq t$, for which we do not need to give an explicit expression. In general, they can be straightforwardly calculated from the resolvent of the formal solution of the complete system and pointer dynamics. An exemplary calculation can be found in Ref. [@heese2014]. We also make use of the initial value vector $$\begin{aligned}
\label{eq:J}
\hat{\mathbf{J}} \equiv ( \hat{X}_1(0), \hat{X}_2(0), \hat{P}_1(0), \hat{P}_2(0) )^{T},\end{aligned}$$ and the stochastic force [@weiss1999] $$\begin{aligned}
\label{eq:stochasticforce}
\hat{\boldsymbol{\xi}}(t) \equiv -\mathbf{g}^T(t) \left( \mathbf{m}^{-\frac{1}{2}} \cos( \boldsymbol{\omega} t) \mathbf{m}^{\frac{1}{2}} \hat{\mathbf{q}} + \mathbf{m}^{-\frac{1}{2}} \sin( \boldsymbol{\omega} t) \boldsymbol{\omega}^{-1} \mathbf{m}^{-\frac{1}{2}} \hat{\mathbf{k}} \right)\end{aligned}$$ with the symmetric bath frequency matrix [@fleming2011b] $$\begin{aligned}
\label{eq:omega}
\boldsymbol{\omega} \equiv \sqrt{ \mathbf{m}^{-\frac{1}{2}} \mathbf{c} \mathbf{m}^{-\frac{1}{2}} }.\end{aligned}$$ In particular, the stochastic force results from the homogeneous bath dynamics and describes the noisy influence of the bath on the measurement results, Eq. [(\[eq:H:bath\])]{}. The symbol $\star$ in Eq. [(\[eq:inferredsystem\])]{} represents the integral $$\begin{aligned}
\label{eq:star}
(f \star g)(x) \equiv \int \limits_{0}^{x} \! \mathrm d y f(x,y) g(y)\end{aligned}$$ for two arbitrary functions $f(x,y)$ and $g(x)$. In case of $f(x,y) = f(x-y)$, Eq. [(\[eq:star\])]{} is called a Laplace convolution.
The central aspect of the dynamics contained in Eq. [(\[eq:inferredsystem\])]{} can be understood when considering the expectation value $$\begin{aligned}
\label{eq:inferredexpecation}
\Braket{ \begin{pmatrix} \hat{\mathcal{X}}(t) \\ \hat{\mathcal{P}}(t) \end{pmatrix} } = \Braket{ \begin{pmatrix} \hat{X}_{\mathrm{S}}(0) \\ \hat{P}_{\mathrm{S}}(0) \end{pmatrix} } + \mathbf{s}(t)\end{aligned}$$ of Eq. [(\[eq:inferredsystem\])]{}, where the brackets refer to the mean value with respect to the initial state. Most importantly, Eq. [(\[eq:inferredexpecation\])]{} shows that the knowledge about the first moments of the inferred observables, Eq. [(\[eq:inferredobservables\])]{}, which can be determined by measuring the two pointer observables contained in the measurement vector $\hat{\mathbf{w}}(t)$, allows us to infer the first moments of the initial system observables. The second and third term on the right-hand side of Eq. [(\[eq:inferredsystem\])]{} simply add the shift $$\begin{aligned}
\label{eq:s}
\mathbf{s}(t) \equiv \mathbf{B}(t) \braket{ \hat{\mathbf{J}} } + (\mathbf{\Lambda} \star \braket{ \hat{\boldsymbol{\xi}}})(t)\end{aligned}$$ to this relation. Assuming a suitably separable initial state, this shift is determined by the initial state of the pointers and the bath, but independent of the state of the system. In this sense, our model is only of any practical purpose if the initial pointer and bath states are sufficiently well-known to determine $\mathbf{s}(t)$. Generally, the initial pointer states are at our disposal and can therefore be chosen in such a way that the first term on the right-hand side of Eq. [(\[eq:s\])]{} vanishes. It is furthermore reasonable to suppose that an environmental heat bath in thermal equilibrium leads to a vanishing expectation value of the stochastic force $\hat{\boldsymbol{\xi}}(t)$, Eq. [(\[eq:stochasticforce\])]{}. Consequently, presuming a vanishing shift $\mathbf{s}(t)$ seems appropriate for a typical measurement configuration. We will confirm this presumption further below for a specifically chosen initial state. Note that the linearity of Eqs. [(\[eq:inferredsystem\])]{} and [(\[eq:inferredobservables\])]{}, which is essential for the inference process, Eq. [(\[eq:inferredexpecation\])]{}, is a direct result of the bilinear structure of the Hamiltonian, Eq. [(\[eq:H\])]{}.
We assume here the non-pathological case that the interaction Hamiltonian $\hat{H}_{\mathrm{int}}(t)$, Eq. [(\[eq:H:int\])]{}, and the measurement vector $\hat{\mathbf{w}}(t)$ are chosen in such a way that inferring system observables from the pointers is possible in the first place, which is defined by the existence of the coefficient matrix $\mathbf{A}(t)$, Eq. [(\[eq:inferredobservables\])]{}. In other words, we require a sufficient information transfer from the system to the pointers. For example, for the classic Arthurs and Kelly model [@arthurs1965] with the interaction Hamiltonian mentioned in Sec. \[sec:generalopenpointer-basedsimultaneousmeasurements:hamiltonian\] and no environmental heat bath, measuring both pointer positions leads to an existing matrix $\mathbf{A}(t)$ for $t>0$, but measuring both pointer momenta does not [@busshardt2010].
Seeing now the role played by the inferred observables, we can quantify the uncertainty of a simultaneous pointer-based measurement with the help of the so-called noise operators [@arthurs1988] $$\begin{aligned}
\label{eq:noiseoperators}
\begin{pmatrix} \hat{N}_{\mathcal{X}}(t) \\ \hat{N}_{\mathcal{P}}(t) \end{pmatrix} \equiv \begin{pmatrix} \hat{\mathcal{X}}(t) - \hat{X}_{\mathrm{S}}(0) \\ \hat{\mathcal{P}}(t) - \hat{P}_{\mathrm{S}}(0) \end{pmatrix} = \mathbf{B}(t) \hat{\mathbf{J}} + (\mathbf{\Lambda} \star \hat{\boldsymbol{\xi}})(t),\end{aligned}$$ which are defined as the difference between the inferred observables $\hat{\mathcal{X}}(t)$ and $\hat{\mathcal{P}}(t)$, Eq. [(\[eq:inferredobservables\])]{}, and the respective initial system observables $\hat{X}_{\mathrm{S}}(0)$ and $\hat{P}_{\mathrm{S}}(0)$. The second equal sign in Eq. [(\[eq:noiseoperators\])]{} directly follows from Eq. [(\[eq:inferredsystem\])]{}. The expectation values $$\begin{aligned}
\label{eq:noiseoperators:exp}
\Braket{ \begin{pmatrix} \hat{N}_{\mathcal{X}}(t) \\ \hat{N}_{\mathcal{P}}(t) \end{pmatrix} } = \mathbf{s}(t)\end{aligned}$$ of the noise operators correspond to the shift $\mathbf{s}(t)$, Eq. [(\[eq:s\])]{}. As we have mentioned above, this shift can typically be presumed to vanish. Particularly interesting for our considerations is the symmetrized covariance matrix $$\begin{aligned}
\label{eq:noiseoperators:cov}
\begin{pmatrix} \Braket{ \hat{N}_{\mathcal{X}}^2(t) } - \Braket{ \hat{N}_{\mathcal{X}}(t) }^2 & \Braket{\hat{N}_{\mathcal{X}\mathcal{P}}(t)} - \Braket{ \hat{N}_{\mathcal{X}}(t) }\Braket{ \hat{N}_{\mathcal{P}}(t) } \\ \Braket{\hat{N}_{\mathcal{X}\mathcal{P}}(t)} - \Braket{ \hat{N}_{\mathcal{X}}(t) }\Braket{ \hat{N}_{\mathcal{P}}(t) } & \Braket{ \hat{N}_{\mathcal{P}}^2(t) } - \Braket{ \hat{N}_{\mathcal{P}}(t) }^2 \end{pmatrix} \equiv \begin{pmatrix} \delta_{\mathcal{X}}^2(t) & \delta_{\mathcal{X}\mathcal{P}}(t) \\ \delta_{\mathcal{X}\mathcal{P}}(t) & \delta_{\mathcal{P}}^2(t) \end{pmatrix}\end{aligned}$$ of the noise operators, where $$\begin{aligned}
\label{eq:noiseoperators:NXP}
\hat{N}_{\mathcal{X}\mathcal{P}}(t) \equiv \frac{1}{2} \left( \hat{N}_{\mathcal{X}}(t)\hat{N}_{\mathcal{P}}(t) + \hat{N}_{\mathcal{P}}(t)\hat{N}_{\mathcal{X}}(t) \right)\end{aligned}$$ stands for the symmetrized noise operator. To simplify our notation, we have also introduced the noise terms $\delta_{\mathcal{X}}^2(t)$ and $\delta_{\mathcal{P}}^2(t)$, which represent the diagonal elements of the covariance matrix, and the correlation term $\delta_{\mathcal{X}\mathcal{P}}(t)$, which represents the off-diagonal elements. Since a precise knowledge of $\mathbf{s}(t)$, Eqs. [(\[eq:s\])]{} and [(\[eq:noiseoperators:exp\])]{}, is necessary for the success of the inference process, Eq. [(\[eq:inferredexpecation\])]{}, the associated variances given by the noise terms $\delta_{\mathcal{X}}^2(t)$ and $\delta_{\mathcal{P}}^2(t)$ can be considered as an intuitive measure for the respective measurement uncertainty. Indeed, in the scope of variance-based uncertainty relations, the noise terms play an important role; see, e.g., Ref. [@ozawa2005] and references therein. Note that the noise terms are also referred to as “errors of retrodiction" [@appleby1998a; @appleby1998b; @appleby1998c]. We will revisit them further below, where we will see that they arise naturally in our entropic description of the measurement uncertainty.
The formal dynamics of the inferred observables $\hat{\mathcal{X}}(t)$ and $\hat{\mathcal{P}}(t)$, Eq. [(\[eq:inferredsystem\])]{}, build the framework for all of the following considerations. Most importantly, the intimate connection between $\hat{\mathcal{X}}(t)$ and $\hat{\mathcal{P}}(t)$, which result from the measured pointer observables in $\hat{\mathbf{w}}(t)$, Eq. [(\[eq:inferredobservables\])]{}, and the initial system observables $\hat{X}_{\mathrm{S}}(0)$ and $\hat{P}_{\mathrm{S}}(0)$ can clearly be seen from Eq. [(\[eq:inferredsystem\])]{}. It is this connection which lies at the heart of the pointer-based measurement scheme: information about the non-commuting system observables can be gathered from knowledge about the commuting inferred observables. At this point it seems natural to ask: What is the uncertainty of such an indirect measurement process? In the next section, we answer this question with the help of information entropy.
Entropy {#sec:entropy}
=======
The uncertainty principle [@heisenberg1927; @folland1997; @busch2007] is of central importance for quantum mechanical measurements. It manifests itself in the form of uncertainty relations [@busch2014], usually written in terms of variances [@busch2013a; @busch2013b] or information entropies [@buscemi2014; @coles2014]. The concept of information entropies goes back to Ref. [@shannon1948], whereas its general usage in the scope of uncertainty relations has been pioneered by Refs. [@hirschman1957; @beckner1975; @bialynicki1975]. In the context of closed pointer-based simultaneous measurements [@arthurs1965; @stenholm1992], the uncertainty principle leads to well-known variance-based uncertainty relations [@appleby1998a; @appleby1998b; @appleby1998c], which have extensively been discussed, e.g., in the scope of phase-space measurements [@wodkiewicz1984] or energy and timing considerations [@busshardt2010; @busshardt2011]. A respective information entropic form has been derived in Ref. [@buzek1995] and improved in Ref. [@heese2013]. Variances can also be used to describe an uncertainty relation for open pointer-based simultaneous measurements [@heese2014]. However, in comparison with information entropies, variances suffer from two major drawbacks [@hilgevoord1990; @buzek1995; @bialynicki2011]: First, they can become divergent for specific probability distributions, and second, they may not reflect what one would intuitively consider as the “width” of a probability distribution. On the other hand, a disadvantage of information entropies is that they are usually much more difficult to calculate than variances. This, however, is only a technical limitation. Therefore, we concentrate on information entropic uncertainties in the following.
In this section, we first introduce the so-called collective entropy as a total measure of uncertainty in the context of open pointer-based simultaneous measurements. Choosing a separable initial state then allows us to calculate the marginal probability distributions this collective entropy is based on. As a result, we can discuss the structural properties of the collective entropy, including an extension of a previously known lower bound [@heese2013].
Collective entropy
------------------
First concepts of information entropies as an uncertainty measure for pointer-based simultaneous measurements can be found in Ref. [@buzek1995], which serves as a foundation for the present section. Since information entropies are based on probability distributions, we first need to define suitable probability distributions before we can deal with the actual entropies.
The probability of measuring an inferred position $\mathcal{X}$ and an inferred momentum $\mathcal{P}$ is given by the joint probability distribution [@wodkiewicz1984; @wodkiewicz1986] $$\begin{aligned}
\label{eq:prXP}
\mathrm{pr}(\mathcal{X},\mathcal{P};t) \equiv \Braket{ \delta(\mathcal{X}-\hat{\mathcal{X}}(t)) \delta(\mathcal{P}-\hat{\mathcal{P}}(t)) }\end{aligned}$$ with the Dirac delta distributions $\delta(\mathcal{X}-\hat{\mathcal{X}}(t))$ and $\delta(\mathcal{P}-\hat{\mathcal{P}}(t))$, which contain the inferred observables $\hat{\mathcal{X}}(t)$ and $\hat{\mathcal{P}}(t)$, Eq. [(\[eq:inferredobservables\])]{}, respectively. Likewise, the probability of measuring either the inferred position $\mathcal{X}$ or the inferred momentum $\mathcal{P}$ is given by the marginal probability distribution of inferred position
\[eq:Pmarginal\] $$\begin{aligned}
\mathrm{pr}_{\mathcal{X}}(\mathcal{X};t) \equiv \int \limits_{- \infty}^{+ \infty} \! \mathrm d \mathcal{P}\ \mathrm{pr}(\mathcal{X},\mathcal{P};t)
\end{aligned}$$ and inferred momentum $$\begin{aligned}
\mathrm{pr}_{\mathcal{P}}(\mathcal{P};t) \equiv \int \limits_{- \infty}^{+ \infty} \! \mathrm d \mathcal{X}\ \mathrm{pr}(\mathcal{X},\mathcal{P};t),
\end{aligned}$$
respectively.
With the help of the marginal probability distributions, Eq. [(\[eq:Pmarginal\])]{}, one can define the collective entropy [@heese2013] $$\begin{aligned}
\label{eq:S}
S(t) \equiv S_{\mathcal{X}}(t) + S_{\mathcal{P}}(t),\end{aligned}$$ which describes the total uncertainty of a simultaneous pointer-based measurement process. It consists of the sum of the marginal entropy of inferred position
\[eq:Smarginal\] $$\begin{aligned}
\label{eq:Smarginal:X}
S_{\mathcal{X}}(t) \equiv - \int \limits_{- \infty}^{+ \infty} \! \mathrm d \mathcal{X}\ \mathrm{pr}_{\mathcal{X}}(\mathcal{X};t) \ln \mathrm{pr}_{\mathcal{X}}(\mathcal{X};t)\end{aligned}$$ and the marginal entropy of inferred momentum $$\begin{aligned}
\label{eq:Smarginal:P}
S_{\mathcal{P}}(t) \equiv - \int \limits_{- \infty}^{+ \infty} \! \mathrm d \mathcal{P}\ \mathrm{pr}_{\mathcal{P}}(\mathcal{P};t) \ln \mathrm{pr}_{\mathcal{P}}(\mathcal{P};t).\end{aligned}$$
So far, our considerations have been completely general.
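As a simple numerical illustration of Eqs. [(\[eq:S\])]{} and [(\[eq:Smarginal\])]{}, the marginal entropies can be evaluated by direct quadrature once the marginal probability distributions are known on a grid. The following minimal Python sketch does so for purely illustrative Gaussian marginals; the widths and grid parameters are assumptions made only for the example and do not enter the measurement model.

```python
# Minimal sketch: evaluate the marginal entropies S_X, S_P and the collective
# entropy S = S_X + S_P, Eqs. (Smarginal) and (S), by trapezoidal quadrature.
# The Gaussian test marginals below are illustrative assumptions; in practice
# pr_X and pr_P follow from Eq. (Pmarginal2).
import numpy as np

def marginal_entropy(grid, pr):
    """Differential entropy -int dx pr(x) ln pr(x) on a one-dimensional grid."""
    integrand = -pr * np.log(np.clip(pr, 1e-300, None))   # avoid log(0)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid)))

delta_X, delta_P = 1.0, 0.7                 # assumed widths of the marginals
X = np.linspace(-12.0, 12.0, 4001)
P = np.linspace(-12.0, 12.0, 4001)
pr_X = np.exp(-X**2 / (2.0 * delta_X**2)) / np.sqrt(2.0 * np.pi * delta_X**2)
pr_P = np.exp(-P**2 / (2.0 * delta_P**2)) / np.sqrt(2.0 * np.pi * delta_P**2)

S_X = marginal_entropy(X, pr_X)             # analytically ln(delta_X sqrt(2 pi e))
S_P = marginal_entropy(P, pr_P)
print("S_X =", S_X, " S_P =", S_P, " S =", S_X + S_P)
```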
Separable initial state {#sec:entropy:separable initial state}
-----------------------
A more detailed calculation of the collective entropy, Eq. [(\[eq:S\])]{}, is only possible if we choose a more specific initial state $\hat{\varrho}(0)$ for the measurement configuration. First of all, it is a common approach to assume that the bath is initially in thermal equilibrium and separable from the system and the pointers [^1]. Furthermore, it seems natural to choose initially localized and separable pointer states. These localized pointer states are then propagated by the Hamiltonian, Eq. [(\[eq:H\])]{}, in such a way, Eq. [(\[eq:inferredobservables\])]{}, that their new location can be read out at the end of the interaction process to infer the system observables, Eq. [(\[eq:inferredsystem\])]{}. A straightforward realization of localized and separable pointer states is given by squeezed vacuum states [@barnett1997; @schleich2001], which have already been used in the classic Arthurs and Kelly model [@arthurs1965]. Our system to be measured should, on the other hand, not be subject to any assumptions, so we describe it by a general density matrix.
Thus, for our model of open pointer-based simultaneous measurements, we choose a separable initial state $$\begin{aligned}
\label{eq:InitialState}
\hat{\varrho}(0) \equiv \hat{\varrho}_{\mathrm{S}}(0) \otimes \ket{\sigma_1} \bra{\sigma_1} \otimes \ket{\sigma_2} \bra{\sigma_2} \otimes \hat{\varrho}_{\mathrm{B}}(0).\end{aligned}$$ It consists of the initial state $\hat{\varrho}_{\mathrm{S}}(0)$ of the system to be measured, the squeezed vacuum states $\ket{\sigma_1}$ and $\ket{\sigma_2}$ of the pointers, which can be written as [@schleich2001] $$\begin{aligned}
\label{eq:initialpointerstate}
\braket{x | \sigma_k} \equiv \left( \frac{1}{2 \pi \sigma_k^2} \right)^{\frac{1}{4}} \exp \left[ - \frac{x^2}{4 \sigma_k^2} \right]\end{aligned}$$ in position space with variances $\sigma_k^2$ for $k \in \{1,2\}$, and the thermal state of the bath $$\begin{aligned}
\label{eq:initialbathstate}
\hat{\varrho}_{\mathrm{B}}(0) \equiv \frac{1}{Z} \exp \left[ - \beta \left\{ \frac{1}{2} \hat{\mathbf{k}}^{T} \mathbf{m}^{-1} \hat{\mathbf{k}}+ \frac{1}{2} \hat{\mathbf{q}}^{T} \mathbf{c} \hat{\mathbf{q}} \right\} \right]\end{aligned}$$ with the thermal energy $\beta^{-1}$ and the normalizing partition function $Z$.
The choice of a specific initial state of the bath allows us to determine the statistical properties of the stochastic force $\hat{\boldsymbol{\xi}}(t)$, Eq. [(\[eq:stochasticforce\])]{}. In particular, one has $$\begin{aligned}
\braket{\hat{\boldsymbol{\xi}}(t)} = 0\end{aligned}$$ and $$\begin{aligned}
\frac{1}{2} \braket{ \hat{\boldsymbol{\xi}}(t_1) \hat{\boldsymbol{\xi}}{}^{T}(t_2) + \hat{\boldsymbol{\xi}}(t_2) \hat{\boldsymbol{\xi}}{}^{T}(t_1) } \equiv g(t_1) g(t_2) \boldsymbol{\nu}(t_1-t_2)\end{aligned}$$ with the noise kernel [@fleming2011b] $$\begin{aligned}
\label{eq:nu}
\boldsymbol{\nu}(t) \equiv \frac{1}{2} \int \limits_{0}^{\infty} \! \mathrm{d} \omega \coth \left( \frac{\beta \omega}{2} \right) \cos ( \omega t ) \mathbf{I}(\omega),\end{aligned}$$ which contains the spectral density [@weiss1999] $$\begin{aligned}
\label{eq:I}
\mathbf{I}(\omega) \equiv \mathbf{g}^{T} \mathbf{m}^{-\frac{1}{2}} \omega^{-1} \delta( \omega \mathds{1} - \boldsymbol{\omega} ) \mathbf{m}^{-\frac{1}{2}} \mathbf{g}.\end{aligned}$$ Here we have made use of the Dirac delta distribution $\delta( \omega \mathds{1} - \boldsymbol{\omega} )$ with the identity matrix $\mathds{1}$ and the bath frequency matrix $\boldsymbol{\omega}$, Eq. [(\[eq:omega\])]{}. In brief, the noise kernel describes the noisy influence of the bath on the measurement process. Note that we do not further specify the structure of the spectral density [^2] and keep it as a general phenomenological expression. Nevertheless, we assume a continuous bath with $N\rightarrow\infty$ for which the definition of a spectral density makes sense in the first place.
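For concreteness, a scalar analogue of the noise kernel, Eq. [(\[eq:nu\])]{}, can be evaluated numerically once a spectral density has been specified. The sketch below assumes an Ohmic spectral density with an exponential cutoff, as mentioned in the footnote; the coupling strength, cutoff frequency and temperature used here are purely illustrative.

```python
# Sketch: numerical evaluation of a scalar version of the noise kernel,
# Eq. (nu),  nu(t) = 1/2 int_0^inf dw coth(beta w / 2) cos(w t) I(w),
# for an assumed Ohmic spectral density with exponential cutoff,
# I(w) = eta * w * exp(-w / w_c)  (an illustrative choice, cf. footnote 2).
import numpy as np

def noise_kernel(t, beta=1.0, eta=1.0, w_c=10.0, n_w=20000, w_max=200.0):
    w = np.linspace(1e-6, w_max, n_w)          # avoid w = 0, where coth diverges
    I_w = eta * w * np.exp(-w / w_c)
    integrand = 0.5 * np.cos(w * t) * I_w / np.tanh(0.5 * beta * w)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(w)))

print([round(noise_kernel(t), 4) for t in np.linspace(0.0, 5.0, 6)])
```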
We furthermore remark that for our chosen initial state $\hat{\varrho}(0)$, Eq. [(\[eq:InitialState\])]{}, the expectation value shift $\mathbf{s}(t)$, Eq. [(\[eq:s\])]{}, vanishes for all times, i.e., $$\begin{aligned}
\label{eq:s=0}
\mathbf{s}(t) = 0.\end{aligned}$$ This means that the first moments of the inferred observables after a certain interaction time $t$ directly correspond to the first moments of the initial system observables, Eq. [(\[eq:inferredexpecation\])]{}. As we have already stated above, this can be considered as a typical behavior.
Marginal probability distributions {#sec:entropy:marginal probability distributions}
----------------------------------
The choice of the initial state $\hat{\varrho}(0)$, Eq. [(\[eq:InitialState\])]{}, allows us to explicitly calculate the marginal probability distributions, Eq. [(\[eq:Pmarginal\])]{}. It seems natural to assume that these marginal probability distributions do not directly correspond to the initial position distribution $\braket{x|\hat{\varrho}_{\mathrm{S}}(0)|x}$ or the initial momentum distribution $\braket{p|\hat{\varrho}_{\mathrm{S}}(0)|p}$ of the system, but should in addition also incorporate the disturbance from the indirect measurement via the pointers as well as the noisy effects of the bath. Since pointers and bath are initially in a Gaussian state, Eqs. [(\[eq:initialpointerstate\])]{} and [(\[eq:initialbathstate\])]{}, their influence on the marginal probability distributions can also be expected to have a Gaussian shape. Indeed, the calculations shown in \[sec:appendix:marginal probability distributions\] lead us to the broadened distributions
\[eq:Pmarginal2\] $$\begin{aligned}
\label{eq:Pmarginal2:X}
\mathrm{pr}_{\mathcal{X}}(\mathcal{X};t) = \frac{1}{\sqrt{2 \pi} \delta_{\mathcal{X}}(t)} \int \limits_{- \infty}^{+ \infty} \! \mathrm d x \braket{x|\hat{\varrho}_{\mathrm{S}}(0)|x} \exp \left[ - \frac{(\mathcal{X}-x)^2}{2 \delta_{\mathcal{X}}^2(t)} \right]
\end{aligned}$$ and $$\begin{aligned}
\label{eq:Pmarginal2:P}
\mathrm{pr}_{\mathcal{P}}(\mathcal{P};t) = \frac{1}{\sqrt{2 \pi} \delta_{\mathcal{P}}(t)} \int \limits_{- \infty}^{+ \infty} \! \mathrm d p \braket{p|\hat{\varrho}_{\mathrm{S}}(0)|p} \exp \left[ - \frac{(\mathcal{P}-p)^2}{2 \delta_{\mathcal{P}}^2(t)} \right],
\end{aligned}$$
respectively, with the noise terms $\delta_{\mathcal{X}}^2(t)$ and $\delta_{\mathcal{P}}^2(t)$, which represent the diagonal elements of the covariance matrix of the noise operators, Eq. [(\[eq:noiseoperators:cov\])]{}. As also shown in \[sec:appendix:marginal probability distributions\], this covariance matrix can be expressed as $$\begin{aligned}
\label{eq:noiseoperators:cov:calc2}
\begin{pmatrix} \delta_{\mathcal{X}}^2(t) & \delta_{\mathcal{X}\mathcal{P}}(t) \\ \delta_{\mathcal{X}\mathcal{P}}(t) & \delta_{\mathcal{P}}^2(t) \end{pmatrix} = & \phantom{+.} \frac{1}{2} \mathbf{B}(t) \left( \braket{\hat{\mathbf{J}} \hat{\mathbf{J}}^{T}} + \braket{\hat{\mathbf{J}} \hat{\mathbf{J}}^{T}}^{T} \right) \mathbf{B}^{T}(t) \nonumber \\
& + \int \limits_{0}^{t} \! \mathrm d t_1 \int \limits_{0}^{t} \! \mathrm d t_2 g(t_1) g(t_2) \mathbf{\Lambda}(t,t_1) \boldsymbol{\nu}(t_1-t_2) \mathbf{\Lambda}^{T} (t,t_2).\end{aligned}$$ Here we have recalled the time-dependent coupling strength of the bath $g(t)$, Eq. [(\[eq:g\])]{}, the coefficient matrices $\mathbf{B}(t)$ and $\mathbf{\Lambda}(t,s)$ from the dynamics of the inferred observables, Eq. [(\[eq:inferredsystem\])]{}, the initial value vector $\hat{\mathbf{J}}$, Eq. [(\[eq:J\])]{}, and the noise kernel $\boldsymbol{\nu}(t)$, Eq. [(\[eq:nu\])]{}. Note that the non-diagonal terms in Eq. [(\[eq:noiseoperators:cov:calc2\])]{} correspond to the correlation term $\delta_{\mathcal{X}\mathcal{P}}(t)$, Eq. [(\[eq:noiseoperators:cov\])]{}, which is of no further importance in the following.
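In practice, Eq. [(\[eq:noiseoperators:cov:calc2\])]{} can be evaluated numerically by combining the pointer term with a double time integral over the bath term. The Python sketch below illustrates one way to do so; the callables $\mathbf{B}$, $\mathbf{\Lambda}$, $g$, $\boldsymbol{\nu}$ and the matrix $\mathbf{V}$ are placeholders standing in for the quantities defined in the text, and the trapezoidal grid size is an arbitrary choice.

```python
# Sketch: numerical evaluation of the noise covariance matrix,
# Eq. (noiseoperators:cov:calc2),
#   Cov(t) = B(t) V B(t)^T
#          + int_0^t dt1 int_0^t dt2 g(t1) g(t2) Lambda(t,t1) nu(t1-t2) Lambda(t,t2)^T,
# with a simple trapezoidal double integral.  B, Lam, g, nu and V are placeholder
# callables/arrays for the quantities defined in the text.
import numpy as np

def noise_covariance(t, B, Lam, g, nu, V, n=200):
    """B(t): (2,4) array, Lam(t,s): (2,N) array, g(s): scalar function,
    nu(tau): (N,N) array, V: (4,4) symmetrized pointer covariance matrix."""
    pointer_term = B(t) @ V @ B(t).T
    s = np.linspace(0.0, t, n)
    ds = s[1] - s[0]
    bath_term = np.zeros((2, 2))
    for i, t1 in enumerate(s):
        for j, t2 in enumerate(s):
            w = ds * ds                       # trapezoid weights: halve the edges
            if i in (0, n - 1):
                w *= 0.5
            if j in (0, n - 1):
                w *= 0.5
            bath_term += w * g(t1) * g(t2) * (Lam(t, t1) @ nu(t1 - t2) @ Lam(t, t2).T)
    return pointer_term + bath_term
```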
Briefly summarized, the marginal probability distributions, Eq. [(\[eq:Pmarginal2\])]{}, both consist of a convolution of the system’s initial probability distributions with a Gaussian noise function. This noise function itself is determined by a convolution of a Gaussian pointer noise function, Eq. [(\[eq:FphiResult\])]{}, with a Gaussian bath noise function, Eq. [(\[eq:FbathResult\])]{}. In other words, both the pointers and the bath act as a Gaussian filter [@buzek1995] through which we are forced to look during the measurement process, and which leave us with a distorted image of the initial probability distributions of the system. The noise terms $\delta_{\mathcal{X}}^2(t)$ and $\delta_{\mathcal{P}}^2(t)$ give a description for this combined disturbance of the measurement results from the pointers and the bath. Since we use Gaussian-shaped initial states for the pointers and the bath, Eqs. [(\[eq:initialpointerstate\])]{} and [(\[eq:initialbathstate\])]{}, which are fully characterized by their second moments, the noise terms also contain only second moments: The first term on the right-hand side of Eq. [(\[eq:noiseoperators:cov:calc2\])]{} describes the pointer-based variance contributions to the noise terms, whereas the second term describes the bath-based variance contributions. Both increased pointer-based variance contributions and increased bath-based variance contributions increase the effective disturbance of the marginal probability distributions, Eq. [(\[eq:Pmarginal2\])]{}. In this sense, the noise terms can also be considered as “extrinsic" or “measurement uncertainties" in comparison with the “intrinsic" or “preparation uncertainties" of the system given by $\braket{x|\hat{\varrho}_{\mathrm{S}}(0)|x}$ and $\braket{p|\hat{\varrho}_{\mathrm{S}}(0)|p}$, respectively. This classification of the disturbance is a concept similarly used for variances as uncertainties of pointer-based simultaneous measurements [@appleby1998a; @heese2014].
An optimal measurement configuration for a pointer-based measurement is consequently defined by a minimal noise term product. In \[sec:appendix:minimal noise term product\] we show that the product of the noise terms is bound from below by [^3] $$\begin{aligned}
\label{ineq:dXdP}
\delta_{\mathcal{X}}(t) \delta_{\mathcal{P}}(t) \geq \frac{1}{2},\end{aligned}$$ which defines the best possible accuracy of any measurement apparatus. This statement is closely related to the fact that the variance-based uncertainty of a pointer-based measurement is bound from below by one; see, e.g., Refs. [@arthurs1965; @arthurs1988; @busshardt2010; @heese2014].
Lower bound of the collective entropy
-------------------------------------
In Ref. [@heese2013] we have discussed the collective entropy for closed pointer-based simultaneous measurements with pure initial system states and, with the help of Ref. [@lieb1978], have established a lower bound of this collective entropy. Interestingly, although we have used an open pointer-based measurement and possibly mixed initial system states in the present manuscript, the structure of the marginal probability distributions, Eq. [(\[eq:Pmarginal2\])]{}, which determine the collective entropy, Eq. [(\[eq:S\])]{}, is similar to the structure of the marginal probability distributions for closed pointer-based simultaneous measurements. Therefore, with only slight changes, we can adapt the derivation of a lower bound of the collective entropy of closed pointer-based simultaneous measurements to the collective entropy of open pointer-based simultaneous measurements. In the following, we will briefly recapitulate this derivation.
First of all, we recall a theorem from Ref. [@lieb1978], which states that the information entropy $$\begin{aligned}
\label{eq:Lieb:S}
S[ f ] \equiv - \int \limits_{- \infty}^{+ \infty} \! \mathrm d x\ f(x) \ln f(x)\end{aligned}$$ of a Fourier convolution $$\begin{aligned}
\label{eq:ast}
(f \ast g)(x) \equiv \int \limits_{- \infty}^{+ \infty} \! \mathrm d y f(x-y) g(y)\end{aligned}$$ of two probability distributions $f(x)$ and $g(x)$ is bound from below by $$\begin{aligned}
\label{ineq:Lieb:Theorem}
S[ f * g ] \geq \lambda S[ f ] + ( 1 - \lambda ) S[ g ] - \frac{\lambda \ln \lambda + (1 - \lambda) \ln ( 1 - \lambda )}{2}\end{aligned}$$ with an arbitrary weighting parameter $\lambda \in [0,1]$. Equality in Ineq. [(\[ineq:Lieb:Theorem\])]{} holds true if and only if both $f(x)$ and $g(x)$ are Gaussian distributions with variances $\sigma_f^2$ and $\sigma_g^2$, respectively, and the weighting parameter reads $$\begin{aligned}
\label{eqn:GaussianParameter}
\lambda = \frac{\sigma_f^2}{\sigma_f^2+\sigma_g^2}.\end{aligned}$$ It is clear that the marginal entropies, Eq. [(\[eq:Smarginal\])]{}, which contain the marginal probability distributions, Eq. [(\[eq:Pmarginal2\])]{}, also represent information entropies of convolutions as given by the left-hand side of Ineq. [(\[ineq:Lieb:Theorem\])]{}. Therefore, we can apply the lower bound of Ineq. [(\[ineq:Lieb:Theorem\])]{} to each of the marginal entropies in the collective entropy $S(t)$, Eq. [(\[eq:S\])]{}. In particular, we insert the initial position and momentum distributions of the system in place of $f(x)$ and their associated Gaussian filter functions in place of $g(x)$. For reasons of simplicity, we use the same weighting parameter $\lambda$ for both of these lower bounds. In a next step, we can eliminate the dependency on the system to be measured by making use of the entropic uncertainty relation [@everett1956; @hirschman1957; @bialynicki1975; @bialynicki2006] $$\begin{aligned}
\label{ineq:Hirsch}
S[\braket{x|\hat{\varrho}_{\mathrm{S}}(0)|x}] + S[\braket{p|\hat{\varrho}_{\mathrm{S}}(0)|p}] \geq 1 + \ln \pi.\end{aligned}$$ It is based on the Babenko-Beckner inequality [@babenko1961; @beckner1975]. Saturation in Ineq. [(\[ineq:Hirsch\])]{} occurs only for pure Gaussian states [@lieb1990; @oezaydin2004]. For a detailed discussion of this and similar entropic uncertainty relations, also see Ref. [@bialynicki2011]. The usage of Ineq. [(\[ineq:Hirsch\])]{}, which contains the density matrix $\hat{\varrho}_{\mathrm{S}}(0)$ of the system to be measured, is the main difference from the derivation in Ref. [@heese2013], where the system to be measured was limited to pure states and a simplified form of Ineq. [(\[ineq:Hirsch\])]{} for pure states had been used.
Accordingly, we arrive at the lower bound $$\begin{aligned}
\label{ineq:Lieb:SingleParam}
S(t) \geq 1 - \lambda \ln \frac{\lambda}{\pi} + (1-\lambda) \ln \left( \frac{2 \pi \delta_{\mathcal{X}}(t) \delta_{\mathcal{P}}(t) }{1-\lambda} \right)\end{aligned}$$ of the collective entropy $S(t)$, Eq. [(\[eq:S\])]{}. In particular, this lower bound depends on the noise terms $\delta_{\mathcal{X}}(t)$ and $\delta_{\mathcal{P}}(t)$, Eq. [(\[eq:noiseoperators:cov:calc2\])]{}. A maximization of the right-hand side of Ineq. [(\[ineq:Lieb:SingleParam\])]{} with respect to $\lambda$ reveals the optimal weighting parameter $\lambda = 1/(1+2\delta_{\mathcal{X}}(t) \delta_{\mathcal{P}}(t))$ and finally leads us to the result $$\begin{aligned}
\label{ineq:eur}
S(t) \geq 1 + \ln \bigg[ 2 \pi \Big( \delta_{\mathcal{X}}(t) \delta_{\mathcal{P}}(t) + \frac{1}{2} \Big) \bigg].\end{aligned}$$ This entropic uncertainty bound for open pointer-based simultaneous measurements is of the same form as the entropic uncertainty bound for closed pointer-based simultaneous measurements from Ref. [@heese2013]. Moreover, it is an extension of the more well-known entropic uncertainty bound $S(t) \geq 1 + \ln (2 \pi)$ from Ref. [@buzek1995], to which it can be reduced in case of minimal noise terms, Ineq. [(\[ineq:dXdP\])]{}.
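For completeness, we spell out the intermediate step. Writing $\delta \equiv \delta_{\mathcal{X}}(t) \delta_{\mathcal{P}}(t)$ and differentiating the right-hand side of Ineq. [(\[ineq:Lieb:SingleParam\])]{} with respect to $\lambda$ gives the stationarity condition $$\begin{aligned}
\frac{\partial}{\partial \lambda} \left[ 1 - \lambda \ln \frac{\lambda}{\pi} + (1-\lambda) \ln \left( \frac{2 \pi \delta}{1-\lambda} \right) \right] = - \ln \left( \frac{2 \lambda \delta}{1-\lambda} \right) = 0 \qquad \Longleftrightarrow \qquad \lambda = \frac{1}{1+2\delta}.\end{aligned}$$ Inserting this value back into Ineq. [(\[ineq:Lieb:SingleParam\])]{} and using $\lambda/\pi = 1/[\pi(1+2\delta)]$ as well as $2\pi\delta/(1-\lambda) = \pi(1+2\delta)$ indeed reproduces the right-hand side of Ineq. [(\[ineq:eur\])]{}.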
Due to the equality conditions of Ineqs. [(\[ineq:Lieb:Theorem\])]{} and [(\[ineq:Hirsch\])]{}, equality in Ineq. [(\[ineq:eur\])]{} occurs only for initial system states $\hat{\varrho}_{\mathrm{S}}(0)$, which are minimal uncertainty states in the sense of Heisenberg’s uncertainty relation [@ballentine1998], i.e., pure Gaussian states for which the position variance $\sigma_{x}^2$ and the momentum variance $\sigma_{p}^2$ obey $\sigma_{x}^2 \sigma_{p}^2 = 1/4$; and which additionally fulfill $\sigma_{x}^2 = \delta_{\mathcal{X}}(t)/(2 \delta_{\mathcal{P}}(t))$ for a given set of noise terms [@busshardt2011]. Such initial system states can be understood as “minimal entropy states" [@heese2013]. Note that mixed system states with Gaussian density matrices [@mann1993] are not sufficient for equality.
In summary, the collective entropy of an open pointer-based measurement, Eq. [(\[eq:S\])]{}, behaves exactly like the collective entropy of a closed pointer-based measurement. In particular, the collective entropy is bound from below by the sharp entropic uncertainty bound given by Ineq. [(\[ineq:eur\])]{}. The only effect of the environmental heat bath is to modify the collective entropy by modifying the noise terms, Eq. [(\[eq:noiseoperators:cov:calc2\])]{}. An optimal measurement accuracy can be achieved for minimal noise terms, Ineq. [(\[ineq:dXdP\])]{}, and requires a pure Gaussian system state with a specific variance that leads to equality in Ineq. [(\[ineq:eur\])]{}.
Conclusion
==========
We have shown that it is possible to determine a formal expression for the collective entropy of a general open pointer-based simultaneous measurement. This collective entropy has a very intuitive structure from which the noisy influence of the pointers and the bath on the measurement result can be understood. Moreover, this structure allows us to show that the collective entropy has a sharp lower bound, which can only be reached for specific pure Gaussian system states. In particular, our results are valid for any bilinear interaction between the system and the pointers and any initially mixed system state and thus extend various previous results on this topic.
Several simplifications have been made to perform our calculations. First of all, the Hamiltonian only includes bilinear terms. However, we expect that terms of higher order would only lead to relatively small correction terms without fundamentally changing our results. Second, ideal single-variable measurements of the pointer observables have to be performed in order to realize the measurement procedure, which is an inherent conceptual weakness of pointer-based simultaneous measurements. Yet we hope that with the help of the environmental heat bath, it might be possible to replace this rather theoretical measurement process by a more natural decoherence process [@zurek2003]. Finally, we have used a separable initial state with squeezed states as the initial pointer states and a thermal state as the initial bath state. Although this approach is a clear limitation of our results, we think that our choice is reasonable. It would nevertheless be interesting to discuss different initial states and specifically examine the influence of entanglement and preparation energy on the collective entropy.
Acknowledgement {#acknowledgement .unnumbered}
===============
R. H. gratefully acknowledges a grant from the Landesgraduiertenförderungsgesetz of the state of Baden-Württemberg.
Marginal probability distributions {#sec:appendix:marginal probability distributions}
==================================
In this appendix section we briefly describe how to calculate the marginal probability distributions, Eq. [(\[eq:Pmarginal2\])]{}, from the definition, Eqs. [(\[eq:prXP\])]{} and [(\[eq:Pmarginal\])]{}, with the help of the chosen initial state, Eq. [(\[eq:InitialState\])]{}. Our calculations are mainly based on characteristic functions and their Fourier transforms.
Inserting the formal dynamics of the inferred observables, Eq. [(\[eq:inferredsystem\])]{}, into the joint probability distribution, Eq. [(\[eq:prXP\])]{}, leads to $$\begin{aligned}
\label{eq:prXP0}
\mathrm{pr}(\mathcal{X},\mathcal{P};t) = \operatorname{tr} \Big\{ & \hat{\varrho}(0) \delta \left[ \mathcal{X} - \hat{X}_{\mathrm{S}}(0) - (1,0) \mathbf{B}(t) \hat{\mathbf{J}} - (1,0) (\mathbf{\Lambda} \star \hat{\boldsymbol{\xi}})(t) \right] \nonumber \\
& \phantom{..} \times \delta \left[ \mathcal{P} - \hat{P}_{\mathrm{S}}(0) - (0,1) \mathbf{B}(t) \hat{\mathbf{J}} - (0,1) (\mathbf{\Lambda} \star \hat{\boldsymbol{\xi}})(t) \right] \Big\}.\end{aligned}$$ The Dirac delta distributions in Eq. [(\[eq:prXP0\])]{} can be expressed as Fourier transforms of unity, i.e., $\delta(x) = \mathcal{F} \left\{ 1 \right\}(x)$. Here we use the notation $$\begin{aligned}
\label{eq:FT}
\mathcal{F} \left\{f\right\}(x) \equiv \frac{1}{2 \pi} \int \limits_{- \infty}^{+ \infty} \! \mathrm d \alpha\ \exp[ \operatorname{i} \alpha x ] f(\alpha)\end{aligned}$$ for the Fourier transform of an arbitrary function $f(x)$, and an analogous notation for Fourier transforms of functions of two variables. Thus, the separability of the initial state $\hat{\varrho}(0)$, Eq. [(\[eq:InitialState\])]{}, allows to rewrite Eq. [(\[eq:prXP0\])]{} as $$\begin{aligned}
\label{eq:prXP1}
\mathrm{pr}(\mathcal{X},\mathcal{P};t) = \mathcal{F}\left\{ F_{\mathrm{S}} F_{\mathrm{P}} F_{\mathrm{B}} \right\}(\mathcal{X},\mathcal{P}) = \left( \mathcal{F}\left\{F_{\mathrm{S}}\right\} \ast \mathcal{F}\left\{F_{\mathrm{P}}\right\} \ast \mathcal{F}\left\{F_{\mathrm{B}}\right\} \right)(\mathcal{X},\mathcal{P})\end{aligned}$$ with the characteristic function of the system $$\begin{aligned}
\label{eq:Fpsi}
F_{\mathrm{S}}(\alpha_1,\alpha_2) \equiv \operatorname{tr} \left\{ \hat{\varrho}_{\mathrm{S}}(0) \exp[ -\operatorname{i} \{ \alpha_1 \hat{X}_{\mathrm{S}}(0) + \alpha_2 \hat{P}_{\mathrm{S}}(0) \} ] \right\},\end{aligned}$$ the characteristic function of the pointers $$\begin{aligned}
\label{eq:Fphi}
F_{\mathrm{P}}(\alpha_1,\alpha_2) \equiv \bra{\sigma_1} \otimes \bra{\sigma_2} \exp[ -\operatorname{i} \{ (\alpha_1,\alpha_2) \mathbf{B}(t) \hat{\mathbf{J}} \} ] \ket{\sigma_1} \otimes \ket{\sigma_2},\end{aligned}$$ and the characteristic function of the bath $$\begin{aligned}
\label{eq:Fbath}
F_{\mathrm{B}}(\alpha_1,\alpha_2) \equiv \operatorname{tr} \left\{ \hat{\varrho}_{\mathrm{B}}(0) \exp[ -\operatorname{i} \{ (\alpha_1,\alpha_2) (\mathbf{\Lambda} \star \hat{\boldsymbol{\xi}})(t) \} ] \right\}.\end{aligned}$$ The symbols $\star$ and $\ast$ are defined in Eqs. [(\[eq:star\])]{} and [(\[eq:ast\])]{}, respectively.
In the following, we calculate the Fourier transforms of the characteristic functions, Eqs. [(\[eq:Fpsi\])]{} to [(\[eq:Fbath\])]{}, one after another with the help of Wigner functions [@schleich2001]. This allows us to explicitly perform the convolution in Eq. [(\[eq:prXP1\])]{}. By straightforward integration of the resulting Gaussian expressions we finally arrive at the marginal probability distributions, Eq. [(\[eq:Pmarginal2\])]{}.
System
------
First of all, the Fourier transform of the characteristic function of the system, Eq. [(\[eq:Fpsi\])]{}, corresponds to the respective Wigner function $$\begin{aligned}
\label{eq:FpsiResult}
W_{\mathrm{S}}(x,p) = \mathcal{F}\left\{F_{\mathrm{S}}\right\}(x,p)\end{aligned}$$ of the system state. We can also use this connection between characteristic function and Wigner function to determine the Fourier transforms of the other two characteristic functions, Eqs. [(\[eq:Fphi\])]{} and [(\[eq:Fbath\])]{}, in the following.
Pointers
--------
The Wigner function $W_{\sigma}$ of a squeezed vacuum state $\ket{\sigma}$ with position variance $\sigma^2$ can be written as $$\begin{aligned}
\label{eq:Wsigma}
W_{\sigma}(x,p) \equiv \mathcal{G}\left(-\frac{1}{2 \sigma^2},-2 \sigma^2,0,\frac{1}{\pi};x,p\right),\end{aligned}$$ where we have introduced the general Gaussian function $$\begin{aligned}
\label{eq:Gaussian}
\mathcal{G}(a,b,c,n;x,p) \equiv n \exp \left[ a x^2 + b p^2 + c x p \right].\end{aligned}$$ Equation [(\[eq:Wsigma\])]{} corresponds to the Fourier transform of the generic characteristic function $$\begin{aligned}
F_{\sigma}'(\alpha_1,\alpha_2) \equiv \braket{ \sigma | \exp[ -\operatorname{i} ( \alpha_1 \hat{x} + \alpha_2 \hat{p} ) ] | \sigma }\end{aligned}$$ of squeezed vacuum states as we have used them for the pointer states, Eq. [(\[eq:initialpointerstate\])]{}, where $\hat{x}$ and $\hat{p}$ stand for the position and momentum observables, respectively, in the Hilbert space of $\ket{\sigma}$. Looking at this relation the other way round yields $$\begin{aligned}
F_{\sigma}'(x,p) = \mathcal{F}^{-1}\left\{ W_{\sigma} \right\}(x,p) = \mathcal{G}\left(-\frac{\sigma^2}{2},-\frac{1}{8 \sigma^2},0,1;x,p\right),\end{aligned}$$ where $\mathcal{F}^{-1}\left\{f\right\}(x)$ denotes the inverse Fourier transform of an arbitrary function $f(x)$ with $\mathcal{F}\left\{\mathcal{F}^{-1}\left\{f\right\}\right\}(x) = f(x)$, Eq. [(\[eq:FT\])]{}. Thus, the characteristic function of the pointers, Eq. [(\[eq:Fphi\])]{}, can be written as $$\begin{aligned}
F_{\mathrm{P}}(\alpha_1,\alpha_2) & = F_{\sigma_1}'( (\alpha_1,\alpha_2) \mathbf{B}(t) (1,0,0,0)^{T} , (\alpha_1,\alpha_2) \mathbf{B}(t) (0,0,1,0)^{T} ) \nonumber \\
& \phantom{=.} \times F_{\sigma_2}'( (\alpha_1,\alpha_2) \mathbf{B}(t) (0,1,0,0)^{T} , (\alpha_1,\alpha_2) \mathbf{B}(t) (0,0,0,1)^{T} )\end{aligned}$$ and one has $$\begin{aligned}
\label{eq:FphiResult}
\mathcal{F}\left\{F_{\mathrm{P}}\right\}(x,p) = \mathcal{G}\left(\frac{a_{\mathrm{P}}(t)}{d_{\mathrm{P}}(t)},\frac{b_{\mathrm{P}}(t)}{d_{\mathrm{P}}(t)},\frac{c_{\mathrm{P}}(t)}{d_{\mathrm{P}}(t)},\frac{1}{2 \pi \sqrt{ d_{\mathrm{P}}(t) }};x,p\right) \end{aligned}$$ with the coefficients
$$\begin{aligned}
\label{eq:abcp}
\begin{pmatrix} -2 b_{\mathrm{P}}(t) & c_{\mathrm{P}}(t) \\ c_{\mathrm{P}}(t) & -2 a_{\mathrm{P}}(t) \end{pmatrix} \equiv \mathbf{B}(t) \mathbf{V} \mathbf{B}^{T}(t)\end{aligned}$$
and $$\begin{aligned}
d_{\mathrm{P}}(t) \equiv 4 a_{\mathrm{P}}(t) b_{\mathrm{P}}(t) - c_{\mathrm{P}}^2(t),\end{aligned}$$
where $$\begin{aligned}
\label{eq:JJ}
\mathbf{V} \equiv \frac{1}{2} \left( \braket{\hat{\mathbf{J}} \hat{\mathbf{J}}^{T}} + \braket{\hat{\mathbf{J}} \hat{\mathbf{J}}^{T}}^{T} \right) = \begin{pmatrix} \sigma_1^2 & 0 & 0 & 0 \\ 0 & \sigma_2^2 & 0 & 0 \\ 0 & 0 & \frac{1}{4 \sigma_1^2} & 0 \\ 0 & 0 & 0 & \frac{1}{4 \sigma_2^2} \end{pmatrix}\end{aligned}$$ denotes the symmetrized pointer covariance matrix, which directly follows from the definition of the initial value vector $\hat{\mathbf{J}}$, Eq. [(\[eq:J\])]{}, and the initial pointer states, Eq. [(\[eq:initialpointerstate\])]{}. In conclusion, Eq. [(\[eq:FphiResult\])]{} can be calculated solely from the coefficient matrix $\mathbf{B}(t)$, Eq. [(\[eq:inferredsystem\])]{}, and the initial pointer position variances $\sigma_1^2$ and $\sigma_2^2$, Eq. [(\[eq:initialpointerstate\])]{}.
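As a cross-check of the Gaussian transform pair used above, the two-dimensional Fourier transform of the generic characteristic function $F_{\sigma}'$, taken with the convention of Eq. [(\[eq:FT\])]{} applied to each variable, should reproduce the Wigner function $W_{\sigma}$ of Eq. [(\[eq:Wsigma\])]{}. The short symbolic sketch below performs this check; it assumes that the SymPy library is available and is not part of the derivation itself.

```python
# Symbolic cross-check (a sketch, assuming SymPy is available): the 2D Fourier
# transform of F'_sigma(a1, a2) = exp(-sigma^2 a1^2 / 2 - a2^2 / (8 sigma^2)),
# with a prefactor 1/(2 pi) per variable as in Eq. (FT), should equal the
# squeezed-vacuum Wigner function W_sigma(x, p) of Eq. (Wsigma).
import sympy as sp

x, p, a1, a2 = sp.symbols('x p alpha_1 alpha_2', real=True)
sigma = sp.Symbol('sigma', positive=True)

F_char = sp.exp(-sigma**2 * a1**2 / 2 - a2**2 / (8 * sigma**2))

ft = sp.integrate(
    sp.integrate(F_char * sp.exp(sp.I * (a1 * x + a2 * p)), (a1, -sp.oo, sp.oo)),
    (a2, -sp.oo, sp.oo)) / (2 * sp.pi)**2

W_sigma = sp.exp(-x**2 / (2 * sigma**2) - 2 * sigma**2 * p**2) / sp.pi

print(sp.simplify(ft - W_sigma))   # expected output: 0
```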
Bath
----
The Wigner function of the thermal bath state $\hat{\varrho}_{\mathrm{B}}(0)$, Eq. [(\[eq:initialbathstate\])]{}, reads $$\begin{aligned}
W_{\mathrm{B}}(\mathbf{q},\mathbf{k}) & \equiv \operatorname{det} \left[ \tanh \left( \frac{\boldsymbol{\omega} \beta}{2} \right) \pi^{-1} \right] \nonumber \\
& \phantom{\equiv.} \times \exp \Bigg[ - \mathbf{q}^{T} \mathbf{m}^{\frac{1}{2}} \tanh \left( \frac{\boldsymbol{\omega} \beta}{2} \right) \boldsymbol{\omega} \mathbf{m}^{\frac{1}{2}} \mathbf{q} \nonumber \\
& \phantom{\equiv \times \exp \Bigg[.} - \mathbf{k}^{T} \mathbf{m}^{-\frac{1}{2}} \tanh \left( \frac{\boldsymbol{\omega} \beta}{2} \right) \boldsymbol{\omega}^{-1} \mathbf{m}^{-\frac{1}{2}} \mathbf{k} \Bigg].\end{aligned}$$ Analogously to the calculation of the characteristic function of the pointers, the generic characteristic bath function $$\begin{aligned}
F_{\mathrm{B}}'(\boldsymbol{\alpha}_{\mathbf{1}},\boldsymbol{\alpha}_{\mathbf{2}}) \equiv \operatorname{tr} \left\{ \hat{\varrho}_{\mathrm{B}}(0) \exp[ -\operatorname{i} ( \boldsymbol{\alpha}_{\mathbf{1}}^T \hat{\mathbf{q}} + \boldsymbol{\alpha}_{\mathbf{2}}^T \hat{\mathbf{k}} ) ] \right\}\end{aligned}$$ of baths in thermal equilibrium, which is connected to the characteristic bath function, Eq. [(\[eq:Fbath\])]{}, via $$\begin{aligned}
\label{eq:Fbath1}
F_{\mathrm{B}}(\alpha_1,\alpha_2) = F_{\mathrm{B}}' \left( (\mathbf{\Lambda} \star \mathbf{u_q})^T(t) \left( \alpha_1, \alpha_2 \right)^T, (\mathbf{\Lambda} \star \mathbf{u_k})^T(t) \left( \alpha_1, \alpha_2 \right)^T \right),\end{aligned}$$ can be written as $$\begin{aligned}
F_{\mathrm{B}}'(\mathbf{q},\mathbf{k}) & = \mathcal{F}^{-1}\left\{ W_{\mathrm{B}} \right\}(\mathbf{q},\mathbf{k}) \nonumber \\
& = \exp \Bigg[ - \frac{1}{4} \mathbf{q}^{T} \mathbf{m}^{-\frac{1}{2}} \coth \left( \frac{\boldsymbol{\omega} \beta}{2} \right) \boldsymbol{\omega}^{-1} \mathbf{m}^{-\frac{1}{2}} \mathbf{q} \nonumber \\
& \phantom{= \exp \Bigg[} - \frac{1}{4} \mathbf{k}^{T} \mathbf{m}^{\frac{1}{2}} \coth \left( \frac{\boldsymbol{\omega} \beta}{2} \right) \boldsymbol{\omega} \mathbf{m}^{\frac{1}{2}} \mathbf{k} \Bigg].\end{aligned}$$ Here we have made use of the abbreviations $$\begin{aligned}
\mathbf{u_q}(t) \equiv - \mathbf{g}^{T}(t) \mathbf{m}^{-\frac{1}{2}} \cos(\boldsymbol{\omega} t) \mathbf{m}^{\frac{1}{2}}\end{aligned}$$ and $$\begin{aligned}
\mathbf{u_k}(t) \equiv - \mathbf{g}^{T}(t) \mathbf{m}^{-\frac{1}{2}} \sin(\boldsymbol{\omega} t) \boldsymbol{\omega}^{-1} \mathbf{m}^{-\frac{1}{2}}.\end{aligned}$$ Explicitly performing the Fourier transform of Eq. [(\[eq:Fbath1\])]{} leads to $$\begin{aligned}
\label{eq:FbathResult}
\mathcal{F}\left\{F_{\mathrm{B}}\right\}(x,p) = \mathcal{G}\left(\frac{a_{\mathrm{B}}(t)}{d_{\mathrm{B}}(t)},\frac{b_{\mathrm{B}}(t)}{d_{\mathrm{B}}(t)},\frac{c_{\mathrm{B}}(t)}{d_{\mathrm{B}}(t)},\frac{1}{2 \pi \sqrt{ d_{\mathrm{B}}(t) }};x,p\right) \end{aligned}$$ with the coefficients
$$\begin{aligned}
\label{eq:abcb}
\begin{pmatrix} - 2 b_{\mathrm{B}}(t) & c_{\mathrm{B}}(t) \\ c_{\mathrm{B}}(t) & - 2 a_{\mathrm{B}}(t) \end{pmatrix} \equiv ( \mathbf{\Lambda} \star \mathbf{v} )(t) ( \mathbf{\Lambda} \star \mathbf{v} )^T(t)\end{aligned}$$
and $$\begin{aligned}
d_{\mathrm{B}}(t) \equiv 4 a_{\mathrm{B}}(t) b_{\mathrm{B}}(t) - c_{\mathrm{B}}^2(t),\end{aligned}$$
where $$\begin{aligned}
\label{eq:vv}
\mathbf{v}(t_1) \mathbf{v}^{T}(t_2) \equiv g(t_1) g(t_2) \boldsymbol{\nu}(t_1-t_2)\end{aligned}$$ with the noise kernel $\boldsymbol{\nu}(t)$, Eq. [(\[eq:nu\])]{}.
As a result, Eq. [(\[eq:FbathResult\])]{} is determined by the coefficient matrix $\mathbf{\Lambda}(t,s)$, Eq. [(\[eq:inferredsystem\])]{}, and the noise kernel $\boldsymbol{\nu}(t)$, Eq. [(\[eq:nu\])]{}. Note that for a turned off bath (i.e., $\mathbf{g}(t) = 0$ for all $t\geq0$), one has $d_{\mathrm{B}}(t) = 0$ and thus Eq. [(\[eq:FbathResult\])]{} is not well-defined. However, Eq. [(\[eq:FbathResult\])]{} can in this case be understood as a Dirac delta distribution so that the following considerations are still applicable with $a_{\mathrm{B}}(t) = 0$, $b_{\mathrm{B}}(t) = 0$ and $c_{\mathrm{B}}(t) = 0$.
Coefficients
------------
In a next step, we show how the pointer coefficients $a_{\mathrm{P}}(t)$, $b_{\mathrm{P}}(t)$ and $c_{\mathrm{P}}(t)$, Eq. [(\[eq:abcp\])]{}, and the bath coefficients $a_{\mathrm{B}}(t)$, $b_{\mathrm{B}}(t)$, and $c_{\mathrm{B}}(t)$, Eq. [(\[eq:abcb\])]{}, are related to the noise terms $\delta_{\mathcal{X}}^2(t)$ and $\delta_{\mathcal{P}}^2(t)$ and the correlation term $\delta_{\mathcal{X}\mathcal{P}}(t)$, Eq. [(\[eq:noiseoperators:cov\])]{}. For this purpose, we recall the noise operators $\hat{N}_{\mathcal{X}}(t)$ and $\hat{N}_{\mathcal{P}}(t)$, Eq. [(\[eq:noiseoperators\])]{}. According to Eqs. [(\[eq:noiseoperators:exp\])]{} and [(\[eq:s=0\])]{}, we can simplify their covariance matrix, Eq. [(\[eq:noiseoperators:cov\])]{}, to $$\begin{aligned}
\label{eq:noiseoperators:cov2}
\begin{pmatrix} \delta_{\mathcal{X}}^2(t) & \delta_{\mathcal{X}\mathcal{P}}(t) \\ \delta_{\mathcal{X}\mathcal{P}}(t) & \delta_{\mathcal{P}}^2(t) \end{pmatrix} = \Braket{ \begin{pmatrix} \hat{N}_{\mathcal{X}}^2(t) & \hat{N}_{\mathcal{X}\mathcal{P}}(t) \\ \hat{N}_{\mathcal{X}\mathcal{P}}(t) & \hat{N}_{\mathcal{P}}^2(t) \end{pmatrix} }.\end{aligned}$$ Here we have also recalled the symmetrized noise operator $\hat{N}_{\mathcal{X}\mathcal{P}}(t)$, Eq. [(\[eq:noiseoperators:NXP\])]{}. A straightforward calculation using the definition of the noise operators, Eq. [(\[eq:noiseoperators\])]{}, shows that $$\begin{aligned}
\label{eq:noiseoperators:cov:calc}
\begin{pmatrix} \delta_{\mathcal{X}}^2(t) & \delta_{\mathcal{X}\mathcal{P}}(t) \\ \delta_{\mathcal{X}\mathcal{P}}(t) & \delta_{\mathcal{P}}^2(t) \end{pmatrix} = \mathbf{B}(t) \mathbf{V} \mathbf{B}^{T}(t) + ( \mathbf{\Lambda} \star \mathbf{v} )(t) ( \mathbf{\Lambda} \star \mathbf{v} )^T(t)\end{aligned}$$ with the abbreviations introduced in Eqs. [(\[eq:JJ\])]{} and [(\[eq:vv\])]{}. Here we have also made use of the symmetry relation [@fleming2011b] $$\begin{aligned}
\braket{ \hat{\boldsymbol{\xi}}(t_1) \hat{\boldsymbol{\xi}}{}^{T}(t_2) } = \braket{ \hat{\boldsymbol{\xi}}(t_1) \hat{\boldsymbol{\xi}}{}^{T}(t_2) }^{T}\end{aligned}$$ for the stochastic force $\hat{\boldsymbol{\xi}}(t)$, Eq. [(\[eq:stochasticforce\])]{}. A comparison of Eq. [(\[eq:noiseoperators:cov:calc\])]{} with Eqs. [(\[eq:abcp\])]{} and [(\[eq:abcb\])]{} immediately reveals $$\begin{aligned}
\label{eq:noiseterms}
\begin{pmatrix} \delta_{\mathcal{X}}^2(t) & \delta_{\mathcal{X}\mathcal{P}}(t) \\ \delta_{\mathcal{X}\mathcal{P}}(t) & \delta_{\mathcal{P}}^2(t) \end{pmatrix} = \begin{pmatrix} - 2 ( b_{\mathrm{P}}(t) + b_{\mathrm{B}}(t) ) & c_{\mathrm{P}}(t) + c_{\mathrm{B}}(t) \\ c_{\mathrm{P}}(t) + c_{\mathrm{B}}(t) & - 2 ( a_{\mathrm{P}}(t) + a_{\mathrm{B}}(t) ) \end{pmatrix},\end{aligned}$$ which relates the noise and correlation terms on the left-hand side with the pointer and bath coefficients on the right-hand side.
Probability distributions
-------------------------
At this point, we can collect our previous results. By inserting Eqs. [(\[eq:FpsiResult\])]{}, [(\[eq:FphiResult\])]{} and [(\[eq:FbathResult\])]{} into Eq. [(\[eq:prXP1\])]{}, we arrive at the final expression for the joint probability distribution $$\begin{aligned}
\label{eq:prXP2}
\mathrm{pr}(\mathcal{X},\mathcal{P};t) = \left( W_{\mathrm{S}} \ast \mathcal{G}\left(-\frac{1}{2 \Delta_{\mathcal{X}}^2(t)},-\frac{1}{2 \Delta_{\mathcal{P}}^2(t)},\gamma(t),\frac{1}{2 \pi \sqrt{d(t)}}\right) \right) (\mathcal{X},\mathcal{P})\end{aligned}$$ with the coefficients
\[eq:coefficients\] $$\begin{aligned}
\Delta_{\mathcal{X}}^2(t) \equiv \frac{-d(t)}{2 ( a_{\mathrm{P}}(t) + a_{\mathrm{B}}(t) ) },\end{aligned}$$ $$\begin{aligned}
\Delta_{\mathcal{P}}^2(t) \equiv \frac{-d(t)}{2 ( b_{\mathrm{P}}(t) + b_{\mathrm{B}}(t) ) },\end{aligned}$$ $$\begin{aligned}
\gamma(t) \equiv \frac{ c_{\mathrm{P}}(t) + c_{\mathrm{B}}(t) }{ d(t) },\end{aligned}$$ and $$\begin{aligned}
d(t) \equiv 4 ( a_{\mathrm{P}}(t) + a_{\mathrm{B}}(t) ) ( b_{\mathrm{P}}(t) + b_{\mathrm{B}}(t) ) - ( c_{\mathrm{P}}(t) + c_{\mathrm{B}}(t) )^2.\end{aligned}$$
Finally, the marginal probability distributions, Eq. [(\[eq:Pmarginal2\])]{}, follow from Eq. [(\[eq:prXP2\])]{} by straightforward integration as defined in Eq. [(\[eq:Pmarginal\])]{} when we make use of the marginals
$$\begin{aligned}
\int \limits_{- \infty}^{+ \infty} \! \mathrm d p W_{\mathrm{S}}(x,p) = \braket{x|\hat{\varrho}_{\mathrm{S}}(0)|x}\end{aligned}$$
and $$\begin{aligned}
\int \limits_{- \infty}^{+ \infty} \! \mathrm d x W_{\mathrm{S}}(x,p) = \braket{p|\hat{\varrho}_{\mathrm{S}}(0)|p}\end{aligned}$$
of the initial Wigner function of the system $W_{\mathrm{S}}(x,p)$, Eq. [(\[eq:FpsiResult\])]{}, where $\braket{x|\hat{\varrho}_{\mathrm{S}}(0)|x}$ and $\braket{p|\hat{\varrho}_{\mathrm{S}}(0)|p}$ stand for the initial position distribution and the initial momentum distribution of the system state, respectively. In particular, the resulting marginal probability distributions, Eq. [(\[eq:Pmarginal2\])]{}, contain the noise terms $\delta_{\mathcal{X}}^2(t)$ and $\delta_{\mathcal{P}}^2(t)$, Eq. [(\[eq:noiseterms\])]{}.
Minimal noise term product {#sec:appendix:minimal noise term product}
==========================
We show in this appendix section that the product of the noise terms $\delta_{\mathcal{X}}^2(t)$ and $\delta_{\mathcal{P}}^2(t)$, Eq. [(\[eq:noiseoperators:cov\])]{}, obeys Ineq. [(\[ineq:dXdP\])]{}. Our proof is closely related to the considerations in Ref. [@arthurs1988]. Due to the definition of the noise terms, Eq. [(\[eq:noiseoperators:cov\])]{}, we can make use of Robertson’s uncertainty relation [@robertson1929] to establish the lower bound $$\begin{aligned}
\label{ineq:dxdpRobertson}
\delta_{\mathcal{X}}(t) \delta_{\mathcal{P}}(t) \geq \frac{1}{2} \left| \Braket{ \left[ \hat{N}_{\mathcal{X}}(t), \hat{N}_{\mathcal{P}}(t) \right] } \right|,\end{aligned}$$ which contains the commutator $$\begin{aligned}
\label{eq:Ncommutator}
\left[ \hat{N}_{\mathcal{X}}(t), \hat{N}_{\mathcal{P}}(t) \right] = \left[ \hat{\mathcal{X}}(t), \hat{\mathcal{P}}(t) \right] - \left[ \hat{\mathcal{X}}(t), \hat{P}_{\mathrm{S}}(0) \right] - \left[ \hat{X}_{\mathrm{S}}(0), \hat{\mathcal{P}}(t) \right] + \left[ \hat{X}_{\mathrm{S}}(0), \hat{P}_{\mathrm{S}}(0) \right]\end{aligned}$$ of the noise operators $\hat{N}_{\mathcal{X}}(t)$ and $\hat{N}_{\mathcal{P}}(t)$, Eq. [(\[eq:noiseoperators\])]{}. The first commutator on the right-hand side of Eq. [(\[eq:Ncommutator\])]{} vanishes due to Eq. [(\[eq:XPcommute\])]{}. The second and third commutators read $$\begin{aligned}
\left[ \hat{\mathcal{X}}(t), \hat{P}_{\mathrm{S}}(0) \right] = \left[ \hat{X}_{\mathrm{S}}(0), \hat{\mathcal{P}}(t) \right] = \left[ \hat{X}_{\mathrm{S}}(0), \hat{P}_{\mathrm{S}}(0) \right],\end{aligned}$$ since only the first term on the right-hand side of Eq. [(\[eq:inferredsystem\])]{} acts in the initial system’s Hilbert space. As a result, we have $$\begin{aligned}
\label{eq:Ncommutator:1}
\left[ \hat{N}_{\mathcal{X}}(t), \hat{N}_{\mathcal{P}}(t) \right] = - \left[ \hat{X}_{\mathrm{S}}(0), \hat{P}_{\mathrm{S}}(0) \right] = -i.\end{aligned}$$ Inserting Eq. [(\[eq:Ncommutator:1\])]{} into Ineq. [(\[ineq:dxdpRobertson\])]{} finally leads to Ineq. [(\[ineq:dXdP\])]{}.
We remark that it would also be possible to use Schrödinger’s uncertainty relation [@schroedinger1930] instead of Robertson’s uncertainty relation, Ineq. [(\[ineq:dxdpRobertson\])]{}. This approach would additionally incorporate the correlation term $\delta_{\mathcal{X}\mathcal{P}}(t)$, Eq. [(\[eq:noiseoperators:cov\])]{}, on the right-hand side of Ineq. [(\[ineq:dXdP\])]{}, i.e., $$\begin{aligned}
\delta_{\mathcal{X}}^2(t) \delta_{\mathcal{P}}^2(t) \geq \delta_{\mathcal{X}\mathcal{P}}^2(t) + \frac{1}{4}.\end{aligned}$$ We do not discuss this extension to Ineq. [(\[ineq:dXdP\])]{} in more detail. It could, however, serve as an interesting point of origin for further considerations.
References {#references .unnumbered}
==========
[^1]: For baths which are not initially separable from the system and the pointers, our method of calculating the entropy in \[sec:appendix:marginal probability distributions\] may fail. However, by choosing an appropriate time dependence of the Hamiltonian, Eq. [(\[eq:H\])]{}, a smooth switch-on of the bath can be introduced to realize a more physically reasonable model; see, e.g., Refs. [@fleming2011a; @fleming2011c] and references therein.
[^2]: A common choice for Eq. [(\[eq:I\])]{} is an Ohmic spectral density with $\mathbf{I}(\omega) \sim \omega$ and a cut-off term for high frequencies. See, e.g., Ref. [@weiss1999] for a more detailed discussion.
[^3]: We remark that we have assumed in Ref. [@heese2014] (in Eq. (55b)) that the pointer-based variance product, i.e., the product of the diagonal elements of the first term in Eq. [(\[eq:noiseoperators:cov:calc2\])]{}, is bound from below by $1/4$. While this assumption always holds true for a closed pointer-based measurement due to Ineq. [(\[ineq:dXdP\])]{}, it is in fact not generally valid for an open pointer-based measurement.
---
abstract: 'We describe various studies relevant for top physics at future circular collider projects currently under discussion. We show how highly-massive top-antitop systems produced in proton-proton collisions at a center-of-mass energy of 100 TeV could be observed and employed for constraining top dipole moments, investigate the reach of future proton-proton and electron-positron machines to top flavor-changing neutral interactions, and discuss top parton densities.'
address:
- 'CERN, PH-TH, CH-1211 Geneva 23, Switzerland'
- 'Institut Pluridisciplinaire Hubert Curien/Département Recherches Subatomiques, Université de Strasbourg/CNRS-IN2P3, 23 rue du Loess, F-67037 Strasbourg, France'
author:
- Benjamin Fuks
bibliography:
- 'fuks.bib'
title: |
Opportunities with top quarks at\
future circular colliders
---
A future circular collider facility at CERN
===========================================
The Large Hadron Collider (LHC) at CERN has delivered very high-quality results during its first run in 2009-2013, with in particular the discovery of a Higgs boson with a mass of about 125 GeV in 2012. Unfortunately, no hint for the presence of particles beyond the Standard Model has been observed. Deviations from the Standard Model are however still allowed and expected to show up either through precision measurements of indirect probes, or directly at collider experiments. In this context, high precision would require pushing the intensity frontier further and further, whereas bringing the energy frontier beyond the LHC regime would provide handles on new kinematical thresholds. Along these lines, a design study for a new accelerator facility intended to operate at CERN in the post-LHC era has been undertaken. This study focuses on a machine that could collide protons at a center-of-mass energy of $\sqrt{s}=100$ TeV, be built in a tunnel of about 80-100 km in the Geneva area and benefit from the existing infrastructure at CERN [@fcchh]. A possible intermediate step in this project could include an electron-positron machine with a collision center-of-mass energy ranging from 90 GeV (the $Z$-pole) to 350 GeV (the top-antitop threshold), with additional working points at $\sqrt{s}=160$ GeV (the $W$-boson pair production threshold) and 240 GeV (a Higgs factory) [@fccee]. In parallel, highly energetic lepton-hadron and heavy ion collisions are also under investigation.
Both the above-mentioned future circular collider (FCC) setups are expected to deliver a copious number of top quarks. More precisely, one trillion of them are expected to be produced in 10 ab$^{-1}$ of proton-proton collisions at $\sqrt{s}=100$ TeV and five million of them in the same integrated luminosity of electron-positron collisions at $\sqrt{s}=350$ GeV (which will in particular allow for top mass and width measurements at an accuracy of about 10 MeV [@Gomez-Ceballos:2013zzn]). This consequently opens the door to an exploration, with unprecedented accuracy, of the properties of the top quark, which is widely considered as a sensitive probe of new physics given its mass close to the electroweak scale.
Top pair production in 100 TeV proton-proton collisions
=======================================================
The top quark pair-production cross section for proton-proton collisions at $\sqrt{s}=100$ TeV reaches 29.4 nb at next-to-leading-order accuracy in QCD, as calculated with [[MadGraph5]{}a[MC@NLO]{}]{} [@Alwall:2014hca] and the NNPDF 2.3 set of parton densities [@Ball:2012cx]. A very large number of $t\bar t$ events are thus expected to be produced for integrated luminosities of several ab$^{-1}$, with a significant number of them featuring a top-antitop system whose invariant mass lies in the multi-TeV range. Whereas kinematical regimes never probed up to now will become accessible, standard $t\bar t$ reconstruction techniques may not be sufficient to observe such highly boosted top quarks, whose transverse momentum ($p_T$) easily exceeds a few TeV. In addition, it is not clear how current boosted top tagging techniques, developed in the context of the LHC, could be applied. Consequently, it could be complicated to distinguish a signal made of a pair of highly boosted top quarks from the overwhelming multijet background.
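To put this number into perspective, it translates directly into an expected event count via $N_{t\bar t} = \sigma_{t\bar t} \int {\cal L}\, {\rm d}t$, i.e., $$\begin{aligned}
N_{t\bar t} \approx 29.4~{\rm nb} \times 10~{\rm ab}^{-1} = 29.4~{\rm nb} \times 10^{10}~{\rm nb}^{-1} \approx 3 \times 10^{11}~t\bar t~{\rm pairs} ,\end{aligned}$$ in line with the order of magnitude of one trillion top quarks quoted in the previous section.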
![\[fig:ttbar\]*Left*: distributions of the $z$ variable of Eq. (\[eq:z\]) for proton-proton collisions at $\sqrt{s}=100$ TeV. We present predictions for top-antitop (red dashed) and multijet (blue plain) production, after selecting events as described in the text. We have fixed $M_{jj}^{\rm cut}$ to 6 TeV and normalized the results to 100 fb$^{-1}$. *Right*: constraints on the top dipole moments derived from measurements at the Tevatron and the LHC (gray), and from predictions at the LHC (red, $\sqrt{s}=14$ TeV) and the FCC (black, $M_{jj}^{\rm cut} = 10$ TeV).](zz "fig:"){width=".57\columnwidth"} ![](dip "fig:"){width=".41\columnwidth"}
To demonstrate that this task is already manageable with basic considerations [@inprep], we have analyzed, by means of the [MadAnalysis]{} 5 package [@Conte:2012fm], leading-order hard-scattering events simulated with [[MadGraph5]{}a[MC@NLO]{}]{} and matched to the parton showering and hadronization algorithms included in [Pythia]{} 8 [@Sjostrand:2007gs]. We have considered, in our analysis, jets with a $p_T > 1$ TeV that have been reconstructed with [FastJet]{} [@Cacciari:2011ma] and an anti-$k_T$ jet algorithm with a radius parameter $R=0.2$ [@Cacciari:2008gp]. We preselect events featuring at least two jets with a pseudorapidity $|\eta|<2$ and at least one muon lying in a cone of $R=0.2$ of any of the selected jets. The invariant mass of the system comprised of the two leading jets is additionally constrained to be larger than a threshold $M_{jj}^{\rm cut}$. We then investigate the properties of the selected muons relative to those of the related jet. In this context, we present on Figure \[fig:ttbar\] (left) the distribution in a $z$ variable defined as the ratio of the muon transverse momentum $p_T(\mu_i)$ to the corresponding jet transverse momentum $p_T(j_i)$, maximized over the $n$ final-state muons of the event, $$\begin{aligned}
\label{eq:z}
z \equiv \max_{i=1,\dots,n} \frac{p_T(\mu_i)}{p_T(j_i)} \ .\end{aligned}$$ Muons arising from multijet events are mostly found to carry a small fraction of the jet transverse momentum, as expected from their production mechanism ($B$- and $D$-meson decays). This contrasts with muons induced by prompt decays of top quarks, which can gather a significant fraction of the top $p_T$. Requiring the $z$ variable to be larger than an optimized threshold $z^{\rm cut}$, it becomes possible to obtain signal over background ratios $S/B$ of order one and extract the $t\bar t$ signal at the $5\sigma$ level (defined by $S/\sqrt{S+B}$). We study, in Table \[tab:ttbar\], the $z^{\rm cut}$ value for different invariant-mass thresholds $M_{jj}^{\rm cut}$, and present the associated $S/B$ ratio together with the luminosity necessary for a signal extraction at $5\sigma$.
  $M_{jj}^{\rm cut}$   $z^{\rm cut}$   $S/B$   ${\cal L}_{5\sigma}$
  -------------------- --------------- ------- ----------------------
  6 TeV                0.5             0.39    36.1 fb$^{-1}$
  10 TeV               0.5             0.74    202 fb$^{-1}$
  15 TeV               0.4             0.25    2.35 ab$^{-1}$

  : \[tab:ttbar\] Optimized $z^{\rm cut}$ value, signal over background ratio and luminosity required for a $5\sigma$ signal extraction, for different $M_{jj}^{\rm cut}$ thresholds.
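As a rough illustration of the two ingredients entering this table, the following sketch (in Python, with purely hypothetical event content and cross-section values rather than the output of our simulation) shows how the $z$ discriminant of Eq. (\[eq:z\]) and the luminosity needed for a $5\sigma$ observation can be evaluated.

```python
def z_variable(muon_jet_pairs):
    """z discriminant of Eq. (eq:z): maximum, over the matched muons of an event,
    of the ratio of the muon pT to the pT of the jet it lies in (dR < 0.2)."""
    if not muon_jet_pairs:
        return 0.0
    return max(pt_mu / pt_jet for pt_mu, pt_jet in muon_jet_pairs)

def lumi_for_5sigma(sigma_s, sigma_b, n_sigma=5.0):
    """Luminosity (in fb^-1) such that S/sqrt(S+B) = n_sigma, for signal and
    background fiducial cross sections sigma_s and sigma_b given in fb."""
    # S = sigma_s * L and B = sigma_b * L, so n_sigma = sigma_s * sqrt(L) / sqrt(sigma_s + sigma_b)
    return n_sigma**2 * (sigma_s + sigma_b) / sigma_s**2

# Hypothetical event: two muons matched to jets, with (pT_muon, pT_jet) in GeV.
event = [(850.0, 2400.0), (120.0, 2100.0)]
print(z_variable(event) > 0.5)                 # does the event pass z^cut = 0.5?

# Hypothetical fiducial cross sections (fb) giving S/B ~ 0.4 after all cuts.
print(lumi_for_5sigma(sigma_s=2.5, sigma_b=6.4))
```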
On Figure \[fig:ttbar\] (right), we illustrate how a measurement of the fiducial cross section related to the above selection with $M_{jj}^{\rm cut} = 10$ TeV could be used to constrain the top chromomagnetic and chromoelectric dipole moments $d_V$ and $d_A$. In our conventions, they are defined by $$\mathcal{L} = \frac{g_s}{m_t}\, \bar t\, \sigma^{\mu\nu}\left(d_V + i\, d_A \gamma_5\right) T_a\, t\; G_{\mu\nu}^a\ ,$$ where $g_s$ denotes the strong coupling, $T^a$ the fundamental representation matrices of $SU(3)$, $m_t$ the top mass and $G_{\mu\nu}$ the gluon field strength tensor. We have imported this Lagrangian into [[MadGraph5]{}a[MC@NLO]{}]{} by using [FeynRules]{} [@Alloul:2013bka; @Degrande:2011ua] to estimate the new physics contributions to the $t\bar t$ signal and extract bounds on the top dipole moments. While current limits have been derived from total rate measurements at the Tevatron and the LHC (gray), the FCC predictions with $M_{jj}^{\rm cut} = 10$ TeV (black) correspond to 1 ab$^{-1}$ of collisions, and we have superimposed the expectation for the future LHC run at $\sqrt{s} = 14$ TeV (red) after setting $M_{jj}^{\rm cut} = 2$ TeV and using standard top tagging efficiencies [@CMS:2014fya]. The FCC is hence expected to constrain the top dipole moments to lie in the ranges $-0.0022 < d_V < 0.0031$ and $|d_A| < 0.0026$, which gets close to the reach of indirect probes like the electric dipole moment of the neutron ($|d_A|<0.0012$ [@Kamenik:2011dk]) or $b\to s \gamma$ transitions ($-0.0038 < d_V < 0.0012$ [@Martinez:2001qs]).
Flavor-changing neutral interactions of the top quark
=====================================================
In the Standard Model, the top flavor-changing couplings to the neutral bosons are suppressed due to the unbroken QCD and QED symmetries and the GIM mechanism. Many new physics extensions however predict an enhancement of those interactions, whose hints are therefore searched for either in the anomalous decay of a $t\bar t$ pair, or in anomalous single top production. An effective approach for describing those effects consists of supplementing the Standard Model Lagrangian by dimension-six operators that give rise to a basis of top anomalous couplings that can be chosen minimal [@AguilarSaavedra:2008zc]. Taking the example of the three operators $$\mathcal{L}_6 = \frac{\bar c_{u\Phi}}{\Lambda^2}\, \Phi^\dagger\Phi\ \bar{Q}_L \tilde\Phi\, u_R
 + \frac{\bar c_{uW}}{\Lambda^2}\, \big(\bar{Q}_L\, T_{2k}\, \tilde\Phi\big)\, \sigma^{\mu\nu} u_R\, W_{\mu\nu}^k
 + \frac{\bar c_{uG}}{\Lambda^2}\, \bar{Q}_L\, \tilde\Phi\, \sigma^{\mu\nu} T_a\, u_R\, G_{\mu\nu}^a + {\rm h.c.}\ , \label{eq:l6}$$ where flavor indices are understood for clarity, we indeed observe that flavor-changing top couplings to the Higgs boson, the $Z$-boson and the gluon are induced after electroweak symmetry breaking. In our notation, $\Lambda$ denotes the new physics scale, $\Phi$ ($Q$) a weak doublet of Higgs (left-handed quark) fields, $\tilde\Phi$ its charge conjugate and $u_R$ a right-handed up-type quark field. Assuming Wilson coefficients $\bar c$ of order 1, current ATLAS and CMS data constrain $\Lambda$ at the level of 1 TeV (4-5 TeV in the case of the ${\cal O}_{{\scriptscriptstyle}G}$ operator) [@topatlas; @b2gcms]. A naive estimate of the FCC sensitivity to $\Lambda$ can then be derived from these numbers by rescaling both the signal and background with the relevant cross sections and luminosities. Assuming 10 ab$^{-1}$ of 100 TeV proton-proton collisions, one finds that the LHC limits could be increased by a factor of about 20 [@zupan].
Limits on new physics operators such as those of Eq. (\[eq:l6\]) could also be obtained in electron-positron collisions at high energies by relying, for instance, on single top production [@Khanpour:2014xla]. Signal and background have been simulated using the tools introduced in the previous section, together with a modeling of the detector effects by Gaussian smearing. Exploiting the kinematical configuration of the signal with a multivariate technique has then been found sufficient to extract the signal from the background and to derive bounds on the anomalous top interactions, which have been translated into limits on rare top decay branching ratios on Figure \[fig:fcncpdf\] (left).
Top parton density in the proton
================================
In proton-proton collisions at $\sqrt{s}=100$ TeV, all quarks including the top would appear essentially massless, so that it may seem appropriate to investigate processes with initial-state top quarks. While a six-flavor-number scheme (6FNS) allows for the resummation of collinear logarithms of the process scale over the top mass into a top density, this is only justified when these logarithms are large enough to spoil the convergence of the five-flavor-number scheme (5FNS) perturbative calculation. Both the 5FNS and 6FNS computations can however be consistently matched to guarantee accurate predictions for any scale. In the ACOT scheme [@Aivazis:1993pi], the 5FNS and 6FNS results are summed and the matching is achieved by subtracting from the top density $f_t$ its leading logarithmic approximation $f_t^0$ that is already included in the 5FNS calculation, $$\begin{aligned}
\sigma(pp\to X^0) =&\ \big(f_t-f_t^0\big)\otimes\big(f_{\bar t}-f_{\bar t}^0\big)\otimes\hat\sigma(t\bar t \to X^0) + \big(f_t-f_t^0\big)\otimes f_g \otimes\hat\sigma(tg \to X^0 t)\nonumber\\
& + \big(f_{\bar t}-f_{\bar t}^0\big)\otimes f_g \otimes\hat\sigma(g\bar t \to X^0 \bar t) + f_g\otimes f_g \otimes\hat\sigma(gg \to X^0 t\bar t)\ , \label{eq:acot}\end{aligned}$$ where $X^0$ denotes any electrically and color neutral final state and $f_g$ the gluon density.
![\[fig:fcncpdf\]*Left*: limits on top decays into a neutral electroweak boson and a lighter quark. Current LHC bounds have been indicated, together with the expectation for 10 ab$^{-1}$ of electron-positron collisions at $\sqrt{s}=240$ GeV. Figure taken from Ref. [@Khanpour:2014xla]. *Right*: Total cross section for the production of a heavy Higgs boson in the 6FNS (red), 5FNS (blue) and ACOT scheme (black) for proton-proton collisions at $\sqrt{s}=100$ TeV. Figure taken from Ref. [@Han:2014nja].](brza "fig:"){width=".45\columnwidth"} ![](pdf "fig:"){width=".53\columnwidth"}
Figure \[fig:fcncpdf\] (right) shows the total cross section for the production of a heavy neutral Higgs boson [@Han:2014nja], comparing leading-order predictions in the 5FNS (blue), 6FNS (red) and ACOT scheme (black) for proton-proton collisions at $\sqrt{s}=100$ TeV. For small Higgs masses, the subtraction of the leading logarithmic terms in Eq. (\[eq:acot\]) cancels almost entirely the 6FNS contribution, the ACOT result essentially reducing to the 5FNS one. In this region, the logarithms in the top mass are small, so that their resummation into a top density is not justified. For larger masses, they start to play a role, although using the 6FNS alone still yields a large overestimation of the cross section. Predictions including top densities should consequently be matched to the 5FNS result, as also found for charged Higgs production [@Dawson:2014pea], and not employed as such.
Conclusions
===========
We have discussed three top physics cases that are relevant for collisions at future circular colliders. We have shown how highly massive top-antitop systems could be observed in proton-proton collisions at a center-of-mass energy of 100 TeV and further used to constrain top dipole moments. We have then sketched how constraints on top flavor-changing neutral interactions would improve both at future proton-proton and electron-positron colliders, and finally investigated the issue of the top parton density, relevant for proton-proton collisions at energies much larger than the top mass.
I am grateful to the conference organizers for setting up this very nice Top2014 event, as well as to J.A. Aguilar-Saavedra, M.L. Mangano and T. Melia for enlightening discussions on top physics at the FCC. I am also grateful to T. Han, H. Khanpour, S. Khatibi, M. Khatiri Yanehsari, M.M. Najafabadi, J. Sayre and S. Westhoff for providing material included in this report. This work has been partly supported by the LHC Physics Center at CERN (LPCC).
References {#references .unnumbered}
==========
---
abstract: 'We model the newly synthesized magic-angle twisted bilayer-graphene superconductor with two $p_{x,y}$-like Wannier orbitals on the superstructure honeycomb lattice, where the hopping integrals are constructed via the Slater-Koster formalism guided by symmetry analysis. The characteristics exhibited by this simple model are well consistent with both rigorous calculations and experimental observations. A van Hove singularity and Fermi-surface (FS) nesting are found at the doping levels relevant to the correlated insulator and unconventional superconductivity revealed experimentally, based on which we identify the two phases as weak-coupling FS instabilities. Then, with repulsive Hubbard interactions turned on, we perform random-phase-approximation (RPA) based calculations to identify the electron instabilities. As a result, we find chiral $d+id$ topological superconductivity bordering the correlated insulating state near half-filling, which is identified as a noncoplanar chiral spin-density-wave (SDW) ordered state featuring the quantum anomalous Hall effect. The phase diagram obtained in our approach is qualitatively consistent with experiments.'
author:
- 'Cheng-Cheng Liu'
- 'Li-Da Zhang'
- 'Wei-Qiang Chen'
- Fan Yang
title: 'Chiral Spin Density Wave and $d+id$ Superconductivity in the Magic-Angle-Twisted Bilayer Graphene'
---
The newly revealed “high-temperature superconductivity (SC)"[@SC] in the “magic-angle" twisted bilayer-graphene (MA-TBG) has attracted great research interest[@Volovik2018; @Roy2018; @Po2018; @Xu2018; @Yuan2018; @Baskaran2018; @Phillips2018; @Kivelson2018]. In such a system, the low energy electronic structure can be dramatically changed by the twist. It was shown that some low energy flat bands, which are well separated from the other high energy bands, appear when the twist angle is around $1.1^{\circ}$. A correlated insulating state is observed when the flat bands are near half-filled [@Mott]. Doping this correlated insulator leads to SC with a critical temperature $T_c$ of up to 1.7 K. This system looks similar to the cuprates in terms of the phase diagram and the high ratio of $T_c$ over the Fermi temperature $T_F$. In fact, it was argued that the insulating state was a Mott insulator, while the MA-TBG was an analog of the cuprate superconductors. Since the structure of the MA-TBG is in situ tunable, it was proposed that this system can serve as a good platform to study the pairing mechanism of high-$T_c$ SC, one of the biggest challenges of condensed-matter physics.
However, the viewpoint that the SC in MA-TBG is induced by doping a Mott insulator suffers from the following three inconsistencies with experimental results. Firstly, the so-called “Mott-gap" extrapolated from the temperature-dependent conductance is just about 0.31 meV[@Mott], which is much lower than the band width of the low energy emergent flat bands ($\sim$10 meV). Such a tiny “Mott-gap" can hardly be consistent with “Mott-physics". Secondly, the behavior upon doping into this insulating phase is different from that of a doped Mott insulator, as analyzed in the following for the positive filling as an example. In the case of electron doping with respect to half-filling, the system has a small Fermi pocket with area proportional to the doping, which is consistent with a doped Mott insulator[@SC]. However, in the hole doping case, slight doping leads to a large Fermi surface (FS) with area proportional to the electron concentration of the whole bands instead of the hole concentration with respect to half-filling[@SC; @Mott]. Such behavior obviously conflicts with “Mott-physics". Thirdly, some samples which exhibit the so-called “Mott-insulating" behavior at high temperature become superconducting upon lowering the temperature[@Notice]. Such behavior is more likely caused by competition between SC and some other kind of order, such as density waves, than by “Mott-physics".
In this Letter, we study the problem from a weak-coupling approach, wherein electrons on the FS acquire effective attractions through exchanging spin fluctuations, which leads to Cooper pairing. After analyzing the characteristics of the low energy emergent band structure, an effective $p_{x,y}$-orbital tight-binding model[@Yuan2018] on the emergent honeycomb lattice is adopted, but with the hopping integrals newly constructed via the Slater-Koster formalism[@Slater1954], which is re-derived based on the symmetry of the system (Supplementary Material I[@SupplMater]). The characteristics of the constructed band structure are qualitatively consistent with both the rigorous multi-band tight-binding results[@Nguyen2017; @Moon2012] and experiments[@SC; @Mott]. Moreover, the band degeneracy at high-symmetry points and lines is compatible with the corresponding irreducible representations[@Yuan2018]. Then, after the Hubbard-Hund interaction is turned on, we perform RPA based calculations to study the electron instabilities. Our results identify the correlated insulator near half-filling as an FS-nesting-induced noncoplanar chiral SDW insulator, featuring the quantum anomalous Hall effect (QAHE). Bordering this SDW insulator is a chiral $d+id$ topological superconducting state. The obtained phase diagram is qualitatively consistent with experiments.
For the MA-TBG, the small twist angle between the two graphene layers causes a Moire pattern which results in a much enlarged unit cell; consequently, thousands of energy bands have to be taken into account[@Nguyen2017; @Moon2012], and the low-energy physics is dramatically changed[@Nguyen2017; @Moon2012; @Fang2015; @Santos2007; @Santos2012; @Shallcross2008; @Bistritzer2011; @Bistritzer2010; @Uchida2014; @Mele2011; @Mele2010; @Sboychakov2015; @Morell2010; @Trambly2010; @Latil2007; @Trambly2012; @Gonza2013; @Luis2017; @Cao2016; @Ohta2012; @Kim2017; @Huder2018; @Li2017]. Remarkably, four low energy nearly-flat bands with a total bandwidth of about 10 meV emerge, which are well isolated from the high energy bands. Since both the correlated insulating and the superconducting phases emerge when these low energy bands are partially filled, it is urgent to provide an effective model with the relevant degrees of freedom to capture the low energy band structure.
By analyzing the degeneracy and representation of the flat bands at all three of the high symmetry points $\Gamma$, $K$ and $M$, a honeycomb lattice rather than the triangular one should be adopted to model the low-energy physics of MA-TBG[@Po2018; @Yuan2018]. The emergent honeycomb lattice consists of two sublattices originating from different layers. Further symmetry analysis shows that the relevant Wannier orbitals on each site have $p_x$ and $p_y$ symmetry[@Yuan2018]. Therefore, we can construct the hopping integrals between the $p_{x,y}$-like orbitals on the honeycomb lattice via the Slater-Koster formalism[@Slater1954] based on symmetry analysis[@SupplMater], which reflects coexisting $\sigma$ and $\pi$ bondings[@Wu2008a; @Wu2008b; @Zhang2014; @Liu2014; @Yang2015]. Our tight-binding (TB) model up to the next-nearest-neighbor (NNN) hopping thus obtained reads $$\label{tb}
H_{tb}=\sum_{i\mu,j\nu,\sigma}t_{i\mu,j\nu}c_{i\mu\sigma}^{\dagger}c_{j\nu\sigma}-\mu_c\sum_{i\mu\sigma}c_{i\mu\sigma}^{\dagger}c_{i\mu\sigma}.$$ Here $\mu,\nu=x,y$ represent the $p_{x},p_{y}$ orbitals shown in Fig. \[band\](a), $i,j$ stand for the site and $\mu_c$ is the chemical potential determined by the filling $\delta\equiv n/n_s-1$ relative to charge neutrality. Here $n$ is the average electron number per unit cell, $n_s=4$ is the $n$ for charge neutrality. The hopping integral $t_{i\mu,j\nu}$ can be obtained as $$\label{slater_koster}
t_{i\mu,j\nu}=t_{\sigma}^{ij}\cos\theta_{\mu,ij}\cos\theta_{\nu,ij}+t_{\pi}^{ij}\sin\theta_{\mu,ij}\sin\theta_{\nu,ij},$$ where $\theta_{\mu,ij}$ denotes the angle from the direction of $\mu$ to that of $\mathbf{r}_{j}-\mathbf{r}_{i}$. The Slater-Koster parameters $t_{\sigma/\pi}^{ij}$ represent the hopping integrals contributed by $\sigma/\pi$- bondings. More details about the band structure are introduced in Supplementary Materials II[@SupplMater].
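As an illustration of Eq. (\[slater\_koster\]), the following minimal sketch (in Python; it is not part of our actual calculation) evaluates the $2\times2$ hopping matrix between the $p_{x,y}$-like orbitals for a bond making an angle $\varphi$ with the $x$ axis, so that $\theta_{x,ij}=\varphi$ and $\theta_{y,ij}=\varphi-\pi/2$, consistent with the angles used in the Supplementary Material[@SupplMater].

```python
import numpy as np

def sk_hopping(t_sigma, t_pi, phi):
    """2x2 hopping matrix t_{mu,nu} of Eq. (2) between (p_x, p_y)-like orbitals,
    for a bond direction making an angle phi with the x axis."""
    thetas = (phi, phi - np.pi / 2)          # theta_{x,ij} and theta_{y,ij}
    return np.array([[t_sigma * np.cos(tm) * np.cos(tn) + t_pi * np.sin(tm) * np.sin(tn)
                      for tn in thetas] for tm in thetas])

# Nearest-neighbor parameters from the caption of Fig. 1; the NN bond d_2^A makes an
# angle of 2*pi/3 with the x axis, reproducing the matrix E(d_2^A) of the Supplementary Material.
t_s1, t_p1 = 2.0, 2.0 / 1.56                 # in meV
print(sk_hopping(t_s1, t_p1, 2 * np.pi / 3))
```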
![(a) Schematic diagram for our model. The dashed rhombus labels the unit cell of the emergent honeycomb lattice with the $p_{x},p_{y}$-like Wannier orbitals on each site. (b) Band structure and (c) DOS of MA-TBG. The red, black and green horizontal dashed lines in (c) label the filling $\delta$ of $-\frac{1}{2}$, 0 and $\frac{1}{2}$ respectively. The Slater-Koster parameters $t_{\sigma/\pi}^{ij}$ are chosen as $t^{(1)}_{\sigma}=2$ meV, $t^{(1)}_{\pi}=t^{(1)}_{\sigma}/1.56$, $t^{(2)}_{\sigma}=t^{(1)}_{\sigma}/7$, and $t^{(2)}_{\pi}=t^{(2)}_{\sigma}/1.56$. Here the superscript (1)/(2) represents NN or NNN bondings respectively.[]{data-label="band"}](model_band_DOS.pdf){width="48.00000%"}
The band structure of our TB model (\[tb\]) is shown in Fig. \[band\](b). Investigating the degeneracy pattern, one finds that at the $\Gamma$-point each pair of bands is doubly degenerate, the $M$-point is non-degenerate, and at the two $K$-points both degenerate Dirac crossings and non-degenerate splittings (Dirac mass generation) coexist. Such a degeneracy pattern is consistent with both the symmetry representations[@Yuan2018; @Po2018] and rigorous results[@Nguyen2017; @Moon2012]. Note that the Dirac mass generation is important[@Yuan2018] in understanding the quantum oscillation experiment, wherein a 4-fold Landau level degeneracy (including spin degeneracy) is observed near charge neutrality[@SC]. The model parameters $t_{\sigma/\pi}^{ij}$ (introduced in the caption of Fig. \[band\]) are tuned so that the renormalized Fermi velocity (which is about $\frac{1}{25}$ of that of monolayer graphene), as well as the total band width (about 10 meV), are consistent with experiments. Note that due to the presence of NNN-hoppings, the band structure is particle-hole asymmetric, with the negative (n-) energy part flatter than the positive (p-) one, consistent with experiment[@SC]. The density of states (DOS) shown in Fig. \[band\](c) suggests that the $\pm\frac{1}{2}$ fillings relevant to experiments are near the Van-Hove (VH) fillings $\delta_V$ ($\approx\pm 0.425$) introduced below.
![The evolution of FS. (a)-(c) FS at fillings -0.40, -0.425 ($\delta_V$) and -0.45. (d)-(f) FS at fillings 0.41, 0.425 ($\delta_V$) and 0.44. The central hexagons in every plot in black dashed line indicate the Brillouin Zone. The FS-nesting vectors $\bm{Q}_{\alpha}$ $(\alpha=1,2,3)$ are marked in (b) and (e).[]{data-label="FS"}](FS_evolution.pdf){width="48.00000%"}
The evolution of the FS with filling near $\pm\frac{1}{2}$ is shown in Fig. \[FS\]. One finds that in both the n- and p- parts, a Lifshitz transition takes place at some critical filling $\delta_V$, which changes the FS topology. At the Lifshitz transition point, there is a VH singularity (VHS) at the three $M$-points, and the FS is nearly perfectly nested, as shown in Fig. \[FS\](b) and (e), with the three marked nesting vectors $\bm{Q}_{\alpha}$ $(\alpha=1,2,3)$ also near the three $M$-points. One finds that the FS-nesting is asymmetric before and after the Lifshitz transition: before $\delta_V$, there are three FS patches with poor nesting; after $\delta_V$, two FS patches are left, with the outer one well nested. This asymmetry in the FS-nesting is closely related to the asymmetry in the phase diagram studied below. Note that the $|\delta_V|$ for the p- and n- parts are only approximately equal. As shown in Supplementary Materials II[@SupplMater], these characteristics do not change appreciably with the model parameters within a reasonable range, and $\delta_V$ is generally near $\pm\frac{1}{2}$.
It is proposed here that the SC detected in the MA-TBG is driven not by electron-phonon coupling but by electron-electron interactions (see Supplementary Materials III[@SupplMater] for the analysis). We adopt the following repulsive Hubbard-Hund model proposed in Ref. [@Yuan2018], $$\begin{aligned}
\label{model}
H=&H_{tb}&+H_{int}\nonumber\\
H_{int}=&U&\sum_{i\mu}n_{i\mu\uparrow}n_{i\mu\downarrow}+V\sum_{i}
n_{ix}n_{iy}+J_{H}\sum_{i}\Big[\sum_{\sigma\sigma^{\prime}}\nonumber\\&c^{\dagger}_{ix\sigma}&c^{\dagger}_{iy\sigma^{\prime}}
c_{ix\sigma^{\prime}}c_{iy\sigma}+(c^{\dagger}_{ix\uparrow}c^{\dagger}_{ix\downarrow}
c_{iy\downarrow}c_{iy\uparrow}+h.c.)\Big]\end{aligned}$$ where $U=V+2J_H$. Adopting $U=1.5$ meV, we have considered both $J_H=0.1U$ and $J_H=0$, with the two cases giving qualitatively the same results. Strictly speaking, the Hubbard-Hund interactions $H_{int}$, which describe atomic interactions, do not apply to our extended effective orbitals. However, as will be seen, the electron instabilities here are mainly determined by the VHS and the FS-nesting, and are not strongly affected by the concrete form of the interactions, as long as they are repulsive. Therefore, the model (\[model\]) is a good starting point.
We adopt the standard multi-orbital RPA approach[@RPA1; @RPA2; @RPA3; @RPA4] to study the electron instabilities of the system. We start from the normal-state susceptibilities in the particle-hole channel and consider their renormalization due to interactions up to the RPA level. Through exchanging spin or charge fluctuations represented by these susceptibilities, the electrons near the FS acquire effective attractions, which leads to a pairing instability for arbitrarily weak interactions. However, when the repulsive interaction strength $U$ rises to some critical value $U_c$, the spin susceptibility diverges, which leads to SDW order. More details can be found in Supplementary Materials III[@SupplMater].
The bare susceptibility tensor is defined as, $$\begin{aligned}
\label{chi0}
\chi^{(0)l_1l_2}_{l_3l_4}(\bm{k},\tau)\equiv
&\frac{1}{N}\sum_{\bm{k}_1\bm{k}_2}\left\langle
T_{\tau}c_{l_1}^{\dagger}(\bm{k}_1,\tau)
c_{l_2}(\bm{k}_1+\bm{k},\tau)\right. \nonumber\\
&\left.\times c_{l_3}^{\dagger}(\bm{k}_2+\bm{k},0)
c_{l_4}(\bm{k}_2,0)\right\rangle_0.\end{aligned}$$ Here $\langle\cdots\rangle_0$ denotes the thermal average for the non-interacting system, and $l_{i(i=1,...,4)}=1,2,3,4$ are the sublattice-orbital indices. Fourier transformed to the imaginary frequency space, the obtained $\chi^{(0)l_1,l_2}_{l_3,l_4}(\bm{k},i\omega_n)$ can be taken as a matrix with $l_1l_2/l_3l_4$ as the row/column indices. The largest eigenvalue $\chi(\bm{k})$ of the zero-frequency susceptibility matrix $\chi^{(0)l_1l_2}_{l_3l_4}(\bm{k},i\omega_n=0)$ as a function of $\bm{k}$ for $\delta\to\delta_V=-0.425$ is shown in the whole Brillouin Zone in Fig. \[magnet\](a), and along the high-symmetry lines in Fig. \[magnet\](b).
Figures \[magnet\](a) and (b) show that the distribution of $\chi(\bm{k})$ for $\delta\to\delta_{V}$ peaks near the three $M$-points in the Brillouin Zone, which originates from the FS-nesting shown in Fig. \[FS\](b). In the thermodynamic limit, these peaks will diverge due to the diverging DOS. Therefore, arbitrarily weak interactions will induce a density-wave type of instability. The RPA treatment shows that repulsive interactions suppress the charge susceptibility $\chi^{(c)}$ and enhance the spin susceptibility $\chi^{(s)}$ (Supplementary Materials III[@SupplMater]). Therefore, SDW order emerges for arbitrarily weak Hubbard interactions in our model at $\delta_{V}$. We identify the correlated insulator observed by experiment[@Mott] with the SDW insulator proposed here at $\delta_V$, which is near $\pm\frac{1}{2}$.
![Distribution of $\chi(\bm{k})$ for $\delta\to\delta_V=-0.425$ (a) in the Brillouin Zone, and (b) along the high-symmetric lines. (c) The filling dependence of $U_c$ (for $J_H=0.1U$), with the horizontal line representing $U=1.5$ meV adopted in our calculations.[]{data-label="magnet"}](magnetism.pdf){width="48.00000%"}
Note that the competition among the three-fold degenerate FS-nesting vectors $\bm{Q}_{\alpha}$ $(\alpha=1,2,3)$ will drive noncoplanar SDW order with spin chirality, featuring QAHE[@TaoLi; @Martin; @Kato; @Ying]. To clarify this point, let’s extrapolate the eigenvectors $\xi(\bm{Q_\alpha})$ corresponding to the largest eigenvalue of the susceptibility matrix $\chi^{(0)}(\bm{k},i\omega_n=0)$ for $\bm{k}\to \bm{Q_\alpha}$. Defining the magnetic order parameters $\bm{S}_{i\mu\nu}\equiv \left\langle c^{\dagger}_{i\mu s}\bm{\sigma}_{s s^{\prime}}c_{i\nu s^{\prime}}\right\rangle$, the divergence of $\chi(\bm{Q}_{\alpha})$ requires spontaneous generation of magnetic order with $\bm{S}_{i\mu\nu}\propto\xi_{\mu\nu}(\bm{Q_\alpha})e^{i\bm{Q}_\alpha\cdot\bm{R}_i}\bm{n}_\alpha$, with the global unit vector $\bm{n}_\alpha$ pointing anywhere. Now we have three degenerate $\bm{Q}_\alpha$, which perfectly fit into the three spatial dimensions: $\bm{S}_{i\mu\nu}\propto(\xi_{\mu\nu}(\bm{Q_1})e^{i\bm{Q}_1\cdot\bm{R}_i},\xi_{\mu\nu}(\bm{Q_2})e^{i\bm{Q}_2\cdot\bm{R}_i},
\xi_{\mu\nu}(\bm{Q_3})e^{i\bm{Q}_3\cdot\bm{R}_i})$. Such noncoplanar SDW order with spin chirality may lead to nontrivial topological Chern-number in the band structure, resulting in QAHE[@TaoLi; @Martin; @Kato; @Ying].
When the filling is away from $\delta_V$, SDW order only sets in when $U>U_c$, where the renormalized spin susceptibility tensor $\chi^{(s)}$ diverges. The filling-dependence of $U_c$ for $J_H=0.1U$ is shown in Fig. \[magnet\](c) (the case of $J_H=0$ yields a similar result), where SDW order is only present within a narrow regime centering at the two $\delta_V$. When $U<U_c$, through exchanging short-ranged spin and charge fluctuations between a Cooper pair, an effective pairing interaction vertex $V^{\alpha\beta}(\bm{k},\bm{k}')$ will be developed, which leads to the following linearized gap equation near $T_c$ (see Supplementary Materials III[@SupplMater]): $$\begin{aligned}
\label{gapeq}
-\frac{1}{(2\pi)^2}\sum_{\beta}\oint_{FS}
dk'_{\Vert}\frac{V^{\alpha\beta}(\bm{k},\bm{k}')}
{v^{\beta}_{F}(\bm{k}')}\Delta_{\beta}(\bm{k}')=\lambda
\Delta_{\alpha}(\bm{k}).\end{aligned}$$ Here $\beta$ labels the FS patch, $v^{\beta}_F(\bm{k}')$ is the Fermi velocity, and $k'_{\parallel}$ is the tangent component of $\bm{k}'$. Eq. (\[gapeq\]) becomes an eigenvalue problem after discretization, with the eigenvector $\Delta_{\alpha}(\bm{k})$ representing the gap form factor and the eigenvalue $\lambda$ determining the corresponding $T_c$ through $T_c\propto e^{-1/\lambda}$. From symmetry analysis (see Supplementary Materials IV[@SupplMater]), each solved $\Delta_{\alpha}(\bm{k})$ is attributed to one of the 5 irreducible representations of the $D_6$ point-group of our model (or the $D_3$ point-group for the real material with spin-SU(2) symmetry), which correspond to the $s,(p_{x},p_{y}),(d_{x^2-y^2},d_{xy}),f_{x^3-3xy^2},f^{\prime}_{y^3-3yx^2}$ wave pairings respectively. Note that only intra-band pairing is considered here.
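For concreteness, the following minimal sketch (in Python, with randomly generated placeholder inputs; the actual vertex $V^{\alpha\beta}(\bm{k},\bm{k}')$, Fermi velocities and patch lengths come from the RPA calculation described above) shows how the discretized version of Eq. (\[gapeq\]) is solved as a matrix eigenvalue problem.

```python
import numpy as np

def leading_pairing_channels(V, v_F, dk, n_leading=3):
    """Largest eigenvalues/eigenvectors of the discretized linearized gap equation:
    -1/(2 pi)^2 * sum_b V[a, b] * dk[b] / v_F[b] * Delta[b] = lambda * Delta[a].
    V[a, b] : pairing vertex between FS patches a and b,
    v_F[b]  : Fermi velocity at patch b,
    dk[b]   : length of the FS segment represented by patch b."""
    M = -(1.0 / (2.0 * np.pi) ** 2) * V * (dk / v_F)[None, :]
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(vals.real)[::-1]
    return vals.real[order][:n_leading], vecs[:, order[:n_leading]]

# Placeholder usage with a random symmetric vertex on 60 FS patches.
rng = np.random.default_rng(0)
V = rng.normal(size=(60, 60)); V = 0.5 * (V + V.T)
lams, gaps = leading_pairing_channels(V, v_F=np.ones(60), dk=np.full(60, 0.1))
print(lams)   # the largest lambda sets T_c through T_c ~ exp(-1/lambda)
```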
![The doping dependence of the largest eigenvalues $\lambda$ of all pairing symmetries for (a) $J_H=0.1U$ and (b) $J_H=0$. Note that the shown eigenvalue for the $f$-wave is the larger one of the two different f-symmetries. The vertical bold grey lines indicate the SDW regime. (c) and (d) are the gap functions of $d_{x^2-y^2}$ and $d_{xy}$-wave symmetries at doping $\delta=-0.5$, respectively.[]{data-label="SC"}](SC.pdf){width="48.00000%"}
The filling-dependence of the largest pairing eigenvalue for each pairing symmetry in the superconducting regimes is shown in Fig. \[SC\](a) for $J_H=0.1U$ and (b) for $J_H=0$, together with the SDW regimes. No obvious difference is found between Fig. \[SC\](a) and (b). Figure \[SC\](a) or (b) can also be viewed as the phase diagram, which exhibits the following three remarkable features. Firstly, although SDW order induced by FS-nesting is present at $\delta_V$ for both p- and n- fillings, the SC order is only obvious on the n- part, because the bands in the n- part are flatter, which leads to a higher DOS. Secondly, the SC order is strong near the SDW regime, as the spin fluctuations there are strong. This feature makes the system look similar to the cuprates. Note that the unphysical divergence of $\lambda$ ($T_c$ would be very high) just bordering the SDW regime is only an artifact of the RPA treatment, which can be eliminated through introducing a self-consistent correction to the single-particle Green’s function, as done in the FLEX approach[@FLEX1; @FLEX2; @FLEX3]. Thirdly, for the n- part, the SC order on the left side of $\delta_V$ is stronger than that on its right side. This asymmetry is due to the asymmetry in FS-nesting: the FS in Fig. \[FS\](c) (the left side of $\delta_V$) is better nested than that in Fig. \[FS\](a) (the right side of $\delta_V$). All these features are well consistent with experiments.
From Fig. \[SC\](a) or (b), the leading pairing symmetry near $\delta_V$ relevant to experiment is the degenerate $(d_{x^2-y^2}, d_{xy})$ doublet, with the corresponding gap form factors shown in Fig. \[SC\](c) and (d). While the gap function of $d_{x^2-y^2}$ depicted in Fig. \[SC\](c) is symmetric about the $x$- and $y$- axes in the reciprocal lattice, that of $d_{xy}$ depicted in Fig. \[SC\](d) is antisymmetric about them. This singlet pairing is driven by the antiferromagnetic spin fluctuations here. Physically, the key factors which determine the formation of the d-wave SC are the VHS and the FS-nesting. Firstly, the VHS takes place at the three time-reversal-invariant momenta, which only supports singlet pairings[@Hongyao]. Secondly, the location of the VHS and the FS-nesting vectors on the outer FS here at $\delta_V$ is nearly the same as those of single-layer graphene at quarter doping[@Nandkishore2012]. Then, from the renormalization group (RG) analysis[@Nandkishore2012; @Wang2012; @Nandkishore2014], both systems should share the same pairing symmetry, i.e. the degenerate d-wave. Therefore, the d-wave SC here is mainly determined by the features of the FS, and depends little on the details of the repulsive interactions.
The degenerate $(d_{x^2-y^2}, d_{xy})$ doublet will further mix into the fully-gapped $d_{x^2-y^2}\pm i d_{xy}$ ($d+id$) SC in the ground state to minimize the energy (see Supplementary Materials V[@SupplMater]). This chiral pairing state breaks time-reversal symmetry and belongs to the class C topological SC[@Schnyder], characterized by an integer topological quantum number Z, and can thus host topologically protected boundary fermion modes, which calls for experimental verification.
Note that there are two FS patches at $\delta_V$ in our model, with only the outer one well nested, as shown in Fig. \[FS\](b) and (e). Thus, for weak $U$, the FS-nesting driven SDW order can only gap out the outer FS, leaving the inner pocket untouched. However, as argued in Ref. [@SC], the interaction strength in the MA-TBG is not weak in comparison with the bandwidth. Therefore, the SDW order might be strong enough to gap out the inner pocket as well, leading to a tiny net gap on that pocket, which might be related to the so-called “Mott-gap" of 0.31 meV detected by experiment[@Mott]. Moreover, this gap caused by the noncoplanar SDW order will easily be closed by the Zeeman coupling to an applied external magnetic field[@Kato], driving the metal-insulator transition detected by experiment[@SC].
The asymmetry in the FS-nesting on the two doping sides of $\delta_V$ shown in Fig. \[FS\] might also be related to the asymmetry observed in the quantum oscillation experiments[@SC]. As the FS for the side $|\delta|\gtrsim|\delta_V|$ is better nested than that for the other side, it is possible, for some range of $U$, that SDW order only emerges on that side, in which case a small pocket with area proportional to $|\delta|-|\delta_V|$ only emerges on the side $|\delta|\gtrsim|\delta_V|$, consistent with the quantum oscillation experiments[@SC].
We notice a peak in the DOS near the band bottom, as shown in Fig. \[band\](c). Careful investigation of the band structure reveals that this peak is caused by band flattening near the bottom. The FS there only includes two small pockets, and no VHS or FS-nesting can be found, as shown in Supplementary Materials II[@SupplMater]. In this region, a ferromagnetic metal driven by the Stoner criterion, instead of SC, might develop.
In conclusion, our effective $p_{x,y}$-orbital TB model on the emergent honeycomb lattice with the newly constructed hopping integrals well captures the main characteristics of the real system. Remarkably, Lifshitz transitions take place at VH fillings near $\pm\frac{1}{2}$. The VHS and the FS-nesting with three-fold degenerate nesting vectors drive the system into a noncoplanar chiral SDW order, featuring QAHE. Bordering the chiral SDW phase is a $d+id$ chiral topological SC (TSC) driven by short-ranged antiferromagnetic fluctuations. The phase diagram of our model is similar to the experimental one. The MA-TBG thus might provide the first realization of the intriguing $d+id$ chiral TSC proposed previously[@Baskaran; @Black; @Gonzalez; @Nandkishore2012; @Wang2012; @Black2012; @Liu2013; @Nandkishore2014] in graphene-related systems.
Acknowledgements {#acknowledgements .unnumbered}
================
We are grateful for helpful discussions with Fu-Chun Zhang, Yuan-Ming Lu, Qiang-Hua Wang, Yi-Zhuang You, Noah Fan-Qi Yuan, Xian-Xin Wu, Hong Yao, Zheng-Cheng Gu, Yue-Hua Su, Yi Zhou, Tao Li and Yugui Yao. This work is supported by the NSFC (Grant Nos. 11674025, 11774028, 11604013, 11674151, 11334012, 11274041), the National Key Research and Development Program of China (No. 2016YFA0300300), Beijing Natural Science Foundation (Grant No. 1174019), and Basic Research Funds of Beijing Institute of Technology (No. 2017CX01018).
Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras and P. Jarillo-Herrero, Nature **556**, 43 (2018). G. E. Volovik, Pis'ma Zh. Eksp. Teor. Fiz. **107**, 537 (2018) \[JETP Lett. **107**, 516 (2018)\].
B. Roy and V. Juricic, arXiv: 1803.11190v1.
H. C. Po, L. Zou, A. Vishwanath, and T. Senthil, Phys. Rev. X **8**, 031089 (2018).
C. Xu and L. Balents, Phys. Rev. Lett. 121, 087001 (2018). N. F. Q. Yuan and L. Fu, Phys. Rev. B 98, 045103 (2018).
G. Baskaran, arXiv:1804.00627.
B. Padhi, C. Setty, P. W. Phillips, Nano Lett. 18, 6175(2018).
J. F. Dodaro, S. A. Kivelson, Y. Schattner, X-Q. Sun, C. Wang, Phys. Rev. B 98, 075154 (2018).
Y. Cao, V. Fatemi, A. Demir, S. Fang, S. L. Tomarken, J. Y. Luo, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, E. Kaxiras, R. C. Ashoori, and P. Jarillo-Herrero, Nature **556** 80 (2018). See the uppermost two curves in the Fig.4a of Ref[@SC].
J. C. Slater and G. F. Koster, Phys. Rev. **94**, 1498 (1954).
See Supplemental Material for the derivation of the Slater-Koster formulism from symmetry analysis, more details of our band structure, the multi-orbital RPA approach, the $d+id$ superconducting state, and classification of the pairing symmetry, which contains[@RPA2; @RPA3].
S. Graser, T. A. Maier, P. J. Hirschfeld and D. J. Scalapino, New Journal of Physics **11**, 025016 (2009).
T. A. Maier, S. Graser, P. J. Hirschfeld, and D. J. Scalapino, Phys. Rev. B **83**, 100515(R) (2011).
Nguyen N. T. Nam and M. Koshino, Phys. Rev. B. **96**, 075311 (2017).
P. Moon and M. Koshino, Phys. Rev. B. **85**, 195458 (2012).
S. Fang and E. Kaxiras, Phys. Rev. B. **93**, 235153 (2016).
J. M. B. Lopes dos Santos, N. M. R. Peres, and A. H. Castro Neto, Phys. Rev. Lett. **99**, 256802 (2007).
J. M. B. Lopes dos Santos, Phys. Rev. B. **86**, 155449 (2012) .
S. Shallcross, S. Sharma and O. A. Pankratov, Phys. Rev. Lett. **101**, 056803 (2008).
R. Bistritzer and A. H. MacDonald, PNAS **108**, 12233 (2011).
R. Bistritzer and A. H. MacDonald, Phys. Rev. B. **81**, 245412 (2010).
K. Uchida, S. Furuya, J.I. Iwata, and A. Oshiyama, Phys. Rev. B. **90**, 155451 (2014).
E. J. Mele, Phys. Rev. B. **84**, 235439 (2011).
E. J. Mele, Phys. Rev. B. **81**, 161405R (2010).
A. O. Sboychakov, A. L. Rakhmanov, A. V. Rozhkov, and F. Nori, Phys. Rev. B. **92**, 075402 (2015).
E. S. Morell, J. D. Correa, P. Vargas, M. Pacheco, and Z. Barticevic, Phys. Rev. B. **82**, 121407R (2010).
G. Trambly de Laissardiere, D. Mayou, and L. Magaud, Nano Lett.**10**, 804 (2010).
S. Latil, V. Meunier, and L. Henrard, Phys. Rev. B. **76**, 201402R (2007).
G. Trambly de Laissardiere, D. Mayou, and L. Magaud, Phys. Rev. B. **86**, 125413 (2012).
J. Gonzalez, Phys. Rev. B. **88**, 125434 (2013).
Luis A. Gonzalez-Arraga, J. L. Lado, F. Guinea and P. S. Jose, Phys. Rev. Lett. **119**, 107201 (2017).
Y. Cao, J. Y. Luo, V. Fatemi, S. Fang, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Phys. Rev. Lett. **117**, 116804 (2016).
T. Ohta, J. T. Robinson, P. J. Feibelman, A. Bostwick, E. Rotenberg, and T. E. Beechem, Phys. Rev. Lett. **109**, 186807 (2012).
K. Kim, A. DaSilva, S. Huang, B. Fallahazad, S. Larentis, T. Taniguchi, K. Watanabe, B. J. LeRoy, A. H. MacDonald and E. Tutuc, PNAS **114**, 3364 (2017).
L. Huder, A. Artaud, T. Le Quang, G. T. de Laissardiere, A. G. M. Jansen, G. Lapertot, C. Chapelier, and V. T. Renard, Phys. Rev. Lett. **120**, 156405 (2018).
S.-Y. Li, K.-Q. Liu, L.-J. Yin, W.-X. Wang, W. Yan, X.-Q. Yang, J.-K. Yang, H. Liu, H. Jiang, and L. He, Phys. Rev. B. **96**, 155416 (2017).
C. Wu and S. D. Sarma, Phys. Rev. B. **77**, 235107 (2008).
C. Wu, Phys. Rev. Lett. **101**, 186807 (2008).
G.-F. Zhang, Y. Li, and C. Wu, Phys. Rev. B. **90**, 075114 (2014).
C.-C. Liu, S. Guan, Z. Song, S. A. Yang, J. Yang, and Y. Yao, Phys. Rev. B. **90**, 085431 (2014).
F. Yang, C.-C. Liu, Y.-Z. Zhang, Y. Yao, D.-H. Lee, Phys. Rev. B **91**, 134514 (2015).
K. Kuroki, S. Onari, R. Arita, H. Usui, Y. Tanaka, H. Kontani and H. Aoki, Phys. Rev. Lett. **101**, 087004 (2008).
X. Wu, F. Yang, C. Le, H. Fan, and J. Hu, Phys. Rev. B **92**, 104511 (2015).
T. Li, Europhys. Lett. 97, 37001 (2012).
I. Martin and C. D. Batista, Phys. Rev. Lett. **101**, 156402(2008).
Yasuyuki Kato, Ivar Martin, C. D. Batista, Phys. Rev. Lett. **105**, 266405 (2010).
S. Jiang, A. Mesaros and Y. Ran, Phys. Rev. X **4**, 031048 (2014).
T. Takimoto, T. Hotta, and K. Ueda, Phys. Rev. B **69**, 104504 (2004).
K. Yada and H. Kontani, J. Phys. Soc. Jpn. **74**, 2161 (2005).
K. Kubo, Phys. Rev. B **75**, 224509 (2007).
H. Yao, and F. Yang, Phys. Rev. B **92**, 035132 (2015).
S. Pathak, V. B. Shenoy, and G. Baskaran, Phys. Rev. B **81**, 085431(2010).
A. M. Black-Schaffer and S. Doniach, Phys. Rev. B **75**, 134512(2007).
J. Gonzalez, Phys. Rev. B **78**, 205431 (2008).
R. Nandkishore, L. S. Levitov and A. V. Chubukov, Nat. Phys. **8**, 158 (2012).
W.-S. Wang, Y.-Y. Xiang, Q.-H. Wang, F. Wang, F. Yang, and D.-H. Lee, Phys. Rev. B **85**, 035414 (2012).
R. Nandkishore, R. Thomale, and A. V. Chubukov, Phys. Rev. B **89**, 144501 (2014).
A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Phys. Rev. B **78**, 195125 (2008).
A. M. Black-Schaffer, Phys. Rev. Lett. **109**, 197001 (2012).
F. Liu, C.-C. Liu, K. Wu, F. Yang and Y. Yao, Phys. Rev. Lett. **111**, 066804 (2013).
Appendix: Re-derivation of the tight-binding model with explicit parameters by symmetry
======================================================================================
The twisted bilayer-graphene has the $D_{3}$ point-group symmetry. The emergent honeycomb superstructure for MA-TBG is slightly buckled, as shown in Fig.\[Fig\_TB\_symmetry\]. The lattice constant of the emergent honeycomb lattice is about 12.8 nm, much larger than its buckling height (0.335 nm). As a result, the emergent honeycomb lattice approximately has $D_{6}$ point-group symmetry.
The two relevant orbitals are Wannier orbitals with $x/y$ symmetry, denoted as $\phi_{x},\phi_{y}$, which can be taken as the basis functions of the two-dimensional (2D) irreducible representation $E$ of the point-group $D_{3}$.
![The emergent honeycomb superstructure for MA-TBG and the various neighbors. A, B label the two sublattices, $d_{i=1-3}^{A,B}$ denote the three nearest neighboring vectors from A or B, and $R_{i=1-6}^{A,B}$ denote the six next nearest neighboring vectors from A or B. The black rhombus shows the two-dimensional (2D) unit cell for the emergent honeycomb superstructure.[]{data-label="Fig_TB_symmetry"}](Fig_TB_by_symmetry.pdf)
The matrix elements of the Hamiltonian in momentum space can be easily obtained as $$H_{\mu\nu}\left(k\right)=\sum_{\mathbf{\delta}}{\displaystyle e^{i\mathbf{k}\cdot\mathbf{\delta}}E_{\mu\nu}\left(\mathbf{\delta}\right)},$$ with the hopping integral between the orbitals $|\phi_{\mu}>$ at site $\mathbf{0}$ and $|\phi_{\nu}>$ at site $\mathbf{\delta}$ defined as $$E_{\mu\nu}\left(\mathbf{\delta}\right)\equiv<\phi_{\mu}\left(\mathbf{r}\right)|\hat{H}|\phi_{\nu}\left(\mathbf{r}-\delta\right)>.$$ Given $E_{\mu\nu}\left(\mathbf{\delta}\right)$, the hopping integrals between site $\mathbf{0}$ and all the other symmetry-related sites $\hat{g}\mathbf{\delta}$ can be obtained by $$E_{\mu\nu}\left(\hat{g}\mathbf{\delta}\right)=D\left(\hat{g}\right)E_{\mu\nu}\left(\mathbf{\delta}\right)\left[D\left(\hat{g}\right)\right]^{\dagger},$$ where $\hat{g}$ is a symmetry operation of the point-group $D_{3}$ and $D\left(\hat{g}\right)$ is the $2\times2$ representation matrix of the 2D irreducible representation $E$. Considering the sublattice degree of freedom, the final Hamiltonian is a $4\times4$ matrix. As shown in Fig.\[Fig\_TB\_symmetry\], the point-group $D_{3}$ has the generators: the three-fold rotation along the $\mathbf{z}$ axis, $C_{3z}$, and the two-fold rotation along the $\mathbf{y}$ axis, $C_{2y}$.
Firstly, let us consider the nearest-neighbor (NN) hopping integrals. The $A$ site has three NN $B$ sites, with the corresponding vectors $\vec{AB}$ labeled by $\mathbf{d}_{i=1-3}^{A}$. Under the symmetry operation $C_{3z}$, $\mathbf{d}_{1}^{A}$ is rotated to $C_{3z}\mathbf{d}_{1}^{A}$, which is equivalent to $\mathbf{d}_{2}^{A}$. Similarly, $\mathbf{d}_{1}^{A}$ is rotated to $\mathbf{d}_{3}^{A}$ by the symmetry operation $C_{3z}^{2}$. We define $$E_{\mu\nu}\left(\mathbf{d}_{1}^{A}\right)\equiv\left(\begin{array}{cc}
t_{\sigma}^{(1)} & 0\\
0 & t_{\pi}^{(1)}
\end{array}\right),$$ with the NN hopping integral defined as $t_{\sigma/\pi}^{(1)}\equiv<\phi_{x/y}\left(\mathbf{r}\right)|\hat{H}|\phi_{x/y}\left(\mathbf{r}-\mathbf{d}_{1}^{A}\right)>$.
From group theory, under the $E$-representation of $D_3$ with $p_{x,y}$-symmetric basis functions, the representation matrices for $C_{3z}$ and $C_{3z}^{2}$ can be obtained as $$D\left(C_{3z}\right)=\left(\begin{array}{cc}
-\frac{1}{2} & -\frac{\sqrt{3}}{2}\\
\frac{\sqrt{3}}{2} & -\frac{1}{2}
\end{array}\right),D\left(C_{3z}^{2}\right)=\left(\begin{array}{cc}
-\frac{1}{2} & \frac{\sqrt{3}}{2}\\
-\frac{\sqrt{3}}{2} & -\frac{1}{2}
\end{array}\right).$$ From the $E_{\mu\nu}\left(\mathbf{d}_{1}^{A}\right)$ and by using Eq. (3), we have $$E_{\mu\nu}\left(\mathbf{d}_{2}^{A}\right)=E_{\mu\nu}\left(C_{3z}\mathbf{d}_{1}^{A}\right)=D\left(C_{3z}\right)E_{\mu\nu}\left(\mathbf{d}_{1}^{A}\right)\left[D\left(C_{3z}\right)\right]^{\dagger},$$ $$=\left(\begin{array}{cc}
\frac{1}{4}t_{\sigma}^{(1)}+\frac{3}{4}t_{\pi}^{(1)} & -\frac{\sqrt{3}}{4}t_{\sigma}^{(1)}+\frac{\sqrt{3}}{4}t_{\pi}^{(1)}\\
-\frac{\sqrt{3}}{4}t_{\sigma}^{(1)}+\frac{\sqrt{3}}{4}t_{\pi}^{(1)} & \frac{3}{4}t_{\sigma}^{(1)}+\frac{1}{4}t_{\pi}^{(1)}
\end{array}\right).$$ And similarly, we have $$E_{\mu\nu}\left(\mathbf{d}_{3}^{A}\right)=E_{\mu\nu}\left(C_{3z}^{2}\mathbf{d}_{1}^{A}\right)=D\left(C_{3z}^{2}\right)E_{\mu\nu}\left(\mathbf{d}_{1}^{A}\right)\left[D\left(C_{3z}^{2}\right)\right]^{\dagger},$$ $$=\left(\begin{array}{cc}
\frac{1}{4}t_{\sigma}^{(1)}+\frac{3}{4}t_{\pi}^{(1)} & \frac{\sqrt{3}}{4}t_{\sigma}^{(1)}-\frac{\sqrt{3}}{4}t_{\pi}^{(1)}\\
\frac{\sqrt{3}}{4}t_{\sigma}^{(1)}-\frac{\sqrt{3}}{4}t_{\pi}^{(1)} & \frac{3}{4}t_{\sigma}^{(1)}+\frac{1}{4}t_{\pi}^{(1)}
\end{array}\right).$$
The $B$ site also has three NN $A$ sites, with the corresponding vectors $\vec{BA}$ labeled as $\mathbf{d}_{i=1-3}^{B}$. By using the aforementioned procedure, we can obtain $E_{\mu\nu}\left(\mathbf{d}_{i=1-3}^{B}\right)$. In the basis $\left\{ |\phi_{x}^{A}>,|\phi_{y}^{A}>,|\phi_{x}^{B}>,|\phi_{y}^{B}>\right\} $, the NN hopping $E_{\mu\nu}\left(\mathbf{d}_{i=1-3}^{A}\right)$ correspond to the upper right off-diagonal $2\times2$ sub-block of the Hamiltonian, and the bottom left off-diagonal sub-block, $E_{\mu\nu}\left(\mathbf{d}_{i=1-3}^{B}\right)$ can be more readily obtained by Hermitian conjugation. It is especially worth emphasizing that the tight-binding model Hamiltonian obtained by symmetry here is exactly the same as the tight-binding model Hamiltonian obtained by the Slater-Koster method in the main text. For example, we can derive $E_{\mu\nu}\left(\mathbf{d}_{2}^{A}\right)$ by using the formula Eq. (2) in the main text. As shown in Fig.\[Fig\_TB\_symmetry\], the angles in Eq. (2) in the main text read $\theta_{x,\mathbf{d}_{2}^{A}}=2\pi/3$ and $\theta_{y,\mathbf{d}_{2}^{A}}=\pi/6$ . The four matrix elements read $$E_{1,1}\left(\mathbf{d}_{2}^{A}\right)=t_{\sigma}^{(1)}\cos\left(\frac{2\pi}{3}\right)^{2}+t_{\pi}^{(1)}\sin\left(\frac{2\pi}{3}\right)^{2}=\frac{1}{4}t_{\sigma}^{(1)}+\frac{3}{4}t_{\pi}^{(1)},$$
$$E_{1,2}\left(\mathbf{d}_{2}^{A}\right)=t_{\sigma}^{(1)}\cos\left(\frac{2\pi}{3}\right)\cos\left(\frac{\pi}{6}\right)+t_{\pi}^{(1)}\sin\left(\frac{2\pi}{3}\right)\sin\left(\frac{\pi}{6}\right)=-\frac{\sqrt{3}}{4}t_{\sigma}^{(1)}+\frac{\sqrt{3}}{4}t_{\pi}^{(1)},$$
$$E_{2,1}\left(\mathbf{d}_{2}^{A}\right)=t_{\sigma}^{(1)}\cos\left(\frac{\pi}{6}\right)\cos\left(\frac{2\pi}{3}\right)+t_{\pi}^{(1)}\sin\left(\frac{\pi}{6}\right)\sin\left(\frac{2\pi}{3}\right)=-\frac{\sqrt{3}}{4}t_{\sigma}^{(1)}+\frac{\sqrt{3}}{4}t_{\pi}^{(1)},$$ $$E_{2,2}\left(\mathbf{d}_{2}^{A}\right)=t_{\sigma}^{(1)}\cos\left(\frac{\pi}{6}\right)^{2}+t_{\pi}^{(1)}\sin\left(\frac{\pi}{6}\right)^{2}=\frac{3}{4}t_{\sigma}^{(1)}+\frac{1}{4}t_{\pi}^{(1)}.$$ As a result, we have the same tight-binding Hamiltonian by using the two methods.
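This consistency check can also be carried out numerically; the short Python sketch below (an illustration only, using the NN Slater-Koster parameters of the main text) compares the symmetry-generated matrix $D(C_{3z})E_{\mu\nu}(\mathbf{d}_{1}^{A})D(C_{3z})^{\dagger}$ with the direct Slater-Koster evaluation at $\theta_{x,\mathbf{d}_{2}^{A}}=2\pi/3$ and $\theta_{y,\mathbf{d}_{2}^{A}}=\pi/6$.

```python
import numpy as np

t_s, t_p = 2.0, 2.0 / 1.56                         # NN parameters t_sigma^(1), t_pi^(1) (meV)
E_d1 = np.diag([t_s, t_p])                         # E(d_1^A)
D_C3z = np.array([[-0.5, -np.sqrt(3) / 2],
                  [np.sqrt(3) / 2, -0.5]])         # E-representation matrix of C_3z

# Symmetry route: E(d_2^A) = D(C_3z) E(d_1^A) D(C_3z)^dagger
E_d2_symmetry = D_C3z @ E_d1 @ D_C3z.T

# Slater-Koster route, Eq. (2) of the main text, with theta_x = 2 pi/3 and theta_y = pi/6
th = (2 * np.pi / 3, np.pi / 6)
E_d2_sk = np.array([[t_s * np.cos(a) * np.cos(b) + t_p * np.sin(a) * np.sin(b)
                     for b in th] for a in th])

assert np.allclose(E_d2_symmetry, E_d2_sk)         # the two constructions coincide
print(E_d2_symmetry)
```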
Then let’s come to the next-nearest-neighbor (NNN) hopping integrals. The $A$ site has six NNN $A$ sites, with the corresponding vectors labeled by $\mathbf{R}_{i=1-6}^{A}$, which are split into two categories $\mathbf{R}_{i=1,3,5}^{A}$ and $\mathbf{R}_{i=4,6,2}^{A}$, as shown in Fig.\[Fig\_TB\_symmetry\]. Under the symmetry operator $C_{3z}$, $\mathbf{R}_{1}^{A}\stackrel{C_{3z}}{\rightarrow}\mathbf{R}_{3}^{A}\stackrel{C_{3z}}{\rightarrow}\mathbf{R}_{5}^{A}$ and $\mathbf{R}_{4}^{A}\stackrel{C_{3z}}{\rightarrow}\mathbf{R}_{6}^{A}\stackrel{C_{3z}}{\rightarrow}\mathbf{R}_{2}^{A}$. We define $$E_{\mu\nu}\left(\mathbf{R}_{1}^{A}\right)\equiv\left(\begin{array}{cc}
t_{\pi}^{(2)} & 0\\
0 & t_{\sigma}^{(2)}
\end{array}\right),$$ with the NNN hopping integral defined as $t_{\pi/\sigma}^{(2)}\equiv<\phi_{x/y}\left(\mathbf{r}\right)|\hat{H}|\phi_{x/y}\left(\mathbf{r}-\mathbf{R}_{1}^{A}\right)>$. From the $E_{\mu\nu}\left(\mathbf{R}_{1}^{A}\right)$ and by using Eq. (3), we have $$E_{\mu\nu}\left(\mathbf{R}_{3}^{A}\right)=E_{\mu\nu}\left(C_{3z}\mathbf{R}_{1}^{A}\right)=D\left(C_{3z}\right)E_{\mu\nu}\left(\mathbf{R}_{1}^{A}\right)\left[D\left(C_{3z}\right)\right]^{\dagger},$$ $$=\left(\begin{array}{cc}
\frac{3}{4}t_{\sigma}^{(2)}+\frac{1}{4}t_{\pi}^{(2)} & \frac{\sqrt{3}}{4}t_{\sigma}^{(2)}-\frac{\sqrt{3}}{4}t_{\pi}^{(2)}\\
\frac{\sqrt{3}}{4}t_{\sigma}^{(2)}-\frac{\sqrt{3}}{4}t_{\pi}^{(2)} & \frac{1}{4}t_{\sigma}^{(2)}+\frac{3}{4}t_{\pi}^{(2)}
\end{array}\right).$$ And $$E_{\mu\nu}\left(\mathbf{R}_{5}^{A}\right)=E_{\mu\nu}\left(C_{3z}^{2}\mathbf{R}_{1}^{A}\right)=D\left(C_{3z}^{2}\right)E_{\mu\nu}\left(\mathbf{R}_{1}^{A}\right)\left[D\left(C_{3z}^{2}\right)\right]^{\dagger},$$ $$=\left(\begin{array}{cc}
\frac{3}{4}t_{\sigma}^{(2)}+\frac{1}{4}t_{\pi}^{(2)} & -\frac{\sqrt{3}}{4}t_{\sigma}^{(2)}+\frac{\sqrt{3}}{4}t_{\pi}^{(2)}\\
-\frac{\sqrt{3}}{4}t_{\sigma}^{(2)}+\frac{\sqrt{3}}{4}t_{\pi}^{(2)} & \frac{1}{4}t_{\sigma}^{(2)}+\frac{3}{4}t_{\pi}^{(2)}
\end{array}\right).$$
As mentioned above, the emergent honeycomb lattice approximately has the $D_{6}$ point-group symmetry. $\mathbf{R}_{1}^{A}$ is rotated to $\mathbf{R}_{4}^{A}$ by $C_{2z}=C_{6z}^3$, whose representation matrix is $-I$, with $I$ being the unit matrix. Thus we obtain $E_{\mu\nu}\left(\mathbf{R}_{4}^{A}\right)=E_{\mu\nu}\left(\mathbf{R}_{1}^{A}\right)$, and we can directly write $E_{\mu\nu}\left(\mathbf{R}_{6}^{A}\right)=E_{\mu\nu}\left(\mathbf{R}_{3}^{A}\right)$ and $E_{\mu\nu}\left(\mathbf{R}_{2}^{A}\right)=E_{\mu\nu}\left(\mathbf{R}_{5}^{A}\right)$. The $B$ site also has six NNN $B$ sites, with the corresponding vectors labeled by $\mathbf{R}_{i=1-6}^{B}$, and $\mathbf{R}_{i=1-6}^{B}$ = $\mathbf{R}_{i=1-6}^{A}$; therefore, $E_{\mu\nu}\left(\mathbf{R}_{i=1-6}^{B}\right)=E_{\mu\nu}\left(\mathbf{R}_{i=1-6}^{A}\right)$. One can check that the NNN tight-binding Hamiltonians obtained by the two methods are also exactly the same. Consequently, we have re-derived the same total tight-binding Hamiltonian by a strict symmetry-based method, which justifies our tight-binding model in the main text.
Appendix: The explicit form of our Hamiltonian in reciprocal space and the robustness of the nesting
=================================================================================================
In the basis $\left\{ p_{x}^{A},p_{y}^{A},p_{x}^{B},p_{y}^{B}\right\} $, where A, B denote the sublattice in the emergent honeycomb lattice superstructure, the total Hamiltonian up to the next nearest neighbor hopping terms in the reciprocal space can be written as $$H\left(k\right)=H_{0}+H_{1}+H_{2},$$ where the three terms are the chemical potential, the nearest neighbor hopping and the next nearest neighbor hopping terms respectively. $$H_{0}=-\mu I_{4},$$ $$H_{1}=\left(\begin{array}{cccc}
0 & 0 & h13 & h14\\
& 0 & h23 & h24\\
& \dagger & 0 & 0\\
& & & 0
\end{array}\right),$$ $$H_{2}=\left(\begin{array}{cccc}
hn11 & hn12 & 0 & 0\\
& hn22 & 0 & 0\\
& \dagger & hn11 & hn12\\
& & & hn22
\end{array}\right).$$
The above matrix elements are given as $$\begin{split}
&h13=\frac{1}{4}(t_{\pi}+3t_{\sigma})\left(e^{i\left(\frac{k_{y}}{2\sqrt{3}}-\frac{k_{x}}{2}\right)}+e^{i\left(\frac{k_{y}}{2\sqrt{3}}+\frac{k_{x}}{2}\right)}\right)+t_{\pi}e^{-i\frac{k_{y}}{\sqrt{3}}}, \\
& h14=-\frac{\sqrt{3}}{4}(t_{\pi}-t_{\sigma})\left(-1+e^{ik_{x}}\right)e^{-\frac{1}{6}i\left(3k_{x}-\sqrt{3}k_{y}\right)},\\
&h23=h14,\\
&h24=\frac{1}{4}(3t_{\pi}+t_{\sigma})\left(e^{i\left(\frac{k_{y}}{2\sqrt{3}}-\frac{k_{x}}{2}\right)}+e^{i\left(\frac{k_{y}}{2\sqrt{3}}+\frac{k_{x}}{2}\right)}\right)+t_{\sigma}e^{-i\frac{k_{y}}{\sqrt{3}}},\\
&hn11=(3t_{\pi2}+t_{\sigma2})\cos\left(\frac{k_{x}}{2}\right)\cos\left(\frac{\sqrt{3}k_{y}}{2}\right)+2t_{\sigma2}\cos\left(k_{x}\right),\\
&hn12=\sqrt{3}(t_{\pi2}-t_{\sigma2})\sin\left(\frac{k_{x}}{2}\right)\sin\left(\frac{\sqrt{3}k_{y}}{2}\right),\\
&hn22=(t_{\pi2}+3t_{\sigma2})\cos\left(\frac{k_{x}}{2}\right)\cos\left(\frac{\sqrt{3}k_{y}}{2}\right)+2t_{\pi2}\cos\left(k_{x}\right),
\end{split}$$ where $t_{\pi},t_{\sigma}$ ($t_{\pi2},t_{\sigma2}$) are the nearest-neighbor (next-nearest-neighbor) Slater-Koster hopping parameters.
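The matrix elements listed above can be assembled directly into the $4\times4$ Bloch Hamiltonian and diagonalized on a $k$-mesh; the following Python sketch (an illustration with the Slater-Koster parameters of the main text and the chemical potential set to zero) reproduces, for instance, the two doubly degenerate levels at the $\Gamma$ point.

```python
import numpy as np

ts1, tp1 = 2.0, 2.0 / 1.56              # t_sigma, t_pi (NN, meV)
ts2, tp2 = ts1 / 7.0, ts1 / 7.0 / 1.56  # t_sigma2, t_pi2 (NNN, meV)

def H_k(kx, ky):
    """4x4 Bloch Hamiltonian in the basis {p_x^A, p_y^A, p_x^B, p_y^B},
    built from the matrix elements h13, h14 (= h23), h24, hn11, hn12, hn22 above."""
    f = np.exp(1j * (ky / (2 * np.sqrt(3)) - kx / 2)) + np.exp(1j * (ky / (2 * np.sqrt(3)) + kx / 2))
    h13 = 0.25 * (tp1 + 3 * ts1) * f + tp1 * np.exp(-1j * ky / np.sqrt(3))
    h14 = -np.sqrt(3) / 4 * (tp1 - ts1) * (np.exp(1j * kx) - 1) * np.exp(-1j * (3 * kx - np.sqrt(3) * ky) / 6)
    h24 = 0.25 * (3 * tp1 + ts1) * f + ts1 * np.exp(-1j * ky / np.sqrt(3))
    cc = np.cos(kx / 2) * np.cos(np.sqrt(3) * ky / 2)
    hn11 = (3 * tp2 + ts2) * cc + 2 * ts2 * np.cos(kx)
    hn12 = np.sqrt(3) * (tp2 - ts2) * np.sin(kx / 2) * np.sin(np.sqrt(3) * ky / 2)
    hn22 = (tp2 + 3 * ts2) * cc + 2 * tp2 * np.cos(kx)
    H1 = np.zeros((4, 4), dtype=complex)                      # NN block (upper-right)
    H1[0, 2], H1[0, 3], H1[1, 2], H1[1, 3] = h13, h14, h14, h24
    H2 = np.array([[hn11, hn12, 0, 0], [hn12, hn22, 0, 0],    # NNN block (sublattice-diagonal)
                   [0, 0, hn11, hn12], [0, 0, hn12, hn22]], dtype=complex)
    return H1 + H1.conj().T + H2

# Band energies at the Gamma point: two doubly degenerate pairs, as stated in the main text.
print(np.round(np.linalg.eigvalsh(H_k(0.0, 0.0)), 3))
```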
In addition to the Slater-Koster parameters used in the main text, we also choose two other sets of parameters to address the robustness of the FS nesting at the Van-Hove dopings, with one set chosen as $t^{(1)}_{\sigma}=2$ meV, $t^{(1)}_{\pi}=t^{(1)}_{\sigma}/1.56$, $t^{(2)}_{\sigma}=t^{(1)}_{\sigma}/10$, and $t^{(2)}_{\pi}=t^{(2)}_{\sigma}/1.56$ in Fig.\[nesting\_robust\] (a)(b), and the other set chosen as $t^{(1)}_{\sigma}=2$ meV, $t^{(1)}_{\pi}=t^{(1)}_{\sigma}/2$, $t^{(2)}_{\sigma}=t^{(1)}_{\sigma}/7$, and $t^{(2)}_{\pi}=t^{(2)}_{\sigma}/2$ in Fig.\[nesting\_robust\] (c)(d). All three cases clearly show the nearly perfect FS-nesting, thus demonstrating that the FS nesting is robust. Furthermore, the VH dopings for the three groups of parameters are all near $\pm\frac{1}{2}$. Note that due to the relationship $t^{(1)}_{\sigma}/t^{(1)}_{\pi}=t^{(2)}_{\sigma}/t^{(2)}_{\pi}$ adopted here, the absolute values of $\delta_V$ at the negative and positive doping parts are approximately equal. This approximate equality will be slightly changed if we do not adopt this relationship. It should be noticed that the peak in the DOS (Fig. 1(c) in the main text) below the red dashed line comes from the flat band bottom, where the FSs shown in Fig.\[FS\_Gamma\] only contain small pockets. No VHS or FS-nesting can be found there.
Appendix: The multi-orbital RPA approach
=======================================
It is proposed here that the SC detected in the MA-TBG is not driven by electron-phonon coupling. The reason is that only acoustic phonon modes with wavelengths comparable to or longer than the size of a unit cell of the Moire pattern can couple efficiently to our low energy effective orbitals, through changing the hopping integrals shown in Eq. (2) of the main text. The Debye frequency of such phonon modes will be too low to support the “high $T_c$ SC". The SC detected here is more likely to be driven by electron-electron interactions. We adopt the repulsive Hubbard-Hund model listed in Eq. (3) of the main text.
The Hamiltonian adopted in our calculations is $$\begin{aligned}
H&=H_{tb}+H_{int}\nonumber\\
&H_{int}=U\sum_{i\mu}n_{i\mu\uparrow}n_{i\mu\downarrow}+
V\sum_{i}n_{ix}n_{iy}+J_{H}\sum_{i}\Big[
\sum_{\sigma\sigma^{\prime}}c^{\dagger}_{ix\sigma}c^{\dagger}_{iy\sigma^{\prime}}
c_{ix\sigma^{\prime}}c_{iy\sigma}+(c^{\dagger}_{ix\uparrow}c^{\dagger}_{ix\downarrow}
c_{iy\downarrow}c_{iy\uparrow}+h.c.)\Big]\label{H-H-model_app}\end{aligned}$$ where $i$ is site index, $\mu$ is orbital index, $x$ and $y$ denote orbitals $p_x$ and $p_y$, respectively, $\sigma$ and $\sigma^{\prime}$ are spin indices.
Let’s define the following bare susceptibility for the non-interacting case ($U=V=J_H=0$), $$\begin{aligned}
\chi^{(0)l_{1},l_{2}}_{l_{3},l_{4}}\left(\bm{q},\tau\right)\equiv
\frac{1}{N}\sum_{\bm{k}_{1},\bm{k}_{2}}\left<T_{\tau}c^{\dagger}_{l_{1}}(\bm{k}_{1},\tau)
c_{l_{2}}(\bm{k}_{1}+\bm{q},\tau)c^{\dagger}_{l_{3}}(\bm{k}_{2}+\bm{q},0)c_{l_{4}}(\bm{k}_{2},0)
\right>_0,\label{bare}\end{aligned}$$ where $l_{i}$ $(i=1,\cdots,4)$ denote the orbital indices. The explicit form of $\chi^{(0)}$ in the momentum-frequency space is $$\begin{aligned}
\chi^{(0)l_{1},l_{2}}_{l_{3},l_{4}}\left(\bm{q},i\omega_n\right)
=\frac{1}{N}\sum_{\bm{k},\alpha,\beta}
\xi^{\alpha}_{l_{4}}(\bm{k})\xi_{l_{1}}^{\alpha,*}(\bm{k})
\xi^{\beta}_{l_{2}}(\bm{k}+\bm{q})\xi^{\beta,*}_{l_{3}}(\bm{k}+\bm{q})
\frac{n_{F}(\varepsilon^{\beta}_{\bm{k+q}})-n_{F}
(\varepsilon^{\alpha}_{\bm{k}})}{i\omega_n
+\varepsilon^{\alpha}_{\bm{k}}-\varepsilon^{\beta}_{\bm{k}+\bm{q}}},\label{explicit_free}\end{aligned}$$ where $\alpha,\beta=1,...,4$ are band indices, $\varepsilon^{\alpha}_{\bm{k}}$ and $\xi^{\alpha}_{l}\left(\bm{k}\right)$ are the $\alpha-$th eigenvalue and eigenvector of the $h(\bm{k})$ matrix respectively and $n_F$ is the Fermi-Dirac distribution function.
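As an illustration, the explicit formula above can be evaluated on a regular $k$-mesh as in the following Python sketch (a slow reference implementation, not our production code; the band energies and eigenvectors are assumed to have been obtained beforehand by diagonalizing $H(\bm{k})$ on the mesh, and the Matsubara frequency is replaced by a real frequency with a small broadening $\eta$).

```python
import numpy as np

def chi0_tensor(eps, xi, kq_index, beta, omega=0.0, eta=1e-4):
    """Bare susceptibility chi^(0)_{l1 l2, l3 l4}(q, omega) on a flattened k-mesh.
    eps[k, a]    : band energies epsilon^a_k,
    xi[k, l, a]  : eigenvector components xi^a_l(k),
    kq_index[k]  : mesh index corresponding to k + q,
    beta         : inverse temperature."""
    nk, norb, nband = xi.shape
    nF = 1.0 / (np.exp(beta * eps) + 1.0)                       # Fermi-Dirac occupations
    chi = np.zeros((norb, norb, norb, norb), dtype=complex)
    for k in range(nk):
        kq = kq_index[k]
        for a in range(nband):
            for b in range(nband):
                w = (nF[kq, b] - nF[k, a]) / (omega + 1j * eta + eps[k, a] - eps[kq, b])
                m1 = np.outer(xi[k, :, a].conj(), xi[k, :, a])      # indices (l1, l4)
                m2 = np.outer(xi[kq, :, b], xi[kq, :, b].conj())    # indices (l2, l3)
                chi += w * np.einsum('ad,bc->abcd', m1, m2)
    return chi / nk
```

Reshaping the resulting tensor to a $16\times16$ matrix, with $(l_1,l_2)$ as the row index and $(l_3,l_4)$ as the column index, gives the object entering the RPA equations below.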
![(color online). Feynman diagrams for the renormalized susceptibilities at the RPA level. \[RPA\_diagram\] ](RPA_diagram.pdf){height="10"}
When interactions turn on, we define the spin ($\chi^{(s)}$) and charge ($\chi^{(c)}$) susceptibility as follow, $$\begin{aligned}
\chi^{(c)l_{1},l_{2}}_{l_{3},l_{4}}\left(\bm{q},\tau\right)\equiv&
\frac{1}{2N}\sum_{\bm{k}_{1},\bm{k}_{2},\sigma_{1},\sigma_{2}}\left<T_{\tau}
C^{\dagger}_{l_{1},\sigma_{1}}(\bm{k}_{1},\tau)
C_{l_{2},\sigma_{1}}(\bm{k}_{1}+\bm{q},\tau)
C^{\dagger}_{l_{3},\sigma_{2}}(\bm{k}_{2}+\bm{q},0)C_{l_{4},\sigma_{2}}
(\bm{k}_{2},0)\right>,
\nonumber\\
\chi^{\left(s^{z}\right)l_{1},l_{2}}_{l_{3},l_{4}}
\left(\bm{q},\tau\right)\equiv&
\frac{1}{2N}\sum_{\bm{k}_{1},\bm{k}_{2},\sigma_{1},\sigma_{2}}\sigma_{1}\sigma_{2}
\left<T_{\tau}C^{\dagger}_{l_{1},\sigma_{1}}(\bm{k}_{1},\tau)
C_{l_{2},\sigma_{1}}(\bm{k}_{1}+\bm{q},\tau)C^{\dagger}_{l_{3},\sigma_{2}}
(\bm{k}_{2}+\bm{q},0)
C_{l_{4},\sigma_{2}}(\bm{k}_{2},0)\right>,\nonumber\\
\chi^{\left(s^{+-}\right)l_{1},l_{2}}_{l_{3},l_{4}}\left(\bm{q},\tau\right)
\equiv&
\frac{1}{N}\sum_{\bm{k}_{1},\bm{k}_{2}}\left<T_{\tau}
C^{\dagger}_{l_{1}\uparrow}(\bm{k}_{1},\tau)
C_{l_{2}\downarrow}(\bm{k}_{1}+\bm{q},\tau)C^{\dagger}_{l_{3}\downarrow}
(\bm{k}_{2}+\bm{q},0)
C_{l_{4}\uparrow}(\bm{k}_{2},0)\right>,\nonumber\\
\chi^{\left(s^{-+}\right)l_{1},l_{2}}_{l_{3},l_{4}}\left(\bm{q},\tau\right)\equiv&
\frac{1}{N}\sum_{\bm{k}_{1},\bm{k}_{2}}\left<T_{\tau}
C^{\dagger}_{l_{1}\downarrow}(\bm{k}_{1},\tau)
C_{l_{2}\uparrow}(\bm{k}_{1}+\bm{q},\tau)C^{\dagger}_{l_{3}\uparrow}
(\bm{k}_{2}+\bm{q},0)
C_{l_{4}\downarrow}(\bm{k}_{2},0)\right>.\end{aligned}$$ Note that in the non-magnetic state we have $\chi^{\left(s^{z}\right)}=\chi^{\left(s^{+-}\right)}=\chi^{\left(s^{-+}\right)}\equiv\chi^{\left(s\right)}$, and when $U=V=J_H=0$ we have $\chi^{(c)}=\chi^{(s)}=\chi^{(0)}$. At the RPA level, the renormalized spin/charge susceptibilities of the system are $$\begin{aligned}
\chi^{(s)}\left(\bm{q},i\nu\right)=&\left[I-\chi^{(0)}
\left(\bm{q},i\nu\right)U^{(s)}\right]^{-1}\chi^{(0)}
\left(\bm{q},i\nu\right),\nonumber\\
\chi^{(c)}\left(\bm{q},i\nu\right)=&\left[I+\chi^{(0)}
\left(\bm{q},i\nu\right)U^{(c)}\right]^{-1}\chi^{(0)}
\left(\bm{q},i\nu\right),\label{RPA_SUS}\end{aligned}$$ where $\chi^{(s,c)}\left(\bm{q},i\nu_{n}\right)$, $\chi^{(0)}\left(\bm{q},i\nu_{n}\right)$ and $U^{(s,c)}$ are treated as $16\times 16$ matrices (the upper or lower two indices are viewed as one number). Labelling the orbitals $\left\{ p_{x}^{A},p_{y}^{A},p_{x}^{B},p_{y}^{B}\right\} $ as $\left\{ 1,2,3,4\right\} $, the nonzero elements of the matrix $U^{(s,c)l_{1}l_{2}}_{l_{3}l_{4}}$ are listed as follows: $$\begin{aligned}
&U^{(s)11}_{11}=U^{(s)22}_{22}=U^{(s)33}_{33}=U^{(s)44}_{44}=U \nonumber\\
&U^{(s)11}_{22}=U^{(s)22}_{11}=U^{(s)33}_{44}=U^{(s)44}_{33}=J_{H} \nonumber\\
&U^{(s)12}_{12}=U^{(s)21}_{21}=U^{(s)34}_{34}=U^{(s)43}_{43}=J_{H} \nonumber\\
&U^{(s)12}_{21}=U^{(s)21}_{12}=U^{(s)34}_{43}=U^{(s)43}_{34}=V\end{aligned}$$
$$\begin{aligned}
&U^{(c)11}_{11}=U^{(c)22}_{22}=U^{(c)33}_{33}=U^{(c)44}_{44}=U \nonumber\\
&U^{(c)11}_{22}=U^{(c)22}_{11}=U^{(c)33}_{44}=U^{(c)44}_{33}=2V-J_{H} \nonumber\\
&U^{(c)12}_{12}=U^{(c)21}_{21}=U^{(c)34}_{34}=U^{(c)43}_{43}=J_{H} \nonumber\\
&U^{(c)12}_{21}=U^{(c)21}_{12}=U^{(c)34}_{43}=U^{(c)43}_{34}=2J_{H}-V\end{aligned}$$
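A minimal sketch of how the interaction matrices above and the RPA-dressed susceptibilities of Eq.(\[RPA\_SUS\]) can be assembled numerically. The orbital labelling follows the text; flattening the index pairs $(l_1,l_2)$ and $(l_3,l_4)$ row-major into a single index $0,\dots,15$ is an assumed convention.

```python
import numpy as np

def interaction_matrices(U, V, JH):
    """Nonzero elements of U^(s) and U^(c) as (4,4,4,4) tensors indexed [l1,l2,l3,l4]."""
    Us = np.zeros((4, 4, 4, 4))
    Uc = np.zeros((4, 4, 4, 4))
    for a, b in [(0, 1), (2, 3)]:                 # (p_x, p_y) on sublattices A and B
        for mu in (a, b):
            Us[mu, mu, mu, mu] = Uc[mu, mu, mu, mu] = U
        Us[a, a, b, b] = Us[b, b, a, a] = JH
        Uc[a, a, b, b] = Uc[b, b, a, a] = 2*V - JH
        Us[a, b, a, b] = Us[b, a, b, a] = JH
        Uc[a, b, a, b] = Uc[b, a, b, a] = JH
        Us[a, b, b, a] = Us[b, a, a, b] = V
        Uc[a, b, b, a] = Uc[b, a, a, b] = 2*JH - V
    return Us, Uc

def rpa_susceptibilities(chi0, Us, Uc):
    """chi^(s) = [I - chi0 U^(s)]^-1 chi0 and chi^(c) = [I + chi0 U^(c)]^-1 chi0 in 16x16 form."""
    chi0_m = chi0.reshape(16, 16)                 # row = (l1,l2), column = (l3,l4)
    Us_m, Uc_m = Us.reshape(16, 16), Uc.reshape(16, 16)
    I = np.eye(16)
    chi_s = np.linalg.solve(I - chi0_m @ Us_m, chi0_m)
    chi_c = np.linalg.solve(I + chi0_m @ Uc_m, chi0_m)
    return chi_s, chi_c
```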
The Feynman diagram of the RPA is shown in Fig.\[RPA\_diagram\]. For repulsive Hubbard interactions, the spin susceptibility is enhanced and the charge susceptibility is suppressed. Note that there is a critical interaction strength $U_c$ which depends on the ratio $J_H/U$. When $U>U_c$, the denominator matrix $I-\chi^{(0)}\left(\bm{q},i\nu\right)U^{(s)}$ in Eq.(\[RPA\_SUS\]) will have zero eigenvalues for some $\bm{q}$ and the renormalized spin susceptibility diverges there, which invalidates the RPA treatment. This divergence of the spin susceptibility for $U>U_c$ implies magnetic order. When $U<U_c$, the short-ranged spin or charge fluctuations mediate Cooper pairing in the system.
Let’s consider a Cooper pair with momentum/orbital indices $(\bm{k'}t,-\bm{k'}s)$, which can be scattered to $(\bm{k}p,-\bm{k}q)$ by exchanging charge or spin fluctuations. At the RPA level, the effective interaction induced by this process is as follows: $$\begin{aligned}
V^{RPA}_{eff}=\frac{1}{N}\sum_{pqst,\bm{k}\bm{k}'}\Gamma^{pq}_{st}(\bm{k},\bm{k}')
c_{p}^{\dagger}(\bm{k})c_{q}^{\dagger}(-\bm{k})c_{s}(-\bm{k}')c_{t}(\bm{k}'),
\label{effective_vertex}\end{aligned}$$ We consider the three processes in Fig.\[RPA\_effective\] which contribute to the effective vertex $\Gamma^{pq}_{st}(\bm{k},\bm{k}')$, where (a) represents the bare interaction vertex, and (b) and (c) represent the two second-order perturbation processes during which spin or charge fluctuations are exchanged between the members of a Cooper pair. In the singlet channel, the effective vertex $\Gamma^{pq}_{st}(\bm{k},\bm{k}')$ is given as follows: $$\begin{aligned}
\Gamma^{pq(s)}_{st}(\bm{k},\bm{k}')=\left(\frac{U^{(c)}+3U^{(s)}}{4}\right)^{pt}_{qs}+
\frac{1}{4}\left[3U^{(s)}\chi^{(s)}\left(\bm{k}-\bm{k}'\right)U^{(s)}-U^{(c)}
\chi^{(c)}\left(\bm{k}-\bm{k}'\right)U^{(c)}\right]^{pt}_{qs}+\nonumber\\
\frac{1}{4}\left[3U^{(s)}\chi^{(s)}\left(\bm{k}+\bm{k}'\right)U^{(s)}-U^{(c)}
\chi^{(c)}\left(\bm{k}+\bm{k}'\right)U^{(c)}\right]^{ps}_{qt},\end{aligned}$$ while in the triplet channel, it is $$\begin{aligned}
\Gamma^{pq(t)}_{st}(\bm{k},\bm{k}')=\left(\frac{U^{(c)}-U^{(s)}}{4}\right)^{pt}_{qs}-
\frac{1}{4}\left[U^{(s)}\chi^{(s)}\left(\bm{k}-\bm{k}'\right)U^{(s)}+U^{(c)}
\chi^{(c)}\left(\bm{k}-\bm{k}'\right)U^{(c)}\right]^{pt}_{qs}+\nonumber\\
\frac{1}{4}\left[U^{(s)}\chi^{(s)}\left(\bm{k}+\bm{k}'\right)U^{(s)}+U^{(c)}
\chi^{(c)}\left(\bm{k}+\bm{k}'\right)U^{(c)}\right]^{ps}_{qt},\end{aligned}$$ Notice that the vertex $\Gamma^{pq}_{st}(\bm{k},\bm{k}')$ has been symmetrized for the singlet case and anti-symmetrized for the triplet case. Generally, we neglect the frequency dependence of $\Gamma$ and approximate it by its zero-frequency value, $\Gamma^{pq}_{st}(\bm{k},\bm{k}')\approx\Gamma^{pq}_{st}(\bm{k},\bm{k}',0)$. Usually, only the real part of $\Gamma$ is adopted [@Scalapino1; @Scalapino2].
Considering only intra-band pairings, we obtain the following effective pairing interaction on the FS, $$\begin{aligned}
V_{eff}=
\frac{1}{N}\sum_{\alpha\beta,\bm{k}\bm{k'}}V^{\alpha\beta}(\bm{k,k'})c_{\alpha}^{\dagger}(\bm{k})
c_{\alpha}^{\dagger}(-\bm{k})c_{\beta}(-\bm{k}')c_{\beta}(\bm{k}'),\label{pairing_interaction}\end{aligned}$$ where $\alpha,\beta=1,\cdots,4$ are band indices and $V^{\alpha\beta}(\bm{k,k'})$ is $$\begin{aligned}
V^{\alpha\beta}(\bm{k,k'})=\sum_{pqst}\Gamma^{pq}_{st}(\bm{k,k'},0)\xi_{p}^{\alpha,*}(\bm{k})
\xi_{q}^{\alpha,*}(-\bm{k})\xi_{s}^{\beta}(-\bm{k'})\xi_{t}^{\beta}(\bm{k'}).\label{effective_potential}\end{aligned}$$ From the effective pairing interaction (\[pairing\_interaction\]), one can obtain the following linearized gap equation [@Scalapino1; @Scalapino2] to determine the $T_c$ and the leading pairing symmetry of the system, $$\begin{aligned}
-\frac{1}{(2\pi)^2}\sum_{\beta}\oint_{FS}
dk'_{\Vert}\frac{V^{\alpha\beta}(\bm{k,k'})}{v^{\beta}_{F}(\bm{k'})}
\Delta_{\beta}(\bm{k'})=\lambda
\Delta_{\alpha}(\bm{k}).\label{eigenvalue_Tc}\end{aligned}$$ This equation can be looked upon as an eigenvalue problem, where the normalized eigenvector $\Delta_{\alpha}(\bm{k})$ represents the relative gap function on the $\alpha$-th FS patches near $T_c$, and the eigenvalue $\lambda$ determines $T_c$ via $T_{c}=E_ce^{-1/\lambda}$, where the cutoff energy $E_c$ is of the order of the bandwidth. The leading pairing symmetry is determined by the largest eigenvalue $\lambda$ of Eq. (\[eigenvalue\_Tc\]).
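The following sketch shows one way to discretize this gap equation; the Fermi-surface patch momenta `fs_k`, arc lengths `dk`, Fermi velocities `vF` and the pairing vertex `V(a, b, k, kp)` are assumed to be supplied by the band-structure and RPA steps above.

```python
import numpy as np

def leading_pairing(V, fs_k, dk, vF):
    """Largest eigenvalue lambda and gap eigenvector of the discretized linearized gap equation."""
    labels = [(b, i) for b in range(len(fs_k)) for i in range(len(fs_k[b]))]
    n = len(labels)
    M = np.zeros((n, n), dtype=complex)
    for r, (a, i) in enumerate(labels):
        for c, (b, j) in enumerate(labels):
            # matrix element of -(1/(2 pi)^2) * V^{ab}(k, k') * dk'_par / v_F^b(k')
            M[r, c] = -V(a, b, fs_k[a][i], fs_k[b][j]) * dk[b][j] / ((2*np.pi)**2 * vF[b][j])
    evals, evecs = np.linalg.eig(M)
    lead = int(np.argmax(evals.real))
    lam = evals[lead].real
    gap = evecs[:, lead]          # relative gap function Delta_alpha(k) on the patches
    return lam, gap, labels       # T_c can then be estimated as T_c = E_c * exp(-1/lam)
```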
Appendix: Classification of the pairing symmetry
===============================================
Let’s start from the effective Hamiltonian obtained from exchanging spin fluctuations: $$\begin{aligned}
H=H_{tb}+\frac{1}{N}\sum_{\bm{k}\bm{k}'\alpha\beta}
V^{\alpha\beta}(\bm{k},\bm{k}')
c^{\dagger}_{\alpha}(\bm{k})
c^{\dagger}_{\alpha}(-\bm{k})
c_{\beta}(-\bm{k}')
c_{\beta}(\bm{k}')\end{aligned}$$ This normal state Hamiltonian should be invariant under the point group $G\equiv\{\hat{P}\}$, where $\hat{P}$ is any operation in the point group. The action of $\hat{P}$ on any electron operator is $$\begin{aligned}
\hat{P}c_{\alpha}(\bm{k})\hat{P}^{\dagger}=c_{\alpha}(\hat{P}\bm{k}).\end{aligned}$$ Note that the band index $\alpha$ will not be changed by the symmetry operation. From the invariance of $H$ under the point group $G$, i.e. $\hat{P}H\hat{P}^{\dagger}=H$, we have $$\begin{aligned}
\label{symmetry}
V^{\alpha\beta}(\hat{P}\bm{k},\hat{P}\bm{k}')
=V^{\alpha\beta}(\bm{k},\bm{k}')\end{aligned}$$
The linearized gap equation $$\begin{aligned}
-\frac{1}{(2\pi)^2}\sum_{\beta}\oint_{FS}
dk'_{\Vert}\frac{V^{\alpha\beta}(\bm{k,k'})}{v^{\beta}_{F}(\bm{k'})}
\Delta_{\beta}(\bm{k'})=\lambda
\Delta_{\alpha}(\bm{k}).\label{eigenvalue_Tc2}\end{aligned}$$ can be rewritten as $$\begin{aligned}
\frac{1}{(2\pi)^2}\sum_{\beta}\iint_{\Delta E}
d^{2}\bm{k}'V^{\alpha\beta}(\bm{k,k'})
\Delta_{\beta}(\bm{k'})=-\lambda \Delta E
\Delta_{\alpha}(\bm{k}).\label{eigenvalue_Tc3}\end{aligned}$$ where the integral $\iint$ is performed within a narrow energy window near the FS, with the width of the window $\Delta E\to 0$. After discretization on the lattice, Eq. (\[eigenvalue\_Tc3\]) can be taken as an eigenvalue problem, with $\lambda$ proportional to the eigenvalue and $\Delta_{\alpha}(\bm{k})$ proportional to the eigenvector.
From Eq. (\[eigenvalue\_Tc3\]) and Eq. (\[symmetry\]), we find that each solution $\Delta_{\alpha}(\bm{k})$ belongs to an irreducible representation of the point group. In Table I, we list all the irreducible representations and the basis functions of the $D_6$ point group of our model in two spatial dimensions, with the pairing symmetry of each basis function marked. There are 5 different pairing symmetries, i.e. $s$-wave, $(p_x,p_y)$ (degenerate $p$-wave), $(d_{x^2-y^2},d_{xy})$ (degenerate $d$-wave), $f_{x\left(x^{2}-3y^{2}\right)}$ ($f$-wave), and $f_{y\left(3x^{2}-y^{2}\right)}$ ($f'$-wave), each of which has a definite parity, either even or odd.
The real material has the $D_3$ point group. In Table II, we list all its irreducible representations and their basis functions in two spatial dimensions. In general, the irreducible representations, and hence the classification of the pairing symmetry, for the $D_6$ and $D_3$ point groups are different. For example, the $s$-wave and $f$-wave belong to different irreducible representations of the $D_6$ point group and thus generally will not mix, but they belong to the same irreducible representation of the $D_3$ point group and thus will generally mix. However, in the presence of SU(2) spin symmetry (i.e. without SOC), the Pauli principle requires the gap function to be either odd or even according to whether the spin state is triplet or singlet. In this case, the odd and even functions in the same irreducible representation of $D_3$ can be distinguished through the spin state and generally will not mix, and therefore the two point groups have the same pairing-symmetry classification, i.e., $s, p, d, f, f'$.
| $D_{6}$ | $E$ | $2C_{6}\left(z\right)$ | $2C_{3}\left(z\right)$ | $C_{2}\left(z\right)$ | $3C_{2}^{'}$ | $3C_{2}^{''}$ | odd functions | even functions |
|---------|------|------|------|------|------|------|----------------------------------------|----------------------------------------|
| $A_{1}$ | $+1$ | $+1$ | $+1$ | $+1$ | $+1$ | $+1$ | | $\left(x^{2}+y^{2}\right)$ $s$-wave |
| $A_{2}$ | $+1$ | $+1$ | $+1$ | $+1$ | $-1$ | $-1$ | | |
| $B_{1}$ | $+1$ | $-1$ | $+1$ | $-1$ | $+1$ | $-1$ | $x\left(x^{2}-3y^{2}\right)$ $f$-wave | |
| $B_{2}$ | $+1$ | $-1$ | $+1$ | $-1$ | $-1$ | $+1$ | $y\left(3x^{2}-y^{2}\right)$ $f'$-wave | |
| $E_{1}$ | $+2$ | $+1$ | $-1$ | $-2$ | $0$ | $0$ | $\left(x,y\right)$ $p$-wave | |
| $E_{2}$ | $+2$ | $-1$ | $-1$ | $+2$ | $0$ | $0$ | | $\left(x^{2}-y^{2},xy\right)$ $d$-wave |

: Character table of the point group $D_{6}$ and the possible superconducting pairing symmetries.
| $D_{3}$ | $E$ | $2C_{3}\left(z\right)$ | $3C_{2}^{'}$ | odd functions | even functions |
|---------|------|------|------|----------------------------------------|----------------------------------------|
| $A_{1}$ | $+1$ | $+1$ | $+1$ | $x\left(x^{2}-3y^{2}\right)$ $f$-wave | $\left(x^{2}+y^{2}\right)$ $s$-wave |
| $A_{2}$ | $+1$ | $+1$ | $-1$ | $y\left(3x^{2}-y^{2}\right)$ $f'$-wave | |
| $E$ | $+2$ | $-1$ | $0$ | $\left(x,y\right)$ $p$-wave | $\left(x^{2}-y^{2},xy\right)$ $d$-wave |

: Character table of the point group $D_{3}$ in the case with spin-$SU(2)$ symmetry and the possible superconducting pairing symmetries.
Appendix: The $d+id$ superconducting state
=========================================
Since the $d_{x^2-y^2}$ and $d_{xy}$ pairing states are degenerate, they would probably mix to lower the energy below the critical temperature $T_c$. If we consider the Hamiltonian with the interaction between spin-$\uparrow$ and spin-$\downarrow$ electrons: $$\begin{aligned}
H=\sum_{\bm{k}\alpha\sigma}\varepsilon^{\alpha}_{\bm{k}}
c^{\dagger}_{\alpha\sigma}(\bm{k})c_{\alpha\sigma}(\bm{k})
+\frac{1}{N}\sum_{\bm{k}\bm{k}'\alpha\beta}V^{\alpha\beta}(\bm{k},\bm{k}')
c^{\dagger}_{\alpha\uparrow}(\bm{k})c^{\dagger}_{\alpha\downarrow}(-\bm{k})
c_{\beta\downarrow}(-\bm{k}')c_{\beta\uparrow}(\bm{k}')\end{aligned}$$ the total mean-field energy is $$\begin{aligned}
\label{ene}
E=&\langle H\rangle
=\sum_{\bm{k}\alpha\sigma}\varepsilon^{\alpha}_{\bm{k}}
\Big\langle c^{\dagger}_{\alpha\sigma}(\bm{k})
c_{\alpha\sigma}(\bm{k})\Big\rangle
+\frac{1}{N}\sum_{\bm{k}\bm{k}'\alpha\beta}
V^{\alpha\beta}(\bm{k},\bm{k}')
\Big\langle c^{\dagger}_{\alpha\uparrow}(\bm{k})
c^{\dagger}_{\alpha\downarrow}(-\bm{k})\Big\rangle
\Big\langle c_{\beta\downarrow}(-\bm{k}')
c_{\beta\uparrow}(\bm{k}')\Big\rangle \nonumber\\
=&\sum_{\bm{k}\alpha}\varepsilon^{\alpha}_{\bm{k}}
\left[1-\frac{\varepsilon^{\alpha}_{\bm{k}}-\mu}
{\sqrt{(\varepsilon^{\alpha}_{\bm{k}}-\mu)^2
+|\Delta^{\alpha}_{\bm{k}}|^2}}\right]+\frac{1}{4N}\sum_{\bm{k}\bm{k}'\alpha\beta}
V^{\alpha\beta}(\bm{k},\bm{k}')\frac{(\Delta^{\alpha}_{\bm{k}})^*}
{\sqrt{(\varepsilon^{\alpha}_{\bm{k}}-\mu)^2
+|\Delta^{\alpha}_{\bm{k}}|^2}}
\frac{\Delta^{\beta}_{\bm{k}'}}
{\sqrt{(\varepsilon^{\beta}_{\bm{k}'}-\mu)^2
+|\Delta^{\beta}_{\bm{k}'}|^2}},\end{aligned}$$ where the chemical potential $\mu$ is determined by the constraint of the average electron number in the superconducting state. If we further set $\Delta^{\alpha}_{\bm{k}}\equiv\frac{1}{N}\sum_{\bm{k}'\beta}
V^{\alpha\beta}(\bm{k},\bm{k}')\Big\langle c_{\beta\downarrow}(-\bm{k}')
c_{\beta\uparrow}(\bm{k}')\Big\rangle=K_1d^{\alpha}_{x^2-y^2}(\bm{k})+(K_2+iK_3)
d^{\alpha}_{xy}(\bm{k})$, where $d^{\alpha}_{x^2-y^2}(\bm{k})$ and $d^{\alpha}_{xy}(\bm{k})$ denote the normalized gap functions of corresponding symmetries, the mixing coefficients $K_1$, $K_2$, and $K_3$ can be determined by the minimization of the total mean-field energy. Our energy minimization gives $K_1=\pm K_3$ and $K_2=0$, which leads to the fully-gapped $d_{x^2-y^2}\pm i d_{xy}$ (abbreviated as $d+id$) pairing state. This mixture of the two $d$-wave pairings satisfies the requirement that the gap nodes should avoid the FS to lower the energy. Similarly, one can verify that the degenerate $p_x$ and $p_y$ pairing states will also mix into the fully-gapped $p_x\pm ip_y$ (abbreviated as $p+ip$) pairing state to lower the energy below $T_c$.
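A brief check (a sketch of the argument, using the fact that the normalized gap functions above are real) of why the $K_2=0$, $K_1=\pm K_3$ solution is fully gapped: with $\Delta^{\alpha}_{\bm{k}}=K_1\left[d^{\alpha}_{x^2-y^2}(\bm{k})\pm i\, d^{\alpha}_{xy}(\bm{k})\right]$ we have $$|\Delta^{\alpha}_{\bm{k}}|^{2}=K_{1}^{2}\left\{\left[d^{\alpha}_{x^{2}-y^{2}}(\bm{k})\right]^{2}+\left[d^{\alpha}_{xy}(\bm{k})\right]^{2}\right\},$$ which vanishes only where both gap functions vanish simultaneously; since their nodal lines generically intersect away from the FS, the gap remains finite everywhere on the FS.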
S. Graser, T. A. Maier, P. J. Hirschfeld and D. J. Scalapino, New Journal of Physics **11**, 025016 (2009).
T. A. Maier, S. Graser, P.J. Hirschfeld and D.J. Scalapino, Phys. Rev. B **83**, 100515(R)(2011).
|
---
abstract: 'A game-theoretic approach for studying power control in multiple-access networks with transmission delay constraints is proposed. A non-cooperative power control game is considered in which each user seeks to choose a transmit power that maximizes its own utility while satisfying the user’s delay requirements. The utility function measures the number of reliable bits transmitted per joule of energy and the user’s delay constraint is modeled as an upper bound on the delay outage probability. The Nash equilibrium for the proposed game is derived, and its existence and uniqueness are proved. Using a large-system analysis, explicit expressions for the utilities achieved at equilibrium are obtained for the matched filter, decorrelating and minimum mean square error multiuser detectors. The effects of delay constraints on the users’ utilities (in bits/Joule) and network capacity (i.e., the maximum number of users that can be supported) are quantified.'
author:
- '\'
title: '[A Non-Cooperative Power Control Game in Delay-Constrained Multiple-Access Networks]{}'
---
Introduction
============
In wireless networks, power control is used for resource allocation and interference management. In multiple-access CDMA systems such as the uplink of cdma2000, the purpose of power control is for each user terminal to transmit enough power so that it can achieve the desired quality of service (QoS) without causing unnecessary interference for other users in the network. Depending on the particular application, QoS can be expressed in terms of throughput, delay, battery life, etc. Since in many practical situations, the users’ terminals are battery-powered, an efficient power management scheme is required to prolong the battery life of the terminals. Hence, power control plays an even more important role in such scenarios.
Consider a multiple-access DS-CDMA network where each user wishes to locally and selfishly choose its transmit power so as to maximize its utility and at the same time satisfy its delay requirements. The strategy chosen by each user affects the performance of other users through multiple-access interference. There are several questions to ask concerning this interaction. First of all, what is a reasonable choice of a utility function that measures energy efficiency and takes into account delay constraints? Secondly, given such a utility function, what strategy should a user choose in order to maximize its utility? If every user in the network selfishly and locally picks its utility-maximizing strategy, will there be a stable state at which no user can unilaterally improve its utility (Nash equilibrium)? If such an equilibrium exists, will it be unique? What will be the effect of delay constraint on the energy efficiency of the network?
Game theory is the natural framework for modeling and studying such a power control problem. Recently, there has been a great deal of interest in applying game theory to resource allocation in wireless networks. Examples of game-theoretic approaches to power control are found in [@GoodmanMandayam00; @JiHuang98; @Saraydar02; @Xiao01; @Zhou01; @Alpcan; @Sung; @Meshkati_TCOMM]. In [@GoodmanMandayam00; @JiHuang98; @Saraydar02; @Xiao01; @Zhou01], power control is modeled as a non-cooperative game in which users choose their transmit powers in order to maximize their utilities. In [@Meshkati_TCOMM], the authors extend this approach to consider a game in which users can choose their uplink receivers as well as their transmit powers. All the power control games proposed so far assume that the traffic is not delay sensitive. Their focus is entirely on the trade-offs between throughput and energy consumption without taking into account any delay constraints. In this work, we propose a non-cooperative power control game that does take into account a transmission delay constraint for each user. Our focus here is on energy efficiency. Our approach allows us to study networks with both delay tolerant and delay sensitive traffic/users and quantify the loss in energy efficiency due to the presence of users with stringent delay constraints.
The organization of the paper is as follows. In Section \[system model\], we present the system model and define the users’ utility function as well as the model used for incorporating delay constraints. The proposed power control game is described in Section \[proposed game\], and the existence and uniqueness of Nash equilibrium for the proposed game is discussed in Section \[Nash equilibrium\]. In Section \[multiclass\], we extend the analysis to multi-class networks and derive explicit expressions for the utilities achieved at Nash equilibrium. Numerical results and conclusions are given in Sections \[Numerical results\] and \[conclusions\], respectively.
System Model {#system model}
============
We consider a synchronous DS-CDMA network with $K$ users and processing gain $N$ (defined as the ratio of symbol duration to chip duration). We assume that all $K$ user terminals transmit to a receiver at a common concentration point, such as a cellular base station or any other network access point. The signal received by the uplink receiver (after chip-matched filtering) sampled at the chip rate over one symbol duration can be expressed as $$\label{eq1}
{\mathbf{r}} = \sum_{k=1}^{K} \sqrt{p_k} h_k \ b_k {\mathbf{s}}_k +
{\mathbf{w}} ,$$ where $p_k$, $h_k$, $b_k$ and ${\mathbf{s}}_k$ are the transmit power, channel gain, transmitted bit and spreading sequence of the $k^{th}$ user, respectively, and $\mathbf{w}$ is the noise vector which is assumed to be Gaussian with mean $\mathbf{0}$ and covariance $\sigma^2 \mathbf{I}$. We assume random spreading sequences for all users, i.e., $ {\mathbf{s}}_k =
\frac{1}{\sqrt{N}}[v_1 ... v_N]^T$, where the $v_i$’s are independent and identically distributed (i.i.d.) random variables taking values in {$-1,+1$} with equal probabilities.
Utility Function
----------------
To pose the power control problem as a non-cooperative game, we first need to define a suitable utility function. It is clear that a higher signal to interference plus noise ratio (SIR) level at the output of the receiver will result in a lower bit error rate and hence higher throughput. However, achieving a high SIR level requires the user terminal to transmit at a high power which in turn results in low battery life. This tradeoff can be quantified (as in [@GoodmanMandayam00]) by defining the utility function of a user to be the ratio of its throughput to its transmit power, i.e., $$\label{eq2}
u_k = \frac{T_k}{p_k} \ .$$ Throughput is the net number of information bits that are transmitted without error per unit time (sometimes referred to as *goodput*). It can be expressed as $$\label{eq3}
T_k = \frac{L}{M} R_k f(\gamma_k) ,$$ where $L$ and $M$ are the number of information bits and the total number of bits in a packet, respectively. $R_k$ and $\gamma_k$ are the transmission rate and the SIR for the $k^{th}$ user, respectively; and $f(\gamma_k)$ is the “efficiency function” which is assumed to be increasing and S-shaped (sigmoidal) with $f(\infty)=1$. We also require that $f(0)=0$ to ensure that $u_k=0$ when $p_k=0$. In general, the efficiency function depends on the modulation, coding and packet size. A more detailed discussion of the efficiency function can be found in [@Meshkati_TCOMM]. Note that for a sigmoidal efficiency function, the utility function in (\[eq2\]) is a quasiconcave function of the user’s transmit power. The throughput $T_k$ in (\[eq3\]) could also be replaced with any increasing concave function such as the Shannon capacity formula as long as we make sure that $u_k = 0$ when $p_k=0$.
Based on (\[eq2\]) and (\[eq3\]), the utility function for user $k$ can be written as $$\label{eq4}
u_k = \frac{L}{M} R \frac{f(\gamma_k)}{p_k}\ .$$ For the sake of simplicity, we have assumed that the transmission rate is the same for all users, i.e., $R_1 = ... = R_K = R$. All the results obtained here can be easily generalized to the case of unequal rates. The utility function in (\[eq4\]), which has units of *bits/Joule*, captures very well the tradeoff between throughput and battery life and is particularly suitable for applications where energy efficiency is crucial.
Delay Constraints {#delay constraint}
-----------------
Let $X$ represent the (random) number of transmissions required for a packet to be received without any errors. The assumption is that if a packet has one or more errors, it will be retransmitted. We also assume that retransmissions are independent from each other. It is clear that the transmission delay for a packet is directly proportional to $X$. Therefore, any constraint on the transmission delay can be equivalently expressed as a constraint on the number of transmissions. Assuming that the packet success rate is given by the efficiency function $f(\gamma)$[^1], the probability that exactly $m$ transmissions are required for the successful transmission of the packet is given by $$\label{eq5}
\textrm{Pr}\{X=m\}= f(\gamma) \left( 1-f(\gamma) \right)^{m-1} ,$$ and, hence, $\textrm{E}\{X\}=\frac{1}{f(\gamma)}$. We model the delay requirements of a particular user (or equivalently traffic type) as a pair $(D,\beta)$, where $$\label{eq6}
\textrm{Pr}\{X\leq D\}\geq \beta .$$ In other words, we would like the number of transmissions to be at most $D$ with a probability larger than or equal to $\beta$. For example, $(2,0.9)$, i.e., $D=2$ and $\beta=0.9$, implies that 90% of the time we need at most two transmissions to successfully receive a packet. Note that (\[eq6\]) can equivalently be represented as an upper bound on the delay outage probability, i.e.,
$$P_{delay\ outage}\triangleq \textrm{Pr}\{X > D\}\leq 1-\beta .$$
Based on (\[eq5\]), the delay constraint in (\[eq6\]) can be expressed as $$\sum_{m=1}^D f(\gamma) \left(1-f(\gamma)\right)^{m-1} \geq
\beta .$$ Since the left-hand side is a geometric sum equal to $1-\left(1-f(\gamma)\right)^D$, this is equivalent to $$\label{eq7}
f(\gamma) \geq \eta(D,\beta) ,$$ where $$\label{eq7b}
\eta(D,\beta)=1-(1-\beta)^{\frac{1}{D}} .$$ Here, we have explicitly shown that $\eta$ is a function of $D$ and $\beta$. Since $f(\gamma)$ is an increasing function of $\gamma$, we can equivalently express (\[eq7\]) as $$\label{eq8}
\gamma \geq \tilde{\gamma} ,$$ where $$\label{eq9}
\tilde{\gamma}= f^{-1} \left( \eta (D,\beta) \right).$$ Therefore, the delay constraint in (\[eq6\]) translates into a lower bound on the SIR. Since different users could have different delay requirements, $\tilde{\gamma}$ is user dependent. We make this explicit by writing $$\label{eq10}
\tilde{\gamma}_k= f^{-1} \left( \eta_k \right) ,$$ where $\eta_k=1-(1-\beta_k)^{\frac{1}{D_k}}$. A more stringent delay requirement, i.e., a smaller $D$ and/or a larger $\beta$, will result in a higher value for $\tilde{\gamma}$. Without loss of generality, we have assumed that all the users in the network have the same efficiency function. It is straightforward to relax this assumption.
Power Control Game with Delay Constraints {#proposed game}
=========================================
We propose a power control game in which each user decides how much power to transmit in order to maximize its own utility and at the same time satisfy its delay requirements. We have shown in Section \[delay constraint\] that the delay requirements of a user translate into a lower bound on the user’s output SIR. Therefore, each user will seek to maximize its utility while satisfying its SIR requirement. This can be captured by defining a *delay-constrained* utility for user $k$ as $$\label{eq11}
\tilde{u}_k = \left\{\begin{array}{ll}
u_k & \ \ \textrm{if} \ \ \gamma_k \geq \tilde{\gamma}_k \\
0 & \ \ \textrm{if} \ \ \gamma_k < \tilde{\gamma}_k \\
\end{array}\right. ,$$ where $u_k$ and $\tilde{\gamma}_k$ are given by (\[eq4\]) and (\[eq10\]), respectively.
Let $\tilde{G}=[{\mathcal{K}}, \{A_k \}, \{\tilde{u}_k \}]$ denote the proposed non-cooperative game where ${\mathcal{K}}=\{1, ... ,
K \}$, and $A_k=[0,P_{max}]$, which is the strategy set for the $k^{th}$ user. Here, $P_{max}$ is the maximum allowed power for transmission. We assume that only those users whose delay requirements can be met are admitted into the network. For example, for the conventional matched filter, this translates into having $$\sum_{k=1}^K \frac{1}{1+\frac{N}{\tilde{\gamma}_k}} <1
.$$ This assumption makes sense because admitting a user that cannot meet its delay requirement only causes unnecessary interference for other users.
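A one-line sketch of this matched-filter admissibility check, assuming the SIR targets $\tilde{\gamma}_k$ have already been computed as above.

```python
def admissible_mf(gamma_tildes, N):
    # sum_k 1 / (1 + N / gamma_tilde_k) < 1 for the conventional matched filter
    return sum(1.0 / (1.0 + N / g) for g in gamma_tildes) < 1.0
```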
The resulting non-cooperative game can be expressed as the following maximization problem: $$\label{eq12}
\max_{p_k} \ \tilde{u}_k \ \textrm{for} \ \ k=1,...,K ,$$ where the $p_k$’s are constrained to be non-negative. The above maximization can equivalently be written as $$\label{eq13}
\max_{p_k} \ u_k \ \ \textrm{subject to} \ \ \gamma_k \geq \tilde{\gamma}_k \ \ \textrm{for} \ \
k=1,...,K.$$ Let us first solve the above maximization by ignoring the constraints on SIR. For all linear receivers, we have $$\label{eq14}
\frac{\partial \gamma_k}{\partial p_k} = \frac{\gamma_k}{p_k} \ .$$ Taking the derivative of $u_k$ with respect to $p_k$ and taking advantage of (\[eq14\]), we obtain $$\frac{\partial
u_k}{\partial p_k}= \frac{f'(\gamma_k)}{p_k}\frac{\partial
\gamma_k}{\partial p_k} - \frac{f(\gamma_k)}{p_k^2}=\frac{\gamma_k
f'(\gamma_k) - f(\gamma_k)}{p_k^2} .$$ Therefore, the unconstrained utility function for user $k$ is maximized when the user’s output SIR is equal to $\gamma^*$, where $\gamma^*$ is the (positive) solution to $$\label{eq15b}
f(\gamma) = \gamma \ f'(\gamma) .$$ It can be shown that for a sigmoidal efficiency function, $\gamma^*$ always exists and is unique. In addition, for all $\gamma_k<\gamma^*$, $u_k$ is increasing in $p_k$ and for all $\gamma_k>\gamma^*$, $u_k$ is decreasing in $p_k$ [@Rod03b]. Therefore, $\tilde{u}_k$ is maximized when user $k$ transmits at a power level that achieves $\tilde{\gamma}_k^*$ at the output of the uplink receiver, where $$\label{eq15}
\tilde{\gamma}_k^*= \max\{\tilde{\gamma}_k,\gamma^*\} .$$
In the next section, we investigate the existence and uniqueness of Nash equilibrium for our proposed game.
Nash Equilibrium for the Proposed Game {#Nash equilibrium}
======================================
The Nash equilibrium for the proposed game is a set of strategies (power levels) for which no user can unilaterally improve its own (delay-constrained) utility function. We now state the following proposition.
\[prop1\] The Nash equilibrium for the non-cooperative game $\tilde{G}$ is given by $\tilde{p}_k^*=\min \{p_k^*, P_{max}\}$, for $k=1,
\cdots, K$, where $p_k^*$ is the transmit power that results in an SIR equal to $\tilde{\gamma}_k^*$ at the output of the receiver, with $\tilde{\gamma}_k^*= \max\{\tilde{\gamma}_k,\gamma^*\}$. Furthermore, this equilibrium is unique.
[Proof:]{} Based on the arguments presented in Section \[proposed game\], $\tilde{u}_k$ is maximized when the transmit power $p_k$ is such that $\gamma_k=\tilde{\gamma}_k^*=
\max\{\tilde{\gamma}_k,\gamma^*\}$. If $\tilde{\gamma}_k$ cannot be achieved, the user must transmit at maximum power level to maximize its utility. Let us define $\tilde{p}_{k}$ as the power level for which the output SIR for user $k$ is equal to $\tilde{\gamma}_k$. Since user $k$ is admitted into the network only if it can meet its delay requirements, we have $\tilde{p}_{k}
\leq P_{max}$. In addition, because $\tilde{u}_k=0$ for $p_k<\tilde{p}_{k}$, there is no incentive for user $k$ to transmit at a power level smaller than $\tilde{p}_{k}$. Therefore, we can restrict the set of strategies for user $k$ to $\left[\tilde{p}_{k} , P_{max}\right]$. In this interval, $\tilde{u}_k=u_k$ and hence the utility function is continuous and quasiconcave. This guarantees existence of a Nash equilibrium for the proposed power control game.
Furthermore, for a sigmoidal efficiency function, $\gamma^*$, which is the (positive) solution of $f(\gamma) = \gamma \
f'(\gamma)$, is unique and as a result $\tilde{\gamma}_k^*$ is unique for $k=1,2,...,K$. Because of this and the one-to-one correspondence between the transmit power and the output SIR, the Nash equilibrium is unique.
The above proposition suggests that at Nash equilibrium, the output SIR for user $k$ is $\tilde{\gamma}^*_k$, where $\tilde{\gamma}^*_k$ depends on the efficiency function through $\gamma^*$ as well as user $k$’s delay constraint through $\tilde{\gamma}_k$. Note that this result does not depend on the choice of the receiver and is valid for all linear receivers including the matched filter, the decorrelator and the (linear) minimum mean square error (MMSE) detector.
Multi-class Networks {#multiclass}
====================
Let us now consider a network with $C$ classes of users. The assumption is that all the users in the same class have the same delay requirements characterized by the corresponding $D$ and $\beta$. Based on Proposition \[prop1\], at Nash equilibrium, all the users in class $c$ will have the same output SIR, $\tilde{\gamma}^{* (c)}= \max\{\tilde{\gamma}^{(c)},\gamma^*\}$, where $\tilde{\gamma}^{(c)}=f^{-1} \left( \eta^{(c)} \right)$. Here, $\eta^{(c)}$ depends on the delay requirements of class $c$, namely $D^{(c)}$ and $\beta^{(c)}$, through (\[eq7b\]). The goal is to quantify the effect of delay constraints on the energy efficiency of the network or equivalently on the users’ utilities.
In order to obtain explicit expressions for the utilities achieved at equilibrium, we use a large-system analysis similar to the one presented in [@TseHanly99] and [@Comaniciu03]. We consider the asymptotic case in which $K, N \rightarrow \infty $ and $\frac{K}{N} \rightarrow \alpha < \infty$. This allows us to write SIR expressions that are independent of the spreading sequences of the users. Let $K^{(c)}$ be the number of users in class $c$, and define $\alpha^{(c)}=\lim_{K,N\rightarrow \infty}
\frac{K^{(c)}}{N}$. Therefore, we have $\sum_{c=1}^{C}
\alpha^{(c)} = \alpha$.
It can be shown that for the matched filter (MF), the decorrelator (DE), and the MMSE detector, the minimum power required by user $k$ in class $c$ to achieve an output SIR equal to $\tilde{\gamma}^{* (c)}$ is given by the following equations: Note that we have implicitly assumed that $P_{max}$ is sufficiently large so that the target SIRs (i.e., $\tilde{\gamma}^{* (c)}$’s) can be achieved by all users. Furthermore, since $\tilde{\gamma}^{* (c)} \geq
\tilde{\gamma}^{(c)}$ for $c=1,\cdots, C$, we have $\tilde{u}_k =
u_k = \frac{L}{M} R \frac{f(\tilde{\gamma}^{* (c)})}{p_k}$. Therefore, for the matched filter, the decorrelator, and the MMSE detector, the utilities achieved at the Nash equilibrium are given by Note that, based on the above equations, we have ${\tilde{u}_k^{MMSE}>\tilde{u}_k^{DE}>\tilde{u}_k^{MF}}$. This means that the MMSE receiver achieves the highest utility as compared to the decorrelator and the matched filter. Also, the network capacity (i.e., the number of users that can be admitted into the network) is the highest when the MMSE detector is used. For the specific case of no delay constraints, $\tilde{\gamma}^{*
(c)}=\gamma^*$ for all $c$ and the above expressions reduce to their unconstrained counterparts. Comparing the delay-constrained utilities with the unconstrained ones, we observe that the presence of users with stringent delay requirements results not only in a reduction in the utilities of those users but also in a reduction in the utilities of other users in the network. A stringent delay requirement results in an increase in the user’s target SIR (remember $ \tilde{\gamma}_k^*=
\max\{\tilde{\gamma}_k,\gamma^*\}$). Since $\frac{f(\gamma)}{\gamma}$ is maximum when $\gamma=\gamma^*$, a target SIR larger than $\gamma^*$ results in a reduction in the utility of the corresponding user. In addition, because of the higher target SIR for this user, other users in the network experience a higher level of interference and hence are forced to transmit at a higher power which in turn results in a reduction in their utilities (except for the decorrelator, in which case the multiple-access interference is completely removed). Also, since $
\tilde{\gamma}_k^*\geq \gamma^*$ and $\sum_{c=1}^{C}
\alpha^{(c)}=\alpha$, the presence of delay-constrained users causes a reduction in the system capacity (again, except for the decorrelator). Through these expressions, we have quantified the loss in utility (in bits/Joule) and in network capacity due to users’ delay constraints for the matched filter, the decorrelator and the MMSE receiver. The sensitivity of the loss to the delay parameters (i.e., $D$ and $\beta$) depends on the efficiency function, $f(\gamma)$.
Numerical Results {#Numerical results}
=================
Let us consider the uplink of a DS-CDMA system with processing gain 100. We assume that each packet contains 100 bits of information and no overhead (i.e., $L=M=100$). The transmission rate, $R$, is $100Kbps$ and the thermal noise power, $\sigma^2$, is $5\times 10^{-16}Watts$. A useful example for the efficiency function is $f(\gamma)= (1- e^{-\gamma})^M$. This serves as an approximation to the packet success rate that is very reasonable for moderate to large values of $M$ [@Saraydar02]. We use this efficiency function for our simulations. Using this, with $M=100$, the solution to (\[eq15b\]) is $\gamma^*=6.48 = 8.1dB$.
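As a quick numerical check of the quoted value (a sketch, not part of the original simulations): for this efficiency function the condition (\[eq15b\]) simplifies to $e^{\gamma}=1+M\gamma$, whose positive solution can be found with a standard root finder.

```python
import numpy as np
from scipy.optimize import brentq

M = 100
# f(g) = g f'(g) with f(g) = (1 - exp(-g))^M reduces to exp(g) = 1 + M*g
gamma_star = brentq(lambda g: np.exp(g) - 1.0 - M * g, 1.0, 20.0)
print(gamma_star, 10.0 * np.log10(gamma_star))   # ~6.48, i.e. ~8.1 dB
```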
Fig. \[fig1\] shows the target SIR as a function of $\beta$ for $D=1,2,$ and 3. It is observed that, as expected, a more stringent delay requirement (i.e., a higher $\beta$ and/or a lower $D$) results in a higher target SIR.
We now consider a network where the users can be divided into two classes: delay sensitive (class $A$) and delay tolerant (class $B$). For users in class $A$, we choose $D_A=1$ and $\beta_A=0.99$ (i.e., delay sensitive). For users in class $B$, we let $D_B=3$ and $\beta_B=0.90$ (i.e., delay tolerant). Based on these choices, $\tilde{\gamma}^*_A=9.6 dB$ and $\tilde{\gamma}^*_B=\gamma^*=8.1
dB$. Without loss of generality and to keep the comparison fair, we also assume that all the users are 100 meters away from the uplink receiver. The system load is assumed to be $\alpha$ (i.e., $\frac{K}{N}=\alpha$) and we let $\alpha_A$ and $\alpha_B$ represent the load corresponding to class $A$ and class $B$, respectively, with ${\alpha_A + \alpha_B =\alpha}$.
We first consider a lightly loaded network with $\alpha=0.1$ (see Fig. \[fig2\]). To demonstrate the performance loss due to the presence of users with stringent delay requirements (i.e., class $A$), we plot $\frac{u_A}{u}$ and $\frac{u_B}{u}$ as a function of the fraction of the load corresponding to class $A$ users (i.e., $\frac{\alpha_A}{\alpha}$). Here, $u_A$ and $u_B$ are the utilities of users in class $A$ and class $B$, respectively, and $u$ represents the utility of the users if they all had loose delay requirements which means $\tilde{\gamma}^*_k=\gamma^*$ for all $k$. Fig. \[fig2\] shows the loss for the matched filter, the decorrelator, and the MMSE detector. We observe from the figure that for the matched filter both classes of users suffer significantly due to the presence of delay sensitive traffic. For example, when half of the users are delay sensitive, the utilities achieved by class $A$ and class $B$ users are, respectively, 50% and 60% of the utilities for the case of no delay constraints. For the decorrelator, only class $A$ users suffer and the reduction in utility is smaller than that of the matched filter. For the MMSE detector, the reduction in utility for class $A$ users is similar to that of the decorrelator, and the reduction in utility for class $B$ is negligible.
We repeat the experiment for a highly loaded network with $\alpha=0.9$ (see Fig. \[fig3\]). Since the matched filter cannot handle such a significant load, we have shown the plots for the decorrelator and MMSE detector only. We observe from Fig. \[fig3\] that because of the higher system load, the reduction in the utilities is more significant for the MMSE detector compared to the case of $\alpha=0.1$. It should be noted that for the decorrelator the reduction in utility of class $A$ users is independent of the system load. This is because the decorrelator completely removes the multiple-access interference.
It should be further noted that in Figs. \[fig2\] and \[fig3\] we have only plotted the ratio of the utilities (not the actual values). As discussed in Section \[multiclass\], the achieved utilities for the MMSE detector are larger than those of the decorrelator and the matched filter.
Conclusions
===========
We have proposed a game-theoretic approach for studying power control in multiple-access networks with (transmission) delay constraints. We have considered a non-cooperative game where each user seeks to choose a transmit power that maximizes its own utility while satisfying the user’s delay requirements. The utility function measures the number of reliable bits transmitted per joule of energy. We have modeled the delay constraint as an upper bound on the delay outage probability. We have derived the Nash equilibrium for the proposed game and have shown that it is unique. The results are applicable to all linear receivers. In addition, we have used a large-system analysis to derive explicit expressions for the utilities achieved at equilibrium for the matched filter, decorrelator and MMSE detector. The reductions in the users’ utilities (in bits/Joule) and network capacity due to the presence of users with stringent delay constraints have been quantified.
D. J. Goodman and N. B. Mandayam, “Power control for wireless data,” [ *IEEE Personal Communications*]{}, vol. 7, pp. 48–54, April 2000.
H. Ji and C.-Y. Huang, “Non-cooperative uplink power control in cellular radio systems,” [*Wireless Networks*]{}, vol. 4, pp. 233–240, April 1998.
C. U. Saraydar, N. B. Mandayam, and D. J. Goodman, “Efficient power control via pricing in wireless data networks,” [*IEEE Transactions on Communications*]{}, vol. 50, pp. 291–303, February 2002.
M. Xiao, N. B. Shroff, and E. K. P. Chong, “Utility-based power control in cellular wireless systems,” [*Proceedings of the Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM)*]{}, pp. 412–421, AK, USA, April 2001.
C. Zhou, M. L. Honig, and S. Jordan, “Two-cell power allocation for wireless data based on pricing,” [*Proceedings of the $39^{th}$ Annual Allerton Conference on Communication, Control, and Computing*]{}, Monticello, IL, USA, October 2001.
T. Alpcan, T. Basar, R. Srikant, and E. Altman, “[CDMA]{} uplink power control as a noncooperative game,” [*Proceedings of the $40^{th}$ [IEEE]{} Conference on Decision and Control*]{}, pp. 197–202, Orlando, FL, USA, December 2001.
C. W. Sung and W. S. Wong, “A noncooperative power control game for multirate [CDMA]{} data networks,” [*IEEE Transactions on Wireless Communications*]{}, vol. 2, pp. 186–194, January 2003.
F. Meshkati, H. V. Poor, S. C. Schwartz, and N. B. Mandayam, “A utility-based approach to power control and receiver design in wireless data networks.” To appear in [*[IEEE Transactions on Communications]{}*]{}.
V. Rodriguez, “An analytical foundation for resource management in wireless communication,” [*Proceedings of the [IEEE]{} Global Telecommunications Conference*]{}, pp. 898–902, San Francisco, CA, USA, December 2003.
D. N. C. Tse and S. V. Hanly, “Linear multiuser receivers: Effective interference, effective bandwidth and user capacity,” [*IEEE Transactions on Information Theory*]{}, vol. 45, pp. 641–657, March 1999.
C. Comaniciu and H. V. Poor, “Jointly optimal power and admission control for delay sensitive traffic in [CDMA]{} networks with [LMMSE]{} receivers,” [ *IEEE Transactions on Signal Processing, Special Issue on Signal Processing in Networking*]{}, vol. 51, pp. 2031–2042, August 2003.
[^1]: This assumption is valid in many practical systems (see [@Meshkati_TCOMM] for further details).
|
---
abstract: 'Deep learning approaches to cyclone intensity estimation have recently shown promising results. However, suffering from the extreme scarcity of cyclone data at specific intensities, most existing deep learning methods fail to achieve satisfactory performance on cyclone intensity estimation, especially on classes with few instances. To avoid the degradation of recognition performance caused by scarce samples, we propose a context-aware CycleGAN which learns the latent evolution features between adjacent cyclone intensities and synthesizes CNN features of classes lacking samples from unpaired source classes. Specifically, our approach synthesizes features conditioned on the learned evolution features, without requiring extra information. Experimental results under several evaluation methods show the effectiveness of our approach, which can even predict unseen classes.'
address: |
School of Information and Communication Engineering,\
Beijing University of Post and Telecommunications, Beijing, China
bibliography:
- 'arixv\_cyclone.bib'
title: 'CYCLONE INTENSITY ESTIMATE WITH CONTEXT-AWARE CYCLEGAN'
---
context-aware CycleGAN, cyclone intensity estimation, feature generation
Introduction {#sec:intro}
============
Cyclone intensity estimation is an important task in meteorology for predicting the destructiveness of a cyclone. The intensity of a cyclone, which is defined as the maximum wind speed near the cyclone center, is the most critical parameter of a cyclone [@cyclone; @CycloneClassify]. The main assumption of estimation methods is that cyclones with similar intensity tend to have a similar pattern [@cyclone]. Early estimation approaches rely on human-constructed cyclone features, which are sparse and subjectively biased [@cyclone]. With the remarkable progress of deep learning, using Convolutional Neural Networks (CNN) to estimate the intensity of cyclones [@CycloneClassify; @CycloneRotation] has attracted increasing attention.
\[fig:example\]
Existing deep learning approaches for cyclone intensity estimation can be roughly split into two categories: classification based methods [@CycloneClassify] and regression based methods [@CycloneRotation]. Classification approaches estimate cyclone intensities by treating each intensity label as an independent fixed class and use a cross-entropy loss to optimize the model. Regression methods estimate exact intensity values of cyclones by using mean squared error (MSE) as a loss function. However, a common intrinsic problem in existing approaches is that they all ignore the negative effects of the specific cyclone data distribution, as shown in Fig 1 (b), where some cyclone classes contain only a few instances.
![image](WithCyc.jpg){width="80.00000%" height="4.6cm"}
\[fig:pipline\]
A natural approach to address this problem is to synthesize the required samples to supplement the training set. Recently, generating training data with generative adversarial networks (GAN) has attracted increasing attention due to the ability to generate samples conditioned on specific attributes or categories, e.g. synthesizing disgust or sad emotion images from the neutral class with Cycle-Consistent Adversarial Networks (CycleGAN) [@Emotion]. Most existing approaches [@featureGeneration; @featureGenerationCycle] rely on extra attribute information to train the GAN model and generate samples. However, in this work, each cyclone sample only has an intensity label and some cyclone classes are extremely short of samples, which makes existing methods perform poorly. Evolution features, the difference of features between adjacent cyclone classes, have a similar pattern on each fixed cyclone intensity interval. As illustrated in Fig.1 (a), the highlighted points are evolution features between images of adjacent cyclone intensities, which are located on the eye of the cyclones and on the border between the cyclones and the sea.
In this paper, we propose a context-aware CycleGAN to synthesize CNN features of cyclone classes lacking samples in the absence of extra information. In particular, the generator of the context-aware CycleGAN is modified to learn cyclone evolution features conditioned on a given context intensity interval, which is the difference in intensity between the two samples involved in training. Moreover, the generator is regularized by a classification loss to regulate the intensity characteristic of the synthetic features, and the adversarial loss is improved with the Wasserstein distance [@WGAN] for training stability. Based on the learned evolution features, our method is able to synthesize features of any cyclone intensity given a contextual cyclone sample.
We summarize our contributions as follows:
- Propose the concept of evolution features which focus on the difference of features between adjacent classes instead of normal features of classes.
- Improve CycleGAN to synthesize features under the constraint of a context intensity interval where a classification loss is optimized to regulate cyclone intensity.
Our approach {#sec:format}
============
Different from conventional generative methods, our method is suitable for synthesizing features for classes lacking samples when a context dependence exists between classes, without extra information. Specific to cyclone intensity estimation, the context refers to the interval between adjacent cyclone intensities. Therefore, the key of our model is the ability to generate the required CNN features from unpaired source classes by relying on the evolution features of a fixed context intensity interval, as shown in Fig 2.
We begin by defining the problem of interest. Let $X={(f_x,s_x,c_{x\rightarrow y})}$ and $Y={(f_y,s_y,c_{y\rightarrow x})}$, where $X$ and $Y$ are the source and target classes, $f\in \mathbb{R}^{d_x}$ denotes the CNN features of a cyclone image extracted by the CNN, $s$ denotes the intensity label in $S=\{s_1,...,s_K\}$ consisting of $K$ discrete speeds, and $c_{x\rightarrow y}$ is the context attribute vector of the label change from $s_x$ to $s_y$, and similarly for $c_{y\rightarrow x}$. To improve the generalization ability of our method, the CNN features $f$ are concatenated with random noise, as in [@featureGeneration; @featureGenerationCycle].
Context-aware CycleGAN {#sec:generator}
----------------------
In general, GAN [@WGAN] consists of a generative network ($G$) and a discriminative network ($D$) that are iteratively trained in a two-player minimax game manner. CycleGAN is used to realize the unpaired translation between source classes ($X$) and target classes ($Y$). There are two generators, $G_{X\rightarrow Y}$ and $G_{Y \rightarrow X}$, in CycleGAN. $G_{X\rightarrow Y}$ learns a mapping $X\rightarrow Y$ and $G_{Y \rightarrow X}$ learns a mapping $Y \rightarrow X$ simultaneously [@cycleGAN].
**Context-aware Transform.** Learning evolution features from adjacent cyclone classes is critical to our method. Hence, our generator consists of a single hidden layer and a context-aware transform layer conditioned on the context $c$, which focuses on the evolution features.
Formally, the context-aware transform layer in the generator transforms the input $f_i$ to the output $f_o$ by relying on the context vector $c$. The transformation is denoted as $f_o=g(f,c)=r(W^c f+b)$, where the $g()$ is the function of transform layer, $r()$ is the relu function, $b$ is the bias of the layer and $W^c$ is the weight parameter. In particular, the weight parameter $W^c$ is designed as the summation of the following two terms: $$W^c=\overline{W}_p+V_pE(c),
\label{weight}$$ where $\overline{W}_p$ is the independent weight of the context attributes, $E(c)$ is the desired shape context representation, which is turned from the context intensity interval by using a function denoted as $E(·)$, and $V_p$ transforms the context attributes to the weight of auxiliary transform layer. All of parameters in are learnt during the training stage [@contextTranfer; @context-aware]. Moreover, the second term in is an auxiliary transform layer generated from the context attribute $c$, which focuses on learning evolution features.
**Objective Function.** Using only evolution features does not guarantee a label-preserving transformation. To generate features of the expected intensity, the classification loss of a pretrained classifier is minimized as a regularization term for the generator. Besides, adversarial losses in CycleGAN are applied to iteratively train the generators and discriminators [@cyclone].
In particular, the regularized classification loss $L_{cls}$ is defined as: $$\mathcal {L}_{cls}(s_y, \widetilde{f}_y;\theta)= -E_{\widetilde{f}_y\sim P_{\widetilde{f}_y}}[log P(s_y|\widetilde{f}_y;\theta)],
\vspace{-8pt}
\label{classification_loss}$$ where $\widetilde{f}_y$ denotes the features synthesized by $G_{X\rightarrow Y}$ from the real features $f_x$, $s_y$ is the class label of $\widetilde{f}_y$ and $P(s_y|\widetilde{f}_y;\theta)$ denotes the probability of the synthetic features being predicted as their true label $s_y$, which is the output of a linear softmax with the parameter $\theta$ pretrained on real features. $L_{cls}(s_x, \widetilde{f}_x;\theta)$ is analogously defined with the same parameter $\theta$. Hence, the full objective is: $$\begin{aligned}
\mathcal{L}(G_{X\rightarrow Y}, &G_{Y\rightarrow X}, D_X, D_Y) =\mathcal{L}_{GAN}(G_{X\rightarrow Y},D_Y)\\+
&\mathcal{L}_{GAN}(G_{Y\rightarrow X},D_X)+
\lambda_1 \mathcal{L}_{cyc}(G_{X\rightarrow Y}, G_{Y\rightarrow X})\\+
&\beta(\mathcal{L}_{cls}(s_y, \widetilde{f}_y;\theta)+\mathcal {L}_{cls}(s_x, \widetilde{f}_x;\theta)),
\vspace{-8pt}
\label{all_loss}
\end{aligned}$$ where $\beta$ is a balance term to weight the classification loss. The first two terms are the adversarial losses and the third term is the cycle-consistency loss.
In order to improve the stability of training, the adversarial loss $L_{GAN}(G_{X\rightarrow Y}, D_Y)$ and $L_{GAN}(G_{Y\rightarrow X},D_X)$ are the Wasserstein loss of WGAN [@WGAN]. Besides, the generators rely on the context attributes which can be considered as a special situation of conditional GAN [@conditionalGAN; @contionalCycleGAN]. Therefore, the context intensity interval is also fed into the discriminator. The conditional adversarial loss with Wasserstein distance [@WGAN; @WGAN-GP] are defined as: $$\begin{scriptsize}
\begin{aligned}
\mathcal {L}_{GAN}(G_{X\rightarrow Y}, &D_Y)=E[D_Y(f_y,c_{x\rightarrow y})]- E[D_Y(\widetilde{f}_y, c_{x\rightarrow y})]\\ -&\lambda_2 E[(\|{\nabla}_{\widehat{f}_{y}}D_Y(\hat{f}_{y}, c_{x\rightarrow y})\|_2-1)^2],
\end{aligned}
\end{scriptsize}
\label{generate loss}$$ where $\widetilde{f}_y=G_{X\rightarrow Y}(f_x, c_{x\rightarrow y})$, $\widehat{f}_{y}=\alpha f_y+(1-\alpha)\widetilde{f}_y$ with $\alpha \sim U(0, 1)$, and $\lambda_2$ is the penalty coefficient. The first two terms approximate the Wasserstein distance and the third term is the gradient penalty [@WGAN-GP]. And $L_{GAN}(G_{Y\rightarrow X}, D_X)$ is defined analogously to equation (4).
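A sketch of the corresponding critic update (assuming two-dimensional feature tensors and that the context vector is passed to the critic together with the features), implementing the Wasserstein loss with gradient penalty of equation (4):

```python
import torch

def critic_loss(D_Y, f_real, f_fake, c, lambda2=10.0):
    f_fake = f_fake.detach()                       # the critic update does not back-prop into G
    alpha = torch.rand(f_real.size(0), 1, device=f_real.device)
    f_hat = (alpha * f_real + (1.0 - alpha) * f_fake).requires_grad_(True)
    d_hat = D_Y(f_hat, c)
    grads = torch.autograd.grad(d_hat.sum(), f_hat, create_graph=True)[0]
    gp = ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
    # the critic maximizes E[D(real)] - E[D(fake)] - lambda2 * GP, so we minimize its negative
    return -(D_Y(f_real, c).mean() - D_Y(f_fake, c).mean() - lambda2 * gp)
```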
Synthesizing & Classification {#sec:classification}
---------------------------
Based on the structure and loss function described above, the generator is trained to learn the evolution features of adjacent cyclone classes. With a real cyclone sample as input, the trained context-aware CycleGAN relies on the evolution features, which are learned from the fixed context interval, to synthesize CNN features of the cyclone whose intensity is adjacent to that of the input sample. Moreover, the generated features are used together with the real features to train a classifier. The standard softmax classifier parameters $\theta_{cls}^*$ are obtained by minimizing the cross-entropy loss, and the final prediction is defined as follows:
| Methods | f1(%) | MAE | RMSE |
|--------------------------------------------|-----------|----------|----------|
| Pradhan et al. (2018) [@CycloneClassify]\* | 56.63 | 3.05 | 6.21 |
| Chen et al. (2018) [@CycloneRotation]\* | 51.27 | 4.27 | 8.30 |
| AlexNet | 60.30 | 2.61 | 5.94 |
| ResNet | 57.58 | **2.37** | **4.97** |
| Ours | **61.41** | 2.49 | 5.70 |

: The estimation results of different methods. \* denotes the results of our implementation of the original models.[]{data-label="tab:CNN"}
$$y^*= \arg\max_y P(y|f_x, \theta_{cls}^*),
\vspace{-3pt}$$ where $f_x$ denotes the features of a testing sample and $y^*$ is the final predicted intensity.
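A small sketch (hypothetical helper, not from the paper) of this final step: the linear softmax classifier is trained on the union of real and synthesized features with Adam, as in the implementation details below, and prediction follows the argmax rule above.

```python
import torch
import torch.nn as nn

def train_and_predict(f_real, y_real, f_syn, y_syn, f_test, n_classes, epochs=100):
    # y_real / y_syn are integer class indices; the feature tensors are 2-D floats
    X, y = torch.cat([f_real, f_syn]), torch.cat([y_real, y_syn])
    clf = nn.Linear(X.size(1), n_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(clf(X), y).backward()
        opt.step()
    return clf(f_test).argmax(dim=1)   # y* = argmax_y P(y | f_x, theta*)
```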
Experiments {#sec:pagestyle}
===========
In this section, we first introduce the details of our implementation, then describe the experimental details, including the datasets, evaluation criteria and features used in our experiments. Finally, we show the results of our approach compared with other estimation models, especially on classes with few training instances.
Implementation Details
----------------------
Both discriminators $D_X$ and $D_Y$ consist of two MLP layers with LeakyReLU activation, and the hidden layers of the discriminators contain 4096 hidden units due to the application of the Wasserstein loss. We use Adam to optimize the classifier and SGD to optimize our context-aware CycleGAN. The learning rate is 0.0001 with a decay of 0.9 every 10 epochs when training the classifier and 0.001 when training the basic CNN network. The balance term $\beta$ in the loss function is 0.001 and the hyperparameters are $\lambda_1$=10 and $\lambda_2$=10, as suggested in [@WGAN-GP; @cycleGAN]. All noises are independent and drawn from a unit Gaussian distribution with 128 dims. We also do not apply batch normalization, as empirical evaluation showed a significant degradation of the results when batch normalization was used.
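For illustration, a sketch of a critic that matches the description above (two MLP layers with LeakyReLU and a 4096-unit hidden layer); feeding the context to the critic by concatenation is an assumption.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, feat_dim=512, ctx_dim=1, hidden=4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + ctx_dim, hidden),   # hidden layer with 4096 units
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),                    # scalar critic output (Wasserstein)
        )

    def forward(self, f, c):
        return self.net(torch.cat([f, c], dim=1))
```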
Experiment Details
------------------
**Datasets and Evaluation Criteria**. A total of 7686 cyclone images with 151 cyclone intensities are collected from the Pacific Ocean. We train our model on 6118 training images and report estimation results on 1560 testing images. A cyclone image example is shown in Fig 1 and the number of images with different intensities is in Fig 1 (b). Besides, the size of each cyclone image is $400\times400$. Note that the centers of the cyclones are all placed at the middle grid of each image, and data augmentation is abbreviated as “Data Aug” in the following experiments.
\[tab:instances\]
Following the evaluation metrics in [@CycloneClassify], we use f1-score, mean absolute error (MAE) and root-mean-square intensity error (RMSE). The estimated intensities are determined as the category with the highest probability. MAE and RMSE measure the error between estimated and actual intensities, reported as the average absolute value and the root of the mean squared value of the intensity difference, in m/s. The f1-score is widely used to measure the performance of classification models.
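A small sketch of the three metrics, assuming integer intensity labels; the macro averaging used for the f1-score here is an assumption.

```python
import numpy as np
from sklearn.metrics import f1_score

def evaluate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1 = f1_score(y_true, y_pred, average='macro')    # averaging choice is an assumption
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return f1, mae, rmse
```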
**Features**. To extract effective real CNN features, we compare the performance of ResNet [@ResNet], AlexNet [@AlexNet] and other CNN structures in [@CycloneRotation; @CycloneClassify], which are the latest related approaches to the cyclone intensity estimation task. It is worth noting that the dimensions of the fully connected layers in the above structures are adjusted to accommodate the image size and the desired feature dimension. The performance of these models is shown in Table \[tab:CNN\]. We find that AlexNet performs best. Therefore, we use the 512-dim visual features extracted by AlexNet as the representation of the cyclone images. Besides, our context-aware CycleGAN generates 512-dim CNN features as the synthetic features of cyclone images.
![The distribution of the absolute error between the estimated and the true cyclone intensity, with the same setting as Table \[tab:instances\]. The highlighted labels indicate that the generated features of these classes are mixed with the real features to train the classifier.](box-cycle.png){width="1\columnwidth"}
\[fig:pipline\]
Results and Analysis
--------------------
**Effect of CNN Architectures.** Since the CNN encoder provides the real features that serve as the guiding information for the discriminator, its choice is critical in our experiments. Hence, we compare the performance of different CNN encoders; the estimation results are shown in Table \[tab:CNN\]. The most prominent result is that ResNet, despite its deeper architecture, performs worse than the other models in terms of f1-score. We conjecture that this may be caused by the quality and size of the training dataset, e.g. the noise in images produced by sea fog. In addition, the size of the dataset is a major concern in our experiments, as it may cause the CNN encoders to overfit. Moreover, in our experiments, the model of Chen et al. [@CycloneRotation], which is designed for the regression task, does not perform better than the model of Pradhan et al. [@CycloneClassify], which is designed for classification.
**Classification with Generated Features.** The experimental results in Table \[tab:instances\] clearly demonstrate the advantage of our proposed context-aware CycleGAN in cyclone intensity estimation. Training the classifier with features generated by the context-aware CycleGAN for the cyclone classes lacking samples provides remarkable improvements over the basic model represented by AlexNet. In particular, our approach achieves the best estimation results in 5 out of 6 cyclone classes on all three evaluation metrics. The only exception is the f1-score on intensity 16, where the best result is achieved by Data Augmentation. Besides, Data Augmentation achieves the same MAE and RMSE as our context-aware CycleGAN on intensity 65. Compared to Data Augmentation, our approach achieves a better f1-score with the same MAE and RMSE on intensity 60 and a lower error on intensity 16, which demonstrates the effectiveness of our approach. A critical question in cyclone intensity estimation is the error distribution, especially for the classes lacking samples. To address this question, we show the distribution of the absolute error between estimated and true cyclone intensity in Fig 3. The significantly smaller boxes confirm the effectiveness of our method.
Conclusion {#sec:conclusion}
==========
In this paper, we propose a context-aware CycleGAN that generates features conditioned on the evolution features learned from a context interval, without requiring extra information. Our proposed approach is able to tackle extreme data scarcity whenever context evolution features exist between different classes, as in the cyclone intensity estimation task studied in this paper. Experiments on the cyclone dataset have shown significant improvements with synthetic features on classes lacking samples. In future work, we will investigate more effective representations for cyclone images.
Acknowledgements {#sec:acknowledge}
===========
This work was supported by National Natural Science Foundation of China (61702047), Beijing Natural Science Foundation (4174098) and the Fundamental Research Funds for the Central Universities (2017RC02).
|
---
abstract: |
This article studies the structure of the automorphism groups of general graph products of groups. We give a complete characterisation of the automorphisms that preserve the set of conjugacy classes of vertex groups for arbitrary graph products. Under mild conditions on the underlying graph, this allows us to provide a simple set of generators for the automorphism groups of graph products of *arbitrary* groups. We also obtain information about the geometry of the automorphism groups of such graph products: lack of property (T), acylindrical hyperbolicity.
The approach in this article is geometric and relies on the action of graph products of groups on certain complexes with a particularly rich combinatorial geometry. The first such complex is a particular Cayley graph of the graph product that has a *quasi-median* geometry, a combinatorial geometry reminiscent of (but more general than) CAT(0) cube complexes. The second (strongly related) complex used is the Davis complex of the graph product, a CAT(0) cube complex that also has a structure of right-angled building.
author:
- Anthony Genevois and Alexandre Martin
title: Automorphisms of graph products of groups from a geometric perspective
---
Introduction and main results
=============================
Graph products of groups, which have been introduced by Green in [@GreenGP], define a class of group products that, loosely speaking, interpolates between free and direct products. For a simplicial graph $\Gamma$ and a collection of groups $\mathcal{G}=\{ G_v \mid v \in V(\Gamma) \}$ indexed by the vertex set $V(\Gamma)$ of $\Gamma$, the *graph product* $\Gamma \mathcal{G}$ is defined as the quotient $$\left( \underset{v \in V(\Gamma)}{\ast} G_v \right) / \langle \langle gh=hg, \ h \in G_u, g \in G_v, \{ u,v \} \in E(\Gamma) \rangle \rangle,$$ where $E(\Gamma)$ denotes the edge set of $\Gamma$. The two extreme situations where $\Gamma$ has no edge and where $\Gamma$ is a complete graph respectively correspond to the free product and the direct sum of the groups belonging to the collection $\mathcal{G}$. Graph products include two intensively studied families of groups: right-angled Artin groups and right-angled Coxeter groups. Many articles have been dedicated to the study of the automorphism groups of these particular examples of graph products. In particular, the automorphism groups of right-angled Coxeter groups have been intensively studied in relation with the famous rigidity problem for Coxeter groups, see for instance [@RACGrigidity].
Beyond these two cases, the automorphism groups of general graph products of groups are poorly understood. Most of the literature on this topic imposes very strong conditions on the graph products involved, either on the underlying graph (as in the case of the automorphism groups of free products [@OutSpaceFreeProduct; @HorbezHypGraphsForFreeProducts; @HorbezTitsAlt]) or on the vertex groups (in most cases, they are required to be abelian or even cyclic [@AutGPabelianSet; @AutGPabelian; @AutGPSIL; @RuaneWitzel]). Automorphism groups of graph products of more general groups (and their subgroups) are essentially uncharted territory. For instance, the following general problem is still unsolved:\
**General Problem.** Find a natural / simple generating set for the automorphism group of a general graph product of groups.\
The first result in that direction is the case of right-angled Artin groups or right-angled Coxeter groups, solved by Servatius [@RAAGServatius] and Laurence [@RAAGgenerators]. More recently, Corredor–Gutierrez described a generating set for automorphism groups of graph products of cyclic groups [@AutGPabelianSet], using previous work of Gutierrez–Piggott–Ruane [@AutGPabelian]. Beyond these cases however, virtually nothing is known about the automorphism group of a graph product.
Certain elements in the generating sets of right-angled Artin groups naturally generalise to more general graph products, and we take a moment to mention them as they play an important role in the present work:
- For an element $g\in \Gamma{{\mathcal G}}$, the *inner automorphism* $\iota(g)$ is defined by $$\iota(g): \Gamma{{\mathcal G}}\rightarrow \Gamma{{\mathcal G}}, ~~x \mapsto gxg^{-1}.$$
- Given an isometry $\sigma : \Gamma \to \Gamma$ and a collection of isomorphisms $\Phi = \{ \varphi_u : G_u \to G_{\sigma(u)} \mid u \in V(\Gamma) \}$, the *local automorphism* $(\sigma, \Phi)$ is the automorphism of $\Gamma \mathcal{G}$ induced by $$\left\{ \begin{array}{ccc} \bigcup\limits_{u \in V(\Gamma)} G_u & \to & \Gamma \mathcal{G} \\ g & \mapsto & \text{$\varphi_u(g)$ if $g \in G_u$} \end{array} \right. .$$ For instance, in the specific case of right-angled Artin groups, graphic automorphisms (i.e. automorphisms of $\Gamma{{\mathcal G}}$ induced by a graph automorphism of $\Gamma$) and inversions [@ServatiusCent] are local automorphisms.
- Given a vertex $u \in V(\Gamma)$, a connected component $\Lambda$ of $\Gamma \backslash \mathrm{star}(u)$ and an element $h \in G_u$, the *partial conjugation* $(u, \Lambda,h)$ is the automorphism of $\Gamma \mathcal{G}$ induced by $$\left\{ \begin{array}{ccc} \bigcup\limits_{u \in V(\Gamma)} G_u & \to & \Gamma \mathcal{G} \\ g & \mapsto & \left\{ \begin{array}{cl} g & \text{if $g \notin \langle \Lambda \rangle$} \\ hgh^{-1} & \text{if $g \in \langle \Lambda \rangle$} \end{array} \right. \end{array} \right. .$$ Notice that an inner automorphism of $\Gamma \mathcal{G}$ is always a product of partial conjugations.
The goal of this article is to describe the structure (and provide a generating set) for much larger classes of graph products of groups by adopting a new geometric perspective. In a nutshell, the strategy is to consider the action of a graph product $\Gamma{{\mathcal G}}$ on an appropriate space $X$ and to show that this action can be extended to an action of $\mathrm{Aut}(\Gamma{{\mathcal G}})$ on $X$, in order to exploit the geometry of this action. Such a ‘rigidity’ phenomenon appeared for instance in the work of Ivanov on the action of mapping class groups of hyperbolic surfaces on their curve complexes: Ivanov showed that an automorphism of the mapping class group induces an automorphism of the underlying curve complex [@IvanovAut]. Another example is given by the Higman group: in [@MartinHigman], the second author computed the automorphism group of the Higman group $H_4$ by first extending the action of $H_4$ on a CAT(0) square complex naturally associated to its standard presentation to an action of $\mathrm{Aut}(H_4)$. In this article, we construct such rigid actions for large classes of graph products of groups. The results from this article vastly generalise earlier results obtained by the authors in a previous version (not intended for publication, but still available on the arXiv as [@GPcycle]).\
We now present the main results of this article. **For the rest of this introduction, we fix a finite simplicial graph $\Gamma$ and a collection ${{\mathcal G}}$ of groups indexed by $V(\Gamma)$.**
#### The subgroup of conjugating automorphisms.
When studying automorphisms of graph products of groups, an important subgroup consists of those automorphisms that send vertex groups to conjugates of vertex groups. This subgroup already appears for instance in the work of Tits [@TitsAutCoxeter], Corredor–Gutierrez [@AutGPabelianSet], and Gutierrez–Piggott–Ruane [@AutGPabelian]. A description of this subgroup was only available for right-angled Artin groups and other graph products of *cyclic* groups by work of Laurence [@RAAGgenerators].
A central result of this paper is a complete characterisation of this subgroup under *no restriction on the vertex groups or the underlying graph*. More precisely, let us call a *conjugating automorphism* of $\Gamma \mathcal{G}$ an automorphism $\varphi$ of $\Gamma \mathcal{G}$ such that, for every vertex group $G_v \in \mathcal{G}$, there exists a vertex group $G_w \in \mathcal{G}$ and an element $g \in \Gamma \mathcal{G}$ such that $\varphi(G_v)=gG_wg^{-1}$. We prove the following:
*The subgroup of conjugating automorphisms of $\Gamma \mathcal{G}$ is exactly the subgroup of $ \mathrm{Aut}(\Gamma{{\mathcal G}})$ generated by the local automorphisms and the partial conjugations.*
#### Generating set and algebraic structure.
With this characterisation of conjugating automorphisms at our disposal, we are able to completely describe the automorphism group of large classes of graph products, and in particular to give a generating set for such automorphism groups. To the authors’ knowledge, this result represents the first results on the algebraic structure of automorphism groups of graph products of general (and in particular non-abelian) groups.
*If $\Gamma$ is a finite connected simplicial graph of girth at least $5$ and without vertices of valence $<2$, then $\mathrm{Aut}(\Gamma{{\mathcal G}})$ is generated by the partial conjugations and the local automorphisms.*
This description of the automorphism group in Theorem B simplifies further in the case where $\Gamma$ is in addition assumed not to contain any separating star. Following [@RAAGatomic], a finite connected graph without vertices of valence $<2$, whose girth is at least $5$, and that does not contain separating stars is called *atomic*. In that case, we get the following decomposition:
*If $\Gamma$ is an atomic graph, then*
$$\mathrm{Aut}(\Gamma\mathcal{G})
\simeq \Gamma\mathcal{G} \rtimes \left( \left( \prod\limits_{v \in \Gamma} \mathrm{Aut}(G_v) \right) \rtimes \mathrm{Sym}(\Gamma \mathcal{G}) \right),$$ *where $\mathrm{Sym}(\Gamma \mathcal{G})$ is an explicit subgroup of the automorphism group of $\Gamma$.*
Actually, we obtain a stronger statement characterising isomorphisms between graph products of groups (see Theorem \[thm:ConjIsom\]), which in the case of right-angled Coxeter groups is strongly related to the so-called *strong rigidity* of these groups, and to the famous isomorphism problem for general Coxeter groups, see [@RACGrigidity].
It should be noted that while the previous theorems impose conditions on the underlying graph, it can be used to obtain information about more general Coxeter or Artin groups. Indeed, if the vertex groups in our graph product are (arbitrary) Coxeter groups, then the resulting graph product is again a Coxeter group (with a possibly much wilder underlying graph), and Corollary C can thus be interpreted as a form of strong rigidity of these Coxeter groups *relative to their vertex groups*: up to conjugation *and automorphisms of the vertex groups*, an automorphism of the graph product comes from a (suitable) isometry of the underlying graph.\
The explicit computation in Corollary C can be used to study the subgroups of such automorphism groups. In particular, since satisfying the Tits alternative is a property stable under graph products [@AM] and under extensions, one can deduce from Corollary C a combination theorem for the Tits Alternative for such automorphism groups.\
We mention also an application of this circle of ideas to the study of automorphism groups of graph products of finite groups, with no requirement on the underlying graph. The following result was only known for graph products of *cyclic* groups by work of Corredor–Gutierrez [@AutGPabelianSet]:
*If all the groups of ${{\mathcal G}}$ are finite, then the subgroup of conjugating automorphisms of $\Gamma{{\mathcal G}}$ has finite index in $\mathrm{Aut}(\Gamma \mathcal{G})$.*
As an interesting application, we are able to determine precisely when a graph product of finite groups has a finite outer automorphism group. See Corollary \[cor:OutFinite\] for a precise statement.
#### Geometry of the automorphism group.
While the previous results give us information about the algebraic structure of the automorphism groups of graph products, the geometric point of view used in this article also allows us to obtain some information about their geometry.\
The first property we investigate is the notion of *acylindrical hyperbolicity* introduced by Osin in [@OsinAcyl], which unifies several known classes of groups with ‘negatively curved’ features such as relatively hyperbolic groups and mapping class groups (we refer to [@OsinSurvey] for more information). One of the most striking consequences of the acylindrical hyperbolicity of a group is its *SQ-universality* [@DGO], that is, every countable group embeds into a quotient of the group we are looking at. Loosely speaking, such groups are thus very far from being simple.
For general graph products, we obtain the following:
*If $\Gamma$ is an atomic graph and if $\mathcal{G}$ is a collection of *finitely generated* groups, then $\mathrm{Aut}(\Gamma{{\mathcal G}})$ is acylindrically hyperbolic.*
Let us mention that prior to this result, very little was known about the acylindrical hyperbolicity of the automorphism group of a graph product, even in the case of right-angled Artin groups. At one end of the RAAG spectrum, $\mathrm{Aut}({{\mathbb Z}}^n)= \mathrm{GL}_n({{\mathbb Z}})$ is a higher rank lattice for $n \geq 3$, and thus does not have non-elementary actions on hyperbolic spaces by a recent result of Haettel [@HaettelRigidity]. The situation is less clear for $\mathrm{Aut}(F_n)$: while it is known that $\mathrm{Out}(F_n)$ is acylindrically hyperbolic for $n \geq 2$ [@BestvinaFeighnOutFnHyp], the case of $\mathrm{Aut}(F_n)$ seems to be open. For right-angled Artin groups whose outer automorphism group is finite, such as right-angled Artin group over atomic graphs, the problem boils down to the question of the acylindrical hyperbolicity of the underlying group, for which a complete answer is known [@MinasyanOsinTrees].\
Another property of a more geometric nature that can be investigated from our perspective is *Kazhdan’s property (T)*. Property (T) for a group imposes for instance strong restrictions on the possible homomorphisms starting from that group and plays a fundamental role in several rigidity statements, including Margulis’ superrigidity. We only possess a fragmented picture of the status of property (T) for automorphism groups of right-angled Artin groups. At one end of the RAAG spectrum, $\mathrm{Aut}({{\mathbb Z}}^n)= \mathrm{GL}_n({{\mathbb Z}})$ is known to have property (T) for $n \geq 3$. In the opposite direction, it is known that $\mathrm{Aut}(F_n)$ does not have property (T) for $n =2$ and $3$ [@McCoolPropTaut; @GrunewaldLubotzkyAutF; @BogopolskiVikentievAutF]; that it has property (T) for $n \geq 5$ by very recent results of [@Aut5PropT; @AutFnPropT]; and the case $n=4$ is still open. For more general right-angled Artin groups, a few general criteria can be found in [@RAAGandT; @VastAutRAAG]. (See also [@VastAutRACG] for right-angled Coxeter groups.) We obtain the following result:
*If $\Gamma$ is an atomic graph, then $\mathrm{Aut}(\Gamma{{\mathcal G}})$ does not have Property (T).*
We emphasize that this result does not assume any knowledge of the vertex groups of the graph product, or the size of its outer automorphism group. In particular, by allowing vertex groups to be arbitrary right-angled Artin groups, this result provides a very large class of right-angled Artin groups whose automorphism groups do not have property (T).\
#### Structure of the paper.
Let us now detail the strategy and structure of this article. In Section 2, we recall a few general definitions and statements about graph products, before introducing the two main complexes studied in this paper: the Davis complex of a graph product of groups $\Gamma{{\mathcal G}}$, and a certain Cayley graph $X(\Gamma, {{\mathcal G}})$ of the graph product that has a particularly rich combinatorial geometry (namely, a *quasi-median* geometry). Ideally, one would like to show that the action of the graph product on one of these complexes extends to an action of the automorphism group. However, this does not hold in general: in the case of right-angled Artin groups for instance, the presence of transvections shows that conjugacy classes of vertex groups are not preserved by automorphisms in general. In Section 3, a different action is considered, namely the action of $\Gamma{{\mathcal G}}$ on the *transversality graph* associated to the graph $X(\Gamma, {{\mathcal G}})$, and we show that this action extends to an action of Aut$(\Gamma{{\mathcal G}})$. (This transversality graph turns out to be naturally isomorphic to the intersection graph of parallelism classes of hyperplanes in the Davis complex.) This action allows us to prove the central result of our article: the characterisation of conjugating automorphisms stated in Theorem A. The algebraic structure of certain automorphism groups is also proved in this section (Theorems B and D). Finally, Section 4 focuses on the case of graph products of groups over atomic graphs. We first prove that the action of the graph product on its Davis complex extends to an action of its automorphism group. Such a rich action on a CAT(0) cube complex is then used to prove Theorems E and F.\
#### The point of view of quasi-median graphs.
We take a moment to justify the point of view adopted in this article, and in particular the central role played by the quasi-median geometry of some Cayley graph of a graph product. This is a very natural object associated to the group, and its geometry turns out to be both similar and simpler than that of the (perhaps more familiar) Davis complex. Quasi-median graphs have been studied in great detail by the first author [@Qm]. However, we wish to emphasize that **we provide in this article self-contained proofs of all the combinatorial/geometric results about this graph, in order to avoid relying on the (yet unpublished, at the time of writing) manuscript** [@Qm]. In particular, no prerequisite on quasi-median geometry is needed to read this article.
Let us explain further the advantages of this (quasi-median) graph over the Davis complex. First, the geodesics of this graph encode the normal forms of group elements, which makes its geometry more natural and easier to work with; see Section \[section:CubicalLikeGeom\] for more details. Moreover, although a quasi-median graph is not the $1$-skeleton of a CAT(0) cube complex, it turns out to have essentially the same type of geometry. More precisely, *hyperplanes* may be defined in quasi-median graphs in a similar fashion, and so that the geometry reduces to the combinatorics of hyperplanes, as for CAT(0) cube complexes; see for instance Theorem \[thm:QmHyperplanes\] below. Roughly speaking, quasi-median graphs may be thought of as ‘almost’ CAT(0) cube complexes in which hyperplanes cut the space into at least two pieces but possibly more. The analogies between these classes of spaces go much further, and we refer to [@Qm Section 2] for a dictionary between concepts/results in CAT(0) cube complexes and their quasi-median counterparts. Hyperplanes in the quasi-median graph turn out to be easier to work with than hyperplanes in the Davis complex, due to the absence of parallel hyperplanes, which makes some of the arguments simpler and cleaner. Hyperplanes in this quasi-median graph are closely related to the *tree-walls* of the Davis complex introduced by M. Bourdon in [@Bourdon] for certain graph products of groups, and used in a previous version of this article (not intended for publication) [@GPcycle].
Quasi-median graphs provide a convenient combinatorial framework that encompasses and unifies many of the tools used to study graph products until now: the normal forms proved in E. Green’s thesis [@GreenGP] (see also [@GPvanKampen] for a more geometric approach), the action on a right-angled building with an emphasis on the combinatorics of certain subspaces (tree-walls) [@Bourdon; @TW; @CapraceTW; @UnivRAbuildings], the action on a CAT(0) cube complex (Davis complex), etc. An indirect goal of this article is thus to convince the reader that the quasi-median graph associated to a graph product of groups provides a rich and natural combinatorial setting to study this group, and ought to be investigated further. Finally, let us mention that decomposing non-trivially as a graph product can be characterised by the existence of an appropriate action on a quasi-median graph, see [@Qm Corollary 10.57]. Thus, quasi-median geometry is in a sense ‘the’ natural geometry associated with graph products.
#### Acknowledgements.
We are grateful to Olga Varghese for having communicated to us a proof of Corollary \[cor:OutFinite\], replacing a wrong argument in a preliminary version. The first author thanks the University of Vienna for its hospitality in December 2015, where this project originated; and from March to May 2018, when he was a visitor, supported by the Ernst Mach Grant ICM-2017-06478 under the supervision of Goulnara Arzhantseva. The second author thanks the MSRI for its support during the program ‘Geometric Group Theory’ in 2016, Grant DMS-1440140, and the Isaac Newton Institute for its support during the program ‘Non-positive curvature group actions and cohomology’ in 2017, EPSRC Grant EP/K032208/1, during which part of this research was conducted. The second author was partially supported by the ERC grant 259527, the Lise Meitner FWF project M 1810-N25, and the EPSRC New Investigator Award EP/S010963/1.
Geometries associated to graph products of groups {#sec:geom}
=================================================
Generalities about graph products {#section:GP}
---------------------------------
\[sec:generalities\]
Given a simplicial graph $\Gamma$, whose set of vertices is denoted by $V(\Gamma)$, and a collection of groups $\mathcal{G}=\{ G_v \mid v \in V(\Gamma) \}$ (the *vertex groups*), the *graph product* $\Gamma \mathcal{G}$ is defined as the quotient
$$\left( \underset{v \in V(\Gamma)}{\ast} G_v \right) / \langle \langle gh=hg, \ h \in G_u, g \in G_v, \{ u,v \} \in E(\Gamma) \rangle \rangle,$$
where $E(\Gamma)$ denotes the set of edges of $\Gamma$. Loosely speaking, it is obtained from the disjoint union of all the $G_v$’s, called the *vertex groups*, by requiring that two adjacent vertex groups commute. Notice that, if $\Gamma$ has no edges, $\Gamma \mathcal{G}$ is the free product of the groups of $\mathcal{G}$; on the other hand, if $\Gamma$ is a complete graph, then $\Gamma \mathcal{G}$ is the direct sum of the groups of $\mathcal{G}$. Therefore, a graph product may be thought of as an interpolation between free and direct products. Graph products also include two classical families of groups: If all the vertex groups are infinite cyclic, $\Gamma \mathcal{G}$ is known as a *right-angled Artin group*; and if all the vertex groups are cyclic of order two, then $\Gamma \mathcal{G}$ is known as a *right-angled Coxeter group*.
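To fix ideas, here is a small intermediate example (given only as an illustration): take $\Gamma$ to be the path with vertices $u,v,w$ and edges $\{u,v\}, \{v,w\}$, and let $G_u=G_v=G_w= {{\mathbb Z}}$, generated by $a$, $b$, $c$ respectively. The middle generator commutes with the two others, which do not commute with each other, so that $$\Gamma \mathcal{G} = \langle a,b,c \mid [a,b]=[b,c]=1 \rangle \cong F_2 \times {{\mathbb Z}},$$ where $F_2 = \langle a,c \rangle$ is free because $u$ and $w$ are non-adjacent, and the central factor ${{\mathbb Z}}$ is generated by $b$.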
**Convention.** In all the article, we will assume for convenience that the groups of $\mathcal{G}$ are non-trivial. Notice that it is not a restrictive assumption, since a graph product with some trivial factors can be described as a graph product over a smaller graph all of whose factors are non-trivial.
If $\Lambda$ is an *induced* subgraph of $\Gamma$ (ie., two vertices of $\Lambda$ are adjacent in $\Lambda$ if and only if they are adjacent in $\Gamma$), then the subgroup, which we denote by $\langle \Lambda \rangle$, generated by the vertex groups corresponding to the vertices of $\Lambda$ is naturally isomorphic to the graph product $\Lambda \mathcal{G}_{|\Lambda}$, where $\mathcal{G}_{|\Lambda}$ denotes the subcollection of $\mathcal{G}$ associated to the set of vertices of $\Lambda$. This observation is for instance a consequence of the normal form described below.\
**For the rest of Section \[sec:geom\], we fix a finite simplicial graph $\Gamma$ and a collection ${{\mathcal G}}$ of groups indexed by $V(\Gamma)$.**
#### Normal form.
A *word* in $\Gamma \mathcal{G}$ is a product $g_1 \cdots g_n$ where $n \geq 0$ and where, for every $1 \leq i \leq n$, $g_i$ belongs to $G_i$ for some $G_i \in \mathcal{G}$; the $g_i$’s are the *syllables* of the word, and $n$ is the *length* of the word. Clearly, the following operations on a word do not modify the element of $\Gamma \mathcal{G}$ it represents:
- (O1) delete the syllable $g_i=1$;
- (O2) if $g_i,g_{i+1} \in G$ for some $G \in \mathcal{G}$, replace the two syllables $g_i$ and $g_{i+1}$ by the single syllable $g_ig_{i+1} \in G$;
- (O3) if $g_i$ and $g_{i+1}$ belong to two adjacent vertex groups, switch them.
A word is *reduced* if its length cannot be shortened by applying these elementary moves. Given a word $g_1 \cdots g_n$ and some $1\leq i<n$, if the vertex group associated to $g_i$ is adjacent to each of the vertex groups of $g_{i+1}, ..., g_n$, then the words $g_1 \cdots g_n$ and $g_1\cdots g_{i-1} \cdot g_{i+1} \cdots g_n \cdot g_i$ represent the same element of $\Gamma{{\mathcal G}}$; We say that $g_i$ *shuffles to the right*. Analogously, one can define the notion of a syllable shuffling to the left. If $g=g_1 \cdots g_n$ is a reduced word and $h$ is a syllable, then a reduction of the product $gh$ is given by
- $g_1 \cdots g_n$ if $h=1$;
- $g_1 \cdots g_{i-1} \cdot g_{i+1} \cdots g_n$ if $g_i$ and $h$ belong to the same vertex group, $g_i$ shuffles to the right and $g_i=h^{-1}$;
- $g_1 \cdots g_{i-1} \cdot g_{i+1} \cdots g_n \cdot (g_ih)$ if $g_i$ and $h$ belong to the same vertex group, $g_i$ shuffles to the right, $g_i \neq h^{-1}$ and $(g_ih)$ is thought of as a single syllable.
If none of these cases applies, the word $g_1 \cdots g_n \cdot h$ is already reduced. In particular, every element of $\Gamma \mathcal{G}$ can be represented by a reduced word, and this word is unique up to applying the operation (O3). It is worth noticing that a reduced word also has minimal length with respect to the generating set $\bigcup\limits_{u\in V(\Gamma)} G_u$. We refer to [@GreenGP] for more details (see also [@GPvanKampen] for a more geometric approach). We will also need the following definition:
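To make the reduction procedure concrete, here is a small Python sketch (ours, not part of the original treatment) that builds a reduced word by appending syllables one at a time, following the cases listed above. For concreteness it assumes each vertex group $G_v$ is a finite cyclic group ${{\mathbb Z}}/n_v$, encodes a syllable as a pair `(vertex, residue)`, and represents the edges of $\Gamma$ as a set of frozensets:

```python
def append_syllable(word, syllable, adj, orders):
    """Append one syllable h to a reduced word and return a reduced word for the product."""
    v, a = syllable
    if a % orders[v] == 0:                       # h is trivial: first case above
        return list(word)
    for i in range(len(word) - 1, -1, -1):
        u, b = word[i]
        if u == v:                               # g_i lies in the same vertex group as h and
            c = (b + a) % orders[v]              # shuffles to the right: merge, or cancel if g_i = h^{-1}
            return word[:i] + word[i + 1:] + ([(v, c)] if c else [])
        if frozenset((u, v)) not in adj:         # g_i does not commute with G_v: stop shuffling
            break
    return word + [(v, a)]                       # no merge possible: the product is already reduced

def reduce_word(syllables, adj, orders):
    """Reduce an arbitrary word, one syllable at a time."""
    word = []
    for s in syllables:
        word = append_syllable(word, s, adj, orders)
    return word

# Example: Gamma the path u - v - w, all vertex groups Z/3; the two u-syllables
# commute past the v-syllable and cancel each other.
adj = {frozenset(("u", "v")), frozenset(("v", "w"))}
orders = {"u": 3, "v": 3, "w": 3}
print(reduce_word([("u", 1), ("v", 2), ("u", 2), ("w", 1)], adj, orders))   # [('v', 2), ('w', 1)]
```

By Proposition \[prop:distinGP\] below, the length of the word returned by `reduce_word` also equals the distance between $1$ and the corresponding element in the Cayley graph $X(\Gamma, \mathcal{G})$ introduced in Section \[section:CubicalLikeGeom\].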
Let $g \in \Gamma{{\mathcal G}}$. The *head* of $g$, denoted by $\mathrm{head}(g)$, is the collection of the first syllables appearing in the reduced words representing $g$. Similarly, the *tail* of $g$, denoted by $\mathrm{tail}(g)$, is the collection of the last syllables appearing in the reduced words representing $g$.
#### Some vocabulary.
We conclude this paragraph with a few definitions about graphs used in the article.
- A subgraph $\Lambda \subset \Gamma$ is a *join* if there exists a partition $V(\Lambda)= A \sqcup B$, where $A$ and $B$ are both non-empty, such that any vertex of $A$ is adjacent to any vertex of $B$.
- Given a vertex $u \in V(\Gamma)$, its *link*, denoted by $\mathrm{link}(u)$, is the subgraph generated by the neighbors of $u$.
- More generally, given a subgraph $\Lambda \subset \Gamma$, its *link*, denoted by $\mathrm{link}(\Lambda)$, is the subgraph generated by the vertices of $\Gamma$ which are adjacent to all the vertices of $\Lambda$.
- Given a vertex $u \in V(\Gamma)$, its *star*, denoted by $\mathrm{star}(u)$, is the subgraph generated by $\mathrm{link}(u) \cup \{u\}$.
- More generally, given a subgraph $\Lambda \subset \Gamma$, its *star*, denoted by $\mathrm{star}(\Lambda)$, is the subgraph generated by $\mathrm{link}(\Lambda) \cup \Lambda$.
The quasi-median graph associated to a graph product of groups {#section:CubicalLikeGeom}
--------------------------------------------------------------
This section is dedicated to the geometry of the following Cayley graph of $\Gamma{{\mathcal G}}$: $$X(\Gamma, \mathcal{G}) : = \mathrm{Cayl} \left( \Gamma \mathcal{G}, \bigcup\limits_{u \in V(\Gamma)} G_u \backslash \{1 \} \right),$$ ie., the graph whose vertices are the elements of the group $\Gamma \mathcal{G}$ and whose edges link two distinct vertices $x,y \in \Gamma \mathcal{G}$ if $y^{-1}x$ is a non-trivial element of some vertex group. As in any Cayley graph, edges of $X(\Gamma, \mathcal{G})$ are labelled by generators, namely by elements of vertex groups. By extension, paths in $X(\Gamma, \mathcal{G})$ are naturally labelled by words of generators. In particular, geodesics in $X(\Gamma, \mathcal{G})$ correspond to words of minimal length. More precisely:
\[prop:distinGP\] Fix two vertices $g,h \in X(\Gamma, \mathcal{G})$. If $s_1 \cdots s_n$ is a reduced word representing $g^{-1}h$, then $$g, \ gs_1, \ gs_1s_2, \ldots, gs_1 \cdots s_{n-1}, \ gs_1 \cdots s_{n-1}s_n = h$$ define a geodesic in $X(\Gamma, \mathcal{G})$ from $g$ to $h$. Conversely, if $s_1, \ldots, s_n$ is the sequence of elements of $\Gamma \mathcal{G}$ labelling the edges of a geodesic in $X(\Gamma, \mathcal{G})$ from $g$ to $h$, then $s_1 \cdots s_n$ is a reduced word representing $g^{-1}h$. As a consequence, the distance in $X(\Gamma, \mathcal{G})$ between $g$ and $h$ coincides with the length $|g^{-1}h|$ of any reduced word representing $g^{-1}h$.
#### The Cayley graph as a complex of prisms.
The first thing we want to highlight is that the Cayley graph $X(\Gamma, \mathcal{G})$ has naturally the structure of a *complex of prisms*. We begin by giving a few definitions:
Let $X$ be a graph. A *clique* of $X$ is a maximal complete subgraph. A *prism* $P \subset X$ is an induced subgraph which decomposes as a product of cliques of $X$ in the following sense: There exist cliques $C_1, \ldots, C_k \subset P$ of $X$ and a bijection between the vertices of $P$ and the $k$-tuples of vertices $C_1 \times \cdots \times C_k$ such that two vertices of $P$ are linked by an edge if and only if the two corresponding $k$-tuples differ on a single coordinate. The number of factors, namely $k$, is referred to as the *cubical dimension* of $P$. More generally, the *cubical dimension* of $X$ is the highest cubical dimension of its prisms.
The first observation is that cliques of $X(\Gamma, \mathcal{G})$ correspond to cosets of vertex groups.
\[lem:CliqueStab\] The cliques of $X(\Gamma, \mathcal{G})$ coincide with the cosets $gG_u$, where $g \in \Gamma \mathcal{G}$ and $u \in V(\Gamma)$.
First of all, observe that the edges of a triangle of $X(\Gamma, \mathcal{G})$ are labelled by elements of $\Gamma \mathcal{G}$ that belong to the same vertex group. Indeed, if the vertices $x,y,z \in X(\Gamma,\mathcal{G})$ generate a triangle, then $z^{-1}x$, $z^{-1}y$ and $y^{-1}x$ are three non-trivial elements of vertex groups such that $z^{-1}x = (z^{-1}y) \cdot (y^{-1}x)$. Of course, the product $(z^{-1}y) \cdot (y^{-1}x)$ cannot be reduced, which implies that $(z^{-1}y)$ and $(y^{-1}x)$ belong to the same vertex group, say $G_u$. From the previous equality, it follows that $z^{-1}x$ belongs to $G_u$ as well, concluding the proof of our claim. We record its statement for future use:
\[fact:Triangle\] In $X(\Gamma, \mathcal{G})$, the edges of a triangle are labelled by elements of a common vertex group.
As a consequence, the edges of any complete subgraph of $X(\Gamma, \mathcal{G})$ are all labelled by elements of the same vertex group. Thus, we have proved that any clique of $X(\Gamma, \mathcal{G})$ is generated by $gG_u$ for some $g \in \Gamma \mathcal{G}$ and $u \in V(\Gamma)$. Conversely, fix some $g \in \Gamma \mathcal{G}$ and $u \in V(\Gamma)$. By definition of $X(\Gamma, \mathcal{G})$, the coset $gG_u$ is clearly a complete subgraph, and, if $C$ denotes a clique containing $gG_u$, we already know that $C=hG_v$ for some $h \in \Gamma \mathcal{G}$ and $v \in V(\Gamma)$. Since $$\langle G_u,G_v \rangle = \left\{ \begin{array}{cl} G_u \times G_v & \text{if $u$ and $v$ are adjacent in $\Gamma$} \\ G_u \ast G_v & \text{if $u$ and $v$ are non-adjacent and distinct} \\ G_u=G_v & \text{if $u=v$} \end{array} \right.,$$ it follows from the inclusion $gG_u \subset C = hG_v$ that $u=v$. Finally, since two cosets of the same subgroup either coincide or are disjoint, we conclude that $gG_u=C$ is a clique of $X(\Gamma, \mathcal{G})$.
Next, we observe that prisms of $X(\Gamma, \mathcal{G})$ correspond to cosets of subgroups generated by complete subgraphs of $\Gamma$.
\[lem:PrismGP\] The prisms of $X(\Gamma, \mathcal{G})$ coincide with the cosets $g \langle \Lambda \rangle$ where $g \in \Gamma \mathcal{G}$ and where $\Lambda \subset \Gamma$ is a complete subgraph.
If $g \in \Gamma \mathcal{G}$ and if $\Lambda \subset \Gamma$ is a complete subgraph, then $g \langle \Lambda \rangle$ is the product of the cliques $gG_u$ where $u \in \Lambda$. A fortiori, $g \langle\Lambda \rangle$ is a prism. Conversely, let $P$ be a prism of $X(\Gamma, \mathcal{G})$. Fix a vertex $g \in P$ and let $\mathcal{C}$ be a collection of cliques all containing $g$ such that $P$ is the product of the cliques of $\mathcal{C}$. As a consequence of Lemma \[lem:CliqueStab\], there exists a subgraph $\Lambda \subset \Gamma$ such that $\mathcal{C}= \{ g G_u \mid u\in \Lambda \}$. Fix two distinct vertices $u,v \in \Lambda$ and two elements $a \in G_u$, $b\in G_v$. Because $P$ is a prism, the edges $(g,ga)$ and $(g,gb)$ generate a square in $X(\Gamma, \mathcal{G})$. Let $x$ denote its fourth vertex. It follows from Proposition \[prop:distinGP\] that $g^{-1}x$ has length two and that the geodesics from $g$ to $x$ are labelled by the reduced words representing $g^{-1}x$. As $g$ and $x$ are opposite vertices in a square, there exist two geodesics between them. The only possibility is that $g^{-1}x=ab$ and that $a$ and $b$ belong to adjacent vertex groups, so that $g$, $ga$, $gb$ and $gab=gba$ are the vertices of our square. A fortiori, $u$ and $v$ are adjacent in $\Gamma$. The following is a consequence of our argument, which we record for future use:
\[fact:Square\] Two edges of $X(\Gamma,\mathcal{G})$ sharing an endpoint generate a square if and only if they are labelled by adjacent vertex groups. If so, two opposite sides of the square are labelled by the same element of $\Gamma \mathcal{G}$.
Thus, we have proved that $\Lambda$ is a complete subgraph of $\Gamma$. Since the prisms $P$ and $g \langle \Lambda \rangle$ both coincide with the product of the cliques of $\mathcal{C}$, we conclude that $P=g \langle \Lambda \rangle$, proving our lemma.
An immediate consequence of Lemma \[lem:PrismGP\] is the following statement:
\[cor:cubdim\] The cubical dimension of $X(\Gamma, \mathcal{G})$ is equal to $\mathrm{clique}(\Gamma)$, the maximal cardinality of a complete subgraph of $\Gamma$.
#### Hyperplanes.
Now, our goal is to define *hyperplanes* in $X(\Gamma, \mathcal{G})$ and to show that they behave essentially in the same way as hyperplanes in CAT(0) cube complexes.
\[def:hyp\] A *hyperplane* of $X(\Gamma, {{\mathcal G}})$ is a class of edges with respect to the transitive closure of the relation claiming that two edges which are opposite in a square or which belong to a common triangle are equivalent. The *carrier* of a hyperplane $J$, denoted by $N(J)$, is the subgraph of $X(\Gamma, {{\mathcal G}})$ generated by $J$. Two hyperplanes $J_1$ and $J_2$ are *transverse* if they intersect a square along two distinct pairs of opposite edges, and they are *tangent* if they are distinct, not transverse and if their carriers intersect.
We refer to Figure \[figure3\] for examples of hyperplanes in a graph. We begin by describing the hyperplanes of $X(\Gamma, \mathcal{G})$. For convenience, for every vertex $u \in V(\Gamma)$ we denote by $J_u$ the hyperplane which contains all the edges of the clique $G_u$. Our description of the hyperplanes of $X(\Gamma, \mathcal{G})$ is the following:
\[thm:HypStab\] For every hyperplane $J$ of $X(\Gamma, \mathcal{G})$, there exist some $g \in \Gamma \mathcal{G}$ and $u \in V(\Gamma)$ such that $J=gJ_u$. Moreover, $N(J)= g \langle \mathrm{star}(u) \rangle$ and $\mathrm{stab}(J)= g \langle \mathrm{star}(u) \rangle g^{-1}$.
The key step in proving Theorem \[thm:HypStab\] is the following characterisation:
\[prop:EdgesDualHyp\] Fix a vertex $u \in V(\Gamma)$ and two adjacent vertices $x,y \in X(\Gamma ,\mathcal{G})$. The following statements are equivalent:
- the edge $(x,y)$ is dual to the hyperplane $J_u$;
- $x \in \langle \mathrm{star}(u) \rangle$ and $x^{-1}y \in G_u$;
- the projections of $x$ and $y$ onto the clique $G_u$ are distinct.
The third point requires an explanation. The *projection* of a vertex onto a clique refers to the unique vertex of the clique which minimises the distance to our initial vertex. The existence of such a projection is justified by the following lemma:
\[lem:ProjClique\] Fix a vertex $u \in V(\Gamma)$ and let $g \in X(\Gamma, \mathcal{G})$. There exists a unique vertex of the clique $G_u$ minimising the distance to $g$, namely $\left\{ \begin{array}{cl} 1 & \text{if $\mathrm{head}(g) \cap G_u = \emptyset$} \\ \mathrm{head}(g) \cap G_u & \text{if $\mathrm{head}(g) \cap G_u \neq \emptyset$} \end{array} \right.$.
Suppose that $\mathrm{head}(g) \cap G_u = \emptyset$. Then, for every $h \in G_u$, one has $$d(g,h)=|h^{-1}g|= |g| +1 =d(g,1)+1>d(g,1)$$ since the product $h^{-1}g$ is necessarily reduced. It shows that $1$ is the unique vertex of $G_u$ minimising the distance to $g$. Next, suppose that $\mathrm{head}(g) \cap G_u \neq \emptyset$. Thus, we can write $g$ as a reduced product $hg'$ where $h$ belongs to $G_u \backslash \{ 1 \}$ and where $g' \in \Gamma \mathcal{G}$ satisfies $\mathrm{head}(g') \cap G_u = \emptyset$. Notice that $\mathrm{head}(g) \cap G_u = \{h\}$. Then $$d(g,k) = |k^{-1}g| = |(k^{-1}h) \cdot g'| = |g'| +1 = d(g,h) +1>d(g,h)$$ for every $k \in G_u \backslash \{h \}$ since the product $(k^{-1}h) \cdot g'$ is necessarily reduced. This proves that $h$ is the unique vertex of $G_u$ minimising the distance to $g$.
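In computational terms, Lemma \[lem:ProjClique\] says that the projection of a vertex onto the clique $G_u$ can be read off the head of a reduced word representing it. The following small Python sketch (ours, with syllables encoded as pairs `(vertex, element)` and the identity of $G_u$ encoded as $0$) illustrates this:

```python
def head(word, adj):
    """Head of a reduced word: the syllables that can be shuffled to the front,
    i.e. whose vertex group is adjacent to the vertex groups of all earlier syllables."""
    return [(v, a) for i, (v, a) in enumerate(word)
            if all(frozenset((u, v)) in adj for u, _ in word[:i])]

def project_to_clique(word, u, adj):
    """Projection of the vertex represented by the reduced word onto the clique G_u:
    the head syllable lying in G_u if there is one, and the identity otherwise."""
    for v, a in head(word, adj):
        if v == u:
            return (u, a)
    return (u, 0)

# Gamma a path u - v - w; the element g = (v-syllable)(w-syllable) projects to the
# identity of G_u and to its v-syllable on G_v, as predicted by the lemma.
adj = {frozenset(("u", "v")), frozenset(("v", "w"))}
g = [("v", 2), ("w", 1)]
print(project_to_clique(g, "u", adj), project_to_clique(g, "v", adj))   # ('u', 0) ('v', 2)
```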
Suppose that $(i)$ holds. There exists a sequence of edges $$(x_1,y_1), \ (x_2,y_2), \ldots, (x_{n-1},y_{n-1}), \ (x_n,y_n)=(x,y)$$ such that $(x_1,y_1) \subset G_u$, and such that, for every $1 \leq i \leq n-1$, the edges $(x_i,y_i)$ and $(x_{i+1},y_{i+1})$ either belong to the same triangle or are opposite in a square. We argue by induction over $n$. If $n=1$, there is nothing to prove. Now suppose that $x_{n-1} \in \langle \mathrm{star}(u) \rangle$ and that $x_{n-1}^{-1}y_{n-1} \in G_u$. If $(x_{n-1},y_{n-1})$ and $(x_n,y_n)$ belong to the same triangle, it follows from Fact \[fact:Triangle\] that $x_n \in \langle \mathrm{star}(u) \rangle$ and $x_n^{-1}y_n \in G_u$. Otherwise, if $(x_{n-1},y_{n-1})$ and $(x_n,y_n)$ are opposite sides in a square, we deduce from Fact \[fact:Triangle\] that there exists some $a \in \langle \mathrm{link}(u) \rangle$ such that either $\left\{ \begin{array}{l} x_n=x_{n-1}a \\ y_n = y_{n-1}a \end{array} \right.$ or $\left\{ \begin{array}{l} x_n=y_{n-1}a \\ y_n = x_{n-1}a \end{array} \right.$. As a consequence, $x_n \in \langle \mathrm{star}(u) \rangle$ and $x_n^{-1}y_n \in G_u$. Thus, we have proved the implication $(i) \Rightarrow (ii)$.
Now, suppose that $(ii)$ holds. There exists some $\ell \in G_u \backslash \{ 1 \}$ such that $y=x \ell$, and, since $\langle \mathrm{star}(u) \rangle = G_u\times \langle \mathrm{link}(u) \rangle$, we can write $x$ as a reduced product $ab$ for some $a \in G_u$ and $b \in \langle \mathrm{link}(u) \rangle$. Notice that $y$ is represented by the reduced product $(a \ell) \cdot b$. We deduce from Lemma \[lem:ProjClique\] that the projections of $x$ and $y$ onto the clique $G_u$ are $a$ and $a \ell$ respectively. They are distinct since $\ell \neq 1$. Thus, we have proved the implication $(ii) \Rightarrow (iii)$.
Suppose that $(iii)$ holds. Write $x$ as a product $a \cdot x_1 \cdots x_n$, where $a \in G_u$ and $x_1, \ldots, x_n \in \Gamma \mathcal{G}$ are generators, such that $a=1$ and $x_1 \cdots x_n$ is reduced if $\mathrm{head}(x) \cap G_u = \emptyset$, and such that $a \cdot x_1 \cdots x_n$ is reduced otherwise; notice that in the latter case, $\mathrm{head}(x) \cap G_u = \{a \}$. According to Lemma \[lem:ProjClique\], the projection of $x$ onto the clique $G_u$ is $a$. Next, because $x$ and $y$ are adjacent, there exists a generator $b \in \Gamma \mathcal{G}$ such that $y = xb$. Since $x$ and $y$ must have different projections onto the clique $G_u$, we deduce from Lemma \[lem:ProjClique\] that necessarily $b$ shuffles to the left in the product $x_1 \cdots x_n \cdot b$ (see Section \[section:GP\] for the definition) and belongs to $G_u$. (In this case, the projection of $y$ onto the clique $G_u$ is $ab$, which is distinct from $a$ since $b \neq 1$.) As a consequence, the $x_i$’s belong to $\langle \mathrm{link}(u) \rangle$. Finally, it is sufficient to notice that any two consecutive edges of the sequence $$(a,ab), \ (ax_1,abx_1), \ldots, (ax_1 \cdots x_{n-1},abx_1\cdots x_{n-1}), \ (ax_1 \cdots x_n, abx_1 \cdots x_n)= (x,y)$$ are opposite sides of a square in order to deduce that $(x,y)$ and $(a,ab) \subset G_u$ are dual to the same hyperplane, namely $J_u$. Thus, we have proved the implication $(iii) \Rightarrow (i)$.
Let $J$ be a hyperplane of $X(\Gamma, \mathcal{G})$. Fixing a clique $C$ dual to $J$, we know from Lemma \[lem:CliqueStab\] that there exist $g \in \Gamma \mathcal{G}$ and $u \in V(\Gamma)$ such that $C=gG_u$, hence $J=gJ_u$. It is a consequence of Proposition \[prop:EdgesDualHyp\] that a vertex of $X(\Gamma,\mathcal{G})$ belongs to $N(J_u)$ if and only if it belongs to $\langle \mathrm{star}(u) \rangle$, so $N(J)=gN(J_u)=g \langle \mathrm{star}(u) \rangle$.
It remains to show that $\mathrm{stab}(J)= g \langle \mathrm{star}(u) \rangle g^{-1}$. Fix a non-trivial element $a \in G_u$. Then the hyperplane dual to the edge $(g,ga)$ is $J$. If $h \in \mathrm{stab}(J)$, then $J$ must also be dual to $h \cdot (g,ga)$, and we deduce from Proposition \[prop:EdgesDualHyp\] that $hg$ must belong to $g \langle \mathrm{star}(u) \rangle$, hence $h \in g \langle \mathrm{star}(u) \rangle g^{-1}$. Conversely, if $h$ belongs to $g \langle \mathrm{star}(u) \rangle g^{-1}$, then $hg \in g \langle \mathrm{star}(u) \rangle$ and $(hg)^{-1}(ga) \in G_u$ so that Proposition \[prop:EdgesDualHyp\] implies that $gJ_u=J$ is the hyperplane dual to the edge $h \cdot (g,ga)$, hence $hJ=J$ since these two hyperplanes turn out to be dual to the same edge. This concludes the proof of the theorem.
It is worth noticing that, as a consequence of Theorem \[thm:HypStab\], the hyperplanes of $X(\Gamma, \mathcal{G})$ are naturally labelled by $V(\Gamma)$. More precisely, since any hyperplane $J$ of $X(\Gamma, \mathcal{G})$ is a translate of some $J_u$, we say that the corresponding vertex $u \in V(\Gamma)$ *labels* $J$. Equivalently, by noticing that the edges of $X(\Gamma, \mathcal{G})$ are naturally labelled by vertices of $\Gamma$, the vertex of $\Gamma$ labelling a hyperplane coincides with the common label of all its edges (as justified by Facts \[fact:Triangle\] and \[fact:Square\]). Let us record the following elementary but quite useful statement:
\[lem:transverseimpliesadj\] Two transverse hyperplanes of $X(\Gamma, \mathcal{G})$ are labelled by adjacent vertices of $\Gamma$, and two tangent hyperplanes of $X(\Gamma, \mathcal{G})$ are labelled by distinct vertices of $\Gamma$.
The assertion about transverse hyperplanes is a direct consequence of Fact \[fact:Square\]. Now, let $J_1$ and $J_2$ be two tangent hyperplanes, and let $u_1,u_2 \in V(\Gamma)$ denote their labels respectively. Since these two hyperplanes are tangent, there exists a vertex $g \in X(\Gamma, \mathcal{G})$ which belongs to both $N(J_1)$ and $N(J_2)$. Fix two cliques, say $C_1$ and $C_2$ respectively, containing $g$ and dual to $J_1$ and $J_2$. According to Lemma \[lem:CliqueStab\], we have $C_1=gG_{u_1}$ and $C_2=gG_{u_2}$. Clearly, $u_1$ and $u_2$ must be distinct, since otherwise $C_1$ and $C_2$ would coincide, contradicting the fact that $J_1$ and $J_2$ are tangent. Therefore, our two hyperplanes $J_1$ and $J_2$ are indeed labelled by distinct vertices of $\Gamma$.
Now we want to focus on the second goal of this paragraph by showing that hyperplanes of $X(\Gamma, \mathcal{G})$ are closely related to its geometry. Our main result is the following:
\[thm:QmHyperplanes\] The following statements hold:
- For every hyperplane $J$, the graph $X(\Gamma, \mathcal{G}) \backslash \backslash J$ is disconnected. Its connected components are called *sectors*.
- Carriers of hyperplanes are convex.
- For any two vertices $x,y \in X(\Gamma, \mathcal{G})$, $d(x,y)= \# \{ \text{hyperplanes separating $x$ and $y$} \}$.
- A path in $X(\Gamma, \mathcal{G})$ is a geodesic if and only if it intersects each hyperplane at most once.
In this statement, we denoted by $X(\Gamma, \mathcal{G}) \backslash \backslash J$, where $J$ is a hyperplane, the graph obtained from $X(\Gamma, \mathcal{G})$ by removing the interiors of the edges of $J$.
Let $J$ be a hyperplane of $X(\Gamma, \mathcal{G})$. Up to translating $J$ by an element of $\Gamma \mathcal{G}$, we may suppose without loss of generality that $J=J_u$ for some $u \in V(\Gamma)$. Fix a non-trivial element $a \in G_u$. We claim that the vertices $1$ and $a$ are separated by $J_u$. Indeed, if $x_1, \ldots, x_n$ define a path from $1$ to $a$ in $X(\Gamma,\mathcal{G})$, there must exist some $1 \leq i \leq n-1$ such that the projections of $x_i$ and $x_{i+1}$ onto the clique $G_u$ are distinct, since the projections of $1$ and $a$ onto $G_u$ are obviously $1$ and $a$ respectively and are distinct. It follows from Proposition \[prop:EdgesDualHyp\] that the edge $(x_i,x_{i+1})$ is dual to $J_u$. This concludes the proof of the first point in the statement of our theorem.
The convexity of carriers of hyperplanes is a consequence of the characterisation of geodesics given by Proposition \[prop:distinGP\] and of the description of carriers given by Theorem \[thm:HypStab\].
Let $\gamma$ be a geodesic between two vertices $x,y \in X(\Gamma, \mathcal{G})$. Suppose by contradiction that it intersects a hyperplane at least twice. Let $e_1, \ldots, e_n$ be the sequence of edges corresponding to $\gamma$. Fix two indices $1 \leq i <j \leq n$ such that $e_i$ and $e_j$ are dual to the same hyperplane, say $J$, and such that the subpath $\rho = e_{i+1} \cup \cdots \cup e_{j-1}$ intersects each hyperplane at most once and does not intersect $J$. Notice that, as a consequence of the convexity of $N(J)$, this subpath must be contained in the carrier $N(J)$. Therefore, any hyperplane dual to an edge of $\rho$ must be transverse to $J$. It follows that, if $x_i$ denotes the generator labelling the edge $e_i$ for every $1 \leq i \leq n$, then $x_i$ shuffles to the end in the product $x_i \cdot x_{i+1} \cdots x_{j-1}$; moreover, $x_i$ and $x_j$ belong to the same vertex group. Consequently, the product $x_1 \cdots x_n$ is not reduced, contradicting the fact that $\gamma$ is a geodesic according to Proposition \[prop:distinGP\]. Thus, we have proved that a geodesic in $X(\Gamma, \mathcal{G})$ intersects each hyperplane at most once. This implies the inequality $$d(x,y) \leq \# \{ \text{hyperplanes separating $x$ and $y$}\}.$$ The reverse inequality is clear since any path from $x$ to $y$ must intersect each hyperplane separating $x$ and $y$, proving the equality. As a consequence, if a path between $x$ and $y$ intersects each hyperplane at most once, then its length coincides with the number of hyperplanes separating $x$ and $y$, which coincides itself with the distance between $x$ and $y$. A fortiori, such a path must be a geodesic. This concludes the proof of the third and fourth points in the statement of the theorem.
#### Projections on parabolic subgroups.
We saw in Lemma \[lem:ProjClique\] that it is possible to project naturally vertices of $X(\Gamma, \mathcal{G})$ onto a given clique. Now, we want to extend this observation to a wider class of subgraphs. More precisely, if $\Lambda$ is a subgraph of $\Gamma$, then we claim that vertices of $X(\Gamma, \mathcal{G})$ project onto the subgraph $\langle \Lambda \rangle$. This covers cliques but also carriers of hyperplanes according to Theorem \[thm:HypStab\]. Before stating and proving the main result of this paragraph, we would like to emphasize that, as a consequence of Proposition \[prop:distinGP\], such a subgraph $\langle \Lambda \rangle$ is necessarily convex.
\[prop:ProjHyp\] Fix a subgraph $\Lambda \subset \Gamma$ and a vertex $g \in X(\Gamma, \mathcal{G})$. There exists a unique vertex $x$ of $\langle \Lambda \rangle$ minimising the distance to $g$. Moreover, any hyperplane separating $g$ from $x$ separates $g$ from $\langle \Lambda \rangle$ (ie., $g$ and $\langle \Lambda \rangle$ lie in distinct sectors delimited by the hyperplane).
Fix a vertex $x \in \langle \Lambda \rangle$ minimising the distance to $g$, and a geodesic $[g,x]$ from $g$ to $x$. We say that an edge of $[g,x]$ is *bad* if the hyperplane dual to it crosses $\langle \Lambda \rangle$. Let $e$ be the bad edge of $[g,x]$ which is closest to $x$. As a consequence, the edges of $[g,x]$ between $e$ and $x$ have their hyperplanes which are disjoint from $\langle \Lambda \rangle$. This implies that these hyperplanes are all transverse to the hyperplane $J$ dual to $e$, so that, as a consequence of Lemma \[lem:transverseimpliesadj\], the syllable $s$ of $g^{-1}x$ labelling $e$ belongs to the tail of $g^{-1}x$. Moreover, the fact that $J$ crosses $\langle \Lambda \rangle$ implies that it is labelled by a vertex of $\Lambda$, hence $s \in \langle \Lambda \rangle$. We deduce from Proposition \[prop:distinGP\] that there exists a geodesic from $g$ to $x$ whose last edge is labelled by $s$. Since $s$ and $x$ both belong to $\langle \Lambda \rangle$, it follows that the penultimate vertex along our geodesic, namely $xs^{-1}$, belongs to $\langle \Lambda \rangle$ and satisfies $d(g,xs^{-1}) <d(g,x)$, contradicting the definition of $x$. Thus, we have proved that a geodesic from $g$ to $x$ does not contain any bad edge. In other words, any hyperplane separating $g$ from $x$ separates $g$ from $\langle \Lambda \rangle$. This proves the second assertion of our proposition.
Now, suppose that $y \in \langle \Lambda \rangle$ is a second vertex minimising the distance to $g$. If $x$ and $y$ are distinct, then there exists a hyperplane $J$ separating them. Because such a hyperplane necessarily crosses $\langle \Lambda \rangle$, we deduce from the first paragraph of our proof that $J$ does not separate $g$ from $x$; similarly, $J$ does not separate $g$ from $y$. But this implies that $J$ does not separate $x$ and $y$, contradicting the choice of $J$. This proves that $x$ and $y$ necessarily coincide, concluding the proof of our proposition.
Below, we record several easy consequences of Proposition \[prop:ProjHyp\].
\[cor:hypsepprojections\] Let $\Lambda$ be a subgraph of $\Gamma$ and let $x,y \in X(\Gamma, \mathcal{G})$ be two vertices. The hyperplanes separating the projections of $x$ and $y$ onto $\langle \Lambda \rangle$ are precisely the hyperplanes separating $x$ and $y$ which intersect $\langle \Lambda \rangle$. In particular, any hyperplane separating these projections also separates $x$ and $y$.
Let $x',y' \in \langle \Lambda \rangle$ denote respectively the projections of $x$ and $y$ onto $\langle \Lambda \rangle$. If $J$ is a hyperplane separating $x'$ and $y'$ then it has to cross $\langle \Lambda \rangle$. As a consequence of Proposition \[prop:ProjHyp\], $J$ cannot separate $x$ and $x'$ nor $y$ and $y'$. Therefore, it has to separate $x$ and $y$. Conversely, suppose that $J$ is a hyperplane separating $x$ and $y$ which intersects $\langle \Lambda \rangle$. Once again according to Proposition \[prop:ProjHyp\], $J$ cannot separate $x$ and $x'$ nor $y$ and $y'$. Therefore, it has to separate $x'$ and $y'$. This concludes the proof of our lemma.
\[cor:diamproj\] Let $\Lambda, \Xi \subset \Gamma$ be two subgraphs and let $g,h \in \Gamma \mathcal{G}$. The diameter of the projection of $g \langle \Lambda \rangle$ onto $h \langle \Xi \rangle$ is at most the number of hyperplanes intersecting both $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$.
For convenience, let $p : X(\Gamma, \mathcal{G}) \to h \langle \Xi \rangle$ denote the projection onto $h \langle \Xi \rangle$. Let $D$ denote the number (possibly infinite) of hyperplanes intersecting both $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$. We claim that, for every vertices $x,y \in g \langle \Lambda \rangle$, the distance between $p(x)$ and $p(y)$ is at most $D$. Indeed, as a consequence of Corollary \[cor:hypsepprojections\], any hyperplane separating $p(x)$ and $p(y)$ separates $x$ and $y$, so that any hyperplane separating $p(x)$ and $p(y)$ must intersect both $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$. Consequently, the diameter of $p(g \langle \Lambda \rangle)$ is at most $D$.
\[cor:minseppairhyp\] Let $\Lambda, \Xi \subset \Gamma$ be two subgraphs and let $g,h \in \Gamma \mathcal{G}$ be two elements. Fix two vertices $x \in g \langle \Lambda \rangle$ and $y \in h \langle \Xi \rangle$ minimising the distance between $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$. The hyperplanes separating $x$ and $y$ are precisely those separating $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$.
Let $J$ be a hyperplane separating $x$ and $y$. Notice that $x$ is the projection of $y$ onto $g\langle \Lambda \rangle$, and similarly $y$ is the projection of $x$ onto $h\langle \Xi \rangle$. By applying Proposition \[prop:ProjHyp\] twice, it follows that $J$ is disjoint from both $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$. Consequently, $J$ separates $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$. Conversely, it is clear that any hyperplane separating $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$ also separates $x$ and $y$.
Let $\Lambda, \Xi \subset \Gamma$ be two subgraphs and let $g,h \in \Gamma \mathcal{G}$ be two elements. If $g \langle \Lambda \rangle \cap h \langle \Xi \rangle = \emptyset$ then there exists a hyperplane separating $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$.
Fix two vertices $x \in g \langle \Lambda \rangle$ and $y \in h \langle \Xi \rangle$ minimising the distance between $g\langle \Lambda \rangle$ and $h\langle \Xi \rangle$. Because these two subgraphs are disjoint, $x$ and $y$ must be distinct. According to Corollary \[cor:minseppairhyp\], taking a hyperplane separating $x$ and $y$ provides the desired hyperplane.
#### Hyperplane stabilisers.
A useful tool when working with the Cayley graph $X(\Gamma, \mathcal{G})$ is the notion of *rotative-stabiliser*.
Let $\Gamma$ be a simplicial graph and $\mathcal{G}$ a collection of groups indexed by $V(\Gamma)$. Given a hyperplane $J$ of $X(\Gamma, \mathcal{G})$, its *rotative-stabiliser* is the following subgroup of $\Gamma \mathcal{G}$: $$\mathrm{stab}_{\circlearrowleft}(J) := \bigcap\limits_{\text{$C$ clique dual to $J$}} \mathrm{stab}(C).$$
We begin by describing rotative-stabilisers of hyperplanes in $X(\Gamma, \mathcal{G})$. More precisely, our first main result is the following:
\[prop:rotstabinX\] The rotative-stabiliser of a hyperplane $J$ of $X(\Gamma, \mathcal{G})$ coincides with the stabiliser of any clique dual to $J$. Moreover, $\mathrm{stab}_{\circlearrowleft}(J)$ acts freely and transitively on the set of sectors delimited by $J$, and it stabilises each sector delimited by the hyperplanes transverse to $J$; in particular, it stabilises the hyperplanes transverse to $J$.
Let $J$ be a hyperplane of $X(\Gamma, \mathcal{G})$. Up to translating $J$ by some element of $\Gamma \mathcal{G}$, we may suppose without loss of generality that $J=J_u$ for some $u \in V(\Gamma)$. As a consequence of Proposition \[prop:EdgesDualHyp\], the cliques of $X(\Gamma, \mathcal{G})$ dual to $J_u$ correspond to the cosets $g G_u$ where $g \in \langle \mathrm{link}(u) \rangle$. Clearly, they all have the same stabiliser, namely $G_u$. This proves the first assertion of our proposition.
Next, it follows from Proposition \[prop:EdgesDualHyp\] that two vertices of $X(\Gamma, \mathcal{G})$ belong to the same sector delimited by $J_u$ if and only if they have the same projection onto the clique $G_u$. Therefore, the collection of sectors delimited by $J_u$ is naturally in bijection with the vertices of the clique $G_u$. Since $\mathrm{stab}_\circlearrowleft(J_u)=G_u$ acts freely and transitively on the vertices of the clique $G_u$, it follows that this rotative-stabiliser acts freely and transitively on the set of sectors delimited by $J_u$.
Finally, let $J_1$ and $J_2$ be two transverse hyperplanes. Up to translating $J_1$ and $J_2$ by an element of $\Gamma \mathcal{G}$, we may suppose without loss of generality that the vertex $1$ belongs to $N(J_1) \cap N(J_2)$. As a consequence, there exist vertices $u,v \in V(\Gamma)$ such that $J_1=J_u$ and $J_2=J_v$. According to Lemma \[lem:transverseimpliesadj\], $u$ and $v$ are adjacent in $\Gamma$, so that the vertex groups $G_u$ and $G_v$ commute. As a by-product, one gets the following statement, which we record for future use:
\[fact:RotStabCom\] The rotative-stabilisers of two transverse hyperplanes of $X(\Gamma, \mathcal{G})$ commute, ie., any element of one rotative-stabiliser commutes with any element of the other.
For every vertex $x \in X(\Gamma, \mathcal{G})$ and every element $g \in \mathrm{stab}_\circlearrowleft(J_u)=G_u$, we deduce from Lemma \[lem:ProjClique\] that $x$ and $gx$ have the same projection onto the clique $G_v$ since the vertex groups $G_u$ and $G_v$ commute. Because two vertices of $X(\Gamma, \mathcal{G})$ belong to the same sector delimited by $J_v$ if and only if they have the same projection onto the clique $G_v$, according to Proposition \[prop:EdgesDualHyp\], we conclude that $\mathrm{stab}_\circlearrowleft(J_u)$ stabilises each sector delimited by $J_v$.
We also record the following preliminary lemma which will be used later.
\[lem:shortendist\] Let $x \in X(\Gamma, \mathcal{G})$ be a vertex and let $J,H$ be two hyperplanes of $X(\Gamma, \mathcal{G})$. Suppose that $J$ separates $x$ from $H$ and let $g \in \mathrm{stab}_\circlearrowleft(J)$ denote the unique element sending $H$ into the sector delimited by $J$ which contains $x$. Then $d(x,N(gH))<d(x,N(H))$.
Let $y \in N(H)$ denote the projection of $x$ onto $N(H)$ and fix a geodesic $[x,y]$ between $x$ and $y$. Because $J$ separates $x$ from $H$, $[x,y]$ must contain an edge $[a,b]$ dual to $J$. Let $[x,a]$ and $[b,y]$ denote the subpaths of $[x,y]$ between $x$ and $a$, and $b$ and $y$, respectively. Notice that $gb=a$ since $g$ stabilises the clique containing $[a,b]$ and sends the sector delimited by $J$ which contains $H$ (and a fortiori $b$) to the sector delimited by $J$ which contains $x$ (and a fortiori $a$). As a consequence, $[x,a] \cup g [b,y]$ defines a path from $x$ to $gN(H)=N(gH)$ of length $d(x,y)-1$, so that $$d(x,N(gH)) \leq d(x,y)-1 = d(x,N(H))-1,$$ concluding the proof.
One feature of rotative-stabilisers is that they can be used to play ping-pong. As an illustration, we prove a result that will be fundamental in Section \[section:ConjAut\]. Let us first introduce some notation:
\[def:peripheral\] A collection of hyperplanes $\mathcal{J}$ of $X(\Gamma, \mathcal{G})$ is *peripheral* if, for every $J_1,J_2 \in \mathcal{J}$, $J_1$ does not separate $1$ from $J_2$.
\[lem:pingpong\] Fix a collection of hyperplanes $\mathcal{J}$, and, for every $J \in \mathcal{J}$, let $S(J)$ denote the sector delimited by $J$ that contains $1$. If $\mathcal{J}$ is peripheral, then $g \notin \bigcap\limits_{J \in \mathcal{J}} S(J)$ for every non-trivial $g \in \langle \mathrm{stab}_\circlearrowleft (J) \mid J \in \mathcal{J} \rangle$.
For every $J \in \mathcal{J}$, let $R(J)$ denote the union of all the sectors delimited by $J$ that do not contain the vertex $1$. In order to prove our lemma, we have to show that $g \in \bigcup\limits_{J \in \mathcal{J}} R(J)$ for every non-trivial element $g \in \langle \mathrm{stab}_\circlearrowleft(J) \mid J \in \mathcal{J} \rangle$. Because $\mathcal{J}$ is peripheral, we deduce from Proposition \[prop:rotstabinX\] that:
- If $J_1,J_2 \in \mathcal{J}$ are two transverse hyperplanes, then $g \cdot R(J_1) = R(J_1)$ for every $g \in \mathrm{stab}_\circlearrowleft (J_2)$;
- If $J_1,J_2 \in \mathcal{J}$ are two distinct hyperplanes which are not transverse, then $g \cdot R(J_1)$ is contained in $R(J_2)$ for every non-trivial $g \in \mathrm{stab}_\circlearrowleft(J_2)$; indeed, since $\mathcal{J}$ is peripheral and $J_1,J_2$ are not transverse, $R(J_1)$ is contained in $S(J_2)$, and any such $g$ sends $S(J_2)$ into $R(J_2)$.
- For every hyperplane $J \in \mathcal{J}$ and every non-trivial element $g \in \mathrm{stab}_\circlearrowleft (J)$, $g$ belongs to $R(J)$.
Let $\Phi$ be the graph whose vertex-set is $\mathcal{J}$ and whose edges connect two transverse hyperplanes, and set $\mathcal{H}= \{ G_J= \mathrm{stab}_\circlearrowleft(J) \mid J \in \mathcal{J} \}$. As a consequence of Fact \[fact:RotStabCom\], we have a natural surjective morphism $\phi : \Phi \mathcal{H} \to \langle \mathrm{stab}_\circlearrowleft(J) \mid J \in \mathcal{J} \rangle$. It follows that a non-trivial element $g \in \langle \mathrm{stab}_\circlearrowleft(J) \mid J \in \mathcal{J} \rangle$ can be represented as a non-empty and reduced word in $\Phi \mathcal{H}$, say $w$ such that $\phi(w)=g$. We claim that $g$ belongs to $R(J)$ for some vertex $J \in V(\Phi)$ such that $\mathrm{head}(w)$ contains a syllable of $G_J$.
We argue by induction on the length of $w$. If $w$ has length one, then $w \in G_J \backslash \{ 1 \}$ for some $J \in V(\Phi)$. Our third point above implies that $g \in R(J)$. Now, suppose that $w$ has length at least two. Write $w=ab$ where $a$ is the first syllable of $w$ and $b$ the rest of the word. Thus, $a \in G_J \backslash \{ 1 \}$ for some $J \in V(\Phi)$. We know from our induction hypothesis that $\phi(b) \in R(I)$ where $I$ is a vertex of $\Phi$ such that $\mathrm{head}(b)$ contains a syllable of $G_I$. Notice that $I \neq J$ since otherwise the word $w=ab$ would not be reduced. Two cases may happen: either $I$ and $J$ are not adjacent in $\Phi$, so that our second point above implies that $g = \phi(ab) \in \phi(a) \cdot R(I) \subset R(J)$; or $I$ and $J$ are adjacent, so that our first point above implies that $g= \phi(ab) \in \phi(a) \cdot R(I) = R(I)$. It is worth noticing that, in the former case, $\mathrm{head}(w)$ contains a syllable of $G_J$, namely $a$; in the latter case, we know that we can write $b$ as a reduced product $cd$ where $c$ is a syllable of $G_I$, hence $w=ab=acd=cad$ since $a$ and $c$ belong to the commuting vertex groups $G_I$ and $G_J$, which implies that $\mathrm{head}(w)$ contains a syllable of $G_I$. This concludes the proof of our lemma.
The Davis complex associated to a graph product of groups {#section:Davis}
---------------------------------------------------------
In this section, we recall an important complex associated to a graph product of groups, whose structure will be used in Section \[section:atomic\].
\[def:Davis\] The *Davis complex* $D(\Gamma, {{\mathcal G}})$ associated to the graph product $\Gamma{{\mathcal G}}$ is defined as follows:
- Vertices correspond to left cosets of the form $g \langle \Lambda \rangle$ for $g\in \Gamma{{\mathcal G}}$ and $\Lambda \subset \Gamma$ a (possibly empty) complete subgraph.
- For every $g\in \Gamma{{\mathcal G}}$ and complete subgraphs $\Lambda_1, \Lambda_2 \subset \Gamma$ that differ by exactly one vertex $v$, one puts an edge between the vertices $g\langle \Lambda_1\rangle$ and $g \langle \Lambda_2\rangle$. The vertex $v$ is called the *label* of that edge.
- One obtains a cubical complex from this graph by adding for every $k \geq 2$ a $k$-cube for every subgraph isomorphic to the $1$-skeleton of a $k$-cube.
This complex comes with an action of $\Gamma{{\mathcal G}}$: The group $\Gamma{{\mathcal G}}$ acts on the vertices by left multiplication on left cosets, and this action extends to the whole complex.
If all the local groups $G_v$ are cyclic of order $2$, then $\Gamma{{\mathcal G}}$ is a right-angled Coxeter group (generally denoted $W_\Gamma$), and $D(\Gamma, {{\mathcal G}})$ is the standard Davis complex associated to a Coxeter group in that case. The Davis complex associated to a general graph product has a similarly rich combinatorial geometry. More precisely:
\[Davis\_building\] The Davis complex $D(\Gamma, {{\mathcal G}})$ is a CAT(0) cube complex.
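For instance, if $\Gamma$ is reduced to a single vertex $u$, then $D(\Gamma, {{\mathcal G}})$ is a star: its vertices are the singletons $\{g\}$ with $g \in G_u$ together with the coset $G_u$ itself, and each $\{g\}$ is joined to $G_u$ by an edge labelled by $u$. If $\Gamma$ is a single edge with endpoints $u,v$ both labelled by $\mathbb{Z}_2$, so that $\Gamma {{\mathcal G}} \simeq \mathbb{Z}_2 \oplus \mathbb{Z}_2$, one checks that $D(\Gamma, {{\mathcal G}})$ consists of four squares, one for each $g \in \Gamma {{\mathcal G}}$ with vertex-set $\{ \{g\}, g G_u, g G_v, \Gamma {{\mathcal G}} \}$, all sharing the central vertex $\Gamma {{\mathcal G}}$.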
We mention here a few useful observations about the action of $\Gamma{{\mathcal G}}$ on $D(\Gamma, {{\mathcal G}})$.
\[obs:stab\] The following holds:
- The action is without inversions, that is, an element of $\Gamma{{\mathcal G}}$ stabilising a cube of $D(\Gamma, {{\mathcal G}})$ setwise fixes that cube pointwise.
- The stabiliser of a vertex corresponding to a coset $g\langle \Lambda \rangle$ is the subgroup $g\langle \Lambda \rangle g^{-1}$.
- The action of $\Gamma{{\mathcal G}}$ on $D(\Gamma, {{\mathcal G}})$ is cocompact, and a strict fundamental domain $K$ for this action is given by the subcomplex spanned by cosets of the form $\langle \Lambda\rangle $ (that is, cosets associated to the identity element of $\Gamma{{\mathcal G}}$).
The CAT(0) geometry of the Davis complex can be used for instance to recover the structure of finite subgroups of graph products due to Green [@GreenGP], which will be used in Section \[section:applicationfinite\]:
\[lem:finitesub\] A finite subgroup of $\Gamma \mathcal{G}$ is contained in a conjugate of the form $g\langle \Lambda \rangle g^{-1}$ for $g \in \Gamma \mathcal{G}$ and $\Lambda \subset \Gamma$ a complete subgraph.
Let $H \leq \Gamma \mathcal{G}$ be a finite subgroup. Since $D(\Gamma, {{\mathcal G}})$ is a (finite-dimensional, hence complete) CAT(0) cube complex by Theorem \[Davis\_building\], the CAT(0) fixed-point theorem [@BridsonHaefliger Corollary II.2.8] implies that $H$ fixes a point of $D(\Gamma, \mathcal{G})$. Since the action is without inversions, it follows that $H$ fixes a vertex, that is, $H$ is contained in a conjugate of the form $g\langle \Lambda \rangle g^{-1}$ for $g \in \Gamma \mathcal{G}$ and $\Lambda \subset \Gamma$ a complete subgraph.
It is also possible to give a proof of Lemma \[lem:finitesub\] using the geometry of the quasi-median graph $X(\Gamma,\mathcal{G})$ introduced in Section \[section:CubicalLikeGeom\]. Indeed, if $H$ is a finite subgroup of $\Gamma \mathcal{G}$, then it follows from [@Qm Theorem 2.115] that $H$ stabilises a prism of $X(\Gamma,\mathcal{G})$, so that the conclusion of Lemma \[lem:finitesub\] then follows from Lemma \[lem:PrismGP\].
Relation between the Davis complex and the quasi-median graph {#sec:dual}
-------------------------------------------------------------
There is a very close link between the Davis complex and the quasi-median graph $X(\Gamma,\mathcal{G})$, which we now explain.\
Let us consider the set of $\Gamma{{\mathcal G}}$-translates of the fundamental domain $K$ introduced in Section \[section:Davis\]. Two such translates $gK$ and $hK$ share a codimension $1$ face if and only if $g^{-1}h$ is a non-trivial element of some vertex group $G_v$. Thus, the set of such translates can be seen as a chamber system over the vertex set $V(\Gamma)$ that defines a building with underlying Coxeter group $W_\Gamma$, as explained in [@DavisBuildingsCAT0 Paragraph 5]. Moreover, the Davis complex is then quasi-isometric to the geometric realisation of that building (see [@DavisBuildingsCAT0 Paragraph 8] for the geometric realisation of a building).
When the Davis complex is thought of as a complex of chambers associated to a building, a very interesting associated graph is its dual graph. This is the simplicial graph whose vertices correspond to the $\Gamma{{\mathcal G}}$-translates of the fundamental domain $K$, and such that two vertices $gK$ and $hK$ are joined by an edge if and only if the translates share a codimension $1$ face, or in other words if $g^{-1}h$ is a non-trivial element of some local group $G_v$. Since the action of $\Gamma{{\mathcal G}}$ is free and transitive on the set of $\Gamma{{\mathcal G}}$-translates of $K$, this dual graph can also be described as the simplicial graph whose vertices are elements of $\Gamma{{\mathcal G}}$ and such that two elements $g, g'\in \Gamma{{\mathcal G}}$ are joined by an edge if and only if there is some non-trivial element $s \in G_v$ of some vertex group such that $g' = gs$. Thus, this dual graph is exactly the quasi-median graph $X(\Gamma, {{\mathcal G}})$.\
Descriptions of automorphism groups
===================================
In this section, we study an important class of automorphisms of a graph product of groups. **Until the end of Section \[section:algebraiccharacterisation\], we fix a finite simplicial graph $\Gamma$ and a collection of groups $\mathcal{G}$ indexed by $V(\Gamma)$.** Applications to particular classes of graph products will then be given in Sections \[section:applicationfinite\] and \[section:applicationgirth\].
A *conjugating automorphism* of $\Gamma \mathcal{G}$ is an automorphism $\varphi : \Gamma \mathcal{G} \to \Gamma \mathcal{G}$ satisfying the following property: For every $G \in \mathcal{G}$, there exist $H \in \mathcal{G}$ and $g \in \Gamma \mathcal{G}$ such that $\varphi(G)=gHg^{-1}$. Let $\mathrm{ConjAut}_{\Gamma, {{\mathcal G}}}(\Gamma \mathcal{G})$ be the subgroup of conjugating automorphisms of $\Gamma{{\mathcal G}}$. In order to lighten notations, we will simply denote it $\mathrm{ConjAut}(\Gamma \mathcal{G})$ in the rest of this article, by a slight abuse of notation that we comment on below.
We emphasize that the subgroup $\mathrm{ConjAut}_{\Gamma, {{\mathcal G}}}(\Gamma \mathcal{G})$ of $\mathrm{Aut}(\Gamma \mathcal{G})$ heavily depends on the chosen decomposition of $\Gamma \mathcal{G}$ as a graph product under consideration, and not just on the group $\Gamma \mathcal{G}$ itself. For instance, let $\Gamma \mathcal{G}$ be the graph product corresponding to a single edge whose endpoints are both labelled by $\mathbb{Z}$, and let $\Phi \mathcal{H}$ be the graph product corresponding to a single vertex labelled by $\mathbb{Z}^2$. Both $\Gamma \mathcal{G}$ and $\Phi \mathcal{H}$ are isomorphic to $\mathbb{Z}^2$, but $\mathrm{ConjAut}_{\Gamma, {{\mathcal G}}}(\Gamma \mathcal{G})$ is finite while $\mathrm{ConjAut}_{\Phi, \mathcal{H}}(\Phi \mathcal{H}) = \mathrm{GL}(2, \mathbb{Z})$.
Thus, writing $\mathrm{ConjAut}(\Gamma \mathcal{G})$ instead of $\mathrm{ConjAut}_{\Gamma, {{\mathcal G}}}(\Gamma \mathcal{G})$ is indeed an abuse of notation. However, in this article we will only consider a single graph product decomposition at a time, so this lighter notation shall not lead to confusion.
Our goal in this section is to find a simple and natural generating set for $\mathrm{ConjAut}(\Gamma \mathcal{G})$. For this purpose we need the following definitions:
We define the following automorphisms of $\Gamma{{\mathcal G}}$:
- Given an isometry $\sigma : \Gamma \to \Gamma$ and a collection of isomorphisms $\Phi = \{ \varphi_u : G_u \to G_{\sigma(u)} \mid u \in V(\Gamma) \}$, the *local automorphism* $(\sigma, \Phi)$ is the automorphism of $\Gamma \mathcal{G}$ induced by $$\left\{ \begin{array}{ccc} \bigcup\limits_{u \in V(\Gamma)} G_u & \to & \Gamma \mathcal{G} \\ g & \mapsto & \text{$\varphi_u(g)$ if $g \in G_u$} \end{array} \right..$$ The group of local automorphisms of $\Gamma \mathcal{G}$ is denoted by $\mathrm{Loc}(\Gamma \mathcal{G})$. Again, this subgroup should be denoted $\mathrm{Loc}_{\Gamma, {{\mathcal G}}}(\Gamma \mathcal{G})$ as it depends on the chosen decomposition as a graph product, but we will use the same abuse of notation as above. Also, we denote by $\mathrm{Loc}^0(\Gamma \mathcal{G})$ the group of local automorphisms satisfying $\sigma = \mathrm{Id}$. Notice that $\mathrm{Loc}^0(\Gamma \mathcal{G})$ is a finite-index subgroup of $\mathrm{Loc}(\Gamma \mathcal{G})$ naturally isomorphic to the direct sum $\bigoplus\limits_{u \in V(\Gamma)} \mathrm{Aut}(G_u)$.
- Given a vertex $u \in V(\Gamma)$, a connected component $\Lambda$ of $\Gamma \backslash \mathrm{star}(u)$ and an element $h \in G_u$, the *partial conjugation* $(u, \Lambda,h)$ (see the example below) is the automorphism of $\Gamma \mathcal{G}$ induced by $$\left\{ \begin{array}{ccc} \bigcup\limits_{v \in V(\Gamma)} G_v & \to & \Gamma \mathcal{G} \\ g & \mapsto & \left\{ \begin{array}{cl} g & \text{if $g \notin \langle \Lambda \rangle$} \\ hgh^{-1} & \text{if $g \in \langle \Lambda \rangle$} \end{array} \right. \end{array} \right. .$$ Notice that an inner automorphism of $\Gamma \mathcal{G}$ is always a product of partial conjugations and local automorphisms.
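To illustrate the previous definition, suppose that $\Gamma$ is a path $u - v - w$. Then $\Gamma \backslash \mathrm{star}(u)$ has a single connected component, namely the vertex $w$, and, given $h \in G_u$, the partial conjugation $(u, \{w\}, h)$ fixes $G_u$ and $G_v$ pointwise and sends every $g \in G_w$ to $hgh^{-1}$. Similarly, $\Gamma \backslash \mathrm{star}(w)$ has the single connected component $\{u\}$, while $\Gamma \backslash \mathrm{star}(v)$ is empty, so that no partial conjugation is based at $v$.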
We denote by $\mathrm{ConjP}(\Gamma \mathcal{G})$ the subgroup of $\mathrm{Aut}(\Gamma \mathcal{G})$ generated by the inner automorphisms, the local automorphisms, and the partial conjugations. Again, this subgroup should be denoted $\mathrm{ConjP}_{\Gamma, {{\mathcal G}}}(\Gamma \mathcal{G})$ as it depends on the chosen decomposition as a graph product, but we will use the same abuse of notation as above.
It is clear that the inclusion $\mathrm{ConjP}(\Gamma \mathcal{G}) \subset \mathrm{ConjAut}(\Gamma \mathcal{G})$ holds. The main result of this section is that the reverse inclusion also holds. Namely:
\[thm:conjugatingauto\] The group of conjugating automorphisms of $\Gamma \mathcal{G}$ coincides with $\mathrm{ConjP}(\Gamma \mathcal{G})$.
In Sections \[section:applicationfinite\] and \[section:applicationgirth\], we deduce a description of automorphism groups of specific classes of graph products.
Action of $\mathrm{ConjAut}(\Gamma \mathcal{G})$ on the transversality graph {#section:algebraiccharacterisation}
----------------------------------------------------------------------------
In this section, our goal is to extract from the quasi-median graph associated to a given graph product a graph on which the group of conjugating automorphisms acts. **Let us recall that, until the end of Section \[section:algebraiccharacterisation\], we fix a finite simplicial graph $\Gamma$ and a collection of groups $\mathcal{G}$ indexed by $V(\Gamma)$.**
The *transversality graph* $T(\Gamma, \mathcal{G})$ is the graph whose vertices are the hyperplanes of $X(\Gamma, \mathcal{G})$ and whose edges connect two hyperplanes whenever they are transverse.
Note that the transversality graph is naturally isomorphic to the crossing graph of the Davis complex $D(\Gamma, {{\mathcal G}})$, that is, the simplicial graph whose vertices are parallelism classes of hyperplanes of $D(\Gamma, {{\mathcal G}})$ and whose edges correspond to transverse (classes of) hyperplanes. One of the advantages of working with the transversality graph instead is that it does not require us to talk about parallelism classes or to choose particular representatives in proofs, which will make some of the arguments in this section simpler.
From this definition, it is not clear at all that the group of conjugating automorphisms acts on the corresponding transversality graph. To solve this problem, we will state and prove an algebraic characterisation of the transversality graph. This description is the following:
The *factor graph* $F(\Gamma, \mathcal{G})$ is the graph whose vertices are the conjugates of the vertex groups and whose edges connect two conjugates whenever they commute (ie., every element of one subgroup commutes with every element of the other one).
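For instance, if $\Gamma$ is a single edge with non-trivial vertex groups, then the two vertex groups are normal in $\Gamma \mathcal{G}$ and commute, so that $F(\Gamma, \mathcal{G})$ is a single edge. If instead $\Gamma$ consists of two isolated vertices with non-trivial vertex groups, so that $\Gamma \mathcal{G}$ is a free product, then no two distinct conjugates of vertex groups commute, and $F(\Gamma, \mathcal{G})$ is a discrete graph.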
The main result of this section is the following algebraic characterisation:
\[prop:factorgraphalg\] The map $$\left\{ \begin{array}{ccc} T(\Gamma, \mathcal{G}) & \to & F(\Gamma, \mathcal{G}) \\ J & \mapsto & \mathrm{stab}_\circlearrowleft(J) \end{array} \right.$$ induces a graph isomorphism $T(\Gamma, \mathcal{G}) \to F(\Gamma, \mathcal{G})$.
Because the rotative-stabiliser of a hyperplane is indeed a conjugate of a vertex group, according to Lemma \[lem:CliqueStab\] and Proposition \[prop:rotstabinX\], our map is well-defined. Let $G \in \mathcal{G}$ be a vertex group and let $g \in \Gamma \mathcal{G}$. Then $gGg^{-1}$ is the stabiliser of the clique $gG$, and we deduce from Proposition \[prop:rotstabinX\] that it is also the rotative-stabiliser of the hyperplane dual to $gG$. Consequently, our map is surjective. To prove its injectivity, it is sufficient to show that two distinct hyperplanes $J_1,J_2$ which are not transverse have different rotative-stabilisers. More generally, we want to prove the following observation:
\[fact:disctingrotativestab\] The rotative-stabilisers of two distinct hyperplanes $J_1,J_2$ of $X(\Gamma, \mathcal{G})$ have a trivial intersection.
Indeed, if we fix a clique $C$ dual to $J_2$, then it must be entirely contained in a sector delimited by $J_1$. But, if $g \in \mathrm{stab}_\circlearrowleft(J_1)$ is non-trivial, then it follows from Proposition \[prop:rotstabinX\] that $g$ does not stabilise the sector delimited by $J_1$ which contains $C$. Hence $g$ does not stabilise $C$. Since the rotative-stabiliser of $J_2$ coincides with $\mathrm{stab}(C)$ according to Proposition \[prop:rotstabinX\], it follows that $g$ does not belong to the rotative-stabiliser of $J_2$, proving our fact.
In order to conclude the proof of our proposition, it remains to show that two distinct hyperplanes $J_1$ and $J_2$ of $X(\Gamma,\mathcal{G})$ are transverse if and only if their rotative-stabilisers commute.
Suppose first that $J_1$ and $J_2$ are not transverse. Let $g \in \mathrm{stab}_\circlearrowleft(J_1)$ and $h \in \mathrm{stab}_\circlearrowleft(J_2)$ be two non-trivial elements. Since we know from Proposition \[prop:rotstabinX\] that $g$ does not stabilise the sector delimited by $J_1$ which contains $J_2$, necessarily $J_1$ separates $J_2$ and $gJ_2$. Let $A$ denote the sector delimited by $J_2$ which contains $J_1$; because $J_1$ and $J_2$ are not transverse, the carriers $N(J_1)$ and $N(J_2)$ are entirely contained in $A$ and in the sector delimited by $J_1$ which contains $J_2$, respectively. The sector delimited by $J_1$ which contains $gJ_2$ is distinct from the one containing $J_2$, so it is disjoint from $N(J_2)$ and does not cross $J_2$; since it meets $N(J_1) \subset A$, it is contained in $A$, so that $gJ_2 \subset A$. As $h$ stabilises no sector delimited by $J_2$, the hyperplane $J_2$ separates $gJ_2 \subset A$ from $hgJ_2 \subset hA$. Similarly, every sector delimited by $J_2$ distinct from $A$ is disjoint from $N(J_1)$, hence contained in the sector delimited by $J_1$ which contains $J_2$; since $hgJ_2 \subset hA$ while $gJ_2$ lies in a different sector delimited by $J_1$, the hyperplane $J_1$ also separates $gJ_2$ and $hgJ_2$. Therefore, both $J_1$ and $J_2$ separate $hgJ_2$ and $gJ_2=ghJ_2$. A fortiori, $hgJ_2 \neq ghJ_2$, so that $gh \neq hg$. Thus, we have proved that the rotative-stabilisers of $J_1$ and $J_2$ do not commute.
Next, suppose that $J_1$ and $J_2$ are transverse. Up to translating by an element of $\Gamma \mathcal{G}$, we may suppose without loss of generality that the vertex $1$ belongs to $N(J_1) \cap N(J_2)$. As a consequence, there exist vertices $u,v \in V(\Gamma)$ such that $J_1=J_u$ and $J_2=J_v$. We know from Lemma \[lem:transverseimpliesadj\] that $u$ and $v$ are adjacent. Therefore, $\mathrm{stab}_\circlearrowleft(J_1)= G_u$ and $\mathrm{stab}_{\circlearrowleft} (J_2)= G_v$ commute.
Interestingly, if $\varphi : \Gamma \mathcal{G} \to \Phi \mathcal{H}$ is an isomorphism between two graph products that sends vertex groups to conjugates of vertex groups, then $\varphi$ naturally defines an isometry $$\left\{ \begin{array}{ccc} F(\Gamma, \mathcal{G}) & \to & F(\Phi,\mathcal{H}) \\ H & \mapsto & \varphi(H) \end{array} \right..$$ By transferring this observation to transversality graphs through the isomorphism provided by Proposition \[prop:factorgraphalg\], one gets the following statement:
\[fact:ConjIsomacts\] Let $\Gamma, \Phi$ be two simplicial graphs and $\mathcal{G}, \mathcal{H}$ two collections of groups indexed by $V(\Gamma),V(\Phi)$ respectively. Suppose that $\varphi : \Gamma \mathcal{G} \to \Phi \mathcal{H}$ is an isomorphism between two graph products which sends vertex groups to conjugates of vertex groups. Then $\varphi$ defines an isometry $T(\Gamma, \mathcal{G}) \to T(\Phi, \mathcal{H})$ via $$J \mapsto \text{hyperplane whose rotative-stabiliser is $\varphi(\mathrm{stab}_\circlearrowleft(J))$}.$$
In the case $\Gamma= \Phi$, by transferring the action $\mathrm{ConjAut}(\Gamma \mathcal{G}) \curvearrowright F(\Gamma, \mathcal{G})$ defined by: $$\left\{ \begin{array}{ccc} \mathrm{ConjAut}(\Gamma \mathcal{G}) & \to & \mathrm{Isom}(F(\Gamma, \mathcal{G})) \\ \varphi & \mapsto & \left( H \mapsto \varphi(H) \right) \end{array} \right.$$ to the transversality graph $T(\Gamma, \mathcal{G})$, one gets the following statement:
\[fact:ConjAutacts\] The group $\mathrm{ConjAut}(\Gamma \mathcal{G})$ acts on the transversality graph $T(\Gamma, \mathcal{G})$ via $$\left\{ \begin{array}{ccc} \mathrm{ConjAut}(\Gamma \mathcal{G}) & \to & \mathrm{Isom}(T(\Gamma, \mathcal{G})) \\ \varphi & \mapsto & \left( J \mapsto \text{hyperplane whose rotative-stabiliser is $\varphi( \mathrm{stab}_\circlearrowleft(J))$} \right) \end{array} \right. .$$ Moreover, if $\Gamma \mathcal{G}$ is centerless and if we identify the subgroup of inner automorphism $\mathrm{Inn}(\Gamma \mathcal{G}) \leq \mathrm{ConjAut}(\Gamma \mathcal{G})$ with $\Gamma \mathcal{G}$ via $$\left\{ \begin{array}{ccc} \Gamma \mathcal{G} & \to & \mathrm{Inn}(\Gamma \mathcal{G}) \\ g & \mapsto & (x \mapsto gxg^{-1}) \end{array} \right.,$$ then the action $\mathrm{ConjAut}(\Gamma \mathcal{G}) \curvearrowright T(\Gamma, \mathcal{G})$ extends the natural action $\Gamma \mathcal{G} \curvearrowright T(\Gamma, \mathcal{G})$ induced by $\Gamma \mathcal{G} \curvearrowright X(\Gamma,\mathcal{G})$.
Characterisation of conjugating automorphisms {#section:ConjAut}
---------------------------------------------
This section is dedicated to the proof of Theorem \[thm:conjugatingauto\], which will be based on the transversality graph introduced in the previous section. In fact, we will prove a stronger statement, namely:
\[thm:ConjIsom\] Let $\Gamma, \Phi$ be two simplicial graphs and $\mathcal{G},\mathcal{H}$ two collections of groups indexed by $V(\Gamma),V(\Phi)$ respectively. Suppose that there exists an isomorphism $\varphi : \Gamma \mathcal{G} \to \Phi \mathcal{H}$ which sends vertex groups of $\Gamma \mathcal{G}$ to conjugates of vertex groups of $\Phi \mathcal{H}$. Then there exist an automorphism $\alpha \in \mathrm{ConjP}(\Gamma \mathcal{G})$ and an isometry $s : \Gamma \to \Phi$ such that $\varphi \alpha$ sends isomorphically $G_u$ to $H_{s(u)}$ for every $u \in V(\Gamma)$.
It should be noted that the proof of Theorem \[thm:ConjIsom\] we give below is constructive. Namely, given two graph products $\Gamma \mathcal{G}$ and $\Phi \mathcal{H}$, it is possible to construct an algorithm that determines whether a given isomorphism $\varphi : \Gamma\mathcal{G} \to \Phi \mathcal{H}$ sends vertex groups to conjugates of vertex groups. In such a case, that algorithm also produces a list of partial conjugations $(u_1, \Lambda_1,h_1)$, $\ldots$, $(u_k, \Lambda_k,h_k) \in \mathrm{Aut}(\Gamma \mathcal{G})$ and an isometry $s: \Gamma \to \Phi$ such that $\varphi \circ (u_1, \Lambda_1,h_1) \cdots (u_k, \Lambda_k,h_k)$ sends $G_u$ isomorphically to $H_{s(u)}$ for every $u\in V(\Gamma)$. However, since such algorithmic considerations are beyond the scope of this article, we will not develop this further.\
Recall from Fact \[fact:ConjIsomacts\] that such an isomorphism $\varphi : \Gamma \mathcal{G} \to \Phi \mathcal{H}$ defines an isometry $T(\Gamma, \mathcal{G}) \to T(\Phi, \mathcal{H})$ via $$J \mapsto \text{hyperplane whose rotative-stabiliser is $\varphi(\mathrm{stab}_\circlearrowleft(J))$}.$$ By setting $\mathcal{J}= \{ J_u \mid u \in V(\Gamma)\}$, we define the *complexity* of $\varphi$ by $$\| \varphi \| = \sum\limits_{J \in \mathcal{J}} d(1, N(\varphi \cdot J)),$$ where $N(\cdot)$ denotes the carrier of a hyperplane; see Definition \[def:hyp\]. Theorem \[thm:ConjIsom\] will be an easy consequence of the following two lemmas:
\[lem:shortencomplexity\] Let $\varphi$ be as in the statement of Theorem \[thm:ConjIsom\]. If $\| \varphi \| \geq 1$ then there exists some automorphism $\alpha \in \mathrm{ConjP}(\Gamma \mathcal{G})$ such that $\| \varphi \alpha \| < \| \varphi \|$.
\[lem:complexityzero\] Let $\varphi$ be as in the statement of Theorem \[thm:ConjIsom\]. If $\| \varphi \| =0$ then there exists an isometry $s : \Gamma \to \Phi$ such that $\varphi$ sends isomorphically $G_u$ to $H_{s(u)}$ for every $u \in V(\Gamma)$.
Before turning to the proof of these two lemmas, we need the following observation:
\[claim:peripheral\] Let $\varphi$ be as in the statement of Theorem \[thm:ConjIsom\]. If $\varphi \cdot \mathcal{J}$ is peripheral (see Definition \[def:peripheral\]) then $\| \varphi \| =0$.
Notice that $$\langle \mathrm{stab}_\circlearrowleft (J) \mid J \in \varphi \cdot \mathcal{J} \rangle= \varphi \left( \langle \mathrm{stab}_\circlearrowleft (J) \mid J \in \mathcal{J} \rangle \right) = \Phi \mathcal{H}.$$ We deduce from Lemma \[lem:pingpong\], applied to the peripheral collection $\varphi \cdot \mathcal{J}$, that, if we denote by $S(J)$ the sector delimited by $J$ which contains $1$, then $g \notin \bigcap\limits_{J \in \varphi \cdot \mathcal{J}} S(J)$ for every $g \in \Phi \mathcal{H} \backslash \{ 1 \}$. Since the action $\Phi \mathcal{H} \curvearrowright X(\Phi, \mathcal{H})$ is vertex-transitive, it follows that $\bigcap\limits_{J \in \varphi \cdot \mathcal{J}} S(J)= \{1\}$. This implies that $1 \in \bigcap\limits_{J \in \varphi \cdot \mathcal{J}} N(J)$: otherwise, the projection of $1$ onto the carrier of some $J_0 \in \varphi \cdot \mathcal{J}$ would be a vertex distinct from $1$ belonging to $\bigcap\limits_{J \in \varphi \cdot \mathcal{J}} S(J)$, since any hyperplane separating $1$ from this projection separates $1$ from $J_0$ and therefore cannot belong to the peripheral collection $\varphi \cdot \mathcal{J}$. We conclude that $\| \varphi \|=0$, proving our claim.
Since $\| \varphi \| \geq 1$, we deduce from Claim \[claim:peripheral\] that $\varphi \cdot \mathcal{J}$ is not peripheral, ie., there exist two distinct vertices $a,b \in V(\Gamma)$ such that $\varphi \cdot J_a$ separates $1$ from $\varphi \cdot J_b$. Notice that $a$ and $b$ are not adjacent in $\Gamma$ as the hyperplanes $\varphi \cdot J_a$ and $\varphi \cdot J_b$ are not transverse. Let $x \in \mathrm{stab}_\circlearrowleft (\varphi \cdot J_a)$ be the element sending $\varphi \cdot J_b$ into the sector delimited by $\varphi \cdot J_a$ which contains $1$. Notice that $\mathrm{stab}_\circlearrowleft (\varphi \cdot J_a)= \varphi \left( \mathrm{stab}_\circlearrowleft(J_a) \right)$, so there exists some $y \in \mathrm{stab}_\circlearrowleft(J_a)$ such that $x= \varphi(y)$.
Now, let $\alpha$ denote the partial conjugation of $\Gamma \mathcal{G}$ which conjugates by $y$ the vertex groups of the connected component $\Lambda$ of $\Gamma \backslash \mathrm{star}(a)$ which contains $b$. According to Fact \[fact:ConjAutacts\], $\alpha$ can be thought of as an isometry of $T(\Gamma, \mathcal{G})$. We claim that $\| \varphi \alpha \| < \| \varphi \|$. Notice that $\varphi \alpha \cdot J_u= \varphi \cdot J_u$ for every $u \notin \Lambda$, and that $$\varphi \alpha \cdot J_u = \varphi \cdot yJ_u = \varphi(y) \cdot \varphi \cdot J_u = x \cdot \varphi \cdot J_u$$ for every $u \in \Lambda$. Therefore, in order to prove the inequality $\| \varphi \alpha \| < \| \varphi \|$, it is sufficient to show that $d(1,N(x \cdot \varphi \cdot J_u))< d(1,N(\varphi \cdot J_u))$ for every $u \in \Lambda$. It is a consequence of Lemma \[lem:shortendist\] and of the following observation:
For every $u \in \Lambda$, $\varphi \cdot J_u$ is contained in the sector delimited by $\varphi \cdot J_a$ which contains $\varphi \cdot J_b$.
Let $u \in \Lambda$ be a vertex. By definition of $\Lambda$, there exists a path $$u_1=u, \ u_2, \ldots, u_{n-1}, \ u_n=b$$ in $\Gamma$ that is disjoint from $\mathrm{star}(a)$. This yields a path $$\varphi \cdot J_{u_1}=\varphi \cdot J_u, \ \varphi \cdot J_{u_2}, \ldots, \varphi \cdot J_{u_{n-1}}, \ \varphi \cdot J_{u_n}=\varphi \cdot J_b$$ in the transversality graph $T(\Phi, \mathcal{H})$. As a consequence of Lemma \[lem:transverseimpliesadj\], none of these hyperplanes are transverse to $\varphi \cdot J_a$, which implies that they are all contained in the same sector delimited by $\varphi \cdot J_a$, namely the one containing $\varphi \cdot J_b$, proving our fact. This concludes the proof of our lemma.
If $\| \varphi \|=0$ then $1 \in \bigcap\limits_{J \in \varphi \cdot \mathcal{J}} N(J)$, which implies that $\varphi \cdot \mathcal{J} \subset \mathcal{K}$ where $\mathcal{K}= \{ J_u \mid u \in V(\Phi) \}$. Let $\Lambda$ be the subgraph of $\Phi$ such that $\varphi \cdot \mathcal{J}= \{ J_u \mid u \in V(\Lambda) \}$. Notice that $$\langle \mathrm{stab}_\circlearrowleft (J) \mid J \in \varphi \cdot \mathcal{J} \rangle= \varphi \left( \langle \mathrm{stab}_\circlearrowleft (J) \mid J \in \mathcal{J} \rangle \right) = \Phi \mathcal{H}.$$ Therefore, one has $$H_u= \mathrm{stab}_\circlearrowleft (J_u) \leq \Phi \mathcal{H} = \langle \mathrm{stab}_\circlearrowleft(J) \mid J \in \varphi \cdot \mathcal{J} \rangle = \langle \Lambda \rangle$$ for every $u \in V(\Phi)$, hence $\Lambda = \Phi$. This precisely means that $\varphi \cdot \mathcal{J} = \mathcal{K}$. In other words, there exists a map $s : V(\Gamma) \to V(\Phi)$ such that $\varphi$ sends isomorphically $G_u$ to $H_{s(u)}$ for every $u \in V(\Gamma)$. Notice that $s$ necessarily defines an isometry since $\varphi$ sends two (non-)transverse hyperplanes to two (non-)transverse hyperplanes.
Given our isomorphism $\varphi : \Gamma \mathcal{G} \to \Phi \mathcal{H}$, we apply Lemma \[lem:shortencomplexity\] iteratively to find automorphisms $\alpha_1, \ldots, \alpha_m \in \mathrm{ConjP}(\Gamma \mathcal{G})$ such that $\| \varphi \alpha_1 \cdots \alpha_m \| =0$. Set $\alpha = \alpha_1 \cdots \alpha_m$. By Lemma \[lem:complexityzero\], there exists an isometry $s : \Gamma \to \Phi$ such that $\varphi \alpha$ sends isomorphically $G_u$ to $H_{s(u)}$ for every $u \in V(\Gamma)$.
An immediate consequence of Theorem \[thm:ConjIsom\] is that $\mathrm{ConjAut}(\Gamma \mathcal{G})= \mathrm{Loc}(\Gamma \mathcal{G}) \cdot \mathrm{ConjP}(\Gamma \mathcal{G})$. As $\mathrm{Loc}(\Gamma \mathcal{G})$ is contained in $\mathrm{ConjP}(\Gamma \mathcal{G})$, it follows that $\mathrm{ConjP}(\Gamma \mathcal{G})$ and $\mathrm{ConjAut}(\Gamma \mathcal{G})$ coincide.
Application to graph products of finite groups {#section:applicationfinite}
----------------------------------------------
In this section, we focus on graph products of finite groups. This includes, for instance, right-angled Coxeter groups. The main result of this section, which will be deduced from Theorem \[thm:conjugatingauto\], is the following:
\[thm:GPfinitegroups\] Let $\Gamma$ be a finite simplicial graph and $\mathcal{G}$ a collection of finite groups indexed by $V(\Gamma)$. Then the subgroup generated by partial conjugations has finite index in $\mathrm{Aut}(\Gamma \mathcal{G})$.
Before turning to the proof of the theorem, we will need the following preliminary result about general graph products of groups:
\[lem:conjincluded\] Let $\Gamma$ be a simplicial graph and $\mathcal{G}$ a collection of groups indexed by $V(\Gamma)$. Fix two subgraphs $\Lambda_1,\Lambda_2 \subset \Gamma$ and an element $g \in \Gamma \mathcal{G}$. If $\langle \Lambda_1 \rangle \subset g \langle \Lambda_2 \rangle g^{-1}$, then $\Lambda_1 \subset \Lambda_2$ and $g \in \langle \Lambda_1 \rangle \cdot \langle \mathrm{link}(\Lambda_1) \rangle \cdot \langle \Lambda_2 \rangle$.
Notice that the subgroups $\langle \Lambda_1 \rangle$ and $g \langle \Lambda_2 \rangle g^{-1}$ coincide with the stabilisers of the subgraphs $\langle \Lambda_1 \rangle$ and $g \langle \Lambda_2 \rangle$ respectively. We claim that any hyperplane intersecting $\langle \Lambda_1 \rangle$ also intersects $g \langle \Lambda_2 \rangle$.
Suppose by contradiction that this is not the case, ie., there exists some hyperplane $J$ intersecting $\langle \Lambda_1 \rangle$ but not $g \langle \Lambda_2 \rangle$. Fix a clique $C \subset \langle \Lambda_1 \rangle$ dual to $J$. There exist $u \in \Lambda_1$ and $h \in \langle \Lambda_1 \rangle$ such that $\mathrm{stab}(C)=h G_u h^{-1}$. As a consequence, $\mathrm{stab}(C)$, which turns out to coincide with the rotative-stabiliser of $J$ according to Proposition \[prop:rotstabinX\], is contained in $\langle \Lambda_1 \rangle$, and hence in $g \langle \Lambda_2 \rangle g^{-1}= \mathrm{stab}(g \langle \Lambda_2 \rangle)$. On the other hand, a non-trivial element of $\mathrm{stab}_{\circlearrowleft}(J)$ sends $g \langle \Lambda_2 \rangle$ into a sector that does not contain $g \langle \Lambda_2 \rangle$: a fortiori, it does not stabilise $g \langle \Lambda_2 \rangle$, contradicting the previous inclusion. This concludes the proof of our claim.
Because the cliques dual to the hyperplanes intersecting $\langle \Lambda_1 \rangle$ are labelled by vertices of $\Lambda_1$ and, similarly, the cliques dual to the hyperplanes intersecting $g \langle \Lambda_2 \rangle$ are labelled by vertices of $\Lambda_2$, we deduce from the previous claim that $\Lambda_1 \subset \Lambda_2$, proving the first assertion of our lemma.
Next, let $x \in \langle \Lambda_1 \rangle$ and $y \in g \langle \Lambda_2 \rangle$ be two vertices minimising the distance between $\langle \Lambda_1 \rangle$ and $g \langle \Lambda_2 \rangle$. Fix a path $\alpha$ from $1$ to $x$ in $\langle \Lambda_1 \rangle$, a geodesic $\beta$ between $x$ and $y$, and a path $\gamma$ from $y$ to $g$ in $g \langle \Lambda_2 \rangle$. Thus, $g=abc$ where $a,b,c$ are the words labelling the paths $\alpha,\beta,\gamma$. Notice that $a \in \langle \Lambda_1 \rangle$ and $c \in \langle \Lambda_2 \rangle$. The labels of the edges of $\beta$ coincide with the labels of the hyperplanes separating $x$ and $y$, or equivalently (according to Corollary \[cor:minseppairhyp\]), to the labels of the hyperplanes separating $\langle \Lambda_1 \rangle$ and $g \langle \Lambda_2 \rangle$. But we saw that any hyperplane intersecting $\langle \Lambda_1 \rangle$ intersects $g \langle \Lambda_2 \rangle$ as well, so any hyperplane separating $\langle \Lambda_1 \rangle$ and $g \langle \Lambda_2 \rangle$ must be transverse to any hyperplane intersecting $\langle \Lambda_1 \rangle$. Because the set of labels of the hyperplanes of $\langle \Lambda_1 \rangle$ is $V(\Lambda_1)$, we deduce from Lemma \[lem:transverseimpliesadj\] that the vertex of $\Gamma$ labelling an edge of $\beta$ is adjacent to all the vertices of $\Lambda_1$, ie., it belongs to $\mathrm{link}(\Lambda_1)$. Thus, we have proved that $b \in \langle \mathrm{link}(\Lambda_1) \rangle$, hence $g=abc \in \langle \Lambda_1 \rangle \cdot \langle \mathrm{link}(\Lambda_1) \rangle \cdot \langle \Lambda_2 \rangle$.
We now move to the proof of Theorem \[thm:GPfinitegroups\]. **For the rest of Section \[section:applicationfinite\], we fix a finite simplicial graph $\Gamma$ and a collection of finite groups $\mathcal{G}$ indexed by $V(\Gamma)$.**
\[lem:maxfinitesub\] The maximal finite subgroups of $\Gamma \mathcal{G}$ are exactly the $g \langle \Lambda \rangle g^{-1}$ where $g \in \Gamma \mathcal{G}$ and where $\Lambda \subset \Gamma$ is a maximal complete subgraph of $\Gamma$.
Let $H \leq \Gamma \mathcal{G}$ be a maximal finite subgroup. By Lemma \[lem:finitesub\], there exist $g \in \Gamma \mathcal{G}$ and a complete subgraph $\Lambda \subset \Gamma$ such that $H$ is contained in $g \langle \Lambda \rangle g^{-1}$. As $g \langle \Lambda \rangle g^{-1}$ is clearly finite, the maximality of $H$ implies that $H= g \langle \Lambda \rangle g^{-1}$. Moreover, the maximality of $H$ implies that $\Lambda$ is a maximal complete subgraph of $\Gamma$. Conversely, let $g \in \Gamma \mathcal{G}$ be an element and $\Lambda \subset \Gamma$ a complete subgraph. Clearly, $g\langle \Lambda \rangle g^{-1}$ is finite. Now, suppose that $\Lambda$ is a maximal complete subgraph of $\Gamma$. If $F$ is a finite subgroup of $\Gamma \mathcal{G}$ containing $g \langle \Lambda \rangle g^{-1}$, we know from Lemma \[lem:finitesub\] that $F \leq h \langle \Xi \rangle h^{-1}$ for some $h \in \Gamma \mathcal{G}$ and for some complete subgraph $\Xi \subset \Gamma$, so that we deduce from the inclusion $$g \langle \Lambda \rangle g^{-1} \leq F \leq h \langle \Xi \rangle h^{-1}$$ and from Lemma \[lem:conjincluded\] that $\Lambda \subset \Xi$. By maximality of $\Lambda$, necessarily $\Lambda = \Xi$. We also deduce from Lemma \[lem:conjincluded\] that $h \in g \langle \Lambda \rangle \cdot \langle \mathrm{link}(\Lambda) \rangle \cdot \langle \Lambda \rangle$. But the maximality of $\Lambda$ implies that $\mathrm{link}(\Lambda)= \emptyset$, hence $h \in g \langle \Lambda \rangle$. Therefore $$g \langle \Lambda \rangle g^{-1} \leq F \leq h \langle \Xi \rangle h^{-1} =h \langle \Lambda \rangle h^{-1}= g \langle \Lambda \rangle g^{-1},$$ so that $F = g \langle \Lambda \rangle g^{-1}$. Thus, we have proved that $g \langle \Lambda \rangle g^{-1}$ is a maximal finite subgroup of $\Gamma \mathcal{G}$.
\[cor:maxself\] The maximal finite subgroups of $\Gamma \mathcal{G}$ are self-normalising.
By Lemma \[lem:maxfinitesub\], maximal finite subgroups are of the form $g\langle \Lambda \rangle g^{-1}$ for a maximal clique $\Lambda \subset \Gamma$. Since a maximal clique has an empty link, it follows from Lemma \[lem:conjincluded\] that such subgroups are self-normalising.
As a consequence of Lemma \[lem:maxfinitesub\], $\Gamma \mathcal{G}$ contains only finitely many conjugacy classes of maximal finite subgroups, so $\mathrm{Aut}(\Gamma \mathcal{G})$ contains a finite-index subgroup $H$ such that, for every maximal finite subgroup $F \leq \Gamma \mathcal{G}$ and every $\varphi \in H$, the subgroups $\varphi(F)$ and $F$ are conjugate.
Fix a maximal finite subgroup $F \leq \Gamma \mathcal{G}$. We define a morphism $\Psi_F : H \to \mathrm{Out}(F)$ in the following way. If $\varphi \in H$, there exists some $g \in \Gamma \mathcal{G}$ such that $\varphi(F)=gFg^{-1}$. Set $\Psi_F(\varphi) = \left[ \left( \iota(g)^{-1} \varphi \right)_{|F} \right]$ where $\iota(g)$ denotes the inner automorphism associated to $g$. Notice that, if $h \in \Gamma \mathcal{G}$ is another element such that $\varphi(F)=hFh^{-1}$, then $h=gs$ for some $s \in \Gamma \mathcal{G}$ normalising $F$. In fact, since $F$ is self-normalising according to Corollary \[cor:maxself\], $s$ must belong to $F$. Consequently, the automorphisms $\left( \iota(g)^{-1} \varphi \right)_{|F}$ and $\left( \iota(h)^{-1} \varphi \right)_{|F}$ of $F$ have the same image in $\mathrm{Out}(F)$. The conclusion is that $\Psi_F$ is well-defined as a map $H \to \mathrm{Out}(F)$. It remains to show that it is a morphism. So let $\varphi_1,\varphi_2 \in H$ be two automorphisms, and fix two elements $g_1,g_2 \in \Gamma \mathcal{G}$ such that $\varphi_i(F)=g_iFg_i^{-1}$ for $i=1,2$. Notice that $\varphi_1\varphi_2(F)= \varphi_1(g_2) g_1 F g_1^{-1} \varphi_1(g_2)^{-1}$, so that we have: $$\Psi_F(\varphi_1\varphi_2) = \left[ \left( \iota(\varphi_1(g_2)g_1 )^{-1} \varphi_1 \varphi_2 \right)_{|F} \right];$$ consequently, $$\begin{array}{lcl} \Psi_F(\varphi_1) \Psi_F(\varphi_2) & = & \left[ \left( \iota(g_1)^{-1} \varphi_1 \right)_{|F} \right] \cdot \left[ \left( \iota(g_2)^{-1} \varphi_2 \right)_{|F} \right] = \left[ \left( \iota(g_1)^{-1} \varphi_1 \iota(g_2)^{-1} \varphi_2 \right)_{|F} \right] \\ \\ & = & \left[ \left( \iota(g_1)^{-1} \iota(\varphi_1(g_2))^{-1} \varphi_1 \varphi_2 \right)_{|F} \right] = \Psi_F(\varphi_1 \varphi_2). \end{array}$$ Moreover, it is clear that $\Psi_F(\mathrm{Id})= \mathrm{Id}$. We conclude that $\Psi_F$ defines a morphism $H \to \mathrm{Out}(F)$. Now, we set $$K= \bigcap \{ \mathrm{ker}(\Psi_F) \mid \text{$F \leq \Gamma \mathcal{G}$ maximal finite subgroup} \}.$$ Notice that $K$ is a finite-index subgroup of $\mathrm{Aut}(\Gamma \mathcal{G})$: each $\mathrm{Out}(F)$ is finite and the kernel $\mathrm{ker}(\Psi_F)$ only depends on the conjugacy class of $F$, so that $K$ is the intersection of finitely many finite-index subgroups. We want to show that $K$ is a subgroup of $\mathrm{ConjP}(\Gamma \mathcal{G})$. According to Theorem \[thm:conjugatingauto\], it is sufficient to show that, for every $\varphi \in K$ and every $u \in V(\Gamma)$, the subgroups $\varphi(G_u)$ and $G_u$ are conjugate.
So fix a vertex $u \in V(\Gamma)$ and an automorphism $\varphi \in K$. As a consequence of Lemma \[lem:maxfinitesub\], there exists a maximal finite subgroup $F \leq \Gamma \mathcal{G}$ containing $G_u$. Since $\varphi$ belongs to $H$, there exists some $g \in \Gamma \mathcal{G}$ such that $\varphi(F)=gFg^{-1}$. And by definition of $K$, the automorphism $\left( \iota(g)^{-1} \varphi \right)_{|F}$ must be an inner automorphism of $F$, so there exists some $h \in F$ such that $\varphi(x)= ghxh^{-1}g^{-1}$ for every $x \in F$. In particular, the subgroups $\varphi(G_u)$ and $G_u$ are conjugate in $\Gamma \mathcal{G}$.
Thus, we have proved that $K$ is a subgroup of $\mathrm{ConjP}(\Gamma \mathcal{G})$. Because $K$ has finite index in $\mathrm{Aut}(\Gamma \mathcal{G})$, we conclude that $\mathrm{ConjP}(\Gamma \mathcal{G})$ is a finite-index subgroup of $\mathrm{Aut}(\Gamma \mathcal{G})$. As $$\mathrm{ConjP}(\Gamma \mathcal{G})= \langle \text{partial conjugations} \rangle \rtimes \mathrm{Loc}(\Gamma \mathcal{G}),$$ we conclude that the subgroup generated by partial conjugations has finite index in $\mathrm{Aut}(\Gamma \mathcal{G})$ since $\mathrm{Loc}(\Gamma \mathcal{G})$ is clearly finite.
As an interesting application of Theorem \[thm:GPfinitegroups\], we are able to determine precisely when the outer automorphism group of a graph product of finite groups is finite. Before stating the criterion, we need the following definition:
A *Separating Intersection of Links* (or *SIL* for short) in $\Gamma$ is the data of two vertices $u,v \in V(\Gamma)$ satisfying $d(u,v) \geq 2$ such that $\Gamma \backslash (\mathrm{link}(u) \cap \mathrm{link}(v))$ has a connected component which contains neither $u$ nor $v$.
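Since the SIL condition is purely combinatorial, it can be tested mechanically on any finite graph. The following minimal sketch does so directly from the definition above; the function name and the choice of `networkx` to encode $\Gamma$ are ours and are only meant as an illustration.

```python
import networkx as nx

def has_sil(gamma: nx.Graph) -> bool:
    """Return True if the finite simplicial graph gamma contains a SIL,
    i.e. two vertices u, v with d(u, v) >= 2 such that removing
    link(u) & link(v) leaves a component containing neither u nor v."""
    for u in gamma.nodes:
        for v in gamma.nodes:
            # d(u, v) >= 2 simply means that u != v and u, v are not adjacent
            if u == v or gamma.has_edge(u, v):
                continue
            common_link = set(gamma[u]) & set(gamma[v])
            rest = gamma.subgraph(set(gamma.nodes) - common_link)
            if any(u not in comp and v not in comp
                   for comp in nx.connected_components(rest)):
                return True
    return False

# Three isolated vertices form a SIL; a path on three vertices does not.
print(has_sil(nx.empty_graph(3)))  # True
print(has_sil(nx.path_graph(3)))   # False
```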
Now we are able to state our criterion, which generalises [@AutGPabelian Theorem 1.4]:
\[cor:OutFinite\] The outer automorphism group $\mathrm{Out}(\Gamma \mathcal{G})$ is finite if and only if $\Gamma$ has no SIL.
The following argument was communicated to us by Olga Varghese.
Suppose that $\Gamma$ has no SIL. If $u,v \in V(\Gamma)$ are two distinct vertices, $a \in G_u$ and $b \in G_v$ two elements, and $\Lambda, \Xi$ two connected components of $\Gamma \backslash \mathrm{star}(u)$ and $\Gamma \backslash \mathrm{star}(v)$ respectively, then we claim that the two corresponding partial conjugations $(u, \Lambda,a)$ and $(v,\Xi,b)$ commute in $\mathrm{Out}(\Gamma \mathcal{G})$. For convenience, let $\Lambda_0$ (resp. $\Xi_0$) denote the connected component of $\Gamma \backslash \mathrm{star}(u)$ (resp. $\Gamma \backslash \mathrm{star}(v)$) which contains $v$ (resp. $u$). If $\Lambda \neq \Lambda_0$ or $\Xi \neq \Xi_0$, a direct computation shows that $(u, \Lambda,a)$ and $(v,\Xi,b)$ commute in $\mathrm{Aut}(\Gamma \mathcal{G})$ (see [@AutGPSIL Lemma 3.4] for more details). So suppose that $\Lambda= \Lambda_0$ and $\Xi=\Xi_0$. If $\Xi_1, \ldots, \Xi_k$ denote the connected components of $\Gamma \backslash \mathrm{star}(v)$ distinct from $\Xi_0$, notice that the product $(v,\Xi_0, b)(v, \Xi_1,b) \cdots (v,\Xi_k,b)$ is trivial in $\mathrm{Out}(\Gamma \mathcal{G})$ since the automorphism coincides with the conjugation by $b$. As we already know that $(u, \Lambda_0, a)$ commutes with $(v, \Xi_1, b) \cdots (v, \Xi_k,b)$ in $\mathrm{Aut}(\Gamma \mathcal{G})$, it follows that the following equalities hold in $\mathrm{Out}(\Gamma \mathcal{G})$: $$\left[ (u, \Lambda_0, a) , (v, \Xi_0,b) \right]= \left[ (u, \Lambda_0,a) , (v, \Xi_k,b)^{-1} \cdots (v, \Xi_1,b)^{-1} \right] = 1.$$ This concludes the proof of our claim. Consequently, if, for every vertex $u \in V(\Gamma)$, $\mathrm{PC}(u)$ denotes the subgroup of $\mathrm{Out}(\Gamma \mathcal{G})$ generated by the (images of the) partial conjugations based at $u$, then the subgroup $\mathrm{PC}$ of $\mathrm{Out}(\Gamma \mathcal{G})$ generated by all the (images of the) partial conjugations is naturally a quotient of $\bigoplus\limits_{u \in V(\Gamma)} \mathrm{PC}(u)$. But each $\mathrm{PC}(u)$ is finite; indeed, it has cardinality at most $|G_u|^{c}$ where $c$ is the number of connected components of $\Gamma \backslash \mathrm{star}(u)$. Therefore, $\mathrm{PC}$ has to be finite. As $\mathrm{PC}$ has finite index in $\mathrm{Out}(\Gamma \mathcal{G})$ as a consequence of Theorem \[thm:GPfinitegroups\], we conclude that $\mathrm{Out}(\Gamma \mathcal{G})$ must be finite.
Conversely, suppose that $\Gamma$ has a SIL. Thus, there exist two vertices $u,v \in V(\Gamma)$ satisfying $d(u,v) \geq 2$ such that $\Gamma \backslash (\mathrm{link}(u) \cap \mathrm{link}(v))$ has a connected component $\Lambda$ which contains neither $u$ nor $v$. Fix two non-trivial elements $a \in G_u$ and $b \in G_v$. A direct computation shows that the product $(u, \Lambda, a)(v, \Lambda, b)$ has infinite order in $\mathrm{Out}(\Gamma \mathcal{G})$. A fortiori, $\mathrm{Out}(\Gamma \mathcal{G})$ must be infinite.
It is worth noticing that, if $\Gamma \mathcal{G}$ is a graph product of finite groups, then $\mathrm{ConjP}(\Gamma \mathcal{G})$ may be a proper subgroup of $\mathrm{Aut}(\Gamma \mathcal{G})$. For instance, $$\left\{ \begin{array}{ccc} a & \mapsto & ab \\ b & \mapsto & b \\ c & \mapsto & c \end{array} \right.$$ defines an automorphism of $(\mathbb{Z}_2 \oplus \mathbb{Z}_2) \ast \mathbb{Z}_2 = (\langle a \rangle \oplus \langle b \rangle) \ast \langle c \rangle$ which does not belong to $\mathrm{ConjP}(\Gamma \mathcal{G})$, as $\langle ab \rangle$ is not conjugate to any vertex group. The inclusion $\mathrm{ConjP}(\Gamma \mathcal{G}) \subset \mathrm{Aut}(\Gamma \mathcal{G})$ may also be proper if $\Gamma$ is connected. For instance, $$\left\{ \begin{array}{ccc} a & \mapsto & a \\ b & \mapsto & ab \\ c & \mapsto & c \end{array} \right.$$ defines an automorphism of $\mathbb{Z}_2 \oplus ( \mathbb{Z}_2 \ast \mathbb{Z}_2) = \langle a \rangle \oplus ( \langle b \rangle \ast \langle c \rangle)$ which does not belong to $\mathrm{ConjP}(\Gamma \mathcal{G})$.
Application to graph products over graphs of large girth {#section:applicationgirth}
--------------------------------------------------------
In this section, we focus on automorphism groups of graph products without imposing any restriction on their vertex groups. However, we need to impose more restrictive conditions on the underlying graph than in the previous sections. More precisely, the graphs which will interest us in the sequel are the following:
A *molecular graph* is a finite connected simplicial graph without vertex of valence $<2$ and of girth at least $5$.
Molecular graphs generalise *atomic graphs* introduced in [@RAAGatomic] (see Section \[section:atomic\] below for a precise definition) by allowing separating stars. Typically, a molecular graph can be constructed from cycles of length at least five by gluing them together and by connecting them by trees. See Figure \[figure4\] for an example.
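Likewise, whether a finite graph is molecular can be checked directly from the definition. The sketch below (again with our own helper names, on top of `networkx`) computes the girth by removing each edge in turn and measuring the shortest remaining path between its endpoints.

```python
import networkx as nx

def girth(gamma: nx.Graph) -> float:
    """Length of a shortest cycle of gamma (inf if gamma is a forest)."""
    g = gamma.copy()  # work on a copy so the input graph is left untouched
    best = float("inf")
    for u, v in list(gamma.edges):
        g.remove_edge(u, v)
        try:
            # a shortest cycle through (u, v) is a shortest u-v path in g plus the edge itself
            best = min(best, nx.shortest_path_length(g, u, v) + 1)
        except nx.NetworkXNoPath:
            pass
        g.add_edge(u, v)
    return best

def is_molecular(gamma: nx.Graph) -> bool:
    """Finite, connected, no vertex of valence < 2, and girth at least 5."""
    return (gamma.number_of_nodes() > 0
            and nx.is_connected(gamma)
            and min(deg for _, deg in gamma.degree) >= 2
            and girth(gamma) >= 5)

# A cycle of length 5 is molecular; a cycle of length 4 is not (its girth is 4).
print(is_molecular(nx.cycle_graph(5)), is_molecular(nx.cycle_graph(4)))
```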
The main result of this section is the following statement, which widely extends [@GPcycle Theorems A and D]. It will be a consequence of Theorem \[thm:conjugatingauto\].
\[thm:mainGPgeneral\] Let $\Gamma$ be a molecular graph and $\mathcal{G}$ a collection of groups indexed by $V(\Gamma)$. Then $\mathrm{Aut}(\Gamma \mathcal{G})= \mathrm{ConjP}(\Gamma \mathcal{G})$.
Before turning to the proof of the theorem, we need to introduce some vocabulary.
Given a simplicial graph $\Gamma$ and a collection of groups $\mathcal{G}$ indexed by $V(\Gamma)$, let $\mathcal{M}= \mathcal{M}(\Gamma, \mathcal{G})$ denote the collection of subgroups of $\Gamma \mathcal{G}$ that decompose non-trivially as direct products and are maximal among such subgroups, and let $\mathcal{C}= \mathcal{C}(\Gamma, \mathcal{G})$ denote the collection of non-trivial subgroups of $\Gamma \mathcal{G}$ that can be obtained by intersecting two subgroups of $\mathcal{M}$. A subgroup of $\Gamma \mathcal{G}$ that belongs to $\mathcal{C}$ is
- *$\mathcal{C}$-minimal* if it is minimal in $\mathcal{C}$ with respect to the inclusion;
- *$\mathcal{C}$-maximal* if it is maximal in $\mathcal{C}$ with respect to the inclusion (or equivalently if it belongs to $\mathcal{M}$);
- *$\mathcal{C}$-medium* otherwise.
It is worth noticing that these three classes of subgroups of $\Gamma \mathcal{G}$ are preserved by automorphisms. Now, we want to describe more explicitly the structure of these subgroups. For this purpose, the following general result, which is a consequence of [@MinasyanOsinTrees Corollary 6.15], will be helpful:
\[lem:maxproduct\] Let $\Gamma$ be a simplicial graph and $\mathcal{G}$ a collection of groups indexed by $V(\Gamma)$. If a subgroup $H \leq \Gamma \mathcal{G}$ decomposes non-trivially as a product, then there exist an element $g \in \Gamma \mathcal{G}$ and a join $\Lambda \subset \Gamma$ such that $H \subset g \langle \Lambda \rangle g^{-1}$.
**For the rest of Section \[section:applicationgirth\], we fix a molecular graph $\Gamma$ and a collection of groups $\mathcal{G}$ indexed by $V(\Gamma)$.** The characterisation of the subgroups of $\mathcal{C}$ which we deduce from the previous lemma and from the quasi-median geometry of $X(\Gamma, \mathcal{G})$ is the following:
\[prop:SubgroupsC\] A subgroup $H \leq \Gamma \mathcal{G}$ belongs to $\mathcal{C}$ if and only if:
- $H$ is conjugate to $\langle \mathrm{star}(u) \rangle$ for some vertex $u \in V(\Gamma)$; if so, $H$ is $\mathcal{C}$-maximal.
- Or $H$ is conjugate to $\langle G_u,G_v \rangle$ for some adjacent vertices $u,v \in V(\Gamma)$; if so, $H$ is $\mathcal{C}$-medium.
- Or $H$ is conjugate to $G_u$ for some vertex $u \in V(\Gamma)$; if so, $H$ is $\mathcal{C}$-minimal.
Suppose that $H \leq \Gamma \mathcal{G}$ belongs to $\mathcal{M}$, ie., is a maximal product subgroup. It follows from Lemma \[lem:maxproduct\] that there exist an element $g \in \Gamma \mathcal{G}$ and a join $\Lambda \subset \Gamma$ such that $H \subset g \langle \Lambda \rangle g^{-1}$. Because $\Gamma$ is triangle-free and square-free, $\Lambda$ must be contained in the star of a vertex, say $\Lambda \subset \mathrm{star}(u)$ where $u \in V(\Gamma)$, so that $H \subset g \langle \mathrm{star}(u) \rangle g^{-1}$. As $g \langle \mathrm{star}(u) \rangle g^{-1}$ decomposes as a product, namely $g \left( G_u \times \langle \mathrm{link}(u) \rangle \right)g^{-1}$, it follows by maximality of $H$ that $H= g \langle \mathrm{star}(u) \rangle g^{-1}$.
Conversely, we want to prove that, if $g \in \Gamma \mathcal{G}$ is an element and $u \in V(\Gamma)$ a vertex, then $g \langle \mathrm{star}(u) \rangle g^{-1}$ is a maximal product subgroup. So let $P$ be a subgroup of $\Gamma \mathcal{G}$ splitting non-trivially as a direct product and containing $g \langle \mathrm{star}(u) \rangle g^{-1}$. It follows from Lemma \[lem:maxproduct\] that there exist an element $h \in \Gamma \mathcal{G}$ and a join $\Xi \subset \Gamma$ such that $$g \langle \mathrm{star}(u) \rangle g^{-1} \subset P \subset h \langle \Xi \rangle h^{-1}.$$ By applying Lemma \[lem:conjincluded\], we deduce that $\mathrm{star}(u) \subset \Xi$ and that $h \in g \langle \mathrm{star}(u) \rangle \cdot \langle \mathrm{link}(\mathrm{star}(u)) \rangle \cdot \langle \Xi \rangle$. As $\mathrm{star}(u)$ is a maximal join, necessarily $\mathrm{star}(u)= \Xi$ and $\mathrm{link}(\mathrm{star}(u))= \emptyset$. As a consequence, $h \in g \langle \mathrm{star}(u) \rangle$, so that $$g \langle \mathrm{star}(u) \rangle g^{-1} \subset P \subset h \langle \Xi \rangle h^{-1}= h \langle \mathrm{star}(u) \rangle h^{-1} = g \langle \mathrm{star}(u) \rangle g^{-1}.$$ Therefore, $g \langle \mathrm{star}(u) \rangle g^{-1} = P$.
Thus, we have proved that $$\begin{array}{lcl} \mathcal{M} & = & \{ g \langle \mathrm{star}(u) \rangle g^{-1} \mid g \in \Gamma \mathcal{G}, u \in V(\Gamma) \} \\ \\ & = & \{ \mathrm{stab}(J) \mid \text{$J$ hyperplane of $X(\Gamma, \mathcal{G})$} \} \end{array}$$ where the last equality is justified by Theorem \[thm:HypStab\]. As a consequence, the collection $\mathcal{C}$ coincides with the non-trivial subgroups of $\Gamma \mathcal{G}$ obtained by intersecting two hyperplane-stabilisers. Therefore, the forward implication of our proposition follows from the following observation:
\[fact:HypStabInter\] Let $J_1$ and $J_2$ be two hyperplanes of $X(\Gamma, \mathcal{G})$.
- If $J_1$ and $J_2$ are transverse, then $\mathrm{stab}(J_1) \cap \mathrm{stab}(J_2)$ is conjugate to $\langle G_u,G_v \rangle$ for some adjacent vertices $u,v \in V(\Gamma)$.
- If $J_1$ and $J_2$ are not transverse and if there exists a third hyperplane $J$ transverse to both $J_1$ and $J_2$, then $\mathrm{stab}(J_1) \cap \mathrm{stab}(J_2)$ is conjugate to $G_u$ for some vertex $u \in V(\Gamma)$.
- If $J_1$ and $J_2$ are not transverse and if no hyperplane is transverse to both $J_1$ and $J_2$, then $\mathrm{stab}(J_1) \cap \mathrm{stab}(J_2)$ is trivial.
Suppose that $J_1$ and $J_2$ are transverse. As the carriers $N(J_1)$ and $N(J_2)$ intersect, say that $g \in X(\Gamma, \mathcal{G})$ belongs to their intersection, it follows that there exist two adjacent vertices $u,v \in V(\Gamma)$ such that $J_1=gJ_u$ and $J_2=gJ_v$. Therefore, $$\mathrm{stab}(J_1) \cap \mathrm{stab}(J_2)= g \langle \mathrm{star}(u) \rangle g^{-1} \cap g \langle \mathrm{star}(v) \rangle g^{-1} = g \langle G_u,G_v \rangle g^{-1},$$ proving the first point of our fact. Next, suppose that $J_1$ and $J_2$ are not transverse but that there exists a third hyperplane $J$ transverse to both $J_1$ and $J_2$. As a consequence of Lemma \[lem:projhypsquaretrianglefree\] below, we know that the projection of $N(J_1)$ onto $N(J_2)$, which must be $\mathrm{stab}(J_1) \cap \mathrm{stab}(J_2)$-invariant, is reduced to a single clique $C$. So $\mathrm{stab}(J_1) \cap \mathrm{stab}(J_2) \subset \mathrm{stab}(C)$. Notice that $J$ is dual to $C$. Indeed, the hyperplane dual to $C$ must be transverse to both $J_1$ and $J_2$, but we also know from Lemma \[lem:projhypsquaretrianglefree\] below that there exists at most one hyperplane transverse to $J_1$ and $J_2$. We conclude from Proposition \[prop:rotstabinX\] that $$\mathrm{stab}(C)= \mathrm{stab}_{\circlearrowleft}(J) \subset \mathrm{stab}(J_1) \cap \mathrm{stab}(J_2),$$ proving that $\mathrm{stab}(J_1) \cap \mathrm{stab}(J_2) = \mathrm{stab}(C)$. Then, the second point of our fact follows from Lemma \[lem:CliqueStab\]. Finally, suppose that $J_1$ and $J_2$ are not transverse and that no hyperplane is transverse to both $J_1$ and $J_2$. Then $\mathrm{stab}(J_1) \cap \mathrm{stab}(J_2)$ stabilises the projection of $N(J_2)$ onto $N(J_1)$, which is reduced to a single vertex. As vertex-stabilisers are trivial, the third point of our fact follows.
Conversely, if $u,v \in V(\Gamma)$ are two adjacent vertices, then $\langle G_u , G_v \rangle$ is the intersection of $\langle \mathrm{star}(u) \rangle$ and $\langle \mathrm{star}(v) \rangle$; and if $w \in V(\Gamma)$ is a vertex, then $G_w$ is the intersection of $\langle \mathrm{star}(x) \rangle$ and $\langle \mathrm{star}(y) \rangle$ where $x,y \in V(\Gamma)$ are two distinct neighbors of $w$.
The following result is used in the proof of Proposition \[prop:SubgroupsC\].
\[lem:projhypsquaretrianglefree\] Let $J_1,J_2$ be two non-transverse hyperplanes of $X(\Gamma, \mathcal{G})$. There exists at most one hyperplane transverse to both $J_1$ and $J_2$. As a consequence, the projection of $N(J_1)$ onto $N(J_2)$ is either a single vertex or a single clique.
Because $\Gamma$ is triangle-free, we know from Corollary \[cor:cubdim\] that the cubical dimension of $X(\Gamma, \mathcal{G})$ is two. Consequently, if there exist two hyperplanes transverse to both $J_1$ and $J_2$, they cannot be transverse to one another. So, in order to conclude that at most one hyperplane may be transverse to both $J_1$ and $J_2$, it is sufficient to prove the following observation:
\[claim:lengthfour\] The transversality graph $T(\Gamma, \mathcal{G})$ does not contain an induced cycle of length four.
Suppose by contradiction that $T(\Gamma, \mathcal{G})$ contains an induced cycle of length four, defined by hyperplanes $(J_1, \ldots, J_4)$ of $X(\Gamma, \mathcal{G})$, and choose such a cycle so that the quantity $d(N(J_1),N(J_3))+d(N(J_2),N(J_4))$ is minimal. If $N(J_1)$ and $N(J_3)$ are disjoint, they must be separated by some hyperplane $J$. Replacing $J_1$ with $J$ produces a new cycle of four hyperplanes of lower complexity, contradicting the choice of our initial cycle. Therefore, $N(J_1)$ and $N(J_3)$ must intersect. Similarly, $N(J_2) \cap N(J_4) \neq \emptyset$. Thus, $N(J_1), \ldots, N(J_4)$ pairwise intersect, so that there exists a vertex $x \in N(J_1) \cap \cdots \cap N(J_4)$. For every $1 \leq i \leq 4$, let $u_i$ denote the vertex of $\Gamma$ labelling the hyperplane $J_i$. Notice that, as a consequence of Lemma \[lem:transverseimpliesadj\], $u_1 \neq u_3$ (resp. $u_2 \neq u_4$) because $J_1$ and $J_3$ (resp. $J_2$ and $J_4$) are tangent. So $u_1, \ldots, u_4$ define a cycle of length four in $\Gamma$, contradicting the fact that $\Gamma$ is molecular.
This concludes the proof of the first assertion of our lemma. The second assertion then follows from Corollary \[cor:diamproj\].
Theorem \[thm:mainGPgeneral\] now follows: by Proposition \[prop:SubgroupsC\], Theorem \[thm:conjugatingauto\] applies and yields $\mathrm{Aut}(\Gamma \mathcal{G})= \mathrm{ConjAut}(\Gamma \mathcal{G})= \mathrm{ConjP}(\Gamma \mathcal{G})$. In fact, we are able to prove a stronger statement, namely:
Let $\Gamma, \Phi$ be two molecular graphs and $\mathcal{G},\mathcal{H}$ two collections of groups indexed by $V(\Gamma), V(\Phi)$ respectively. Suppose that there exists an isomorphism $\varphi : \Gamma \mathcal{G} \to \Phi \mathcal{H}$. Then there exist an automorphism $\alpha \in \mathrm{ConjP}(\Gamma \mathcal{G})$ and an isometry $s : \Gamma \to \Phi$ such that $\varphi \alpha$ sends $G_u$ isomorphically onto $H_{s(u)}$ for every $u \in V(\Gamma)$.
It follows from Proposition \[prop:SubgroupsC\] that conjugates of vertex groups may be defined purely algebraically, so that the isomorphism $\varphi : \Gamma \mathcal{G} \to \Phi \mathcal{H}$ must send vertex groups of $\Gamma \mathcal{G}$ to conjugates of vertex groups of $\Phi \mathcal{H}$. Then Theorem \[thm:ConjIsom\] applies, providing the desired conclusion.
Graph products of groups over atomic graphs {#section:atomic}
===========================================
In this section, we focus on automorphism groups of graph products over a specific class of simplicial graphs, namely *atomic graphs*. Recall from [@RAAGatomic] that a finite simplicial graph is *atomic* if it is connected, without vertex of valence $<2$, without separating stars, and with girth $\geq 5$. In other words, atomic graphs are molecular graphs without separating stars. As a consequence of Theorem \[thm:mainGPgeneral\], the automorphism group of a graph product over an atomic graph is given by $\mathrm{Aut}(\Gamma\mathcal{G})= \langle \mathrm{Inn}(\Gamma\mathcal{G}), \mathrm{Loc}(\Gamma \mathcal{G}) \rangle$.
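The conditions in this definition are finite checks on a finite graph. The following Python sketch (our own illustrative utility built on `networkx`, not part of the paper) tests connectivity, minimum valence, and girth, together with one common reading of the no-separating-star condition, namely that removing the closed star of a vertex never disconnects the remaining vertices; adjust that helper if the intended notion of a separating star differs.

```python
# Minimal sketch (illustrative only): test the defining conditions of a
# molecular/atomic graph for a finite simplicial graph given as a networkx Graph.
import networkx as nx

def girth(G):
    """Length of a shortest cycle (infinity if G is a forest)."""
    best = float("inf")
    for u, v in list(G.edges()):
        G.remove_edge(u, v)
        if nx.has_path(G, u, v):
            best = min(best, nx.shortest_path_length(G, u, v) + 1)
        G.add_edge(u, v)
    return best

def has_separating_star(G):
    """True if removing the closed star of some vertex disconnects what is left."""
    for v in G.nodes():
        rest = set(G.nodes()) - ({v} | set(G.neighbors(v)))
        if len(rest) > 1 and not nx.is_connected(G.subgraph(rest)):
            return True
    return False

def is_molecular(G):
    return (nx.is_connected(G)
            and min(d for _, d in G.degree()) >= 2
            and girth(G) >= 5)

def is_atomic(G):
    return is_molecular(G) and not has_separating_star(G)

# Example: the cycle of length 5 is both molecular and atomic.
print(is_atomic(nx.cycle_graph(5)))   # True
```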
\[fact\_2\] Let $\Gamma$ be a simplicial graph and $\mathcal{G}$ a collection of groups indexed by $V(\Gamma)$. If $\Gamma$ is not the star of a vertex, then $\mathrm{Inn}(\Gamma \mathcal{G}) \cap \mathrm{Loc}(\Gamma \mathcal{G})= \{ \mathrm{Id} \}$.
Let $g \in \Gamma{{\mathcal G}}$, let $\varphi\in \mathrm{Loc}(\Gamma \mathcal{G})$, let $\sigma$ be the automorphism of $\Gamma$ induced by $\varphi$, and suppose that $\iota(g) = \varphi$. Then in particular for every vertex $v$ of $\Gamma$, we have $gG_vg^{-1} = \varphi(G_v) = G_{\sigma(v)}.$ Since distinct local groups are not conjugate in $\Gamma{{\mathcal G}}$, it follows that $\sigma(v) = v$, and hence $g$ normalises $G_v$. By standard results on graph products, this implies that $g \in \langle \mathrm{star}(v) \rangle$. As this holds for every vertex of $\Gamma$, we get $g \in \cap_v \langle \mathrm{star}(v) \rangle$. Since $\Gamma$ is not the star of a vertex, it follows that $\cap_v \langle \mathrm{star}(v) \rangle = \{1\}$, hence $\varphi = \iota(g)=\mathrm{Id}$.
Thus, the automorphism group of a graph product over an atomic graph is given by $\mathrm{Aut}(\Gamma\mathcal{G})= \mathrm{Inn}(\Gamma\mathcal{G})\rtimes \mathrm{Loc}(\Gamma \mathcal{G}) $. In particular, any automorphism of $\Gamma{{\mathcal G}}$ decomposes in a unique way as a product of the form $ \iota(g) \varphi $ with $g \in \Gamma{{\mathcal G}}, \varphi \in \mathrm{Loc}(\Gamma \mathcal{G}) $.
Recall that the Davis complex of a graph product is a CAT(0) cube complex that was introduced in Definition \[def:Davis\]. The fundamental observation is that in the case of atomic graphs, the automorphism group of $\Gamma{{\mathcal G}}$ acts naturally on the associated Davis complex.
\[prop:extendT\] Let $\Gamma$ be an atomic graph and $\mathcal{G}$ a collection of groups indexed by $V(\Gamma)$. The action of $\Gamma \mathcal{G}$ on the Davis complex $D(\Gamma, \mathcal{G})$ extends to an action $\mathrm{Aut}(\Gamma \mathcal{G}) \curvearrowright D(\Gamma, \mathcal{G})$, where $\Gamma \mathcal{G}$ is identified canonically with $\mathrm{Inn}(\Gamma \mathcal{G})$. More precisely, the action is given by $$\iota(g) \varphi \cdot hH \coloneqq g\varphi(h)\varphi(H),$$ for $g, h \in \Gamma{{\mathcal G}}$, $H$ a subgroup of $\Gamma{{\mathcal G}}$ of the form $\langle \Lambda \rangle$ for some complete subgraph $\Lambda$, and $\varphi \in \mathrm{Loc}(\Gamma \mathcal{G})$.
By definition, elements of $\mathrm{Loc}(\Gamma \mathcal{G})$ preserve the family of subgroups of the form $\langle \Lambda \rangle$ for some complete subgraph $\Lambda$, so the action is well defined at the level of the vertices of $D(\Gamma, {{\mathcal G}})$. By definition of the edges and higher cubes of the Davis complex, one sees that the action extends to an action on $D(\Gamma, {{\mathcal G}})$ itself. One checks easily from the definition that the restriction to Inn$(\Gamma{{\mathcal G}})$ coincides with the natural action of $\Gamma{{\mathcal G}}$ on $D(\Gamma, {{\mathcal G}})$ by left multiplication.
As a first application of Proposition \[prop:extendT\], we are able to show that the automorphism group of a graph product over an atomic graph does not satisfy Kazhdan’s property (T).
\[thm:noT\] Let $\Gamma$ be an atomic graph and $\mathcal{G}$ a collection of groups indexed by $V(\Gamma)$. Then the automorphism group $\mathrm{Aut}(\Gamma \mathcal{G})$ does not satisfy Kazhdan’s property (T).
The complex $D(\Gamma, \mathcal{G})$ is unbounded and the action of $\Gamma \mathcal{G}$ on it is cocompact, hence this action has unbounded orbits; in particular the action of Aut$(\Gamma \mathcal{G})$ on the CAT(0) cube complex $D(\Gamma, \mathcal{G})$ has unbounded orbits. The result then follows from a theorem of Niblo-Roller [@MR1459140].
As a second application, we prove the following:
\[thm:acyl\_hyp\] Let $\Gamma$ be an atomic graph and $\mathcal{G}$ a collection of finitely generated groups indexed by $V(\Gamma)$. Then the automorphism group $\mathrm{Aut}(\Gamma \mathcal{G})$ is acylindrically hyperbolic.
**For the rest of Section \[section:atomic\], we fix an atomic graph $\Gamma$ and a collection of groups $\mathcal{G}$ indexed by $V(\Gamma)$.** To prove Theorem \[thm:acyl\_hyp\], we use a criterion introduced in [@NoteAcylHyp] to show the acylindrical hyperbolicity of a group via its action on a CAT(0) cube complex. Following [@CapraceSageev], we say that the action of a group $G$ on a CAT(0) cube complex $Y$ is *essential* if no $G$-orbit stays in a bounded neighborhood of a half-space. Following [@ChatterjiFernosIozzi], we say that the action is *non-elementary* if $G$ does not have a finite orbit in $Y \cup \partial_{\infty}Y$. We further say that $Y$ is *cocompact* if its automorphism group acts cocompactly on it; that it is *irreducible* if it does not split as the direct product of two non-trivial CAT(0) cube complexes; and that it *does not have a free face* if every non-maximal cube is contained in at least two maximal cubes.
\[thm:Chatterji\_Martin\] Let $G$ be a group acting non-elementarily and essentially on a finite-dimensional irreducible cocompact CAT(0) cube complex $X$ with no free face. If there exist two points $x,y \in X$ such that $\mathrm{stab}(x) \cap \mathrm{stab}(y)$ is finite, then $G$ is acylindrically hyperbolic.
We will use this criterion for the action of Aut$(\Gamma{{\mathcal G}})$ on the Davis complex $D(\Gamma, {{\mathcal G}})$. To this end, we need to check a few things about the action.
\[lem:ess\] The action of $\mathrm{Aut}(\Gamma{{\mathcal G}})$ on $D(\Gamma, {{\mathcal G}})$ is essential and non-elementary.
It is enough to show that $\Gamma {{\mathcal G}}$ (identified with the subgroup $\mathrm{Inn}(\Gamma{{\mathcal G}})$ of $\mathrm{Aut}(\Gamma{{\mathcal G}})$) acts essentially and non-elementarily on $D(\Gamma, {{\mathcal G}})$.
**Non-elementarity.** The Davis complex is quasi-isometric to a building with underlying Coxeter group the right-angled Coxeter group $W_\Gamma$ (see Section \[sec:dual\]). As $\Gamma$ has girth at least $5$, $W_\Gamma$ is hyperbolic, and thus so is $D(\Gamma, {{\mathcal G}})$. Since the Davis complex is hyperbolic, the non-elementarity of the action will follow from the fact that there exist two elements $g, h \in \Gamma{{\mathcal G}}$ acting hyperbolically on $D(\Gamma, {{\mathcal G}})$ and having disjoint limit sets in the Gromov boundary of $D(\Gamma, {{\mathcal G}})$, by elementary considerations of the dynamics of the action on the boundary of a hyperbolic space. We now construct such hyperbolic elements, following methods used in [@NoteAcylHyp] and [@CharneyMorrisArtinAH].
Since $\Gamma$ is leafless and has girth at least $5$, we can find an induced geodesic of $\Gamma$ of the form $v_1, \ldots, v_4$. For each $1\leq i\leq 4$, choose a non-trivial element $s_i \in G_{v_i}$. For each $1\leq i\leq 4$, let also $e_{v_i}$ be the edge of $D(\Gamma, {{\mathcal G}})$ between the coset $\langle \varnothing \rangle $ and the coset $\langle v_i \rangle$.
We define the following two elements: $g = s_3s_1$ and $h = s_4s_2,$ as well as the following combinatorial paths of $D(\Gamma, {{\mathcal G}})$: $$\Lambda_g = \bigcup_{k\in {{\mathbb Z}}} g^k (e_{v_1}\cup e_{v_3} \cup s_3 e_{v_3} \cup s_3e_{v_1}),~~~~~ \Lambda_h = \bigcup_{k\in {{\mathbb Z}}} h^k (e_{v_2}\cup e_{v_4} \cup s_4 e_{v_4} \cup s_4e_{v_2}).$$
We claim that $\Lambda_g$ and $\Lambda_h$ are combinatorial axes for $g$ and $h$ respectively. Indeed, by construction $\Lambda_g$ and $\Lambda_h$ make an angle $\pi$ at each vertex, hence are geodesic lines, and each element acts on its associated path by translation.
Now notice that no hyperplane of $D(\Gamma, {{\mathcal G}})$ crosses both $\Lambda_g$ and $\Lambda_h$: the edges of $\Lambda_g$ are labelled by $v_1$ and $v_3$, those of $\Lambda_h$ by $v_2$ and $v_4$, and edges defining the same hyperplane of $D(\Gamma, {{\mathcal G}})$ have the same label. This implies that for every vertex $u$ of $\Lambda_g$, the (unique) geodesic between $u$ and $\Lambda_h$ is the sub-path of $\Lambda_g$ between $u$ and the intersection point $\Lambda_g \cap \Lambda_h$. This in turn implies that the limit sets of $\Lambda_g$ and $\Lambda_{h}$ are disjoint, and that the hyperplanes dual to the edges $e_{v_2}$ and $e_{v_3}$ (which are crossed by $\Lambda_h$ and $\Lambda_g$ respectively) are essential.
**Essentiality.** For every vertex $v$ of $\Gamma$, we can use the fact that $\Gamma$ is leafless and of girth at least $5$ to construct a geodesic path $v_1, \ldots, v_4$ of $\Gamma$ with $v_2=v$. The above reasoning implies that the hyperplane associated to $e_v$ is essential. As every hyperplane of $D(\Gamma, {{\mathcal G}})$ is a translate of such a hyperplane, it follows that hyperplanes of $D(\Gamma, {{\mathcal G}})$ are essential. As the action of $\Gamma {{\mathcal G}}$ on $D(\Gamma, {{\mathcal G}})$ is cocompact, the action of $\mathrm{Aut}(\Gamma{{\mathcal G}})$ on $D(\Gamma, {{\mathcal G}})$ is also essential.
\[lem:irred\] The Davis complex $D(\Gamma, {{\mathcal G}})$ is irreducible.
Every vertex of $D(\Gamma, {{\mathcal G}})$ corresponding to a coset of the trivial subgroup has a link which is isomorphic to $\Gamma$. As $\Gamma$ has girth at least $5$, such a link does not decompose non-trivially as a join, hence $D(\Gamma, {{\mathcal G}})$ does not decompose non-trivially as a direct product.
\[lem:no\_free\_face\] The Davis complex $D(\Gamma, {{\mathcal G}})$ has no free face.
Since $\Gamma$ has girth at least $5$, the Davis complex is $2$-dimensional, and we have to show that every edge is contained in at least two squares. Let $e$ be an edge of $D(\Gamma, {{\mathcal G}})$. If $e$ contains a vertex $v$ that is a coset of the trivial subgroup, then the link of $v$ is isomorphic to $\Gamma$, and it follows from the leaflessness assumption that $e$ is contained in at least two squares. Otherwise let $C$ be any square containing $e$. As $\mathrm{Stab}_{\Gamma{{\mathcal G}}}(C)$ is trivial and $\mathrm{Stab}_{\Gamma{{\mathcal G}}}(e)$, which is conjugate to some $G_v$, contains at least two elements, it follows that there are at least two squares containing $e$.
\[lem:stab\] Let $P$ be the fundamental domain of $D(\Gamma, {{\mathcal G}})$ and let $g \in \Gamma{{\mathcal G}}$. Then $$\mathrm{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(P)\cap \mathrm{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(gP) = \{ \varphi \in \mathrm{Loc}(\Gamma {{\mathcal G}}) \mid \varphi(g)=g\}.$$
Since an element of $\mathrm{Aut}(\Gamma{{\mathcal G}})$ stabilises the fundamental domain of $D(\Gamma, {{\mathcal G}})$ if and only if it stabilises the vertex corresponding to the coset $\langle \varnothing \rangle$, we have $\mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(P) = \mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(\langle \varnothing \rangle) = \mathrm{Loc}(\Gamma {{\mathcal G}})$. Therefore, if $\varphi \in \mathrm{Aut}(\Gamma{{\mathcal G}})$ belongs to $\mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(P)\cap \mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(gP)$ then $\varphi \in \mathrm{Loc}(\Gamma {{\mathcal G}})$ and there exists some $\psi \in \mathrm{Loc}(\Gamma {{\mathcal G}})$ such that $\varphi = \iota(g) \circ \psi \circ \iota(g)^{-1}$. Since $\psi \circ \iota(g)^{-1} = \iota(\psi(g))^{-1} \circ \psi$, we deduce that $$\varphi \circ \psi^{-1}= \iota(g) \circ \psi \circ \iota(g)^{-1} \circ \psi^{-1} = \iota(g) \circ \iota(\psi(g))^{-1} \in \mathrm{Inn}(\Gamma{{\mathcal G}}) .$$ Thus, $\varphi \circ \psi^{-1} \in \mathrm{Inn}(\Gamma{{\mathcal G}}) \cap \mathrm{Loc}(\Gamma {{\mathcal G}}).$ On the other hand, we know from Lemma \[fact\_2\] that $\mathrm{Inn}(\Gamma{{\mathcal G}}) \cap \mathrm{Loc}(\Gamma {{\mathcal G}})= \{ \mathrm{Id} \}$, whence $\varphi= \psi$ and $\iota(g)= \iota(\psi(g))$. As a consequence of [@GreenGP Theorem 3.34], it follows from the fact that $\Gamma$ has girth at least $5$ that $\Gamma{{\mathcal G}}$ is centerless, so that the equality $\iota(g)= \iota(\psi(g))$ implies that $\varphi(g)=g$, hence the inclusion $\mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(P)\cap \mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(gP) \subset \{ \varphi \in \mbox{Loc}(\Gamma {{\mathcal G}}) \mid \varphi(g)=g\}.$ The reverse inclusion is clear.
For each vertex $v$ of $\Gamma$, choose a finite generating set $\{s_{v,j} \mid 1 \leq j \leq m_v\}$. Up to allowing repetitions, we will assume that all the integers $m_v$ are equal, and denote by $m$ that integer. We now define a specific element $g \in \Gamma \mathcal{G}$ in the following way.
Since $\Gamma$ is leafless and has girth at least $5$, we can find a sequence $v_1, \ldots, v_n$ exhausting all the vertices of $\Gamma$, such that for each $1\leq i < n$, $v_i$ and $v_{i+1}$ are not adjacent, and also $v_n$ and $v_1$ are not adjacent. We now define: $$g_{j}:=s_{v_1, j}\cdots s_{v_n,j} \mbox{ for $1 \leq j \leq m$,}$$ $$g:=g_{1}\cdots g_m.$$ Let $\varphi$ be an element of $\mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(P)\cap \mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(gP)$. By Lemma \[lem:stab\], it follows that $\varphi \in \mathrm{Loc}(\Gamma {{\mathcal G}})$ and $\varphi(g) = g$. By construction, $g$ can be written as a concatenation of the form $g= s_1 \cdots s_p$, where each $s_k$ is of the form $s_{v,j}$, and such that no consecutive $s_k, s_{k+1}$ belong to groups of ${{\mathcal G}}$ that are joined by an edge of $\Gamma$. In particular, it follows from the properties of normal forms recalled in Section \[sec:generalities\] that the decomposition $g= s_1 \cdots s_p$ is the unique reduced form of $g$. As $g = \varphi(g) = \varphi(s_1)\cdots\varphi(s_p)$ is another reduced form of $g$, it follows that $\varphi(s_k)=s_k$ for every $1 \leq k \leq p$. As this holds for a generating set of $\Gamma{{\mathcal G}}$, it follows that $\varphi$ is the identity. We thus have that $\mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(P)\cap \mbox{Stab}_{\mathrm{Aut}(\Gamma{{\mathcal G}})}(gP)$ is trivial. It now follows from Lemmas \[lem:ess\], \[lem:irred\], \[lem:no\_free\_face\], and \[lem:stab\] that Theorem \[thm:Chatterji\_Martin\] applies, hence Aut$(\Gamma{{\mathcal G}})$ is acylindrically hyperbolic.
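The reduced-form argument used in this proof can be illustrated with a small computational sketch. The helper below (our own illustrative code; the syllable representation and function name are ours, not from the paper) implements the standard reducedness criterion for graph products: a word of syllables is reduced when no syllable can be shuffled, across syllables of adjacent vertex groups, next to a later syllable of the same vertex group.

```python
# Minimal sketch (illustrative only): reducedness test for a word of syllables in
# a graph product over a simplicial graph Gamma.  A syllable is a pair
# (vertex, element) with a non-trivial element; two syllables commute exactly
# when their vertices are adjacent in Gamma.
import networkx as nx

def is_reduced(word, Gamma):
    """word: list of (vertex, element) pairs, each element assumed non-trivial."""
    for i, (v, _) in enumerate(word):
        for w, _ in word[i + 1:]:
            if w == v:
                return False      # a later syllable of G_v can be merged with this one
            if not Gamma.has_edge(v, w):
                break             # a non-commuting syllable blocks any further shuffle
    return True

# Example over the cycle C_5 (vertices 0..4, consecutive vertices adjacent).
C5 = nx.cycle_graph(5)
print(is_reduced([(0, "a"), (2, "b"), (0, "c")], C5))  # True: G_0 and G_2 do not commute
print(is_reduced([(0, "a"), (1, "b"), (0, "c")], C5))  # False: G_0 and G_1 commute, so the G_0 syllables merge
```

In the proof above, consecutive syllables of $g$ lie in non-adjacent vertex groups, so no shuffle is possible at all and the reduced expression of $g$ is rigid.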
It is worth noticing that, in the statement of Theorem \[thm:acyl\_hyp\], we do not have to require our vertex groups to be finitely generated. Indeed, during the proof, we only used the following weaker assumption: every $G \in \mathcal{G}$ contains a finite set $S \subset G$ such that every automorphism of $G$ fixing pointwise $S$ must be the identity. Of course, if $G$ is finitely generated, we may take $S$ to be a finite generating set. But if $G= \mathbb{Q}$ for instance, then $S= \{1\}$ works as well, even though $\mathbb{Q}$ is not finitely generated. However, some condition is needed, as shown by the example below.
Let $Z$ be the direct sum $\bigoplus\limits_{p \ \text{prime}} \mathbb{Z}_p$ and let $G_n$ be the graph product of $n$ copies of $Z$ over the cycle $C_n$ of length $n \geq 5$. We claim that $\mathrm{Aut}(G_n)$ is not acylindrically hyperbolic.
As a consequence of Corollary C (stated in the introduction), the automorphism group $\mathrm{Aut}(G_n)$ decomposes as $\left( \mathrm{Inn}(G_n) \rtimes \mathrm{Loc}(C_n, \mathcal{G}) \right) \rtimes \mathrm{Sym}(C_n, \mathcal{G})$. As the property of being acylindrically hyperbolic is stable under taking infinite normal subgroups [@OsinAcyl Lemma 7.2], it is sufficient to show that $\mathrm{Inn}(G_n) \rtimes \mathrm{Loc}(C_n, \mathcal{G})$ is not acylindrically hyperbolic. So let $\iota(g) \varphi$ be an automorphism of this subgroup, where $g \in G_n$ and $\varphi \in \mathrm{Loc}(C_n, \mathcal{G})$. For each copy $Z_i$ of $Z$, the reduced word representing $g$ contains only finitely many syllables in $Z_i$; let $S_i \subset Z_i$ denote this set of syllables. Clearly, there exists an infinite collection of automorphisms of $Z_i$ fixing $S_i$ pointwise; furthermore, we may suppose that this collection generates a subgroup of automorphisms $\Phi_i \leq \mathrm{Aut}(Z_i)$ which is a free abelian group of infinite rank. Notice that $\phi(g)=g$ for every $\phi \in \Phi_i$. Therefore, for every $\psi \in \Phi_1 \times \cdots \times \Phi_n \leq \mathrm{Loc}$, we have
$\psi \cdot \iota(g) \varphi= \iota(\psi(g)) \cdot \psi \varphi= \iota(g) \cdot \psi \varphi= \iota(g) \cdot \varphi \psi = \iota(g) \varphi \cdot \psi$,
since $\varphi$ and $\psi$ clearly commute: each $\mathrm{Aut}(Z_i)$ is abelian so that $\mathrm{Loc}(C_n, \mathcal{G})$ is abelian as well. Thus, we have proved that the centraliser of any element of $\mathrm{Inn}(G_n) \rtimes \mathrm{Loc}(C_n, \mathcal{G})$ contains a free abelian group of infinite rank. Therefore, $\mathrm{Inn}(G_n) \rtimes \mathrm{Loc}(C_n, \mathcal{G})$ (and a fortiori $\mathrm{Aut}(G_n)$) cannot be acylindrically hyperbolic according to [@OsinAcyl Corollary 6.9].
In this section, it was more convenient to work with a CAT(0) cube complex rather than with a quasi-median graph because results already available in the literature allowed us to shorten the arguments. However, we emphasize that a quasi-median proof is possible. The main steps are the following. First, as in Proposition \[prop:extendT\], the action $\Gamma \mathcal{G} \curvearrowright X(\Gamma, \mathcal{G})$ extends to an action $\mathrm{Aut}(\Gamma \mathcal{G}) \curvearrowright X(\Gamma, \mathcal{G})$ via $\iota(g) \varphi \cdot x = g \varphi(x)$ where $x \in X(\Gamma, \mathcal{G})$ is a vertex. So Theorem \[thm:noT\] also follows since Niblo and Roller’s argument [@MR1459140] can be reproduced almost word for word in the quasi-median setting; or alternatively, the theorem follows from the combination of [@MR1459140] with [@Qm Proposition 4.16] which shows that any quasi-median graph admits a “dual” CAT(0) cube complex. Next, in order to prove Theorem \[thm:acyl\_hyp\], we need to notice that $X(\Gamma, \mathcal{G})$ is hyperbolic (as a consequence of [@Qm Fact 8.33]) and that the element $g$ constructed in the proof above turns out to be a WPD element. For the latter observation, one can easily prove the following criterion by following the arguments of [@MoiAcylHyp Theorem 18]: given a group $G$ acting on a hyperbolic quasi-median graph $X$, if an element $g \in G$ skewers a pair of strongly separated hyperplanes $J_1,J_2$ such that $\mathrm{stab}(J_1) \cap \mathrm{stab}(J_2)$ is finite, then $g$ must be a WPD element.
<span style="font-variant:small-caps;">Département de Mathématiques Bâtiment 307, Faculté des Sciences d’Orsay, Université Paris-Sud, F-91405 Orsay Cedex, France.</span>
*E-mail address*: `anthony.genevois@math.u-psud.fr`
<span style="font-variant:small-caps;">Department of Mathematics and the Maxwell Institute for Mathematical Sciences, Heriot-Watt University, Riccarton, EH14 4AS Edinburgh, United Kingdom.</span>
*E-mail address*: `alexandre.martin@hw.ac.uk`
---
abstract: 'Magnetic B-type stars exhibit photometric variability due to diverse causes and consequently on a variety of timescales. In this paper we describe the interpretation of BRITE photometry and related ground-based observations of 4 magnetic B-type systems: $\epsilon$ Lupi, $\tau$ Sco, a Cen and $\epsilon$ CMa.'
author:
- 'G.A. Wade'
- 'D.H. Cohen'
- 'C. Fletcher'
- 'G. Handler'
- 'L. Huang'
- 'J. Krticka'
- 'C. Neiner'
- 'E. Niemczura'
- 'H. Pablo'
- 'E. Paunzen'
- 'V. Petit'
- 'A. Pigulski'
- 'Th. Rivinius'
- 'J. Rowe'
- 'M. Rybicka'
- 'R. Townsend'
- 'M. Shultz'
- 'J. Silvester'
- 'J. Sikora'
- 'the BRITE-Constellation Executive Science Team (BEST)'
bibliography:
- 'wade.bib'
title: 'Magnetic B stars observed with BRITE: Spots, magnetospheres, binarity, and pulsations'
---
Introduction
============
Approximately 10% of mid- to early-B stars located on the main sequence show direct evidence of strong surface magnetism. Studying the photometric variability of such systems provides insight into their multiplicity and physical characteristics, rotation, surface and wind structures, and pulsation properties. For example, in late- and mid-B-type stars (spectral types later than about B2) magnetic fields stabilise atmospheric motions and allow the accumulation of peculiar abundances (and abundance distributions) of various chemical elements. At earlier spectral types, magnetic fields channel radiatively-driven stellar winds, confining wind plasma to produce complex co-rotating magnetospheres. Some magnetic stars are located in close binary systems, where photometric variability may reveal eclipses, tidal interaction and (potentially) mass and energy transfer effects. Finally, some magnetic B stars are located in an instability strip and exhibit $\beta$ Cep and SPB-type pulsations.
The bright magnetic B stars observed by the BRITE-Constellation have been preferential targets of spectropolarimetric monitoring within the context of the BRITEpol survey (see the paper by Neiner et al. in these proceedings). In this article we provide brief reports on analysis of BRITE photometry and complementary data for 4 magnetic B-type stars for which the BRITE observations detect or constrain variability due to these mechanisms.
$\epsilon$ Lupi
===============
![Phased photometry (Upper panel - BRITE blue, middle panel - BRITE red) and radial velocities (lower panel) of $\epsilon$ Lupi, compared with the predictions of the heartbeat model (curves).[]{data-label="fig:epslupi"}](eps_lupi.pdf){width="\textwidth"}
$\epsilon$ Lupi is a short-period ($\sim 4.6$ d) eccentric binary system containing two mid/early-B stars. It was observed by the BRITE UBr, BAb, BTr, BLb nano-satellites [see e.g. @2016PASP..128l5001P] during the Centaurus campaign from March to August 2014, and again by BLb during the Scorpius campaign from February to August 2015.
Magnetic fields associated with both the primary and secondary components were reported by @2015MNRAS.454L...1S, making $\epsilon$ Lupi the first known doubly-magnetic massive star binary. The (variable) proximity of the two components led @2015MNRAS.454L...1S to speculate that their magnetospheres may undergo reconnection events during their orbit. Such events, as well as rotational modulation by surface structures and the suspected $\beta$ Cep pulsations of one or both components, could introduce brightness fluctuations potentially observable by BRITE.
The periodogram of the BRITE photometry shows power at the known orbital period. When the data are phased accordingly, both the red (BTr+UBr) and blue (BAb) lightcurves exhibit a subtle, non-sinusoidal modulation with peak flux occurring at the same phase as the orbital RV extremum (i.e. periastron). We interpret this modulation as a “heartbeat” effect [e.g. @2012ApJ...753...86T], resulting from tidally-induced deformation of the stars during their close passage at periastron. Adopting this interpretation, we have successfully modeled the lightcurves and RV variations using the PHOEBE code [version 1; @2005ApJ...628..426P] (see Fig. \[fig:epslupi\]).
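For readers who wish to reproduce the phasing step, the sketch below (placeholder synthetic data with a simple sinusoidal signal; the actual BRITE reduction and the heartbeat model are more involved) computes a Lomb-Scargle periodogram and folds the light curve on the orbital period.

```python
# Minimal sketch (illustrative only; not the BRITE pipeline): periodogram and
# phase-folding of a light curve on a known orbital period.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 150.0, 2000))          # times in days (placeholder sampling)
P_orb = 4.6                                         # orbital period in days
flux = 2e-3 * np.sin(2 * np.pi * t / P_orb) + 1e-3 * rng.standard_normal(t.size)

# Periodogram: the strongest peak should sit near the orbital frequency.
frequency, power = LombScargle(t, flux).autopower(maximum_frequency=5.0)
print("strongest period: %.2f d" % (1.0 / frequency[np.argmax(power)]))

# Phase-fold on the orbital period and compute binned means.
phase = ((t - t[0]) / P_orb) % 1.0
bins = np.linspace(0.0, 1.0, 21)
idx = np.digitize(phase, bins) - 1
binned = np.array([flux[idx == k].mean() for k in range(len(bins) - 1)])
print("peak-to-peak of folded curve: %.4f" % (binned.max() - binned.min()))
```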
![BRITE red filter photometry of $\tau$ Sco, compared with the predictions of ADM models computed assuming pure scattering. The different colours correspond to source surface radii ranging from 2 to 5 $R_*$.[]{data-label="fig:tausco"}](tausco.pdf){width="\textwidth"}
$\tau$ Sco
==========
$\tau$ Sco is a hot main sequence B0.5V star that was observed by BAb, UBr, BLb, BHr during the Scorpius campaign from February to August 2015. @2006MNRAS.370..629D detected a magnetic field in the photosphere of this X-ray bright star, varying according to a rotational period of 41 d. They modeled the magnetic field topology, finding it to be remarkably complex. @2010ApJ...721.1412I acquired Suzaku X-ray measurements of $\tau$ Sco. They found that the very modest phase variation of the X-ray flux was at odds with the predicted variability according to the 3D force-free extrapolation of the magnetosphere reported by @2006MNRAS.370..629D.
Petit et al. (in prep) have sought to explain this discrepancy by reconsidering the physical scale of the closed magnetospheric loops of $\tau$ Sco. New modeling of the system using the Analytic Dynamical Magnetosphere (ADM) formalism [@2016MNRAS.462.3830O] yields predictions of the X-ray variability as a function of the adopted mass-loss rate (as quantified by the “source surface” of the extrapolation).
These same ADM models have been used in conjunction with BRITE photometry to constrain the distribution of cooler plasma surrounding the star. Adopting a pure electron scattering approximation, we have computed the expected brightness modulation as a function of source surface distance (Fig. \[fig:tausco\]). The very high quality of the BRITE red photometry allows us to rule out models with source surface radii smaller than 3 $R_*$.
a Cen
=====
a Cen is a Bp star of intermediate spectral type ($T_{\rm eff}\sim 19$ kK) that exhibits extreme variations of its helium lines during its 8.82 d rotational cycle. It was observed during the Centaurus campaign from March to August 2014 by UBr, BAb, BTr, BLb.
Previous investigators used high resolution spectra to compute Doppler Imaging maps of the distributions of He, Fe, N and O of a Cen, revealing in particular a more than two-order-of-magnitude contrast in the abundance of He in opposite stellar hemispheres. They also discovered that the He-poor hemisphere shows a high relative concentration of $^3$He.
The BRITE photometry of a Cen exhibits clear variability according to the previously-known rotational period (Fig. \[fig:acen1\], left panel). It also reveals marginal variability at frequencies that may correspond to pulsations in the SPB range. Using a collection of 19 new ESPaDOnS and HARPSpol Stokes $V$ spectra, in addition to archival UVES spectra (e.g. Fig. \[fig:acen1\], right panel), new self-consistent Magnetic Doppler Imaging maps of the stellar magnetic field and of the abundance distributions of various elements, including Si, have been derived (Fig. \[fig:acen2\]). These maps will be used as basic input for modeling the two-colour BRITE lightcurves.
![[*Left panel -*]{} BRITE photometry of a Cen (upper curve - blue filter, lower curve - red filter) phased according to the stellar rotation period. [*Right panel -*]{} Dynamic spectrum of the rotational variability of the He [i]{} $\lambda 4388$ line, showing the extreme changes in line strength.[]{data-label="fig:acen1"}](acen_brite.pdf "fig:"){width="6.5cm"}![[*Left panel -*]{} BRITE photometry of a Cen (upper curve - blue filter, lower curve - red filter) phased according to the stellar rotation period. [*Right panel -*]{} Dynamic spectrum of the rotational variability of the He [i]{} $\lambda 4388$ line, showing the extreme changes in line strength.[]{data-label="fig:acen1"}](acen_he.pdf "fig:"){width="6.5cm"}
![Magnetic Doppler Imaging maps of a Cen, showing the surface Si distribution (upper row), and the magnetic field modulus and orientation (middle and bottom rows, respectively). []{data-label="fig:acen2"}](acen_map.pdf){width="\textwidth"}
$\epsilon$ CMa
==============
$\epsilon$ CMa is an evolved B1.5II star. It was observed during the Canis Majoris-Puppis campaign from October 2015 to April 2016 by UBr, BLb, BTr. A weak magnetic field has been detected in photospheric lines of this star.
A preliminary analysis of the BRITE photometry reveals no significant variability. However, we have continued to monitor the magnetic field of $\epsilon$ CMa, analysing over 120 Stokes $V$ exposures obtained over a span of 125 d, with the aim of (i) detecting rotational modulation and determining the stellar rotational period, and (ii) modeling the surface magnetic field strength and geometry. Variability of the Stokes $V$ profiles - as quantified by a deviation analysis (Fig. \[fig:epscma\]) - is very weak. At the level of precision of the magnetic data (best error bars of 2 G, median error bar of 4 G), no periodic variability can be inferred. Considering the reported projected rotational velocity and measured angular diameter of the star, the rotational period should be no longer than $\sim 25$ d. This could imply that the star is viewed close to the rotational pole, that the magnetic axis is aligned with the rotation axis, or that the global field contrast is significantly weaker than expected from a dipole.
![Results of the variability analysis of over 120 LSD Stokes $V$ profiles of $\epsilon$ CMa, showing the mean profile (shifted vertically for display purposes) and per-pixel deviation of both Stokes $V$ and the diagnostic null ($N$) normalised to their respective uncertainties. The dashed lines show the velocity span of the $V$ profile and its centre-of-gravity. The largest deviation of $V$ occurs near the centre-of-gravity of the profile, and corresponds to only 2$\sigma$.[]{data-label="fig:epscma"}](deviation.pdf){width="\textwidth"}
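The quoted upper limit on the rotation period follows from the elementary bound $P_{\rm rot} \leq 2\pi R/(v\sin i)$, with the radius obtained from the angular diameter and the distance. The numbers in the sketch below are illustrative placeholders rather than the measurements used here.

```python
# Minimal sketch (placeholder values): upper limit on the rotation period from
# the projected rotational velocity and the stellar radius, P <= 2*pi*R/(v sin i).
import numpy as np
import astropy.units as u
from astropy.constants import R_sun

vsini = 25 * u.km / u.s      # placeholder projected rotational velocity
radius = 11 * R_sun          # placeholder radius (from angular diameter + distance)

P_max = (2 * np.pi * radius / vsini).to(u.day)
print("P_rot <= %.1f d" % P_max.value)   # of order 20 d for these placeholder values
```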
---
abstract: 'After the development of a self-consistent quantum formalism nearly a century ago there began a quest for how to interpret the theoretical constructs of the formalism. In fact, the pursuit of new interpretations of quantum mechanics persists to this day. Most of these endeavors assume the validity of standard quantum formalism and proceed to ponder the ontic nature of wave functions, operators, and the Schrödinger equation. The present essay takes a different approach, more epistemological than ontological. I endeavor to give a heuristic account of how empirical principles lead us to a quantum mechanical description of the world. An outcome of this approach is the suggestion that the notion of discrete quanta leads to the wave nature and statistical behavior of matter rather than the other way around. Finally, the hope is to offer some solace to those older among us who still worry about such things and also to provide the neophyte student of quantum mechanics with physical insight into the mathematically abstract and often baffling aspects of the theory.'
author:
- Stephen Boughn
- |
Stephen Boughn[^1]\
*Department of Physics, Princeton University, Princeton NJ*\
*Departments of Physics and Astronomy, Haverford College, Haverford PA*
title: A Quantum Story
---
[*Keywords:*]{} Quantum theory $\cdot$ Canonical quantization $\cdot$ Foundations of quantum mechanics $\cdot$ Measurement problem
Introduction {#INTRO}
============
From the very beginnings of quantum mechanics a century ago, it was clear that the concepts of classical physics were insufficient for describing many phenomena. In particular, the fact that electrons and light exhibited both the properties of particles and the properties of waves was anathema to classical physics. After the development of a self-consistent quantum formalism, there began a quest for just how to interpret the new theoretical constructs. De Broglie and Schrödinger favored interpreting quantum waves as depicting a continuous distribution of matter while Einstein and Born suggested that they only provide a statistical measure of where a particle of matter or radiation might be. After 1930, the [*Copenhagen interpretation*]{} of Bohr and Heisenberg was generally accepted, although Bohr and Heisenberg often emphasized different aspects of the interpretation and there has never been complete agreement as to its meaning even among its proponents[@St1972]. The Copenhagen interpretation dealt with the incongruous dual wave and particle properties by embracing Bohr’s [*principle of complementarity*]{} in which complementary features of physical systems can only be accessed by experiments designed to observe one or the other but not both of these features. For example, one can observe either the particle behavior or wave behavior of electrons but not both at the same time. In addition, the waves implicit in Schrödinger’s equation were interpreted as probability amplitudes for the outcomes of experiments. Finally, in order to facilitate the communication of experimental results, the Copenhagen interpretation emphasized that experiments, which invariably involve macroscopic apparatus, must be described in classical terms.
These aspects of quantum theory are familiar to all beginning students of quantum mechanics; however, many students harbor the uneasy feeling that something is missing. How can an electron in some circumstances exhibit the properties of a particle and at other times exhibit the properties of a wave? How is it that the primary theoretical constructs of quantum mechanics, the Schrödinger wave functions or Hilbert state vectors, only indicate the probability of events? Quantum mechanics itself does not seem to indicate that any event actually happens. Why is it that experiments are only to be described classically? Where is the quantum/classical divide between the quantum system and the classical measurement and what governs interactions across this divide? In fact, these sorts of questions are raised not only by neophyte students of quantum mechanics but also by seasoned practitioners. In actuality, the question of how to interpret quantum theory has never been fully answered and new points of view are still being offered. Many of these interpretations involve novel mathematical formalisms that have proved to be useful additions to quantum theory. In fact, new formulations of quantum mechanics and quantum field theory, including axiomatic approaches, are often accompanied by new or modified interpretations. Such interpretive analyses are largely framed within the mathematical formalism of quantum theory and I will refrain from saying anything more about them.
The purpose of this essay is to address a different, more epistemological question, “What is it about the physical world that leads us to a quantum theoretic model of it?" The intention is to in no way malign the more formal investigations of quantum mechanics. Such investigations have been extremely successful in furthering our understanding of quantum theory as well as increasing our ability to predict and make use of novel quantum phenomena. These treatments invariably begin with the assumption that standard quantum mechanics is a fundamental law of nature and then proceed with interpreting its consequences. In this essay I take the point of view that quantum mechanics is a model, a human invention, created to help us describe and understand our world and then proceed to address the more philosophical question posed above, a question that is still pondered by some physicists and philosophers and certainly by many physics students when they first encounter quantum mechanics. Most of the latter group eventually come to some understanding, perhaps via the ubiquitous Copenhagen Interpretation, and then proceed according to the maxim “Shut up and calculate!"[^2] One modest aim of this essay is to provide such students with a heuristic perspective on quantum mechanics that might enable them to proceed to calculations without first having to “shut up".
What’s Quantized?
=================
Let us begin by asking where the ‘quantum’ in quantum mechanics comes from. What is it that’s quantized? That matter is composed of discrete quanta, atoms, was contemplated by Greek philosophers in the $5^{th}$ century B.C.[@Be2011] and the idea continued to be espoused through the $18^{th}$ century. Even though it wasn’t until the $19^{th}$ and early $20^{th}$ centuries that the existence of atoms was placed on a firm empirical basis, it’s not difficult to imagine what led early philosophers to an atomistic model. Perhaps the primary motivation, an argument that still resonates today, was to address the puzzle of change, i.e., the transformation of matter. This was often expressed by the assertion that things cannot come from nothing nor can they ever return to nothing. Rather, creation, destruction, and change are most simply explained by the rearrangement of the atomic constituents of matter. In his epic poem [*De rerum natura*]{} (On the Nature of Things, circa 55 BC), Lucretius[^3] explained (translation by R. Melville[@Lu55])
> ...no single thing returns to nothing but at its dissolution everything returns to matter’s primal particles...they must for sure consist of changeless matter. For if the primal atoms could suffer change...then no more would certainty exist of what can be and what cannot...Nor could so oft the race of men repeat the nature, manners, habits of their parents.
While it took nearly 2500 years, the conjectures of the atomists were largely justified.
One might also reasonably ask, “Are there other aspects of nature that are quantized?” It’s no coincidence that during the same period that saw the confirmation of the atomic hypothesis, there appeared evidence for the discrete nature of atomic interactions. Perhaps the first clues were the early $19^{th}$ century observations by Wollaston and Fraunhofer of discrete absorption lines in the spectrum of the sun and the subsequent identification of emission lines in the spectra of elements in the laboratory by Kirchhoff and Bunsen in 1859. In 1888, Rydberg was able to relate the wavelengths of these discrete spectral lines to ratios of integers. Boltzmann introduced discrete energy as early as 1868 but only as a computational device in statistical mechanics. It was in 1900 that Planck found he must take such quantization more seriously in his derivation of the Planck black body formula[@Ba2009]. A decade later Jeans, Poincaré, and Ehrenfest demonstrated that the discreteness of energy states, which source black body radiation, follows from the general morphology of the spectrum and is not the consequence of precisely fitting the observed spectral data[@No1993]. In 1905 Einstein introduced the notion of quanta of light with energies that depended on frequency with precisely the same relation as introduced by Planck[^4], $E=h\nu$, and then used this relation to explain qualitative observations of the photoelectric effect[^5]. In 1907 it was again Einstein who demonstrated that energy quantization of harmonic oscillators explained why the heat capacities of solids decrease at low temperatures. Finally, Bohr’s 1913 model of discrete energy levels of electrons in atoms explained the spectral lines of Kirchhoff and Bunsen as well as resolved the conflict of Maxwell’s electrodynamics with the stability of Rutherford’s 1911 nuclear atomic model.
In a 1922 conversation with Heisenberg[@He1972], Bohr expressed an argument for the discreteness of atomic interactions that harkened back to the ancient Greeks’ arguments for atoms (and to the Lucretius quote above). Bohr based his argument on the stability of matter, but not in the sense just mentioned. Bohr explained,
> By ‘stability’ I mean that the same substances always have the same properties, that the same crystals recur, the same chemical compounds, etc. In other words, even after a host of changes due to external influences, an iron atom will always remain an iron atom, with exactly the same properties as before. This cannot be explained by the principles of classical mechanics...according to which all effects have precisely determined causes, and according to which the present state of a phenomenon or process is fully determined by the one that immediately preceded it.
In other words, in a world composed of Rutherford atoms, quantum discreteness is necessary in order to preserve the simplicity and regularity of nature. Bohr’s ‘stability’ and Lucretius’s ‘repeatability’ clearly refer to the same aspect of nature.
It might appear from the examples given above that energy is the key dynamical quantity that must always come in discrete quanta. However, there are problems with this demand. For one thing, there is no fundamental constant in physics, such as the speed of light $c$, the charge on the electron $e$, Planck’s constant $h$, or Newton’s constant $G$, that has the units of energy.[^6] In addition, it’s straightforward to demonstrate that energy is not quantized in all situations. Suppose there were a fundamental quantum of energy $\varepsilon_{0}$ and that a free point particle already in motion acquires this much additional kinetic energy through some unspecified interaction. When viewed by an observer that is initially co-moving with the particle, it is trivial to demonstrate that the change in its kinetic energy is significantly less than $\varepsilon_{0}$, thereby violating the hypothetical energy quantization condition. Similar arguments show that a fundamental quantum of linear momentum is also excluded. In fact, it’s easy to think of interactions in which the energy and momentum change by arbitrarily small increments. Rutherford scattering and Compton scattering are two such examples. Finally, we know from quantum theory that a free particle may assume any of a continuum of values of energy and momentum.
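To make this frame-dependence explicit, here is a short calculation (added for concreteness; it is not spelled out in the original argument but follows it directly). Suppose a non-relativistic particle of mass $m$ and lab-frame momentum $p$ receives a momentum transfer $\Delta p$ along its direction of motion, chosen so that its lab-frame kinetic energy increases by the supposed quantum $\varepsilon_{0}$:
$$\Delta E_{\mathrm{lab}}=\frac{(p+\Delta p)^{2}-p^{2}}{2m}=\frac{p\,\Delta p}{m}+\frac{(\Delta p)^{2}}{2m}=\varepsilon_{0}.$$
For an observer initially co-moving with the particle, the initial momentum vanishes while the momentum transfer is the same, so
$$\Delta E_{\mathrm{co}}=\frac{(\Delta p)^{2}}{2m}=\varepsilon_{0}-\frac{p\,\Delta p}{m}<\varepsilon_{0},$$
and the co-moving observer records an energy change strictly smaller than $\varepsilon_{0}$, in conflict with the hypothetical quantization condition.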
Angular momentum is another example of a dynamical quantity that might come in discrete quanta and it has the same units as $h$. In the context of standard quantum mechanics, the quantum state of a particle can always be expressed as a linear combination of angular momentum eigenfunctions with eigenvalues that are discrete multiples of $h/2\pi$. Furthermore, according to the standard interpretation of quantum mechanics, any measurement of the angular momentum of a system must be an eigenvalue of the angular momentum operator. This might lead one to conclude that angular momentum somehow characterizes the quantum nature of interactions. On the other hand, that such a specific quantity as angular momentum should occupy this primal status seems doubtful. In addition, a procedure for how to effect a measurement corresponding to the angular momentum operator is not necessarily well-defined. For example, as far as I know, there is no unique prescription for how to measure the angular momentum of a free particle. One can easily conjure a sensible measurement that, depending on the chosen origin of the coordinate system, results in an arbitrary value of the angular momentum.[^7] Finally, I remind the reader that it is not the intent of the current discussion to offer an interpretation of standard quantum mechanics but rather to understand why we are led to a quantum mechanical model of nature. It is in this context that the semi-classical arguments excluding energy, momentum, and angular momentum have merit.
To be sure, there are many instances of the quantization of energy $E$, momentum $p$, angular momentum $L$, and even position $x$; however, the values of these quanta depend on the specifics of the system and have a wide range of values. For example, the energy and momentum in a monochromatic beam of photons are quantized in units of $h\nu$ and $h\nu/c$ but the values of these quanta depend on the frequency of the photons and are unconstrained; they can take on any value between $0$ and $\infty$, which again argues against their primal status. An alternate approach might be to consider the quantization of some specific combination of $E$, $p$, $L$, and $x$. In fact, Heisenberg’s indeterminacy relation[@He1927], $\delta x \delta p \sim h$, points to the product of position and momentum as such a combination (and we will see that this is, indeed, the case). It might seem inappropriate to invoke a result of quantum mechanics in the current epistemological approach; however, Heisenberg arrived at this relation, not from quantum mechanics, but rather from an empirical gedanken experiment involving a gamma ray microscope as will be discussed later.
In 1912, Nicholson proposed that the angular momentum of an electron in orbit about an atomic nucleus is quantized and the following year Ehrenfest argued that the unit of this quantum is $\hbar \equiv h/2\pi$. In the same year, Bohr incorporated these ideas into his model of an atom, a model that provided a successful explanation of the spectrum of atomic hydrogen. Even though $\hbar$ has the same units as angular momentum, we argued above that it is doubtful that angular momentum constitutes the fundamental quantum interaction. Wilson, Ishiwara, Epstein, and Sommerfeld soon replaced Nicholson/Ehrenfest/Bohr quantization with the notion that Hamilton-Jacobi action variables $J_{k}$ (for periodic systems), which also have the same units as $h$, are quantized[@Wh1951]. That is, $J_{k} \equiv \oint p_{k}dx_{k} = nh$ where $p_{k}$ is the momentum conjugate to the coordinate $x_{k}$, $n$ is an integer, and the integral is taken over one cycle of periodic motion. Recall that one rationale for excluding energy, momentum, and angular momentum as fundamental quanta is that such quantization is not necessarily independent of the frame of reference. It is straightforward to demonstrate that the action integral is the same for all non-relativistic inertial observers, i.e., $v \ll c$ (up to an additive constant that is irrelevant for the physics of the systems)[^8]. What other properties of action variables might single them out as amenable to quantization?
In 1846, the French astronomer Delaunay invented a method of solving a certain class of separable[^9] dynamical problems for which the action variables $J_{k}$ are constants of the motion[@La1970]. In 1916, Ehrenfest added to the primacy of action variables by pointing out that all of these quantities that had been previously quantized were adiabatic invariants[@Wh1951]. That is, they remain constant during adiabatic changes of the parameters that define a system. This implies, for example, that if one slowly changes the length of a pendulum, it will remain in the same quantum state. This requirement is desirable; otherwise, such an adiabatic perturbation is apt to leave the system in a state that is not consistent with the quantization condition for the changed system. On the other hand, the quantization of action depends on the coordinates employed, which again seems unacceptable for a fundamental principle.
It was Einstein who in 1917 gave a new interpretation of the quantization conditions by demonstrating that they followed from the requirement that Hamilton’s principle function $S$ is multivalued such that the change in $S$ around any closed curve in configuration space is an integer times Planck’s constant, i.e., $\oint p_{k}dq_{k} = \oint \frac{\partial S}{\partial q_{k}}dq_{k} = \oint dS = nh$[@St2005]. This geometric expression no longer depends on the choice of coordinates. Consider the case of a one-dimensional harmonic oscillator such that $\oint pdx = \oint p\dot{x}dt =\oint 2Kdt = 2\langle K \rangle \tau = E\tau = nh$ where $\langle K \rangle$ is the average kinetic energy, $E$ is the total energy, and $\tau$ is the period of the oscillator. Then $\Delta E=h/\tau=h\nu$, Planck’s ansatz for the quantization of the energy exchanged between blackbody radiation and hypothetical harmonic oscillators of frequency $\nu=1/\tau$ embedded in the walls of a blackbody cavity. For the case of an electromagnetic wave of wavelength $\lambda$ and frequency $\nu=c/\lambda$, the quantization condition becomes $\oint pdx=p\lambda=pc/\nu=nh$. The energy and momentum of an electromagnetic wave are related by $E=pc$, therefore, $E=nh\nu$, Einstein’s relation for photons, the quanta of electromagnetic radiation. The same quantization rule applies to the angular momentum $p_{\phi}$ of an electron in a hydrogen atom, $\oint p_{\phi} d\phi=2\pi p_{\phi} =nh \Rightarrow p_{\phi}=n\hbar$, the Nicholson/Ehrenfest/Bohr quantization condition.
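The harmonic-oscillator step is easy to verify numerically. The short sketch below (our own check, with an arbitrary illustrative mass and frequency) evaluates $\oint p\,dx$ at the energies $E=nh\nu$ and confirms that the action equals $nh$:

```python
# Minimal numerical check (illustrative): for a one-dimensional harmonic
# oscillator the action over one cycle equals E/nu, so E = n*h*nu gives action n*h.
import numpy as np
from scipy.integrate import quad

h = 6.62607015e-34        # Planck's constant (J s)
m = 9.109e-31             # illustrative mass (kg)
nu = 1.0e14               # illustrative frequency (Hz)
omega = 2.0 * np.pi * nu

def action(E):
    A = np.sqrt(2.0 * E / (m * omega**2))                              # turning point
    p = lambda x: np.sqrt(max(2.0 * m * (E - 0.5 * m * omega**2 * x**2), 0.0))
    half, _ = quad(p, -A, A)                                           # one traversal
    return 2.0 * half                                                  # out and back

for n in (1, 2, 3):
    print(n, action(n * h * nu) / h)    # prints values close to 1, 2, 3
```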
[*Disclaimer:*]{} The quantization conditions of Hamilton-Jacobi action were part of the foundations of the old quantum theory from the years 1900-1925 and have long since been superseded by the formalism of modern quantum mechanics and quantum field theory.[^10] In addition, the quantization procedure fails for non-integrable (chaotic) systems.[^11] Finally, the above arguments are semi-classical and, as such, it’s difficult to imagine how they can provide a firm foundation for modern quantum theory. However, the reader should be reminded that the purpose of this essay is not an axiomatic derivation of quantum mechanics from fundamental principles but rather to acquire insight into the quantum world and thus address the question, “What is it about the physical world that led us to a quantum theoretic model of it?" I now continue with this task.
Quantization and Waves {#QandW}
======================
In 1923 Duane[@Du1923], Breit[@Br1923], and Compton[@Co1923] applied the quantization condition to the interaction of x-ray photons with an infinite, periodic crystal lattice and were able to obtain Bragg’s law of reflection without directly invoking the wave nature of x-rays. A somewhat simpler case is that of photons incident on an infinite diffraction grating. Figure 1 is a replica of the schematic diagram in Breit’s 1923 paper where $h\nu/c =p_{\gamma}$ is the momentum of the incident photon, $G$ is the diffraction grating, $D_{0},D_{\pm1},D_{\pm2},...$ are the positions of the slits of the grating, $\theta$ is the scattering angle, and $P$ is the transverse momentum of the emergent photon. Now assume that the momentum transferred from the
radiation to the grating is governed by $\oint pdx=nh$ where $p$ is in a direction parallel to its surface and the integral is taken over the transverse distance necessary to bring the system back to its original condition, i.e., the line spacing $d=D_{k+1}-D_{k}$. In this case, the average momentum transferred to the grating is $\langle p \rangle=\oint pdx/\oint dx = nh/d$ and by conservation of momentum this must also be the magnitude of the transverse momentum transferred to an incident photon, i.e., $P=\langle p \rangle$. If photons are incident perpendicular to the plane of the grating, then the allowed angles at which they are transmitted through the grating are given by $\sin \theta_n=\langle p \rangle/p_{\gamma}$.[^12] Thus, $\sin \theta_n=nh/p_{\gamma}d$, which is the relation for diffraction (interference) of a wave with wavelength $\lambda=h/p_{\gamma}$. Again, no specific reference to the wave nature of the photons is necessary. Breit[@Br1923] and Epstein & Ehrenfest[@Ep1924] extended these results to finite width, single and multiple slit interference patterns. Thus, the quantization condition $\oint pdx=nh$ leads directly to the interference properties of photons without directly invoking their wave nature. It is curious that none of these authors extended their analyses to the case of electrons scattered from crystals, a process that should obey the same quantization condition. If they had, they might have predicted that $\lambda=h/p$ and the wave nature of electrons prior to de Broglie’s 1924 thesis and Davisson & Germer’s and Thomson’s 1927 electron diffraction experiments. The analyses of Duane [*et al.*]{} provide seminal illustrations of a direct path from the quantization of action to the wave behavior of particles and photons. As such, they lend credence to the notion that there is a primal relation between the quantization of dynamical properties and the dual wave-particle behavior of quantum systems.
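As a small numerical illustration of this equivalence (the numbers are our own toy values, not taken from the papers cited above), the quantized transverse momentum transfers $nh/d$ reproduce the familiar grating angles associated with an effective wavelength $h/p_{\gamma}$:

```python
# Minimal sketch (toy numbers): allowed scattering angles from the quantization
# condition <p> = n*h/d, and the equivalent wavelength h/p of the photon.
import numpy as np

h = 6.62607015e-34            # J s
c = 2.99792458e8              # m/s
E_gamma = 12.4e3 * 1.602e-19  # photon energy in J (roughly a 1-angstrom x-ray)
d = 4.0e-10                   # line spacing in m (illustrative)

p_gamma = E_gamma / c
for n in (1, 2, 3):
    sin_theta = n * h / (p_gamma * d)        # sin(theta_n) from the quantization argument
    print(n, "%.1f deg" % np.degrees(np.arcsin(sin_theta)))

print("equivalent wavelength h/p_gamma = %.2e m" % (h / p_gamma))  # ~1e-10 m
```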
Physics and Probability {#prob}
=======================
Another major conundrum of quantum mechanics is the fundamental role of probability in the theory.[^13] The probabilities are taken to apply to the outcomes of possible observations of a system even though some of the observations are mutually exclusive (Bohr’s [*principle of complementarity*]{}). This seems to fly in the face of our classical notion that physical systems should be completely describable in isolation, prior to and independent of any observation. How is it that the specification of mere probabilities can possibly constitute a fundamental description of a physical system and if so, how can such a description possibly provide a complete description of reality?[^14]
In 1927 Heisenberg proposed the indeterminacy relation, $\delta x\delta p \sim h$, that bears his name. It was his contention that “this indeterminacy is the real basis for the occurrence of statistical relations in quantum mechanics."[@He1927] He arrived at the concept by considering a gedanken experiment in the form of a gamma ray microscope. Heisenberg reasoned that with such a microscope one could only determine an electron’s position to within on the order of one gamma ray wavelength, $\delta x \sim \lambda$. But in doing so, one would impart to the electron an unknown momentum on the order of the momentum of the incident gamma ray, $\delta p \sim E_{\gamma}/c=h\nu/c=h/\lambda$, and hence, $\delta x\delta p \sim h$.[^15] To the extent that the wave behavior of gamma rays follows from quantization, as demonstrated by Duane [*et al.*]{}, the Heisenberg indeterminacy relation is a direct consequence of the quantum of action. Heisenberg also demonstrated that this relation can be determined directly from the formalism of quantum mechanics; however, our point here is that it is already evident from the quantization of action.
Heisenberg’s uncertainty principle is one of the pillars of modern physics and his gamma ray microscope provides a particularly intuitive interpretation of the principle. However, there are other insightful gedanken experiments that are more directly tied to quantization. For example, suppose a particle is confined to be within a one-dimensional box (potential well) of width $\ell$ but is otherwise free, i.e., has constant momentum $p$ along the one dimension but in either direction. The motion of the particle will clearly be periodic with a spatial period $2\ell$ and the quantization condition is $\oint pdx=2p\ell=nh$. If the particle is in its ground state, $n=1$ and $2p\ell=h$. At any instant, the uncertainty in the particle’s position is clearly $\delta x\sim \ell$. The magnitude of the particle’s momentum is known but it could be moving in either direction so the uncertainty in its momentum is $\delta p \sim h/\ell$. Combining these two relations, we arrive at Heisenberg’s indeterminacy relation, $\delta x\delta p \sim h$. Of course, this particle is confined; however, if the box is opened, the particle is free to move in either direction. Immediately after the box is opened, the uncertainties in the position and momentum of the now free particle again satisfy the Heisenberg relation, $\delta x\delta p \sim h$.
The argument that Heisenberg gave to support his contention that the uncertainty relations are the basis for the statistical relations in quantum mechanics is as follows,[@He1927]
> We have not assumed that quantum theory–in opposition to classical theory–is an essentially statistical theory in the sense that only statistical conclusions can be drawn from precise statistical data....Rather, in all cases in which relations exist in classical theory between quantities which are really exactly measurable, the corresponding exact relations also hold in quantum theory (laws of conservation of momentum and energy). But what is wrong in the sharp formulation of the law of causality, “When we know the present precisely, we can predict the future," is not the conclusion but the assumption. Even in principle we cannot know the present in all detail. For that reason everything observed is a selection from a plenitude of possibilities and a limitation on what is possible in the future.
Another reason to concede to a statistical view of nature is the realization that this notion is not particularly foreign to classical physics. Certainly, statistical mechanics is one of the triumphal successes of classical physics. On the experimental side, careful consideration of uncertainties is always essential when comparing observations with theoretical predictions, either quantum or classical. In the classical case these uncertainties are usually viewed as experimental “noise" and left to the experimentalist to elucidate. However, this doesn’t necessarily have to be the case. The Hamilton-Jacobi formalism provides an approach in which such uncertainties can be included in the fundamental equations of classical mechanics[@Ha2005; @HR2016]; although it is usually far more convenient to deal with them in the analysis of a measurement rather than as a fundamental facet of the theory.[^16] An interesting aside is that by combining the statistical Hamilton-Jacobi formalism of classical mechanics with the Heisenberg uncertainty relations, one can generate a plausible route to Schrödinger’s equation and the concomitant wave nature of particles[@Ha2002; @Bo2017]. One can even construe statistical relations in classical physics in terms of classical indeterminacy relations $\delta x > 0$ and $\delta p > 0$[@Vo2011]. In a very real sense, violations of these relations, namely $\delta x=0$ or $\delta p=0$, are just as inaccessible as a violation of the quantum mechanical uncertainty principle, $\delta x\delta p < \hbar/2$, an assertion to which any experimentalist will attest.[^17] These arguments are certainly not intended to demonstrate that quantum mechanics and classical mechanics are compatible. Clearly, they are not. They are offered simply to emphasize that probability and statistics are fundamental to physics, both classical and quantum. The crucial difference between the two is, rather, the quantization of action that is primal in quantum physics but absent in classical physics.
The Quantum/Classical Divide {#QCD}
============================
The dual wave-particle nature of matter and radiation and the probabilistic nature of the theory are not the only elements that exasperate beginning students of quantum mechanics. Another point of discomfort is the quantum/classical divide that the Copenhagen interpretation places between a quantum system and a classical measuring apparatus. Where is the divide and what physical interactions occur at the divide? This dilemma is predicated upon the supposition that experiments must be, or inevitably are, described by classical physics. Upon closer inspection, the assertion that classical physics adequately describes experiments is far from obvious. Bohr expressed the situation as follows[@Bo1958]:
> The decisive point is to recognize that the description of the experimental arrangement and the recordings of observations must be given in plain language, suitably refined by the usual terminology. This is a simple logical demand, since by the word ‘experiment’ we can only mean a procedure regarding which we are able to communicate to others what we have done and what we have learnt.
Stapp[@St1972] chose to emphasize this pragmatic view of classicality by using the word [*specifications*]{}, i.e.,
> Specifications are what architects and builders, and mechanics and machinists, use to communicate to one another conditions on the concrete social realities or actualities that bind their lives together. It is hard to think of a theoretical concept that could have a more objective meaning. Specifications are described in technical jargon that is an extension of everyday language. This language may incorporate concepts from classical physics. But this fact in no way implies that these concepts are valid beyond the realm in which they are used by technicians.
The point is that descriptions of experiments are invariably given in terms of operational prescriptions or specifications that can be communicated to technicians, engineers, and the physics community at large. The formalism of quantum mechanics has absolutely nothing to say about experiments.
There have been many proposed theoretical resolutions to the problem of the quantum/classical divide but none of them seem adequate (e.g., [@Bo2013]). One obvious approach is simply to treat the measuring apparatus as a quantum mechanical system. While perhaps impractical, no one doubts that quantum mechanics applies to the bulk properties of matter and so this path might, in principle, seem reasonable. However, to the extent that it can be accomplished, the apparatus becomes part of the (probabilistic) quantum mechanical system for which yet another measuring apparatus is required to observe the combined system. Heisenberg expressed this in the extreme case, “One may treat the whole world as one mechanical system, but then only a mathematical problem remains while access to observation is closed off."[@Sch2008]
Ultimately, the dilemma of the quantum/classical divide or rather system/experiment divide is a faux problem. Precisely the same situation occurs in classical physics but apparently has not been considered problematic. Are the operational prescriptions of experiments part and parcel of classical theory? Are they couched in terms of point particles, rigid solid bodies, Newton’s laws or Hamilton-Jacobi theory? Of course not. They are part of Bohr’s “procedure regarding which we are able to communicate to others what we have done and what we have learnt." Therefore, it seems that the problem of the relation of theory and measurement didn’t arise with quantum mechanics but exists in classical mechanics as well. At a 1962 conference on the foundations of quantum mechanics, Wendell Furry explained[@Fu1962]
> So that in quantum theory we have something not really worse than we had in classical theory. In both theories you don’t say what you do when you make a measurement, what the process is. But in quantum theory we have our attention focused on this situation. And we do become uncomfortable about it, because we have to talk about the effects of the measurement on the systems....I am asking for something that the formalism doesn’t contain, finally when you describe a measurement. Now, classical theory doesn’t contain any description of measurement. It doesn’t contain anywhere near as much theory of measurement as we have here \[in quantum mechanics\]. There is a gap in the quantum mechanical theory of measurement. In classical theory there is practically no theory of measurement at all, as far as I know.
At that same conference Eugene Wigner put it like this [@Wi1962]
> Now, how does the experimentalist know that this apparatus will measure for him the position? “Oh", you say, “he observed that apparatus. He looked at it." Well that means that he carried out a measurement on it. How did he know that the apparatus with which he carried out that measurement will tell him the properties of the apparatus? Fundamentally, this is again a chain which has no beginning. And at the end we have to say, “We learned that as children how to judge what is around us." And there is no way to do this scientifically. The fact that in quantum mechanics we try to analyze the measurement process only brought this home to us that much more sharply.
Because physicists have long since become comfortable with the relation between theory and measurement in classical physics, perhaps the quantum case shouldn’t be viewed as particularly problematic.
Back to Quanta {#BTQ}
==============
I began this essay with the question “What is it about the physical world around us that leads us to a quantum theoretic model of it?" and have tried to answer it by discussing the quantal character of the physical world along with the inevitability of the statistical nature of both quantum and classical physics. In addition, when compared with its classical counterpart, the relation of theory and measurement in quantum mechanics doesn’t seem all that unusual. I hope these musings will provide some comfort to beginning students of quantum mechanics by providing at least a heuristic answer that bears on the epistemological origin of the dual wave-particle nature, the probabilistic interpretation of the quantum formalism, and the somewhat elusive connection of theoretical formalism and measurements. Perhaps they will be afforded some solace as their credulity is strained by references to [*wave-particle duality*]{}, the [*collapse of the wave function*]{}, and the [*spooky action at a distance*]{} of entangled quantum systems. I personally suspect that the quagmire to which we are led by these issues is spawned by conflating the physical world with the mathematical formalism that is intended only to model it, but this is a topic for another conversation.
The purpose of this essay is neither to demystify quantum mechanics nor to stifle conversation about its interpretation. To be sure, the number of extraordinary quantum phenomena seems to be nearly without limit. Quantum spin, anti-matter, field theory, gauge symmetry, the standard model of elementary particles, etc., are all subsequent developments in quantum theory that have very little connection to classical physics and about which the above discussion has little to say. Certainly wave-particle duality is a mysterious fact of nature. Whether one considers it to be a fundamental principle, as did Bohr, or sees it as intimately related to the quantal character of the world is, perhaps, a matter of taste. I have sought to couch the discussion not in the mathematical formalism of quantum theory, but in terms of a simple physical principle: [*Matter, radiation, and their interactions occur only in discrete quanta.*]{} Rather than quashing discussion of the meaning of quantum mechanics, perhaps this essay will stimulate new discussions.
**Acknowledgements:** I would like to thank Marcel Reginatto for many helpful conversations as well as for commenting on several early versions of this paper. Also, thanks to Serena Connolly for introducing me to Lucretius’s wonderful poem.
[99]{}
H. Stapp, “The Copenhagen Interpretation", [*Am. J. Phys.*]{} **40**, 1098-1116 (1972).
D. Mermin, “Could Feynman Have Said This?", [*Physics Today*]{}, April (1989).
S. Berryman, “Ancient Atomism", [*Stanford Encyclopedia of Philosophy*]{} (Stanford University 2011).
Lucretius, [*On the Nature of Things*]{}, translation by R. Melville, (Oxford University Press 1997).
M. Badino, “The Odd Couple: Boltzmann, Planck and the Application of Statistics to Physics (1900-1913)", [*Annalen der Physik*]{} **18**, 81-101 (2009).
J. Norton, “The Determination of Theory by Evidence: The Case for Quantum Discontinuity, 1900-1915", [*Synthese*]{} **97**, 1-31 (1993).
M. Planck, “Über irreversible Strahlungsvorgänge", [*Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin*]{} **5**, 440-480 (1899).
W. Heisenberg, [*Physics and Beyond: Encounters and Conversations*]{}, (Harper & Row 1972), p. 39.
F. Dyson, “Thought Experiments – in Honor of John Wheeler", 2002, reprinted in [*Birds and Frogs*]{} by F. Dyson, (World Scientific, 2015).
W. Heisenberg, “Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", [*Zeitschrift für Physik*]{}, **43**, 172-98 (1927), translation in [*Quantum Theory and Measurement*]{} by J. Wheeler and W. Zurek, (Princeton University Press, Princeton 1983).
E. Whittaker, [*A History of the Theories of Aether & Electricity, Volume II*]{}, (Dover Pub, Mineola, New York, 1989) Chapter IV.
M-J. Kim and C-K. Choi, “Hamilton-Jacobi theory and the relativistic action variable", [*Physics Lett. A*]{} **182**, 184-190 (1993).
M. Trump and W. Schieve, [*Classical Relativistic Many-Body Dynamics*]{}, (Kluwer Academics, Dordrecht 1999), p. 164.
V. Fock, [*The Theory of Space, Time and Gravitation*]{}, (Pergamon Press, 1966) Chapter III.
C. Lanczos, [*The Variational Principles of Mechanics*]{}, (University of Toronto Press,1970) Chapter VIII.
A. Stone, “Einstein’s Unknown Insight and the Problem of Quantizing Chaos", [*Physics Today*]{}, August 2005, 37-43.
J. Keller, “Corrected Bohr-Sommerfeld Quantum Conditions for Nonseparable Systems", [*Annals of Physics*]{} **4**, 180-188 (1958).
A. D. Stone, [*Einstein and the Quantum*]{}, (Princeton University Press 2013) p. 198.
W. Duane, “The Transfer in Quanta of Radiation Momentum to Matter", [*Proceedings of the National Academy of Sciences*]{} **9**, 158-164 (1923).
G. Breit, “The Interference of Light and the Quantum Theory", [*Proceedings of the National Academy of Sciences*]{} **9**, 238-243 (1923).
A. Compton, “The Quantum Integral and Diffraction by a Crystal", [*Proceedings of the National Academy of Sciences*]{} **9**, 359-362 (1923).
P. Epstein and P. Ehrenfest, “The Quantum Theory of the Fraunhofer Diffraction", [*Proceedings of the National Academy of Sciences*]{} **10**, 133-139 (1924).
A. Einstein, B. Podolsky, and N. Rosen, “Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", [*Phys. Rev.*]{} **47**, 777-780 (1935).
M. Hall and M. Reginatto, “Interacting classical and quantum ensembles", [*Phys. Rev. A*]{} **72**, 062109 (2005).
M. Hall and M. Reginatto, [*Ensembles on Configuration Space: Classical, Quantum, and Beyond*]{}, (Springer International Publishing, Switzerland 2016).
M. Hall and M. Reginatto, “Schrödinger equation from an exact uncertainty principle", [*Journal of Physics A: Mathematical and General*]{}, **35**, 3289-3303 (2002).
S. Boughn and M. Reginatto, “Another Look Through Heisenberg’s Microscope", [*Eur. J. Phys.*]{} in press https://doi.org/10.1088/1361-6404/aaa33f also https://arxiv.org/abs/1712.08579 (2017).
I. Volovich, “Randomness in Classical Mechanics and Quantum Mechanics", [*Found. Phys.*]{} **41**, 516-528 (2011).
N. Bohr, [*Essays 1958-1962 on Atomic Physics and Human Knowledge*]{}, (Wiley, New York, 1963).
S. Boughn and M. Reginatto, “A pedestrian approach to the measurement problem in quantum mechanics", [*European Physical Journal H*]{}, **38**, 443-470 (2013).
M. Schlosshauer and K. Camilleri, “The quantum-to-classical transition: Bohr’s doctrine of classical concepts, emergent classicality, and decoherence", e-print arXiv:0804.1609 \[quant-ph\] (2008).
W. Furry, in [*On the Foundations of Quantum Mechanics*]{} (Physics Department, Xavier University, 1962).
E. Wigner, in [*On the Foundations of Quantum Mechanics*]{} (Physics Department, Xavier University, 1962).
[^1]: sboughn@haverford.edu
[^2]: The full David Mermin quote is “If I were forced to sum up in one sentence what the Copenhagen interpretation says to me, it would be ‘Shut up and calculate!’"[@Me1989]
[^3]: Lucretius was a disciple of the Greek atomist Epicurus and his predecessors Democritus and Leucippus[@Be2011].
[^4]: It is interesting that in 1899, the year before his seminal Planck’s Law paper, Planck introduced the constant that bears his name (although he gave it the symbol $b$) from Paschen’s fit of spectral data to Wien’s Law. Even then he identified it as a fundamental constant of nature alongside $e$, $c$, and $G$.[@Pl1899]
[^5]: Einstein’s quantitative prediction was confirmed by the 1914 experiments of Robert Millikan.
[^6]: To be sure, the Planck energy $\sqrt{\frac{\hbar c^{5}}{G}}$ is fundamental but is much too large ($2 \times 10^{9}$ Joules) to be relevant on atomic scales.
[^7]: In fact, sensible measurements can always be found that seemingly violate the quantum hypothesis as well as the Heisenberg uncertainty principle; however, these invariably involve the inappropriate application of quantum mechanics. Dyson [@Dy2002] has given several examples of such measurements.
[^8]: The relativistic case is much more complicated[@Ki1993]; however, even then there are individual cases in which action angle variables are Lorentz invariant[@Tr1999]. Fock[@Fo1966] demonstrated that the geodesic equation of general relativity can be expressed in terms of the Hamilton-Jacobi equation and action function S.
[^9]: Separable refers to problems for which Hamilton’s principle function $S$ can be written as a sum of functions, each depending on a single generalized coordinate, i.e., $S= \sum S_k(q_k)$.
[^10]: Although, it is possible to derive a similar set of quantization conditions from today’s quantum mechanics, i.e., $\oint \sum p_kdq_k = (n + m/4)h$, where $n$ is an arbitrary integer and $m$ is an integer related to the caustic structure of $S$.[@St2005; @Ke1958]
[^11]: Einstein already indicated such a problem in 1917 but it wasn’t until years later that its significance to quantum mechanics, in the context of quantum chaos, was realized. Gutzwiller later demonstrated that despite the absence of a canonical quantization scheme for such cases “strong classical-quantum correspondences exist even for chaotic systems."[@St2005]
[^12]: Because the mass of the grating is very large, the momentum and energy of the scattered photon, $p_{\gamma}$ and $p_{\gamma}c$, do not change, a result that follows from the Compton effect discovered only a few months earlier.
[^13]: It was Einstein who first suggested that the intensity of electromagnetic waves was a measure of the probability of the location of photons. Born extended this notion to particles with a similar interpretation of the wave functions of Schrödinger’s equation.[@St2013]
[^14]: In fact, Einstein, Podolsky, and Rosen[@Ei1935] maintained “that the description of reality as given by a \[quantum\] wave function is not complete."
[^15]: Heisenberg also argued that similar indeterminacy relations occurred for all conjugate pairs of observable quantities.
[^16]: In fact, some experimental uncertainties are routinely included in quantum mechanical calculations expressed as the weightings in [*mixed states*]{}.
[^17]: Note that here I’ve replaced Heisenberg’s $\delta x \delta p \sim h$ with the usual $\delta x \delta p \geq \hbar/2$, which is derived from the corresponding quantum mechanical commutation relation.
---
abstract: 'We propose a novel class of network models for temporal dyadic interaction data. Our objective is to capture important features often observed in social interactions: sparsity, degree heterogeneity, community structure and reciprocity. We use mutually-exciting Hawkes processes to model the interactions between each (directed) pair of individuals. The intensity of each process allows interactions to arise as responses to opposite interactions (reciprocity), or due to shared interests between individuals (community structure). For sparsity and degree heterogeneity, we build the non time dependent part of the intensity function on compound random measures following [@Todeschini2016]. We conduct experiments on real-world temporal interaction data and show that the proposed model outperforms competing approaches for link prediction, and leads to interpretable parameters.'
author:
- |
Xenia Miscouridou$^{\mathbf{1}}$, François Caron$^{\mathbf{1}}$, Yee Whye Teh$^{\mathbf{1,2}}$\
$^{1}$Department of Statistics, University of Oxford\
$^{2}$DeepMind\
`{miscouri, caron, y.w.teh}@stats.ox.ac.uk`\
bibliography:
- '../biblio.bib'
title: 'Modelling sparsity, heterogeneity, reciprocity and community structure in temporal interaction data'
---
Introduction
============
There is a growing interest in modelling and understanding temporal dyadic interaction data. Temporal interaction data take the form of time-stamped triples $(t,i,j)$ indicating that an interaction occurred between individuals $i$ and $j$ at time $t$. Interactions may be directed or undirected. Examples of such interaction data include commenting on a post on an online social network, exchanging an email, or meeting in a coffee shop. An important challenge is to understand the structure that underpins these interactions. To do so, it is important to develop statistical network models with interpretable parameters that capture the properties observed in real social interaction data.
One important aspect to capture is the *community structure* of the interactions. Individuals are often affiliated to some latent communities (e.g. work, sport, etc.), and their affiliations determine their interactions: they are more likely to interact with individuals sharing the same interests than with individuals affiliated with different communities. Another important aspect is *reciprocity*. Many events are responses to recent events in the opposite direction. For example, if Helen sends an email to Mary, then Mary is more likely to send an email to Helen shortly afterwards. A number of papers have proposed statistical models to capture both community structure and reciprocity in temporal interaction data [@Blundell2012; @Dubois2013; @Linderman2014]. They use models based on Hawkes processes for capturing reciprocity and stochastic block-models or latent feature models for capturing community structure.
In addition to the above two properties, it is important to capture the global properties of the interaction data. Interaction data are often *sparse*: only a small fraction of the pairs of nodes actually interact. Additionally, they typically exhibit high degree (number of interactions per node) *heterogeneity*: some individuals have a large number of interactions, whereas most individuals have very few, therefore resulting in empirical degree distributions being heavy-tailed. As shown by @Karrer2011, @Gopalan2013 and @Todeschini2016, failing to account explicitly for degree heterogeneity in the model can have devastating consequences on the estimation of the latent structure.
Recently, two classes of statistical models, based on random measures, have been proposed to capture sparsity and power-law degree distribution in network data. The first one is the class of models based on exchangeable random measures [@Caron2017; @Veitch2015; @Herlau2015; @Borgs2016; @Todeschini2016; @Palla2016; @Janson2017]. The second one is the class of edge-exchangeable models [@Crane2015; @Crane2017; @Cai2016; @Williamson2016; @Janson2017a; @Ng2017]. Both classes of models can handle both sparse and dense networks and, although the two constructions are different, connections have been highlighted between the two approaches [@Cai2016; @Janson2017a].
The objective of this paper is to propose a class of statistical models for temporal dyadic interaction data that can capture all the desired properties mentioned above, which are often found in real-world interactions. These are *sparsity*, *degree heterogeneity*, *community structure* and *reciprocity*. Combining all the properties in a single model is non-trivial and there is no such construction to our knowledge. The proposed model generalises existing reciprocating relationships models [@Blundell2012] to the sparse and power-law regime. Our model can also be seen as a natural extension of the classes of models based on exchangeable random measures and edge-exchangeable models and it shares properties of both families. The approach is shown to outperform alternative models for link prediction on a variety of temporal network datasets.
The construction is based on Hawkes processes and the (static) model of @Todeschini2016 for sparse and modular graphs with overlapping community structure. In Section \[sec:background\], we present Hawkes processes and compound completely random measures which form the basis of our model’s construction. The statistical model for temporal dyadic data is presented in Section \[sec:model\] and its properties derived in Section \[sec:properties\]. The inference algorithm is described in Section \[sec:inference\]. Section \[sec:experiments\] presents experiments on four real-world temporal interaction datasets.
Background material {#sec:background}
===================
Hawkes processes
----------------
Let $(t_k)_{k\geq 1}$ be a sequence of event times with $t_k\geq 0$, and let $\mathcal H_t=(t_k|t_k\leq t )$ denote the subset of event times between time $0$ and time $t$. Let $N(t)=\sum_{k\geq 1}1_{t_k\leq t}$ denote the number of events between time $0$ and time $t$, where $1_{A}=1$ if $A$ is true, and 0 otherwise. Assume that $N(t)$ is a counting process with conditional intensity function $\lambda(t)$, that is, for any $t\geq 0$ and any infinitesimal interval $dt$ $$\Pr(N(t+dt)-N(t)=1|\mathcal H_t)=\lambda(t)dt.\label{eq:counting}$$ Consider another counting process $\tilde{N}(t)$ with the corresponding $(\tilde{t}_k)_{k\geq 1}, \mathcal{\tilde{H}}_t, \tilde{\lambda}(t)$. Then, $N(t),\tilde{N}(t)$ are mutually-exciting Hawkes processes [@self_exc_HP] if the conditional intensity functions $\lambda(t)$ and $\tilde{\lambda}(t)$ take the form $$\begin{aligned}
\lambda(t)=\mu + \int_0^t g_\phi(t-u)\, d\tilde{N}(u)\label{eq:hawkesintensity}\qquad
\tilde{\lambda}(t)=\tilde{\mu} + \int_0^t {g}_{\tilde{\phi}}(t-u)\, d{N}(u)\end{aligned}$$ where $\mu=\lambda(0)>0, \tilde{\mu}=\tilde{\lambda}(0)>0$ are the base intensities and $g_\phi,g_{\tilde{\phi}}$ are non-negative kernels parameterised by $\phi$ and $\tilde{\phi}$. This defines a pair of processes in which the current rate of events of each process depends on the occurrence of past events of the opposite process.
Assume that $\mu=\tilde{\mu},\, \phi= \tilde{\phi}$ and $g_\phi(t) \geq 0$ for $t > 0$, $g_\phi(t)=0$ for $t<0$. If $g_\phi$ admits a form of fast decay then this results in strong local effects. However, if it prescribes a peak away from the origin then longer term effects are likely to occur. We consider here an exponential kernel $$g_\phi(t-u)= \eta e^{-\delta (t-u)}, t>u
\label{expo_kernel}$$ where $\phi=(\eta,\delta)$. $\eta\geq 0$ determines the sizes of the self-excited jumps and $\delta>0$ is the constant rate of exponential decay. The stationarity condition for the processes is $\eta < \delta$. Figure \[fig:process\_and\_inten\] gives an illustration of two mutually-exciting Hawkes processes with exponential kernel and their conditional intensities.
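For illustration, here is a minimal simulation sketch of such a pair of processes based on Ogata-style thinning; the function name, the conservative intensity bound and the quadratic recomputation of the intensities are our own simplifications for exposition, not part of any reference implementation.

```python
import numpy as np

def simulate_mutually_exciting_pair(mu_ij, mu_ji, eta, delta, T, rng=None):
    """Simulate two mutually-exciting Hawkes processes N_ij, N_ji on [0, T] with
    exponential kernel g(t) = eta * exp(-delta * t), by Ogata-style thinning.
    Returns two sorted lists of event times (i->j events, j->i events)."""
    rng = np.random.default_rng() if rng is None else rng
    times = [[], []]               # times[0]: i->j events, times[1]: j->i events
    mus = [mu_ij, mu_ji]

    def intensity(k, t):
        # lambda_k(t) = mu_k + eta * sum over *opposite* events s < t of exp(-delta (t - s))
        opp = times[1 - k]
        return mus[k] + eta * sum(np.exp(-delta * (t - s)) for s in opp if s < t)

    t = 0.0
    while True:
        # Between events the intensities only decay, so the current total intensity
        # (plus eta, covering a jump at the current time) bounds it until the next point.
        lam_bar = intensity(0, t) + intensity(1, t) + eta
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam0, lam1 = intensity(0, t), intensity(1, t)
        u = rng.uniform(0.0, lam_bar)
        if u < lam0:
            times[0].append(t)     # accepted i->j event
        elif u < lam0 + lam1:
            times[1].append(t)     # accepted j->i event
        # else: candidate point thinned out
    return times[0], times[1]

# e.g. simulate_mutually_exciting_pair(mu_ij=0.2, mu_ji=0.2, eta=0.5, delta=1.0, T=100.0)
```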
Compound completely random measures
-----------------------------------
A homogeneous completely random measure (CRM) [@Kingman1967; @Kingman1993] on $\mathbb R_+$ without fixed atoms nor deterministic component takes the form $$\begin{aligned}
W=\sum_{i\geq 1}w_i \delta_{\theta_i}\end{aligned}$$ where $(w_i,\theta_i)_{i\geq 1}$ are the points of a Poisson process on $(0,\infty)\times\mathbb R_+$ with mean measure $\rho(dw)H(d\theta)$ where $\rho$ is a Lévy measure, $H$ is a locally bounded measure and $\delta_x$ is the Dirac delta mass at $x$. The homogeneous CRM is completely characterized by $\rho$ and $H$, and we write $W\sim \operatorname{CRM}(\rho,H)$, or simply $W\sim \operatorname{CRM}(\rho)$ when $H$ is taken to be the Lebesgue measure. @Griffin2017 proposed a multivariate generalisation of CRMs, called compound CRM (CCRM). A compound CRM $(W_1,\ldots,W_p)$ with independent scores is defined as $$\begin{aligned}
W_k=\sum_{i\geq 1}{w_{ik}}\delta_{\theta_i},~~k=1,\ldots,p\end{aligned}$$ where $w_{ik}=\beta_{ik}w_{i0}$ and the scores $\beta_{ik}\geq 0$ are independently distributed from some probability distribution $F_k$ and $W_0=\sum_{i\geq 1}w_{i0}\delta_{\theta_i}$ is a CRM with mean measure $\rho_0(dw_0)H_0(d\theta)$. In the rest of this paper, we assume that $F_k$ is a gamma distribution with parameters $(a_k,b_k)$, $H_0(d\theta)=d\theta$ is the Lebesgue measure and $\rho_0$ is the Lévy measure of a generalized gamma process $$\begin{aligned}
\rho_0(dw)=\frac{1}{\Gamma(1-\sigma)}w^{-1-\sigma}e^{-\tau w}dw\label{eq:levyGGP}\end{aligned}$$ where $\sigma\in(-\infty, 1)$ and $\tau>0$.
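As an illustration of this construction, the following sketch draws a compound CRM with gamma scores; it is restricted to the finite-activity regime $\sigma<0$, where the generalized gamma base measure reduces to a compound Poisson process with $\operatorname{Gamma}(-\sigma,\tau)$ jumps (the infinite-activity regime $\sigma\geq 0$ has infinitely many atoms and requires truncation or specialized samplers, which are not covered here).

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def sample_compound_crm(alpha, sigma, tau, a, b, rng=None):
    """Draw (theta_i, w_{i0}, w_{i1},...,w_{ip}) on [0, alpha] from a compound CRM
    with gamma scores, in the finite-activity case sigma < 0 only."""
    assert sigma < 0, "this simple sketch only covers the finite-activity regime"
    rng = np.random.default_rng() if rng is None else rng
    a, b = np.atleast_1d(a).astype(float), np.atleast_1d(b).astype(float)
    # Total mass of rho_0 when sigma < 0: Gamma(-sigma) * tau**sigma / Gamma(1 - sigma)
    mass = gamma_fn(-sigma) * tau**sigma / gamma_fn(1.0 - sigma)
    n_atoms = rng.poisson(alpha * mass)
    theta = rng.uniform(0.0, alpha, size=n_atoms)                     # node labels
    w0 = rng.gamma(shape=-sigma, scale=1.0 / tau, size=n_atoms)       # sociabilities w_{i0}
    beta = rng.gamma(shape=a, scale=1.0 / b, size=(n_atoms, len(a)))  # scores beta_{ik}
    W = w0[:, None] * beta                                            # w_{ik} = w_{i0} * beta_{ik}
    return theta, w0, W

# e.g. theta, w0, W = sample_compound_crm(alpha=30.0, sigma=-0.5, tau=1.0,
#                                         a=[0.2, 0.2], b=[1.0, 1.0])
```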
Statistical model for temporal interaction data {#sec:model}
===============================================
Consider temporal interaction data of the form $\mathcal D=(t_k,i_k,j_k)_{k\geq 1}$ where $(t_k,i_k,j_k)\in \mathbb R_+\times \mathbb N_*^2$ represents a directed interaction at time $t_k$ from node/individual $i_k$ to node/individual $j_k$. For example, the data may correspond to the exchange of messages between students on an online social network.
We use a point process $(t_k,U_k,V_k)_{k\geq 1}$ on $\mathbb R^3_+$, and consider that each node $i$ is assigned some continuous label $\theta_i\geq 0$. The labels are only used for the model construction, similarly to [@Caron2017; @Todeschini2016], and are neither observed nor inferred from the data. A point at location $(t_k,U_k,V_k)$ indicates that there is a directed interaction between the nodes $U_k$ and $V_k$ at time $t_k$. See Figure \[fig:model\] for an illustration.
For a pair of nodes $i$ and $j$, with labels $\theta_i$ and $\theta_j$, let $N_{ij}(t)$ be the counting process $$\begin{aligned}
N_{ij}(t)=\sum_{k|(U_k,V_k)=(\theta_i,\theta_j)}1_{t_k\leq t}
\label{eq:PointProcess}\end{aligned}$$ for the number of interactions between $i$ and $j$ in the time interval $[0,t]$. For each pair $i,j$, the counting processes $N_{ij}, N_{ji}$ are mutually-exciting Hawkes processes with conditional intensities $$\begin{aligned}
\lambda_{ij}(t)=\mu_{ij} + \int_0^t g_\phi(t-u)\, dN_{ji}(u),\qquad \lambda_{ji}(t)=\mu_{ji} + \int_0^t g_\phi(t-u)\, dN_{ij}(u)
\label{eq:intensity}\end{aligned}$$ where $g_\phi$ is the exponential kernel defined in Equation . Interactions from individual $i$ to individual $j$ may arise as a response to past interactions from $j$ to $i$ through the kernel $g_\phi$, or via the base intensity $\mu_{ij}$. We also model assortativity so that individuals with similar interests are more likely to interact than individuals with different interests. For this, assume that each node $i$ has a set of positive latent parameters $(w_{i1},\ldots,w_{ip})\in \mathbb R_+^p$, where $w_{ik}$ is the level of its affiliation to each latent community $k=1,\ldots,p$. The number of communities $p$ is assumed known. We model the base rate $$\mu_{ij}=\mu_{ji}=\sum_{k=1}^p w_{ik}w_{jk}.\label{eq:muij}$$Two nodes with high levels of affiliation to the same communities will be more likely to interact than nodes with affiliation to different communities, favouring assortativity.
In order to capture sparsity and power-law properties, and following @Todeschini2016, the set of affiliation parameters $(w_{i1},\ldots,w_{ip})$ and node labels $\theta_i$ is modelled via a compound CRM with gamma scores, that is $W_0 =\sum_{i=1}^{\infty}w_{i0} \delta_{\theta_{i}} \sim\text{CRM} (\rho_{0})$ where the Lévy measure $\rho_0$ is defined by Equation , and for each node $i\geq 1$ and community $k=1,\ldots,p$ $$\begin{aligned}
w_{ik}= w_{i0} \beta_{ik}\text{, where }\beta_{ik} \overset{\text{ind}}{\sim}\operatorname{Gamma}(a_k,b_k).
\label{weights}\end{aligned}$$ The parameter $w_{i0}\geq 0$ is a degree correction for node $i$ and can be interpreted as measuring the overall popularity/sociability of a given node $i$ irrespective of its level of affiliation to the different communities. An individual $i$ with a high sociability parameter $w_{i0}$ will be more likely to have interactions overall than individuals with low sociability parameters. The scores $\beta_{ik}$ tune the level of affiliation of individual $i$ to the community $k$. The model is defined on $\mathbb R_+^3$. We assume that we observe interactions over a subset $[0,T]\times[0,\alpha]^2\subseteq \mathbb R_+^3$ where $\alpha$ and $T$ tune both the number of nodes and number of interactions. The whole model is illustrated in Figure \[fig:model\].
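Given a matrix of affiliation weights, the base rates above reduce to a simple matrix product; the sketch below zeroes the diagonal, matching the fact that the sums in the sequel run over $i\neq j$ (self-interactions are not modelled).

```python
import numpy as np

def base_rates(W):
    """Given the V x p matrix of affiliation weights w_{ik}, return the symmetric
    matrix of base intensities mu_{ij} = sum_k w_{ik} w_{jk}, with zero diagonal."""
    mu = W @ W.T
    np.fill_diagonal(mu, 0.0)
    return mu
```

Together with the pairwise Hawkes simulator sketched earlier, `mu[i, j]` and `mu[j, i]` would serve as the base intensities of the mutually-exciting pair for nodes $i$ and $j$.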
![Degree distribution for a Hawkes graph with different values of $\sigma$ (left) and $\tau$ (right). The degree of a node $i$ is defined as the number of nodes with whom it has at least one interaction. The value $\sigma$ tunes the slope of the degree distribution, larger values corresponding to a higher slope. The parameter $\tau$ tunes the exponential cut-off in the degree distribution.[]{data-label="fig:degree"}](figures/tau_and_sigma_degreea "fig:"){width=".48\textwidth"} ![](figures/tau_and_sigma_degreeb "fig:"){width=".48\textwidth"}
The model admits the following set of hyperparameters, with the following interpretation:\
$\bullet$ The hyperparameters $\phi=(\eta,\delta)$ where $\eta\geq 0$ and $\delta\geq 0$ of the kernel $g_\phi$ tune the *reciprocity*.\
$\bullet$ The hyperparameters $(a_k,b_k)$ tune the *community structure* of the interactions. $a_k/b_k=\mathbb E[\beta_{ik}]$ tunes the size of community $k$ while $a_k/b_k^2=\text{var}(\beta_{ik})$ tunes the variability of the level of affiliation to this community; larger values imply more separated communities.\
$\bullet$ The hyperparameter $\sigma$ tunes the *sparsity* and the *degree heterogeneity*: larger values imply higher sparsity and heterogeneity. It also tunes the slope of the degree distribution. Parameter $\tau$ tunes the exponential cut-off in the degree distribution. This is illustrated in Figure \[fig:degree\].\
$\bullet$ Finally, the hyperparameters $\alpha$ and $T$ tune the overall number of interactions and nodes.
We follow [@Rasmussen2013] and use vague Exponential$(0.01)$ priors on $\eta$ and $\delta$. Following @Todeschini2016, we set vague Gamma$(0.01, 0.01)$ priors on $\alpha$, $1-\sigma$, $\tau$, $a_k$ and $b_k$. The right limit of the time window, $T$, is considered known.
Properties {#sec:properties}
==========
Connection to sparse vertex-exchangeable and edge-exchangeable models {#sec:properties2}
---------------------------------------------------------------------
The model is a natural extension of sparse vertex-exchangeable and edge-exchangeable graph models. Let $z_{ij}(t)=1_{N_{ij}(t)+N_{ji}(t)>0}$ be a binary variable indicating if there is at least one interaction in $[0,t]$ between nodes $i$ and $j$ in either direction. We have $$\Pr(z_{ij}(t)=1|(w_{ik},w_{jk})_{k=1,\ldots,p})=1-e^{-2t\sum_{k=1}^p w_{ik}w_{jk}},$$ which corresponds to the probability of a connection in the static simple graph model proposed by @Todeschini2016. Additionally, for fixed $\alpha>0$ and $\eta=0$ (no reciprocal relationships), the model corresponds to a rank-$p$ extension of the rank-1 Poissonized version of edge-exchangeable models considered by @Cai2016 and @Janson2017. The sparsity properties of our model follow from the sparsity properties of these two classes of models.
Sparsity
--------
The size of the dataset is tuned by both $\alpha$ and $T$. Given these quantities, both the number of interactions and the number of nodes with at least one interaction are random variables. We now study the behaviour of these quantities, showing that the model exhibits sparsity. Let $I_{\alpha,T}, E_{\alpha,T}, V_{\alpha,T}$ be the overall number of interactions between nodes with label $\theta_i\leq \alpha$ until time $T$, the total number of pairs of nodes with label $\theta_i\leq \alpha$ who had at least one interaction before time $T$, and the number of nodes with label $\theta_i\leq \alpha$ who had at least one interaction before time $T$ respectively.
$$\begin{aligned}
\begin{split}
V_{\alpha,T}&=\sum_{i}1_{\sum_{j\neq i} N_{ij}(T)1_{\theta_j\leq \alpha} >0}1_{\theta_i\leq \alpha}
\qquad I_{\alpha,T}=\sum_{i\neq j}N_{ij}(T)1_{\theta_i\leq \alpha}1_{\theta_j\leq \alpha} \\
E_{\alpha,T}&=\sum_{i<j}1_{N_{ij}(T)+N_{ji}(T)>0}1_{\theta_i\leq \alpha}1_{\theta_j\leq \alpha}
\end{split}\end{aligned}$$
We provide in the supplementary material a theorem for the exact expectations of $I_{\alpha,T}$, $E_{\alpha,T}$ and $V_{\alpha,T}$. Now consider the asymptotic behaviour of the expectations of $V_{\alpha,T}$, $E_{\alpha,T}$ and $I_{\alpha,T}$, as $\alpha$ and $T$ go to infinity.[^1] First, consider fixed $T>0$ and let $\alpha$ tend to infinity. Then, $$\begin{aligned}
\mathbb E[ V_{\alpha,T}]= \left\{
\begin{array}{ll}
\Theta( \alpha) & \text{if }\sigma<0 \\
\omega(\alpha) & \text{if }\sigma\geq 0
\end{array}
\right.,\qquad
\mathbb E[ E_{\alpha,T}]=\Theta(\alpha^2),\qquad
\mathbb E[ I_{\alpha,T}]=\Theta(\alpha^2)\end{aligned}$$ as $\alpha$ tends to infinity. For $\sigma<0$, the number of edges and interactions grows quadratically with the number of nodes, and we are in the dense regime. When $\sigma\geq 0$, the number of edges and interaction grows subquadratically, and we are in the sparse regime. Higher values of $\sigma$ lead to higher sparsity. For fixed $\alpha$, $$\begin{aligned}
\mathbb E[ V_{\alpha,T}]= \left\{
\begin{array}{ll}
\Theta( 1 ) & \text{if }\sigma<0 \\
\Theta(\log T) & \text{if }\sigma=0 \\
\Theta( T^{\sigma}) & \text{if }\sigma>0 \\
\end{array}
\right.,\qquad
\mathbb E[ E_{\alpha,T}]= \left\{
\begin{array}{ll}
\Theta( 1 ) & \text{if }\sigma<0 \\
O(\log T) & \text{if }\sigma=0 \\
O( T^{3\sigma/2} ) & \text{if }\sigma\in(0,1/2] \\
O( T^{(1+\sigma)/2}) & \text{if }\sigma\in(1/2,1) \\
\end{array}
\right.\end{aligned}$$ and $\mathbb E[ I_{\alpha,T}]=\Theta(T )$ as $T$ tends to infinity. Sparsity in $T$ arises when $\sigma\geq 0$ for the number of edges and when $\sigma> 1/2$ for the number of interactions. The derivation of the asymptotic behaviour of expectations of $ V_{\alpha,T}$, $E_{\alpha,T}$ and $I_{\alpha,T}$ follows the lines of the proofs of Theorems 3 and 5.3 in [@Todeschini2016] $(\alpha\rightarrow\infty)$ and Lemma D.6 in the supplementary material of [@Cai2016] $(T\rightarrow\infty)$, and is omitted here.
Approximate Posterior Inference {#sec:inference}
===============================
Assume a set of observed interactions $\mathcal D=(t_k,i_k,j_k)_{k\geq 1}$ between $V$ individuals over a period of time $T$. The objective is to approximate the posterior distribution $\pi(\phi,\xi|\mathcal D)$ where $\phi$ are the kernel parameters and $\xi=((w_{ik})_{i=1,\ldots,V,k=1,\ldots,p},(a_k,b_k)_{k=1,\ldots,p},\alpha,\sigma,\tau)$ are the parameters and hyperparameters of the compound CRM. One possibility is to follow an approach similar to that of [@Rasmussen2013] and derive a Gibbs sampler using a data augmentation scheme which associates a latent variable with each interaction. However, such an algorithm is unlikely to scale well with the number of interactions. Additionally, we can make use of existing code for posterior inference with Hawkes processes and graphs based on compound CRMs, and therefore propose a two-step approximate inference procedure, motivated by modular Bayesian inference [@Jacob2017].
Let $\mathcal Z=(z_{ij}(T))_{1\leq i,j\leq V}$ be the adjacency matrix defined by $z_{ij}(T)=1$ if there is at least one interaction between $i$ and $j$ in the interval $[0,T]$, and 0 otherwise. We have $$\begin{aligned}
\pi(\phi,\xi|\mathcal D)&=\pi(\phi,\xi|\mathcal D,\mathcal Z)=\pi(\xi|\mathcal D,\mathcal Z)\pi(\phi|\xi ,\mathcal D).\end{aligned}$$ The idea of the two-step procedure is to (i) approximate $\pi(\xi|\mathcal D,\mathcal Z)$ by $\pi(\xi|\mathcal Z)$ and obtain a Bayesian point estimate $\widehat\xi$, and then (ii) approximate $\pi(\phi|\xi ,\mathcal D)$ by $\pi(\phi|\widehat \xi ,\mathcal D)$.
The full posterior is thus approximated by $\widetilde\pi(\phi,\xi)=\pi(\xi|\mathcal Z)\pi(\phi|\widehat \xi ,\mathcal D).$ As mentioned in Section \[sec:properties2\], the statistical model for the binary adjacency matrix $\mathcal Z$ is the same as in [@Todeschini2016]. We use the MCMC scheme of [@Todeschini2016] and the accompanying software package SNetOC[^2] to perform inference. The MCMC sampler is a Gibbs sampler which uses a Metropolis-Hastings (MH) step to update the hyperparameters and a Hamiltonian Monte Carlo (HMC) step for the parameters. From the posterior samples we compute a point estimate $(\widehat w_{i1},\ldots,\widehat w_{ip})$ of the weight vector for each node. We follow the approach of @Todeschini2016 and compute a minimum Bayes risk point estimate using a permutation-invariant cost function. Given these point estimates we obtain estimates of the base intensities $\widehat \mu_{ij}$. Posterior inference on the parameters $\phi$ of the Hawkes kernel is then performed using Metropolis-Hastings, as in [@Rasmussen2013]. Details of the two-stage inference procedure are given in the supplementary material.
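To fix ideas, here is a minimal sketch of the second step, assuming the point estimates $\widehat\mu_{ij}$ from the first step are available; the function names, initialisation and random-walk step size are illustrative choices, and this is not the released HawkesNetOC code.

```python
import numpy as np

def pair_loglik(eta, delta, mu, t_ij, t_ji, T):
    """Log-likelihood on [0, T] of the two mutually-exciting processes of one node pair,
    with common base rate mu and kernel g(t) = eta * exp(-delta * t).
    t_ij, t_ji: numpy arrays of event times in each direction."""
    ll = 0.0
    for own, opp in ((t_ij, t_ji), (t_ji, t_ij)):
        for t in own:                                   # sum of log-intensities at events
            excitation = eta * np.sum(np.exp(-delta * (t - opp[opp < t])))
            ll += np.log(mu + excitation)
        # compensator: int_0^T lambda = mu*T + (eta/delta) * sum_l (1 - exp(-delta*(T - t_l)))
        ll -= mu * T + (eta / delta) * np.sum(1.0 - np.exp(-delta * (T - opp)))
    return ll

def log_posterior(log_eta, log_delta, pairs, T, prior_rate=0.01):
    """Unnormalised log-posterior of (log eta, log delta) given the estimated base rates.
    pairs: list of (mu_hat, t_ij, t_ji); pairs with no events only add constants in
    (eta, delta) and can be omitted. Exponential(prior_rate) priors on eta and delta."""
    eta, delta = np.exp(log_eta), np.exp(log_delta)
    if eta >= delta:                       # enforce the stationarity condition eta < delta
        return -np.inf
    ll = sum(pair_loglik(eta, delta, mu, tij, tji, T) for mu, tij, tji in pairs)
    return ll - prior_rate * (eta + delta) + log_eta + log_delta   # priors + log-Jacobian

def sample_kernel_parameters(pairs, T, n_iter=5000, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings on (log eta, log delta)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.log(np.array([0.5, 1.0]))       # illustrative initial (eta, delta)
    lp = log_posterior(x[0], x[1], pairs, T)
    samples = []
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal(2)
        lp_prop = log_posterior(prop[0], prop[1], pairs, T)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(np.exp(x))
    return np.array(samples)               # posterior draws of (eta, delta)
```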
**Empirical investigation of posterior consistency.** To validate the two-step approximation to the posterior distribution, we study empirically the convergence of our approximate inference scheme using synthetic data. Experiments suggest that the posterior concentrates around the true parameter value. More details are given in the supplementary material.
Experiments {#sec:experiments}
===========
We perform experiments on four temporal interaction datasets from the Stanford Large Network Dataset Collection[^3] [@SNAP2014]:\
$\bullet$ The [Email]{} dataset consists of emails sent within a large European research institution over 803 days. There are 986 nodes, 24929 edges and 332334 interactions. A separate interaction is created for every recipient of an email.\
$\bullet$ The [College]{} dataset consists of private messages sent over a period of 193 days on an online social network (Facebook-like platform) at the University of California, Irvine. There are 1899 nodes, 20296 edges and 59835 interactions. An interaction $(t,i,j)$ corresponds to a user $i$ sending a private message to another user $j$ at time $t$.\
$\bullet$ The [Math]{} dataset is a temporal network of interactions on the stack exchange website Math Overflow over 2350 days. There are 24818 nodes, 239978 edges and 506550 interactions. An interaction $(t,i,j)$ means that a user $i$ answered another user $j$’s question at time $t$, or commented on another user $j$’s question/response.\
$\bullet$ The [Ubuntu]{} dataset is a temporal network of interactions on the stack exchange website Ask Ubuntu over 2613 days. There are 159316 nodes, 596933 edges and 964437 interactions. An interaction $(t,i,j)$ means that a user $i$ answered another user $j$’s question at time $t$, or commented on another user $j$’s question/response.

**Comparison on link prediction.** We compare our model (Hawkes-CCRM) to five other benchmark methods: (i) our model, without the Hawkes component (obtained by setting $\eta=0$), (ii) the Hawkes-IRM model of @Blundell2012 which uses an infinite relational model (IRM) to capture the community structure together with a Hawkes process to capture reciprocity, (iii) the same model, called Poisson-IRM, without the Hawkes component, (iv) a simple Hawkes model where the conditional intensity given by Equation is assumed to be the same for each pair of individuals, with unknown parameters $\delta$ and $\eta$, (v) a simple Poisson process model, which assumes that interactions between two individuals arise at an unknown constant rate. Each of these competing models captures a subset of the features we aim to capture in the data: sparsity/heterogeneity, community structure and reciprocity, as summarized in Table \[tab:results\_and\_prop\].
------------- ------------- ------------- ------------- -------------
              email         college       math          ubuntu
Hawkes-CCRM   **10.95**     **1.88**      **20.07**     **29.1**
CCRM          $12.08$       $2.90$        $89.0$        $36.5$
Hawkes-IRM    $14.2$        $3.56$        $96.9$        $59.5$
Poisson-IRM   $31.7$        $15.7$        $204.7$       $79.3$
Hawkes        $154.8$       $153.29$      $220.10$      $191.39$
Poisson       $\sim10^3$    $\sim10^4$    $\sim10^4$    $\sim10^4$
------------- ------------- ------------- ------------- -------------
: (Left) Performance in link prediction. (Right) Properties captured by the different models.
\[tab:results\_and\_prop\]
------------- ------------------------ --------------------- -------------
              sparsity/heterogeneity   community structure   reciprocity
Hawkes-CCRM   ✓                        ✓                     ✓
CCRM          ✓                        ✓
Hawkes-IRM                             ✓                     ✓
Poisson-IRM                            ✓
Hawkes                                                       ✓
Poisson
------------- ------------------------ --------------------- -------------
We perform posterior inference using a Markov chain Monte Carlo algorithm. For our Hawkes-CCRM model, we follow the two-step procedure described in Section \[sec:inference\]. For each dataset, there is some background information to guide the choice of the number $p$ of communities. The number of communities $p$ is set to $p=4$ for the [Email]{} dataset, as there are 4 departments at the institution, $p=2$ for the [College]{} dataset corresponding to the two genders, and $p=3$ for the [Math]{} and [Ubuntu]{} datasets, corresponding to the three different types of possible interactions. We follow @Todeschini2016 regarding the choice of the MCMC tuning parameters and initialise the MCMC algorithm with the estimates obtained by running an MCMC algorithm with $p = 1$ feature and fewer iterations. For all experiments we run 2 chains in parallel for each stage of the inference. We use 100000 iterations for the first stage and 10000 for the second one. For the Hawkes-IRM model, we also use a similar two-step procedure, which first obtains a point estimate of the parameters and hyperparameters of the IRM, then estimates the parameters of the Hawkes process given this point estimate. This allows us to scale this approach to the large datasets considered. We use the same number of MCMC samples as for our model for each step.
We compare the different algorithms on link prediction. For each dataset, we make a train-test split in time so that the training dataset contains $85 \%$ of the total temporal interactions. We use the training data for parameter learning and then use the estimated parameters to perform link prediction on the held-out test data. We report the root mean square error between the predicted and true number of interactions for each directed pair in the test set. The results are reported in Table \[tab:results\_and\_prop\]. On all the datasets, the proposed Hawkes-CCRM approach outperforms other methods. Interestingly, the addition of the Hawkes component brings improvement for both the IRM-based model and the CCRM-based model.
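As an indication of how such a comparison can be computed, the sketch below predicts the number of interactions of each directed pair on the held-out window by the stationary mean rate $\widehat\mu_{ij}\,\widehat\delta/(\widehat\delta-\widehat\eta)$ of the fitted mutually-exciting pair; this is one simple prediction rule used for illustration, and details such as the exact set of pairs evaluated are not spelled out here.

```python
import numpy as np

def rmse_link_prediction(mu_hat, eta_hat, delta_hat, test_counts, dT):
    """RMSE between predicted and observed interaction counts per directed pair on a
    held-out window of length dT. mu_hat and test_counts are dicts keyed by (i, j)."""
    boost = delta_hat / (delta_hat - eta_hat)   # stationary boost due to reciprocity
    sq_errors = [(mu_hat.get(pair, 0.0) * boost * dT - count) ** 2
                 for pair, count in test_counts.items()]
    return float(np.sqrt(np.mean(sq_errors)))
```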
**Community structure, degree distribution and sparsity.** Our model also estimates the latent structure in the data through the weights $w_{ik}$, representing the level of affiliation of a node $i$ to a community $k$. For each dataset, we order the nodes by their highest estimated feature weight, obtaining a clustering of the nodes. We represent the ordered matrix $\left(z_{ij}\left(T\right)\right)$ of binary interactions in Figure \[fig:adjacency\] (a)-(d). This shows that the method can uncover the latent community structure in the different datasets. Within each community, nodes still exhibit degree heterogeneity, as shown in Figure \[fig:adjacency\] (e)-(h), where the nodes are sorted within each block according to their estimated sociability $\widehat w_{i0}$. The ability of the approach to uncover latent structure was illustrated by @Todeschini2016, who demonstrated that models which do not account for degree heterogeneity cannot recover the latent community structure but rather cluster the nodes based on their degree. We also look at the posterior predictive degree distribution based on the estimated hyperparameters, and compare it to the empirical degree distribution in the test set. The results are reported in Figure \[fig:adjacency\] (i)-(l), showing a reasonable fit to the degree distribution. Finally, we report the $95\%$ posterior credible intervals (PCI) for the sparsity parameter $\sigma$ for all datasets. The PCIs are $(-0.69, -0.50)$, $(-0.35, -0.20)$, $(0.15, 0.18)$ and $(0.51, 0.57)$ respectively. The range of $\sigma$ is $(-\infty,1)$. The [Email]{} and [College]{} datasets give negative values, corresponding to denser networks, whereas the [Math]{} and [Ubuntu]{} datasets are sparser.
Conclusion
==========
We have presented a novel statistical model for temporal interaction data which captures multiple important features observed in such datasets, and shown that our approach outperforms competing models in link prediction. The model could be extended in several directions. One could consider asymmetry in the base intensities $\mu_{ij}\neq \mu_{ji}$ and/or a bilinear form as in [@Zhou2015]. Another important extension would be the estimation of the number of latent communities/features $p$.
**Acknowledgments.** The authors thank the reviewers and area chair for their constructive comments. XM, FC and YWT acknowledge funding from the ERC under the European Union’s 7th Framework programme (FP7/2007-2013) ERC grant agreement no. 617071. FC acknowledges support from EPSRC under grant EP/P026753/1 and from the Alan Turing Institute under EPSRC grant EP/N510129/1. XM acknowledges support from the A. G. Leventis Foundation.
Supplementary Material {#supplementary-material .unnumbered}
======================
A: Background on compound completely random measures {#a-background-on-compound-completely-random-measures .unnumbered}
====================================================
We give the necessary background on compound completely random measures (CCRM). An extensive account of this class of models is given in [@Griffin2017]. In this article, we consider a CCRM $(W_1,\ldots,W_p)$ on $\mathbb R_+$ characterized, for any $(t_1,\ldots,t_p)\in\mathbb R_+^p$ and measurable set $A\subset \mathbb R_+$, by $$\mathbb E[e^{-\sum_{k=1}^p t_k W_k(A)}] =\exp (-H_0(A)\psi(t_1,\ldots,t_p))$$ where $H_0$ is the Lebesgue measure and $\psi$ is the multivariate Laplace exponent defined by $$\psi(t_1,\ldots,t_p) = \int_{\mathbb{R}^p_+}(1-e^{-\sum_{k=1}^p t_k w_k})\rho(dw_{1},\ldots,dw_p).
\label{eq:laplaceexponent}$$ The multivariate Lévy measure $\rho$ takes the form $$\rho(dw_{1},\ldots,dw_{p})=\int_{0}^{\infty
}w_{0}^{-p}\prod_{k=1}^p F_k\left( \frac{dw_{k}}{w_{0}}\right)
\rho_{0}(dw_{0})
\label{eq:levymeasure}$$ where $F_k$ is the distribution of a Gamma random variable with parameters $a_k$ and $b_k$ and $\rho_0$ is the Lévy measure on $(0,\infty)$ of a generalized gamma process $$\rho_0(dw_0)=\frac{1}{\Gamma(1-\sigma)}w_0^{-1-\sigma}\exp(-w_0\tau)dw_0$$ where $\sigma\in(-\infty, 1)$ and $\tau>0$.
Denote ${\boldsymbol{w}}_i=(w_{i1},\dots,w_{ip})^T$, ${\boldsymbol{\beta}}_i=\left(\beta_{i1},\dots,\beta_{ip} \right)^T$ and $\rho(d{\boldsymbol{w}})=\rho(dw_1,\dots,dw_p)$. $w_0$ always refers to the scalar weight corresponding to the measure $\rho_0$.
B: Expected number of interactions, edges and nodes {#b-expected-number-of-interactions-edges-and-nodes .unnumbered}
===================================================
Recall that $I_{\alpha,T}, E_{\alpha,T}$ and $V_{\alpha,T}$ are respectively the overall number of interactions between nodes with label $\theta_i\leq \alpha$ until time $T$, the total number of pairs of nodes with label $\theta_i\leq \alpha$ who had at least one interaction before time $T$, and the number of nodes with label $\theta_i\leq \alpha$ who had at least one interaction before time $T$, and are defined as
I_{\alpha,T}&=\sum_{i\neq j}N_{ij}(T)1_{\theta_i\leq \alpha}1_{\theta_j\leq \alpha} \\
E_{\alpha,T}&=\sum_{i<j}1_{N_{ij}(T)+N_{ji}(T)>0}1_{\theta_i\leq \alpha}1_{\theta_j\leq \alpha}\\
V_{\alpha,T}&=\sum_{i}1_{\sum_{j\neq i} N_{ij}(T)1_{\theta_j\leq \alpha} >0}1_{\theta_i\leq \alpha}\end{aligned}$$
\[expected\_num\] The expected number of interactions $I_{\alpha,T}$, edges $E_{\alpha,T}$ and nodes $V_{\alpha,T}$ are given as follows: $$\begin{aligned}
\mathbb{E}[I_{\alpha,T}]{} & = \alpha^2 {\boldsymbol{\mu}}_w^T{\boldsymbol{\mu}}_w \left ( \frac{\delta}{\delta - \eta }T -
\frac{\eta}{(\eta -\delta)^2} \left( 1 - e^{-T(\delta - \eta)}\right)\right )\\
\mathbb{E}[E_{\alpha,T}]&=
\frac{\alpha^2}{2} \int_{\mathbb{R}^p_+} \psi(2Tw_1,\ldots,2Tw_p) \rho(dw_{1},\ldots,dw_p)\\
\mathbb{E}[V_{\alpha,T}]&=
\alpha \int_{\mathbb{R}^p_+} \left (1-e^{- \alpha \psi(2Tw_1,\ldots,2Tw_p)}\right )\rho(dw_1,\ldots,dw_p)\end{aligned}$$ where ${\boldsymbol{\mu}}_w =\int_{\mathbb{R}^p_+} {\boldsymbol{w}}\rho(dw_1,\ldots,dw_p)$.
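For the compound CRM with gamma scores used in this paper, the first expression can be evaluated in closed form: a short computation (not spelled out in the text) gives $\int w_k\,\rho(dw_1,\ldots,dw_p)=(a_k/b_k)\,\tau^{\sigma-1}$ for $\sigma<1$ and $\tau>0$, which the following sketch uses.

```python
import numpy as np

def expected_interactions(alpha, T, sigma, tau, a, b, eta, delta):
    """Evaluate E[I_{alpha,T}] from the theorem above, assuming the stationarity
    condition eta < delta. Uses int w_k rho(dw) = (a_k / b_k) * tau**(sigma - 1)."""
    a, b = np.atleast_1d(a).astype(float), np.atleast_1d(b).astype(float)
    mu_w = (a / b) * tau ** (sigma - 1.0)
    time_factor = (delta / (delta - eta)) * T \
        - (eta / (delta - eta) ** 2) * (1.0 - np.exp(-T * (delta - eta)))
    return alpha ** 2 * float(np.dot(mu_w, mu_w)) * time_factor

# e.g. expected_interactions(alpha=50.0, T=10.0, sigma=0.2, tau=1.0,
#                            a=[0.2, 0.2], b=[1.0, 1.0], eta=0.5, delta=1.0)
```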
The proof of Theorem \[expected\_num\] is given below and follows the lines of Theorem 3 in [@Todeschini2016].
Mean number of nodes $\mathbb{E}[V_{\alpha,T}]$ {#mean-number-of-nodes-mathbbev_alphat .unnumbered}
-----------------------------------------------
We have $$\begin{split}
\mathbb{E} \left[ V_{\alpha,T} \right] &= \mathbb{E} \left[ \sum_i \left(1-1_{N_{ij}(T)=0,\, \forall j\neq i\,|\,\theta_j\leq \alpha}\right) 1_{\theta_i\leq \alpha} \right] \nonumber\\
&= \sum_i \left\{ 1- \mathbb{P}(N_{ij}(T)=0, \forall j\neq i|\theta_j\leq \alpha) \right\}1_{\theta_i\leq \alpha}
\end{split}$$ Using the Palm/Slivnyak-Mecke formula and Campbell’s theorem, see e.g. [@Moller2003 Theorem 3.2] and [@Kingman1993], we obtain $$ \begin{split}
\mathbb{E}[V_{\alpha,T}]&= \mathbb{E} \left[ \mathbb{E} \left[ V_{\alpha,T} |W_1,\ldots,W_p \right] \right] \\
&= \mathbb{E} \left[\sum_i \left( 1- e^{-2T{\boldsymbol{w}}_i^T \sum_{j\neq i} {\boldsymbol{w}}_j 1_{\theta_j \leq \alpha} } \right) 1_{\theta_i \leq \alpha} \right] \\
&= \alpha \int \mathbb{E} \left( 1- e^{-2T{\boldsymbol{w}}^T \sum_j {\boldsymbol{w}}_j 1_{\theta_j \leq \alpha} } \right) \rho(d{\boldsymbol{w}}) \\
&= \alpha
\int_{\mathbb{R}_+^p} \left(1-e^{- \alpha \psi(2Tw_1,\ldots,2Tw_p)}\right) \rho \left(dw_1,\ldots,dw_p \right)
\end{split}
\label{nodes}$$
Mean number of edges $\mathbb{E}[E_{\alpha,T}]$ {#mean-number-of-edges-mathbbee_alphat .unnumbered}
-----------------------------------------------
Using the extended Slivnyak-Mecke formula, see e.g. [@Moller2003 Theorem 3.3], we have $$\begin{split}
\mathbb{E}\left[E_{\alpha,T}\right] &=\mathbb{E}\left[ \mathbb{E}\left[E_{\alpha,T}| W_1,\ldots,W_p \right] \right] \nonumber\\
& = \mathbb{E} \left[ \frac{1}{2}\sum_i 1_{\theta_i\leq \alpha} \sum_{j\neq i} 1_{\theta_j \leq \alpha}(1-e^{-2T{\boldsymbol{w}}_i^T{\boldsymbol{w}}_j})\right] \nonumber\\
&= \frac{\alpha^2 }{2}\int_{\mathbb{R}^p_+}\int_{\mathbb{R}^p_+}\left(1-e^{-2T{\boldsymbol{w}}^T{\boldsymbol{w}}'}\right)\rho(d{\boldsymbol{w}})\,\rho(d{\boldsymbol{w}}') \nonumber\\
&= \frac{\alpha^2 }{2}\int_{\mathbb{R}^p_+} \psi(2Tw_1,\dots,2Tw_p) \rho(dw_1,\ldots,dw_p)
\end{split}$$
Mean number of interactions $ \mathbb{E}[I_{\alpha,T}] $ {#mean-number-of-interactions-mathbbei_alphat .unnumbered}
---------------------------------------------------------
We have $$\begin{aligned}
\mathbb{E}[I_{\alpha,T}]
& =\mathbb{E}\left[\mathbb{E}\left[I_{\alpha,T} | W_1,\ldots,W_p\right]\right] \nonumber\\
& = \mathbb{E}\left[ \sum_{i\neq j}1_{\theta_i\leq \alpha}1_{\theta_j\leq \alpha}\left(\mathbb{E}\left[\int_0^T \lambda_{ij}(t)dt\mid W_1,\ldots,W_p \right] \right) \right]\nonumber\\
& = \mathbb{E}\left[ \sum_{i\neq j}1_{\theta_i\leq \alpha}1_{\theta_j\leq \alpha}\left( \int_0^T \mathbb{E}\left[ \mu_{ij} \frac{\delta}{\delta - \eta} - \mu_{ij}\frac{\eta}{\delta - \eta}e^{-{t(\delta - \eta)}} \mid W_1,\ldots,W_p \right] dt \right) \right]\nonumber\\
& = \mathbb{E}\left[ \sum_{i\neq j}{\boldsymbol{w}}_i^T {\boldsymbol{w}}_j 1_{\theta_i\leq \alpha}1_{\theta_j\leq \alpha}\right] \int_0^T \left[ \frac{\delta}{\delta - \eta} - \frac{\eta}{\delta - \eta}e^{-{t(\delta - \eta)}} \right] dt \nonumber\\
& = \alpha^2 {\boldsymbol{\mu}}_w^T{\boldsymbol{\mu}}_w \left (\frac{\delta}{\delta - \eta} T - \frac{\eta}{(\delta - \eta)^2}(1 - e^{-T(\delta - \eta)})\right )
\label{dir_edges}\end{aligned}$$ where the third line follows from [@Dassios_Zhao], the last line follows from another application of the extended Slivnyak-Mecke formula for Poisson point processes and ${\boldsymbol{\mu}}_w =\int_{\mathbb{R}^p_+} {\boldsymbol{w}}\rho(dw_1,\ldots,dw_p)$.
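As a quick sanity check of the closed-form mean above (with illustrative values only): in the limit $\eta\to 0$ the excitation term vanishes and $\mathbb{E}[I_{\alpha,T}]$ reduces to the Poisson-process value $\alpha^2{\boldsymbol{\mu}}_w^T{\boldsymbol{\mu}}_w T$. The short sketch below, which assumes the stationarity condition $\eta<\delta$ and hypothetical values for ${\boldsymbol{\mu}}_w$, evaluates both cases.

```python
import numpy as np

def expected_interactions(alpha, mu_w, eta, delta, T):
    """Closed-form E[I_{alpha,T}] above; assumes eta < delta (stationary Hawkes kernel)."""
    mu_w = np.asarray(mu_w, dtype=float)
    m2 = float(mu_w @ mu_w)                      # mu_w^T mu_w
    return alpha ** 2 * m2 * (delta / (delta - eta) * T
                              - eta / (delta - eta) ** 2 * (1.0 - np.exp(-T * (delta - eta))))

print(expected_interactions(alpha=50.0, mu_w=[0.1, 0.2], eta=0.85, delta=3.0, T=300.0))
print(expected_interactions(alpha=50.0, mu_w=[0.1, 0.2], eta=0.0, delta=3.0, T=300.0))  # Poisson limit
```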
C: Details of the approximate inference algorithm {#c-details-of-the-approximate-inference-algorithm .unnumbered}
=================================================
Here we provide additional details on the two-stage procedure for approximate posterior inference. The code is publicly available at <https://github.com/OxCSML-BayesNP/HawkesNetOC>.
Given a set of observed interactions $\mathcal D=(t_k,i_k,j_k)_{k\geq 1}$ between $V$ individuals over a period of time $T$, the objective is to approximate the posterior distribution $\pi(\phi,\xi|\mathcal D)$ where $\phi=(\eta,\delta)$ are the kernel parameters and $\xi=((w_{ik})_{i=1,\ldots,V,k=1,\ldots,p},(a_k,b_k)_{k=1,\ldots,p},\alpha,\sigma,\tau)$, the parameters and hyperparameters of the compound CRM. Given data $\mathcal D$, let $\mathcal Z=(z_{ij}(T))_{1\leq i,j\leq V}$ be the adjacency matrix defined by $z_{ij}(T)=1$ if there is at least one interaction between $i$ and $j$ in the interval $[0,T]$, and 0 otherwise.
For posterior inference, we employ an approximate procedure, which is formulated in two steps and is motivated by modular Bayesian inference (Jacob et al., 2017). This decomposition also reflects the dual nature of this type of temporal network data. First, we focus on the static graph, i.e. the adjacency matrix indicating which pairs of nodes interact. Second, given the node pairs that have at least one interaction, we learn the rate at which those interactions appear, assuming they arise reciprocally through mutual excitation.
We have $$\begin{aligned}
\pi(\phi,\xi|\mathcal D)&=\pi(\phi,\xi|\mathcal D,\mathcal Z)=\pi(\xi|\mathcal D,\mathcal Z)\pi(\phi|\xi ,\mathcal D).\end{aligned}$$ The idea of the two-step procedure is to
1. Approximate $\pi(\xi|\mathcal D,\mathcal Z)$ by $\pi(\xi|\mathcal Z)$ and obtain a Bayesian point estimate $\widehat\xi$.
2. Approximate $\pi(\phi|\xi ,\mathcal D)$ by $\pi(\phi|\widehat \xi ,\mathcal D)$.
C1: Stage 1 {#c1-stage-1 .unnumbered}
-----------
As mentioned in Section \[sec:inference\] in the main article, the joint model $\pi(\mathcal Z, \xi)$ on the binary undirected graph is equivalent to the model introduced by [@Todeschini2016], and we will use their Markov chain Monte Carlo (MCMC) algorithm and the publicly available code SNetOC[^4] in order to approximate the posterior $\pi(\xi| \mathcal Z)$ and obtain a Bayesian point estimate $\widehat \xi$. Let $w_{*k}= W_k([0,\alpha])-\sum_i w_{ik}$ be the overall level of affiliation to community $k$ of all the nodes with no interaction (recall that in our model, the number of nodes with no interaction may be infinite). For each undirected pair $i,j$ such that $z_{ij}(T)=1$, consider latent count variables $\widetilde n_{ijk}$ drawn from a truncated multivariate Poisson distribution, see [@Todeschini2016 Equation (31)]. The MCMC sampler used to produce samples asymptotically distributed according to $\pi(\xi| \mathcal Z)$ alternates between the following steps:
1. Update $(w_{ik})_{i=1,\ldots,V,k=1,\ldots,p}$ using a Hamiltonian Monte Carlo (HMC) update,
2. Update $(w_{*k},a_k,b_k)_{k=1,\ldots,p},\alpha,\sigma,\tau$ using a Metropolis-Hastings step,
3. Update the latent count variables using a truncated multivariate Poisson distribution.
We use the same parameter settings as in [@Todeschini2016]. We use $\epsilon=10^{-3}$ as the truncation level to simulate the $w_{*k}$, and set the number of leapfrog steps to $L=10$ in the HMC step. The stepsizes of both the HMC and the random walk MH are adapted during the first 50 000 iterations so as to target acceptance ratios of 0.65 and 0.23 respectively. The Bayes point estimates $(\widehat w_{ik})_{i=1,\ldots,V,k=1,\ldots,p}$ are then computed by minimizing a permutation-invariant cost function, as described in [@Todeschini2016 Section 5.2]. This allows us to compute point estimates of the base intensity measures of the Hawkes processes: for each $i\neq j$, $$\widehat \mu_{ij} =\sum_{k=1}^p \widehat w_{ik}\widehat w_{jk}.$$
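A minimal sketch of this last step is given below; the array shapes and values are hypothetical, and the estimates $\widehat w_{ik}$ would come from the MCMC output described above.

```python
import numpy as np

def base_intensities(W_hat):
    """mu_hat[i, j] = sum_k W_hat[i, k] * W_hat[j, k] for i != j, zero on the diagonal.

    W_hat : (V, p) array of estimated community affiliations w_ik."""
    mu_hat = W_hat @ W_hat.T        # inner products w_i^T w_j for all pairs
    np.fill_diagonal(mu_hat, 0.0)   # self-pairs are excluded in the model
    return mu_hat

# toy usage with hypothetical estimates for V = 3 nodes and p = 2 communities
W_hat = np.array([[0.5, 0.1],
                  [0.2, 0.4],
                  [0.0, 0.3]])
print(base_intensities(W_hat))
```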
C2: Stage 2 {#c2-stage-2 .unnumbered}
-----------
In stage 2, we use an MCMC algorithm to obtain samples approximately distributed according to $$\pi(\phi|\widehat \xi,\mathcal D)=\pi(\phi|(\widehat \mu_{ij}),\mathcal D)$$ where $\phi$ are the parameters of the Hawkes kernel. For each ordered pair $(i,j)$ such that $n_{ij}=N_{ij}(T)>0$, let $(t_{ij}^{(1)}<t_{ij}^{(2)}<\ldots<t_{ij}^{(n_{ij})})$ be the times of the observed directed interactions from $i$ to $j$. The intensity function is $$\lambda_{ij}(t) = \widehat \mu_{ij} + \sum_{\ell|t_{ji}^{(\ell)}<t} \frac{\eta}{\delta} f_\delta(t-t_{ji}^{(\ell)})
$$ where $$\frac{\eta}{\delta} f_\delta(t-t_{ji}^{(\ell)}) =\eta \times e^{-\delta (t-t_{ji}^{(\ell)})}.$$ We write the kernel in this way to point out that it is equal to a density function $f_\delta$ (here, the exponential density with rate $\delta$) scaled by the step size $\eta/\delta$. We denote the distribution function of $f_\delta$ by $F_\delta$. Then, by Proposition 7.2.III in [@Daley2008], we obtain the likelihood $$L(\mathcal D\mid \phi, (\widehat \mu_{ij}))=\prod_{(i,j)|N_{ij}(T)>0} \left [\exp(-\Lambda_{ij}(T)) \prod_{\ell =1}^{n_{ij}} \lambda_{ij} (t_{ij}^{(\ell)}) \right ]$$
where $$\Lambda_{ij}(t) = \int_0^t \lambda_{ij}(u)\, du = t \widehat \mu_{ij} + \sum_{\ell | t_{ji}^{(\ell)}<t} \frac{\eta}{\delta} F_\delta(t- t_{ji}^{(\ell)} ).
$$
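For illustration, a minimal Python sketch of the per-pair log-likelihood implied by the two expressions above is given next. The function and variable names are ours; event times are assumed sorted and contained in $[0,T]$.

```python
import numpy as np

def pair_loglik(t_ij, t_ji, mu_ij, eta, delta, T):
    """Log-likelihood contribution of the directed pair (i, j):
    sum_l log lambda_ij(t_ij[l]) - Lambda_ij(T), with
    lambda_ij(t) = mu_ij + eta * sum_{t_ji < t} exp(-delta * (t - t_ji))."""
    t_ij = np.asarray(t_ij, dtype=float)
    t_ji = np.asarray(t_ji, dtype=float)
    log_lam = 0.0
    for t in t_ij:
        past = t_ji[t_ji < t]   # exciting events from j to i before t
        log_lam += np.log(mu_ij + eta * np.exp(-delta * (t - past)).sum())
    # compensator: Lambda_ij(T) = T*mu_ij + (eta/delta) * sum_{t_ji < T} (1 - exp(-delta*(T - t_ji)))
    comp = T * mu_ij + (eta / delta) * (1.0 - np.exp(-delta * (T - t_ji[t_ji < T]))).sum()
    return log_lam - comp
```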
We derive a Gibbs sampler with Metropolis-Hastings steps to estimate the parameters $\eta,\delta$ conditionally on the estimates $\widehat{\mu}_{ij}$. As mentioned in Section \[sec:model\] of the main article, we follow [@Rasmussen2013] for the choice of vague exponential priors $p(\eta), p(\delta)$. For the proposals we use truncated Normals with variances $\left(\sigma^2_\eta,\sigma^2_\delta \right)= (1.5,2.5) $. The posterior is given by $$\begin{aligned}
\pi(\phi \mid\mathcal D, (\widehat \mu_{ij})) \propto \exp\left(- \sum_{(i,j)|N_{ij}(T)>0}\Lambda_{ij}(T)\right) \left [\prod_{(i,j)|N_{ij}(T)>0} \prod_{\ell=1}^{n_{ij}} \lambda_{ij}(t_{ij}^{(\ell)}) \right ] \times p(\eta)p(\delta) \\\end{aligned}$$
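Below is a minimal sketch of one Metropolis-Hastings sweep consistent with this description. It assumes a `log_post(eta, delta)` callable returning the log of the unnormalized posterior above (e.g. the sum of the per-pair log-likelihoods plus the exponential log-priors); the helper names are ours, and the truncation of the proposals at zero reflects our reading that the truncated Normals are restricted to positive values.

```python
import numpy as np
from scipy.stats import truncnorm

def tn_rvs(mean, sd, rng):
    a = (0.0 - mean) / sd            # standardized lower bound at zero
    return truncnorm.rvs(a, np.inf, loc=mean, scale=sd, random_state=rng)

def tn_logpdf(x, mean, sd):
    a = (0.0 - mean) / sd
    return truncnorm.logpdf(x, a, np.inf, loc=mean, scale=sd)

def mh_sweep(log_post, eta, delta, sd_eta=np.sqrt(1.5), sd_delta=np.sqrt(2.5), rng=None):
    """One Metropolis-Hastings sweep over (eta, delta) with truncated-Normal proposals."""
    rng = rng if rng is not None else np.random.default_rng()
    for name, sd in (("eta", sd_eta), ("delta", sd_delta)):
        cur = eta if name == "eta" else delta
        prop = tn_rvs(cur, sd, rng)
        new_eta, new_delta = (prop, delta) if name == "eta" else (eta, prop)
        # asymmetric proposal, so include the Hastings correction terms
        log_acc = (log_post(new_eta, new_delta) - log_post(eta, delta)
                   + tn_logpdf(cur, prop, sd) - tn_logpdf(prop, cur, sd))
        if np.log(rng.uniform()) < log_acc:
            eta, delta = new_eta, new_delta
    return eta, delta
```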
We use an efficient way to compute the intensity at each time point $t_{ij}^{(\ell)}$ by writing it in the form $$\lambda_{ij}(t_{ij}^{(\ell)})= \widehat \mu_{ij} + \eta S_{ij}^{(\ell)}(\delta),$$ where $$S_{ij}^{(\ell)}(\delta) =e^{-\delta t_{ij}^{(\ell)} }\sum_{k=1}^{n_{ji}} e^{\delta t_{ji}^{(k)}} 1_{t_{ji}^{(k)} < t_{ij}^{(\ell)}},$$ and then derive a recursive relationship of $S_{ij}^{(\ell)}(\delta)$ in terms of $S_{ij}^{(\ell-1)}(\delta)$, namely $$S_{ij}^{(\ell)}(\delta) = e^{-\delta \left(t_{ij}^{(\ell)}-t_{ij}^{(\ell-1)}\right)}S_{ij}^{(\ell-1)}(\delta)+\sum_{k:\, t_{ij}^{(\ell-1)}\leq t_{ji}^{(k)} < t_{ij}^{(\ell)}} e^{-\delta \left(t_{ij}^{(\ell)}-t_{ji}^{(k)}\right)}.$$ In this way, we can precompute several terms by ordering the event times and arranging them in bins defined by the event times of the opposite process.
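A minimal one-pass implementation of this recursion (names are ours; both event streams are assumed sorted) could look as follows.

```python
import numpy as np

def excitation_sums(t_ij, t_ji, delta):
    """S[l] = sum_{k: t_ji[k] < t_ij[l]} exp(-delta * (t_ij[l] - t_ji[k])),
    computed recursively in a single pass over the two sorted event streams."""
    t_ij = np.sort(np.asarray(t_ij, dtype=float))
    t_ji = np.sort(np.asarray(t_ji, dtype=float))
    S = np.zeros(len(t_ij))
    k, prev_S, prev_t = 0, 0.0, None
    for l, t in enumerate(t_ij):
        # decay the previous sum to the current event time
        s = 0.0 if prev_t is None else np.exp(-delta * (t - prev_t)) * prev_S
        # absorb opposite-direction events that occurred since the previous event
        while k < len(t_ji) and t_ji[k] < t:
            s += np.exp(-delta * (t - t_ji[k]))
            k += 1
        S[l] = s
        prev_S, prev_t = s, t
    return S
```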
D: Posterior consistency {#d-posterior-consistency .unnumbered}
========================
We simulate interaction data from the Hawkes-CCRM model described in Section \[sec:model\] in the main article, using parameters $p=4, \alpha=50, \sigma=0.3, \tau=1, a_k = 0.08, \phi=\left( 0.85, 3\right), T=300$. We perform the two-step inference procedure with data of increasing sample size, and check empirically that the approximate posterior $\pi(\phi|\widehat \xi, \mathcal D)$ concentrates around the true value as the sample size increases. Figure \[fig:post\_conv\] below shows the plots of the approximate marginal posterior distribution of $\delta$ and $\eta$. Experiments suggest that the posterior still concentrates around the true parameter value under this approximate inference scheme.
[![(Left) Approximate marginal posterior distribution of $\delta$ given $n$ interaction data. (Right) Approximate marginal posterior distribution of $\eta$ given $n$ interaction data. Posterior concentrates around the true value, with increasing sample size $n$.[]{data-label="fig:post_conv"}](figures/post_distr_delta "fig:"){width=".48\textwidth"}]{} [![(Left) Approximate marginal posterior distribution of $\delta$ given $n$ interaction data. (Right) Approximate marginal posterior distribution of $\eta$ given $n$ interaction data. Posterior concentrates around the true value, with increasing sample size $n$.[]{data-label="fig:post_conv"}](figures/post_distr_eta "fig:"){width=".51\textwidth"}]{}
E: Experiments {#e-experiments .unnumbered}
==============
We perform experiments in which we compare our Hawkes-CCRM model to five other competing models. The key part in all cases is the conditional intensity of the point process, which we give below. In all cases we use $\{t^{(k)}_{ij} \}_{k\geq 1}$ to refer to the set of events from $i$ to $j$, i.e. the interactions for the directed pair $(i,j)$.
Hawkes-CCRM {#hawkes-ccrm .unnumbered}
-----------
For each directed pair of nodes $(i,j), \,i \neq j$\
$N_{ij}(t) \sim \text{Hawkes Process}(\lambda_{ij}(t))$ where $ \lambda_{ij}(t)=\sum_k w_{ik} w_{jk} + \sum_{t_{ji}^{(k)}<t} \eta e^{-\delta(t-t_{ji}^{(k)})}$.
Hawkes-IRM [@Blundell2012] {#hawkes-irm .unnumbered}
--------------------------
For each directed pair of clusters $(p,q), \,p \neq q$\
$N_{pq}(t) \sim \text{Hawkes Process}(\lambda_{pq}(t))$ where $ \lambda_{pq}(t)=n_p n_q\gamma_{pq} + \sum_{t_{qp}^{(k)}<t}\eta e^{-\delta(t-t_{qp}^{(k)})}$.\
For the details of the model see [@Blundell2012].
Poisson-IRM (as explained in [@Blundell2012]) {#poisson-irm-as-explained-in .unnumbered}
---------------------------------------------
For each directed pair of clusters $(p,q), \,p \neq q$\
$N_{pq}(t) \sim \text{Poisson}(\lambda_{pq}(t))$ where $ \lambda_{pq}(t)=n_p n_q\gamma_{pq}$.\
For the details of the model see [@Blundell2012].
CCRM [@Todeschini2016] {#ccrm .unnumbered}
----------------------
$N_{ij}(t) \sim \text{Poisson Process}(\lambda_{ij}(t))$ where $ \lambda_{ij}(t)=\sum_k w_{ik} w_{jk}$.
Hawkes {#hawkes .unnumbered}
------
$N_{ij}(t) \sim \text{Hawkes Process}\left(\lambda_{ij}(t)\right)$ where $ \lambda_{ij} (t)= \mu + \sum_{t_{ji}^{(k)}<t} \eta e^{-\delta(t-t_{ji}^{(k)})}$.
Poisson {#poisson .unnumbered}
-------
$N_{ij}(t) \sim \text{Poisson Process}(\lambda_{ij}(t))$ where $ \lambda_{ij} (t)= \mu$.
[^1]: We use the following asymptotic notations. $X_\alpha = O(Y_\alpha)$ if $\lim X_\alpha/Y_\alpha < \infty$, $ X_\alpha = \omega(Y_\alpha)$ if $\lim Y_\alpha/X_\alpha =0$ and $X_\alpha = \Theta(Y_\alpha)$ if both $X_\alpha= O( Y_\alpha )$ and $Y_\alpha =O( X_\alpha )$.
[^2]: https://github.com/misxenia/SNetOC
[^3]: [https://snap.stanford.edu/data/]{}
[^4]: https://github.com/misxenia/SNetOC
|
---
abstract: 'Crucial problems of the quantum Internet are the derivation of stability properties of quantum repeaters and the theory of entanglement rate maximization in an entangled network structure. The stability property of a quantum repeater entails that all incoming density matrices can be swapped with a target density matrix. The strong stability of a quantum repeater implies stable entanglement swapping with the boundedness of the stored density matrices in the quantum memory and the boundedness of delays. Here, a theoretical framework of noise-scaled stability analysis and entanglement rate maximization is conceived for the quantum Internet. We define the notion of an entanglement swapping set that models the status of the quantum memory of a quantum repeater with the stored density matrices. We determine the optimal entanglement swapping method that maximizes the entanglement rate of the quantum repeaters at the different entanglement swapping sets as a function of the noise of the local memory and local operations. We prove the stability properties for non-complete entanglement swapping sets, complete entanglement swapping sets and perfect entanglement swapping sets. We prove the entanglement rates for the different entanglement swapping sets and noise levels. The results can be applied to the experimental quantum Internet.'
author:
- 'Laszlo Gyongyosi[^1]'
- 'Sandor Imre[^2]'
title: 'Theory of Noise-Scaled Stability Bounds and Entanglement Rate Maximization in the Quantum Internet'
---
Introduction {#sec1}
============
The quantum Internet allows legal parties to perform networking based on the fundamentals of quantum mechanics [@ref1; @ref2; @ref3; @ref11; @ref13a; @ref13; @ref18; @refn7; @puj1; @puj2; @pqkd1; @np1]. The connections in the quantum Internet are formulated by a set of quantum repeaters and the legal parties have access to large-scale quantum devices [@ref5; @ref6; @ref7; @qmemuj; @ref4] such as quantum computers [@qc1; @qc2; @qc3; @qc4; @qc5; @qc6; @qcadd1; @qcadd2; @qcadd3; @qcadd4; @shor1; @refibm]. Quantum repeaters are physical devices with quantum memory and internal procedures [@ref5; @ref6; @ref7; @ref11; @ref13a; @ref13; @ref8; @ref9; @ref10; @add1; @add2; @add3; @refqirg; @ref18; @ref19; @ref20; @ref21; @add4; @refn7; @refn5; @refn3; @sat; @telep; @refn1; @refn2; @refn4; @refn6]. An aim of the quantum repeaters is to generate the entangled network structure of the quantum Internet via entanglement distribution [@ref23; @ref24; @ref25; @ref26; @ref27; @nadd1; @nadd2; @nadd3; @nadd4; @nadd5; @nadd6; @nadd7; @kris1; @kris2]. The entangled network structure can then serve as the core network of a global-scale quantum communication network with unlimited distances (due to the attributes of the entanglement distribution procedure). Quantum repeaters share entangled states over shorter distances; the distance can be extended by the entanglement swapping operation in the quantum repeaters [@refn7; @ref1; @ref5; @ref6; @ref7; @ref13; @ref13a]. The swapping operation takes an incoming density matrix and an outgoing density matrix; both density matrices are stored in the local quantum memory of the quantum repeater [@ref45; @ref46; @ref47; @ref48; @ref49; @ref50; @ref51; @ref52; @ref53; @ref54; @ref55; @ref56; @ref57; @ref58; @ref60; @ref61; @ref62]. The incoming density matrix is half of an entangled state such that the other half is stored in the distant source node, while the outgoing density matrix is half of an entangled state such that the other half is stored in the distant target node. The entanglement swapping operation, applied on the incoming and outgoing density matrices in a particular quantum repeater, entangles the distant source and target quantum nodes. Crucial problems here are the size and delay bounds connected to the local quantum memory of a quantum repeater and the optimization of the swapping procedure such that the entanglement rate of the quantum repeater (outgoing entanglement throughput measured in entangled density matrices per a preset time unit) is maximal. These questions lead us to the necessity of strictly defining the fundamental stability and performance criterions [@ref36; @ref37; @ref38; @ref39; @ref40; @ref41; @ref42; @ref43; @ref44] of quantum repeaters in the quantum Internet.
Here, a theoretical framework of noise-scaled stability analysis and entanglement rate maximization is defined for the quantum Internet. By definition, the stability of a quantum repeater can be weak or strong. The strong stability implies weak stability, by some fundamentals of queueing theory [@refL1; @refL2; @refL3; @refL4; @refD1]. Weak stability of a quantum repeater entails that all incoming density matrices can be swapped with a target density matrix. Strong stability of a quantum repeater further guarantees the boundedness of the number of stored density matrices in the local quantum memory. The defined system model of a quantum repeater assumes that the incoming density matrices are stored in the local quantum memory of the quantum repeater. The stored density matrices formulate the set of incoming density matrices (input set). The quantum memory also contains a separate set for the outgoing density matrices (output set). Without loss of generality, the cardinality of the input set (number of stored density matrices) is higher than the cardinality of the output set. Specifically, the cardinality of the input set is determined by the entanglement throughput of the input connections, while the cardinality of the output set equals the number of output connections. Therefore, if, in a given swapping period, the number of incoming density matrices exceeds the cardinality of the output set, then several incoming density matrices must be stored in the input set (Note: The logical model of the storage mechanisms of entanglement swapping in a quantum repeater is therefore analogous to the logical model of an input-queued switch architecture [@refL1; @refL2; @refL3].). The aim of entanglement swapping is to select the density matrices from the input and output sets, such that the outgoing entanglement rate of the quantum repeater is maximized; this also entails the boundedness of delays. The maximization procedure characterizes the problem of optimal entanglement swapping in the quantum repeaters.
Finding the optimal entanglement swapping means determining the entanglement swapping between the incoming and outgoing density matrices that maximizes the outgoing entanglement rate of the quantum repeaters. The problem of entanglement rate maximization must be solved for a particular noise level in the quantum repeater and in the presence of various entanglement swapping sets. The noise level in the proposed model corresponds to the loss of density matrices in the quantum repeater due to imperfections in the local operations and errors in the quantum memory units. The entanglement swapping sets are logical sets that represent the actual state of the quantum memory in the quantum repeater. The entanglement swapping sets are formulated by the set of received density matrices stored in the local quantum memory and the set of outgoing density matrices, which are also stored in the local quantum memory. Each incoming and outgoing density matrix represents half of an entangled system, such that the other half of an incoming density matrix is stored in the distant source quantum repeater, while the other half of an outgoing density matrix is stored in the distant target quantum repeater. The aim of determining the optimal entanglement swapping method is to apply the local entanglement swapping operation on the set of incoming and outgoing density matrices such that the outgoing entanglement rate of the quantum repeater is maximized at a particular noise level. As we prove, the entanglement rate maximization procedure depends on the type of entanglement swapping sets formulated by the stored density matrices in the quantum memory. We define the logical types of the entanglement swapping sets and characterize the main attributes of the swapping sets. We present the efficiency of the entanglement swapping procedure as a function of the local noise and its impacts on the entanglement rate. We prove that the entanglement swapping sets can be defined as a function of the noise, which allows us to define noise-scaled entanglement swapping and noise-scaled entanglement rate maximization. The proposed theoretical framework utilizes the fundamentals of queueing theory, such as the Lyapunov methodology [@refL1], which is an analytical tool used to assess the performance of queueing systems [@refL1; @refL2; @refL3; @refL4; @refD1; @refs3], and defines a fusion of queueing theory with quantum Shannon theory [@ref4; @ref22; @ref28; @ref29; @ref30; @ref32; @ref33; @ref34; @ref35] and the theory of the quantum Internet.
The novel contributions of our manuscript are as follows:
1. We define a theoretical framework of noise-scaled entanglement rate maximization for the quantum Internet.
2. We determine the optimal entanglement swapping method that maximizes the entanglement rate of a quantum repeater at the different entanglement swapping sets as a function of the noise level of the local memory and local operations.
3. We prove the stability properties for non-complete entanglement swapping sets, complete entanglement swapping sets and perfect entanglement swapping sets.
4. We prove the entanglement rate of a quantum repeater as a function of the entanglement swapping sets and the noise level.
This paper is organized as follows. In [Section \[sec2\]]{}, the preliminary definitions are discussed. [Section \[sec3\]]{} proposes the noise-scaled stability analysis. In [Section \[sec4\]]{}, the noise-scaled entanglement rate maximization is defined. [Section \[sec5\]]{} provides a performance evaluation. Finally, [Section \[sec6\]]{} concludes the paper. Supplemental information is included in the Appendix.
System Model and Problem Statement {#sec2}
==================================
System Model
------------
Let $V$ refer to the nodes of an entangled quantum network $N$, which consists of a transmitter node $A\in V$, a receiver node $B\in V$, and quantum repeater nodes $R_{i} \in V$, $i=1,\ldots ,q$. Let $E=\left\{E_{j} \right\}$, $j=1,\ldots ,m$ refer to a set of edges (an edge refers to an entangled connection in a graph representation) between the nodes of $V$, where each $E_{j} $ identifies an ${\rm L}_{l} $-level entanglement, $l=1,\ldots ,r$, between quantum nodes $x_{j} $ and $y_{j} $ of edge $E_{j} $, respectively. Let $N=\left(V,{\rm {\mathcal S}}\right)$ be an actual quantum network with $\left|V\right|$ nodes and a set ${\rm {\mathcal S}}$ of entangled connections. An ${\rm L}_{l} $-level, $l=1,\ldots ,r$, entangled connection $E_{{\rm L}_{l} } \left(x,y\right)$, refers to the shared entanglement between a source node $x$ and a target node $y$, with hop-distance $$\label{ZEqnNum202142}
d\left(x,y\right)_{{\rm L}_{l} } =2^{l-1} ,$$ since the entanglement swapping (extension) procedure doubles the span of the entangled pair in each step. This architecture is also referred to as the doubling architecture [@ref1; @ref5; @ref6; @ref7].
For a particular ${\rm L}_{l} $-level entangled connection $E_{{\rm L}_{l} } \left(x,y\right)$ with hop-distance $d\left(x,y\right)_{{\rm L}_{l} } =2^{l-1} $, there are $d\left(x,y\right)_{{\rm L}_{l} } -1$ intermediate nodes between the quantum nodes $x$ and $y$.
[Fig. \[fig1\]]{} depicts a quantum Internet scenario with an intermediate quantum repeater $R_{j} $. The aim of the quantum repeater is to generate long-distance entangled connections between the distant quantum repeaters. The long-distance entangled connections are generated by the $U_{S} $ entanglement swapping operation applied in $R_{j} $. The quantum repeater must manage several different connections with heterogeneous entanglement rates. The density matrices are stored in the local quantum memory of the quantum repeater. The aim is to find an entanglement swapping in $R_{j} $ that maximizes the entanglement rate of the quantum repeater.
### Entanglement Fidelity
The aim of the entanglement distribution procedure is to establish a $d$-dimensional entangled system between the distant points $A$ and $B$, through the intermediate quantum repeater nodes. Let $d=2$, and let ${\left| \beta _{00} \right\rangle} $ be the target entangled system of $A$ and $B$, ${\left| \beta _{00} \right\rangle} =\frac{1}{\sqrt{2} } \left({\left| 00 \right\rangle} +{\left| 11 \right\rangle} \right),$ to be generated. At a particular density $\sigma $ generated between $A$ and $B$, the fidelity of $\sigma $ is evaluated as $$\label{ZEqnNum728497}
F=\left\langle {{\beta }_{00}} | \sigma |{{\beta }_{00}} \right\rangle .$$ Without loss of generality, an aim of a practical entanglement distribution is to reach $F\ge 0.98$ for a given $\sigma $ [@ref1; @ref2; @ref3; @ref4; @ref5; @ref6; @ref7; @ref8].
### Entanglement Purification and Entanglement Throughput
Entanglement purification [@purifuj1; @purifuj2; @purifuj3] is a probabilistic procedure that creates a higher fidelity entangled system from two low-fidelity Bell states. The entanglement purification procedure yields a Bell state with an increased entanglement fidelity $F'$, $$\label{3)}
F_{in} <F' \le 1,$$ where $F_{in} $ is the fidelity of the imperfect input Bell pairs. The purification requires the use of two-way classical communications [@ref1; @ref2; @ref3; @ref4; @ref5; @ref6; @ref7; @ref8].
Let $B_{F} (E_{{\rm L}_{l} }^{i})$ refer to the entanglement throughput of a given ${\rm L}_{l} $ entangled connection $E_{{\rm L}_{l} }^{i} $ measured in the number of $d$-dimensional entangled states established over $E_{{\rm L}_{l} }^{i} $ per sec at a particular fidelity $F$ (dimension of a qubit system is $d=2$) [@ref1; @ref2; @ref3; @ref4; @ref5; @ref6; @ref7; @ref8].
For any entangled connection $E_{{\rm L}_{l} }^{i} $, a condition $c$ should be satisfied, as $$\label{ZEqnNum801212}
c:{{B}_{F}}( E_{{{\text{L}}_{l}}}^{i})\ge {B}_{F}^{\text{*}}( E_{{{\text{L}}_{l}}}^{i}),\text{ for }\forall i,$$ where ${{B}}_{F}^{\text{*}}( E_{{{\text{L}}_{l}}}^{i})$ is a critical lower bound on the entanglement throughput at a particular fidelity $F$ of a given $E_{{{\text{L}}_{l}}}^{i}$, i.e., ${{B}_{F}}( E_{{{\text{L}}_{l}}}^{i})$ of a particular $E_{{{\text{L}}_{l}}}^{i}$ has to be at least ${B}_{F}^{\text{*}}( E_{{{\text{L}}_{l}}}^{i})$.
Definitions
-----------
Some preliminary definitions for the proposed model are as follows.
(Incoming and outgoing density matrix). In a $j$-th quantum repeater $R_{j} $, a $\rho $ incoming density matrix is half of an entangled state $\left| {{\beta }_{00}} \right\rangle =\tfrac{1}{\sqrt{2}}\left( \left| 00 \right\rangle +\left| 11 \right\rangle \right)$ received from the previous neighbor node $R_{j-1} $. The $\sigma $ outgoing density matrix in $R_{j} $ is half of an entangled state ${\left| \beta _{00} \right\rangle} $ shared with the next neighbor node $R_{j+1} $.
(Entanglement Swapping Operation). The $U_{S} $ entanglement swapping operation is a local transformation in a $j$-th quantum repeater $R_{j}$ that swaps an incoming density matrix $\rho $ with an outgoing density matrix $\sigma $ and measures the density matrices to entangle the distant source and target nodes $R_{j-1} $ and $R_{j+1} $.
(Entanglement Swapping Period). Let $C$ be a cycle with time $t_{C} =1/f_{C} $ determined by the $o_{C} $ oscillator in node $R_{j} $, where $f_{C} $ is the frequency of $o_{C} $. Then, let $\pi _{S} $ be an entanglement swapping period in which the set ${\rm {\mathcal S}}_{I} \left(R_{j} \right)=\bigcup _{i}\rho _{i} $ of incoming density matrices is swapped via $U_{S} $ with the set ${\rm {\mathcal S}}_{O} \left(R_{j} \right)=\bigcup _{i}\sigma _{i} $ of outgoing density matrices, defined as $\pi _{S} =xt_{C} $, where $x$ is the number of cycles $C$ per swapping period.
(Complete and Non-Complete Swapping Sets). Set ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ formulates a complete set ${\rm {\mathcal S}}_{I}^{*} \left(R_{j} \right)$ if set ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ contains all the $Q=\sum _{i=1}^{N}\left|B_{i} \right| $ incoming density matrices per $\pi _{S} $ that is received by $R_{j} $ during a swapping period, where $N$ is the number of input entangled connections of $R_{j} $ and $\left|B_{i} \right|$ is the number of incoming densities of the $i$-th input connection per $\pi _{S} $; thus, ${\rm {\mathcal S}}_{I} \left(R_{j} \right)=\bigcup _{i=1}^{Q}\rho _{i} $ and $\left|{\rm {\mathcal S}}_{I} \left(R_{j} \right)\right|=Q$. Set ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ formulates a complete set ${\rm {\mathcal S}}_{O}^{*} \left(R_{j} \right)$ if ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ contains all the $N$ outgoing density matrices that are shared by $R_{j} $ during a swapping period $\pi _{S} $; thus, ${\rm {\mathcal S}}_{O} \left(R_{j} \right)=\bigcup _{i=1}^{N}\sigma _{i} $ and $\left|{\rm {\mathcal S}}_{O} \left(R_{j} \right)\right|=N$.
Let ${\rm {\mathcal S}}\left(R_{j} \right)$ be an entanglement swapping set of $R_{j} $, defined as $$\label{1)}
{\rm {\mathcal S}}\left(R_{j} \right)={\rm {\mathcal S}}_{I} \left(R_{j} \right)\bigcup {\rm {\mathcal S}}_{O} \left(R_{j} \right) .$$ Then, ${\rm {\mathcal S}}\left(R_{j} \right)$ is a complete swapping ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$ set, if $$\label{2)}
{\rm {\mathcal S}}^{*} \left(R_{j} \right)={\rm {\mathcal S}}_{I}^{*} \left(R_{j} \right)\bigcup {\rm {\mathcal S}}_{O}^{*} \left(R_{j} \right) ,$$ with cardinality $$\label{3)}
\left|{\rm {\mathcal S}}^{*} \left(R_{j} \right)\right|=Q+N.$$ Otherwise, ${\rm {\mathcal S}}\left(R_{j} \right)$ formulates a non-complete swapping set ${\rm {\mathcal S}}\left(R_{j} \right)$, with cardinality $$\label{4)}
\left|{\rm {\mathcal S}}\left(R_{j} \right)\right|<Q+N.$$
(Perfect Swapping Sets). A complete swapping set ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$ is a perfect swapping set $$\label{5)}
\hat{{\rm {\mathcal S}}}\left(R_{j} \right)=\hat{{\rm {\mathcal S}}}_{I} \left(R_{j} \right)\bigcup \hat{{\rm {\mathcal S}}}_{O} \left(R_{j} \right)$$ at a given $\pi _{S} $, if $$\label{6)}
\left|\hat{{\rm {\mathcal S}}}\left(R_{j} \right)\right|=N+N$$ holds for the cardinality.
(Coincidence set). In a given $\pi _{S} $, the coincidence set ${\rm {\mathcal S}}_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)$ is a subset of incoming density matrices in ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ of $R_{j} $ received from $R_{i} $ that requires the outgoing density matrix $\sigma _{k} $ from ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ for the entanglement swapping. The cardinality of the coincidence set is $$\label{7)}
Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)=\left|{\rm {\mathcal S}}_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)\right|.$$
(Coincidence set increment in an entanglement swapping period) Let $\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|$ refer to the number of density matrices arriving from $R_{i} $ for swapping with $\sigma _{k} $ at $\pi _{S} $. Accordingly, the cardinality of the coincidence set is incremented as $$\label{8)}
Z_{R_{j} }^{\left(\pi '_{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)=Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)+\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|,$$ where $\pi '_{S} $ is the next entanglement swapping period. The derivations assume that an incoming density matrix $\rho $ chooses a particular output density matrix $\sigma $ for the entanglement swapping with probability $\Pr \left(\rho ,\sigma \right)=x\ge 0$ (Bernoulli i.i.d.).
(Incoming and outgoing entanglement rate) Let $\left|B_{R_{j} } \left(\pi _{S} \right)\right|$ be the incoming entanglement rate of $R_{j} $ for a given $\pi _{S} $, defined as $$\label{9)}
\left|B_{R_{j} } \left(\pi _{S} \right)\right|=\sum _{i,k}\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right| ,$$ where $\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|$ refers to the number of density matrices arriving from $R_{i} $ for swapping with $\sigma _{k} $ per $\pi _{S} $.
Then, at a given $\left|B_{R_{j} } \left(\pi _{S} \right)\right|$, the outgoing entanglement rate $\left|B'_{R_{j} } \left(\pi _{S} \right)\right|$ of $R_{j} $ is defined as $$\label{10)}
\left|B'_{R_{j} } \left(\pi _{S} \right)\right|=\left(1-{\tfrac{L}{N}} \right){\tfrac{1}{1+D\left(\pi _{S} \right)}} \left(\left|B_{R_{j} } \left(\pi _{S} \right)\right|\right),$$ where $L$ is the loss, $0<L\le N$, and $D\left(\pi _{S} \right)$ is a delay measured in entanglement swapping periods caused by the optimal entanglement swapping at a particular entanglement swapping set.
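As a small worked example of this definition (all values hypothetical): with $N=4$ output connections, a loss of $L=1$, a delay of $D\left(\pi _{S} \right)=1$ swapping period and $\left|B_{R_{j} } \left(\pi _{S} \right)\right|=8$ incoming density matrices, the outgoing rate is $\left(1-\tfrac{1}{4}\right)\tfrac{1}{2}\cdot 8=3$ density matrices per period. A one-line sketch of the same arithmetic:

```python
def outgoing_rate(incoming_rate, N, L, D):
    """Outgoing entanglement rate |B'_{R_j}(pi_S)| = (1 - L/N) * incoming_rate / (1 + D)."""
    return (1.0 - L / N) * incoming_rate / (1.0 + D)

print(outgoing_rate(incoming_rate=8, N=4, L=1, D=1))  # 3.0
```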
(Swapping constraint). In a given $\pi _{S} $, each incoming density in ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ can be swapped with at most one outgoing density, and only one outgoing density is available in ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ for each outgoing entangled connection.
(Weakly stable (stable) and strongly stable entanglement swapping). Weak stability (stability) of a quantum repeater $R_{j}$ entails that all incoming density matrices can be swapped with a target density matrix in $R_{j}$. A $\zeta \left(\pi _{S} \right)$ entanglement swapping in $R_{j}$ is weakly stable (stable) if, for every $\varepsilon >0$, there exists a $B>0$, such that $$\label{ZEqnNum415455}
\mathop{\lim }\limits_{\pi _{S} \to \infty } \Pr \left(\left|{\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)\right|>B\right)<\varepsilon ,$$ where ${\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)$ is the set of incoming densities of $R_{j} $ at $\pi _{S} $, while $\left|{\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)\right|$ is the cardinality of the set.
For a strongly stable entanglement swapping in $R_{j}$, the weak stability is satisfied and the cardinality of ${\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)$ is bounded. A $\zeta \left(\pi _{S} \right)$ entanglement swapping in $R_{j} $ is strongly stable if $$\label{ZEqnNum908737}
\mathop{\lim }\limits_{\pi _{S} \to \infty } \sup {\mathbb{E}}\left(\left|{\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)\right|\right)<\infty .$$
### Noise-Scaled Entanglement Swapping Sets
(Noise Scaled Swapping Sets). Let $\gamma $ be a noise coefficient that models the noise of the local quantum memory and the local operations, $0\le \gamma \le 1$. For $\gamma =0$, the swapping set at a given $\pi _{S} $ is a complete swapping set ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$, while for any $\gamma >0$, the swapping set at a given $\pi _{S} $ is a non-complete swapping set ${\rm {\mathcal S}}\left(R_{j} \right)$.
In realistic situations, $\gamma $ corresponds to the noise and imperfections of the physical devices and physical-layer operations (quantum operations, realization of quantum gates, storage errors, losses from local physical devices, optical losses, etc.) in the quantum repeater that lead to the loss of density matrices. For further details on the physical-layer aspects of repeater-assisted quantum communications in an experimental quantum Internet setting, we refer the reader to [@ref13].
[Fig. \[fig2\]]{} illustrates the perfect swapping set, complete swapping set and non-complete swapping set. In all cases, the incoming densities are stored in the incoming set ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$. Its cardinality depends on the entanglement throughputs of the incoming connections. The outgoing set ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ is a collection of outgoing density matrices. Each outgoing density matrix is half of an entangled state whose other half is shared with a distant target node.
The input set ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ and output set ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ of $R_{j} $ consist of the incoming and outgoing density matrices. For a non-complete entanglement swapping set, the noise is non-zero; therefore, loss is present in the quantum memory. As a convention of our model (see the swapping constraint in Definition 9), any density matrix loss is modeled as a “double loss” that affects both sets ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ and ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$. Because of a loss, the $U_{S} $ swapping operation cannot be performed on the affected incoming and outgoing density matrices.
Problem Statement
-----------------
The problem formulation for the noise-scaled entanglement rate maximization is given in Problems 1-4.
Determine the entanglement swapping method that maximizes the entanglement rate of a quantum repeater at the different entanglement swapping sets as a function of the noise level of the local memory and local operations.
Prove the stability for non-complete entanglement swapping sets, complete entanglement swapping sets and perfect entanglement swapping sets.
Determine the outgoing entanglement rate of a quantum repeater as a function of the entanglement swapping sets and the noise level.
Define the optimal entanglement swapping period length as a function of the noise level at the different entanglement swapping sets.
The resolutions of Problems 1-4 are proposed in Proposition 1 and Theorems 1-4.
Entanglement Swapping Stability at Swapping Sets {#sec3}
================================================
This section presents the stability analysis of the quantum repeaters for the different entanglement swapping sets.
(Noise-scaled weight coefficient). Let $\omega \left(\gamma \left(\pi _{S} \right)\right)$ be a weight at a non-complete swapping set ${\rm {\mathcal S}}\left(R_{j} \right)$ at $\gamma \left(\pi _{S} \right)>0$, where $\gamma \left(\pi _{S} \right)$ is the noise at a given $\pi _{S} $. For a ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$ complete swapping set, $\gamma \left(\pi _{S} \right)=0$, and $\omega \left(\gamma \left(\pi _{S} \right)=0\right)=\omega ^{*} \left(\pi _{S} \right)$ is a maximized weight at $\pi _{S} $. For any non-complete set ${\rm {\mathcal S}}\left(R_{j} \right)$, $\omega \left(\gamma \left(\pi _{S} \right)\right)\ge \omega ^{*} \left(\pi _{S} \right)-f\left(\gamma \left(\pi _{S} \right)\right)$, where $f\left(\cdot \right)$ is a sub-linear function, and $\omega \left(\gamma \left(\pi _{S} \right)\right)<\omega ^{*} \left(\pi _{S} \right)$. At a perfect swapping set $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$, the weight is $\hat{\omega }\left(\pi _{S} \right)\le \omega ^{*} \left(\pi _{S} \right)$.
At a given $\pi _{S} $, let $\zeta _{ik} \left(\rho _{A} ,\sigma _{k} \right)$ be an indicator variable, consistent with the swapping constraint, defined as $$\label{ZEqnNum888718}
\zeta _{ik} \left(\rho _{A} ,\sigma _{k} \right)=\left\{\begin{array}{l} {1,{\rm \; if\; }\rho _{A} \in {\rm {\mathcal S}}_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right){\rm \; }} \\ {0,{\rm \; otherwise,}} \end{array}\right.$$ thus $\zeta _{ik} \left(\rho _{A} ,\sigma _{k} \right)=1$, if incoming density $\rho _{A} $ is selected from set ${\rm {\mathcal S}}_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)$ for the swapping with $\sigma _{k} $, and $0$ otherwise. The aim is therefore to construct a $$\label{ZEqnNum544183}
\zeta \left(\pi _{S} \right)=\zeta _{ik} \left(\rho _{A} ,\sigma _{k} \right)_{i\le N, k\le N}$$ feasible entanglement swapping method for all input and output neighbors of $R_{j} $, for all $\pi _{S} $ entanglement swapping periods.
Then, from $Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)$ (see Definition 7) and the entanglement swapping $\zeta \left(\pi _{S} \right)$ above, an $\omega \left(\pi _{S} \right)$ weight coefficient can be defined at a given $\pi _{S} $, as $$\label{15)}
\begin{split}
\omega \left( {{\pi }_{S}} \right)&=\sum\limits_{i,k}{{{\zeta }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right)Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)} \\
& =\left\langle \zeta \left( {{\pi }_{S}} \right), {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right\rangle ,
\end{split}$$ where $\left\langle \cdot \right\rangle $ is the inner product, $Z_{R_{j} } \left(\pi _{S} \right)$ is a matrix of all coincidence set cardinalities for all input and output connections at $\pi _{S} $, defined as $$\label{ZEqnNum354275}
Z_{R_{j} } \left(\pi _{S} \right)=Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)_{i\le N,k\le N} ,$$ while $\zeta \left(\pi _{S} \right)$ is the entanglement swapping defined above.
For a perfect and complete entanglement swapping set, at $\gamma \left(\pi _{S} \right)=0$, let $\zeta ^{*} \left(\pi _{S} \right)$ refer to the entanglement swapping, with $\omega \left(\gamma \left(\pi _{S} \right)=0\right)=\omega ^{*} \left(\pi _{S} \right)$, where $\omega ^{*} \left(\pi _{S} \right)$ is the maximized weight coefficient defined as $$\label{ZEqnNum770876}
\omega ^{*} \left(\gamma \left(\pi _{S} \right)=0\right)=\mathop{\max }\limits_{\zeta ^{*} \left(\pi _{S} \right)} \left\langle \zeta ^{*} \left(\pi _{S} \right),Z_{R_{j} } \left(\pi _{S} \right)\right\rangle ,$$ where $\zeta ^{*} \left(\pi _{S} \right)$ is an optimal entanglement swapping method at $\gamma \left(\pi _{S} \right)=0$ (in general not unique, by theory) with the maximized weight. By some fundamental theory [@refL1; @refL2; @refL3; @refD1], it can be verified that for a non-complete set with entanglement swapping $\chi \left(\pi _{S} \right)$ at $\gamma \left(\pi _{S} \right)>0$, defined as $$\label{18)}
\chi \left(\pi _{S} \right)=\left\{\left(\chi _{ik} \left(\rho _{A}, \sigma _{k} \right)\right)_{x}, x=Mi+k, i=0,\ldots , M-1, k=0, \ldots , M-1\right\}$$ with norm a $\left|\chi \left(\pi _{S} \right)\right|$ [@refL1; @refL3], defined as $$\label{19)}
\left|\chi \left(\pi _{S} \right)\right|=\mathop{\max }\limits_{k=0,\ldots ,M-1} \left(\sum _{x=0}^{M-1}\left(\chi _{ik} \left(\rho _{A} ,\sigma _{k} \right)\right)_{Mx+k} ,\sum _{y=0}^{M-1}\left(\chi _{ik} \left(\rho _{A} ,\sigma _{k} \right)\right)_{Mk+y} \right),$$ with $\left|\chi \left(\pi _{S} \right)\right|\le 1$, the relation $$\label{20)}
\left\langle \zeta ^{*} \left(\pi _{S} \right),Z_{R_{j} } \left(\pi _{S} \right)\right\rangle -\left\langle \chi \left(\pi _{S} \right),Z_{R_{j} } \left(\pi _{S} \right)\right\rangle \ge 0$$ holds for the weights.
Then, let ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)$ be a Lyapunov function [@refL1; @refL2; @refL3; @refD1] of $Z_{R_{j} } \left(\pi _{S} \right)$, defined as $$\label{ZEqnNum792527}
{\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)=\sum _{i,k}\left(Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)\right)^{2}.$$
Then it can be verified [@refL1; @refL2; @refL3; @refD1] that $$\label{ZEqnNum314655}
{\mathbb{E}}\left({\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)-\left. {\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)\right|Z_{R_{j} } \left(\pi _{S} \right)\right)\le -\varepsilon \left|Z_{R_{j} } \left(\pi _{S} \right)\right|,$$ holds when $Z_{R_{j} } \left(\pi _{S} \right)$ is sufficiently large [@refL1; @refL2; @refL3; @refD1], where $\varepsilon >0$. Since this negative-drift condition is analogous to the condition on strong stability, it follows that, as it holds for all $\pi _{S} $ entanglement swapping periods, the entanglement swapping at $\gamma \left(\pi _{S} \right)=0$ in $R_{j} $ is a strongly stable entanglement swapping with maximized weight coefficients for all periods.
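To illustrate this argument, the following sketch simulates the input-queued-switch analogy mentioned in the introduction: Bernoulli arrivals fill the coincidence sets, the maximized weight $\omega ^{*} \left(\pi _{S} \right)$ is obtained by a maximum-weight matching (computed here with `scipy`'s `linear_sum_assignment`, standing in for the search over the $N!$ possible swappings), and the Lyapunov function ${\rm {\mathcal L}}\left(Z\right)=\sum _{i,k}Z_{ik}^{2} $ is tracked to visualize the bounded backlog implied by the negative drift. All parameter values are illustrative assumptions and not part of the original analysis.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
N = 4             # number of input/output connections of R_j
p_arrival = 0.2   # Bernoulli arrival probability per (input, output) pair and period
Z = np.zeros((N, N))           # coincidence-set cardinalities Z_{R_j}((R_i, sigma_k))
lyapunov = []

for period in range(2000):
    # arrivals: each incoming density matrix targets one outgoing density matrix
    Z += rng.binomial(1, p_arrival, size=(N, N))
    # max-weight swapping: the permutation maximizing <zeta, Z>
    rows, cols = linear_sum_assignment(Z, maximize=True)
    served = Z[rows, cols] > 0
    Z[rows[served], cols[served]] -= 1   # at most one swap per input and per output
    lyapunov.append((Z ** 2).sum())      # Lyapunov function L(Z)

print("mean L(Z) over the last 500 periods:", np.mean(lyapunov[-500:]))
```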
Since the noise is zero for any complete and perfect entanglement swapping set, the $\hat{\omega }\left(\pi _{S} \right)$ weight coefficient at a perfect swapping set $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$ is also a maximized weight with $f\left(\gamma \left(\pi _{S} \right)\right)=0$, as $$\label{23)}
\hat{\omega }\left(\pi _{S} \right)\le \omega ^{*} \left(\pi _{S} \right).$$ As the noise is nonzero, $\gamma \left(\pi _{S} \right)>0$, a $\zeta \left(\pi _{S} \right)\ne \zeta ^{*} \left(\pi _{S} \right)$ entanglement swapping cannot reach the maximized weight coefficient $\omega ^{*} \left(\pi _{S} \right)$; thus $$\label{24)}
\omega (\gamma \left(\pi _{S} \right))=\mathop{\max }\limits_{\zeta \left(\pi _{S} \right)} \left\langle \zeta \left(\pi _{S} \right),Z_{R_{j} } \left(\pi _{S} \right)\right\rangle <\omega ^{*} \left(\pi _{S} \right),$$ where the non-zero noise $\gamma \left(\pi _{S} \right)$ decreases $\omega ^{*} \left(\pi _{S} \right)$ to $\omega \left(\gamma \left(\pi _{S} \right)\right)<\omega ^{*} \left(\pi _{S} \right)$ as $$\label{ZEqnNum577058}
\omega \left(\gamma \left(\pi _{S} \right)\right)\ge \omega ^{*} \left(\gamma \left(\pi _{S} \right)=0\right)-f\left(\gamma \left(\pi _{S} \right)\right),$$ where $f\left(\cdot \right)$ is a sub-linear function, as $$\label{26)}
0\le f\left(\gamma \left(\pi _{S} \right)\right)<c\left(\gamma \left(\pi _{S} \right)\right),$$ and $$\label{27)}
\mathop{\lim }\limits_{\gamma \left(\pi _{S} \right)\to \infty } {\tfrac{f\left(\gamma \left(\pi _{S} \right)\right)}{\gamma \left(\pi _{S} \right)}} =0,$$ where $\gamma \left(\pi _{S} \right)\ge \gamma \left(\pi _{S} =0\right)$, and $c>0$ is a constant.
For a non-complete set, the drift condition can be rewritten as $$\label{28)}
{\mathbb{E}}\left({\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)-\left. {\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)\right|Z_{R_{j} } \left(\pi _{S} \right)\right)\le -\varepsilon ,$$ where $\varepsilon >0$, which represents the (weak) stability condition. It follows that an entanglement swapping at a non-complete entanglement swapping set is stable.
These statements are further improved in Theorems 1 and 2, respectively.
Non-Complete Swapping Sets
--------------------------
(Noise-scaled stability at non-complete swapping sets) A $\zeta \left(\pi _{S} \right)$ entanglement swapping at $\gamma \left(\pi _{S} \right)>0$ is stable for any non-complete entanglement swapping set ${\rm {\mathcal S}}\left(R_{j} \right)$.
Let $\gamma \left(\pi _{S} \right)>0$ be the noise at a given $\pi _{S} $, and let $\zeta \left(\pi _{S} \right)$ be the actual entanglement swapping at any non-complete entanglement swapping set ${\rm {\mathcal S}}\left(R_{j} \right)$. Using the formalism of [@refD1], let ${\rm {\mathcal L}}\left(X\right)$ refer to a Lyapunov function of an $M\times M$ matrix $X$, defined as $$\label{29)}
{\rm {\mathcal L}}\left(X\right)=\sum _{i,k}\left(x_{ik} \right)^{2} ,$$ where $x_{ik} $ is the $\left(i,k\right)$-th element of $X$.
Let $C_{1} $ and $C_{2} $ be constants, $C_{1} >0$, $C_{2} >0$. Then, a $\zeta \left(\pi _{S} \right)$ entanglement swapping with $\gamma \left(\pi _{S} \right)>0$ is stable if $$\label{ZEqnNum985408}
{\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|Z_{R_{j} } \left(\pi _{S} \right)\right)\le -C_{1} \omega \left(\gamma \left(\pi _{S} \right)=0\right),$$ where $\Delta _{{\rm {\mathcal L}}} $ is a difference of the Lyapunov functions ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)$ and ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)$, where $\pi '_{S} $ is a next entanglement swapping period, defined as $$\label{31)}
\Delta _{{\rm {\mathcal L}}} ={\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)-{\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right),$$ and $$\label{32)}
\omega \left(\gamma \left(\pi _{S} \right)=0\right)=\omega ^{*} \left(\pi _{S} \right)\ge C_{2} ,$$ by theory [@refL1; @refL2; @refL3; @refD1].
To verify this condition, $\Delta _{{\rm {\mathcal L}}} $ is first rewritten as $$\label{ZEqnNum432521}
\begin{split}
{{\Delta }_{\mathcal{L}}}&=\sum\limits_{i,k}{{{\left( Z_{{{R}_{j}}}^{\left( {{{{\pi }'_{S}}}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right) \right)}^{2}}-{{\left( Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right) \right)}^{2}}} \\
& =\sum\limits_{i,k}{\left( Z_{{{R}_{j}}}^{\left( {{{{\pi }'_{S}}}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)-Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right) \right)\left( Z_{{{R}_{j}}}^{\left( {{{{\pi }'_{S}}}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)+Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right) \right),}
\end{split}$$ where $Z_{R_{j} } \left(\pi '_{S} \right)$ can be evaluated as $$\label{ZEqnNum742078}
\begin{split}
{{Z}_{{{R}_{j}}}}\left( {{{{\pi }'_{S}}}} \right)&=\left( {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right)-{{\zeta }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right) \right)+\left| \bar{B}\left( {{R}_{i}}\left( {{{{\pi }'_{S}}}} \right),{{\sigma }_{k}} \right) \right| \\
& \le \max \left( \left( {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right)-{{\zeta }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right) \right)+\left| \bar{B}\left( {{R}_{i}}\left( {{{{\pi }'_{S}}}} \right),{{\sigma }_{k}} \right) \right|,1 \right),
\end{split}$$ where $\left|\bar{B}\left(R_{i} \left(\pi '_{S} \right),\sigma _{k} \right)\right|\le 1$ is the normalized number of arrival density matrices from $R_{i} $ for swapping with $\sigma _{k} $ at a next entanglement swapping period $\pi '_{S} $, defined as $$\label{ZEqnNum733839}
\left|\bar{B}\left(R_{i} \left(\pi '_{S} \right),\sigma _{k} \right)\right|={\tfrac{\left|B\left(R_{i} \left(\pi '_{S} \right),\sigma _{k} \right)\right|}{\left|B_{R_{j} } \left(\pi _{S} \right)\right|}} ,$$ where $\left|B_{R_{j} } \left(\pi _{S} \right)\right|=\sum _{i,k}\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right| $ is a total number of incoming density matrices of $R_{j} $ from the $N$ quantum repeaters.
Using this bound, the expression for $\Delta _{{\rm {\mathcal L}}} $ can be rewritten as $$\label{36)}
\begin{split}
{{\Delta }_{\mathcal{L}}}&\le \sum\limits_{i,k}{\left( \left| \bar{B}\left( {{R}_{i}}\left( {{{{\pi }'_{S}}}} \right),{{\sigma }_{k}} \right) \right|-{{\zeta }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right) \right)\left( \left( 2Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)+1 \right)+1 \right)} \\
& \le \sum\limits_{i,k}{\left( \left| \bar{B}\left( {{R}_{i}}\left( {{{{\pi }'_{S}}}} \right),{{\sigma }_{k}} \right) \right|-{{\zeta }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right) \right)\left( \left( 2Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right) \right) \right)+2{{M}^{2}},}
\end{split}$$ thus ${\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|Z_{R_{j} } \left(\pi _{S} \right)\right)$ can be rewritten as
$$\label{ZEqnNum677350}
\begin{split}
\mathbb{E}\left( \left. {{\Delta }_{\mathcal{L}}} \right|{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right)&\le 2\sum\limits_{i,k}{Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)\left( \mathbb{E}\left( \left| \bar{B}\left( {{R}_{i}}\left( {{\pi }_{S}} \right),{{\sigma }_{k}} \right) \right| \right)-\left. {{\zeta }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right) \right|{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right)+2{{M}^{2}}} \\
& =2\sum\limits_{i,k}{Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)\left( \left| \bar{B}\left( {{R}_{i}}\left( {{\pi }_{S}} \right),{{\sigma }_{k}} \right) \right|-{{\zeta }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right) \right)+2{{\left( N-L \right)}^{2}}},
\end{split}$$
where ${\mathbb{E}}\left(\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|\right)$ is the expected normalized number of density matrices that arrive from $R_{i} $ for swapping with $\sigma _{k} $ at $\pi _{S} $.
Let $\omega \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)>0\right)\right)$ be the weight coefficient of the $\zeta \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)>0\right)\right)$ actual entanglement swapping at the noise $\gamma \left(\pi _{S} \right)>0$ at a given $\pi _{S} $, as $$\label{38)}
\omega \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)>0\right)\right)=\left\langle \zeta \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)>0\right)\right),Z_{R_{j} } \left(\pi _{S} \right)\right\rangle ,$$ and let $\alpha _{ik} $ be defined as $$\label{39)}
\alpha _{ik} =Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right).$$ Then, by some fundamental theory [@refL1; @refL2; @refL3; @refD1], $$\label{ZEqnNum101849}
\begin{split}
\sum\limits_{i,k}{{{\alpha }_{ik}}}&\le \sum\limits_{z}{{{\nu }_{z}}\left\langle {{\zeta }_{z}}\left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)>0 \right) \right),{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right\rangle } \\
& =\sum\limits_{z}{{{\nu }_{z}}{{{{\omega }'_{z}}}}\left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)>0 \right) \right),}
\end{split}$$ where $\nu _{z} \ge 0$ is a constant, and $\zeta _{z} \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)>0\right)\right)$ is a $z$-th entanglement swapping at a noise $\gamma \left(\pi _{S} \right)>0$, while $\omega '_{z} \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)>0\right)\right)$ is the weight of $\zeta _{z} \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)>0\right)\right)$.
Then, using this relation, ${\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|Z_{R_{j} } \left(\pi _{S} \right)\right)$ can be rewritten as $$\label{ZEqnNum588923}
\begin{split}
& \mathbb{E}\left( \left. {{\Delta }_{\mathcal{L}}} \right|{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right) \\
& \le 2\left( \sum\limits_{z}{{{\nu }_{z}}{{{{\omega }'_{z}}}}\left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)>0 \right) \right)-\omega \left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)>0 \right) \right)} \right)+2{{\left( N-L \right)}^{2}} \\
& =2\left( \sum\limits_{z}{\begin{split}
& {{\nu }_{z}}{{{{\omega }'_{z}}}}\left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)>0 \right) \right)-\omega \left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)=0 \right) \right) \\
& +\omega \left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)=0 \right) \right)-\omega \left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)>0 \right) \right) \\
\end{split}} \right)+2{{\left( N-L \right)}^{2}} \\
& \le 2\left( \sum\limits_{z}{{{\nu }_{z}}-1} \right)\omega \left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)=0 \right) \right)+2f\left( \gamma \left( {{\pi }_{S}} \right) \right)+2{{\left( N-L \right)}^{2}} \\
& =-2{{C}_{1}}\omega \left( {{\pi }_{S}}\left( \gamma \left( {{\pi }_{S}} \right)=0 \right) \right)+2f\left( \gamma \left( {{\pi }_{S}} \right) \right)+2{{\left( N-L \right)}^{2}},
\end{split}$$ where $C_{1} $ is set as [@refD1] $$\label{ZEqnNum400853}
C_{1} =1-\sum _{z}\nu _{z} .$$ Therefore, if $C_{2} $ is selected such that $$\label{43)}
0<C_{2} \le \omega \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)=0\right)\right),$$ then $$\label{ZEqnNum846609}
{\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|Z_{R_{j} } \left(\pi _{S} \right)\right)\le -C_{1} \omega \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)=0\right)\right)$$ holds, by theory [@refL1; @refL2; @refL3; @refD1], since it can be rewritten as $$\label{45)}
{\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|Z_{R_{j} } \left(\pi _{S} \right)\right)\le -\varepsilon ,$$ where $\varepsilon $ is defined as $$\label{ZEqnNum449028}
\varepsilon =C_{1} \omega \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)=0\right)\right),$$ therefore the stability condition in is satisfied via $$\label{47)}
\mathop{\lim }\limits_{\pi _{S} \to \infty } \Pr \left(\left|{\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)\right|>B\right)<C_{1} \omega \left(\pi _{S} \left(\gamma \left(\pi _{S} \right)=0\right)\right),$$ which concludes the proof.
As a corollary, is satisfied for the $\zeta \left(\pi _{S} \right)$ entanglement swapping method with any $\gamma \left(\pi _{S} \right)>0$ non-zero noise, at a given $\pi _{S} $.
Complete and Perfect Swapping Sets
----------------------------------
Lemma 1 extends the results for entanglement swapping at complete and perfect swapping sets.
(Noise-scaled stability at perfect and complete swapping sets) A $\zeta ^{*} \left(\pi _{S} \right)$ optimal entanglement swapping at $\gamma \left(\pi _{S} \right)=0$ is strongly stable for any complete ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$ and perfect $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$ entanglement swapping set.
Let $\omega ^{*} \left(\gamma \left(\pi _{S} \right)=0\right)$ be the weight coefficient of $\zeta ^{*} \left(\pi _{S} \right)$ at $\gamma \left(\pi _{S} \right)=0$ at a given ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$, as given in , with $\zeta ^{*} \left(\pi _{S} \right)$ as $$\label{ZEqnNum173893}
\zeta ^{*} \left(\pi _{S} \right)=\arg \mathop{\max }\limits_{\zeta \left(\pi _{S} \right)\in {\rm {\mathcal S}}\left(\zeta \left(\pi _{S} \right)\right)} \left\langle \zeta \left(\pi _{S} \right),Z_{R_{j} } \left(\pi _{S} \right)\right\rangle ,$$ where ${\rm {\mathcal S}}\left(\zeta \left(\pi _{S} \right)\right)$ is the set of all possible $N!$ entanglement swapping operations at a given $\pi _{S} $, with $N$ outgoing density matrices, $\left|{\rm {\mathcal S}}_{O} \left(R_{j} \right)\right|=N$. (For a $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$ perfect entanglement swapping set, ${\rm {\mathcal S}}\left(\zeta \left(\pi _{S} \right)\right)$ is the set of all possible $N!$ entanglement swapping operations, since $\left|{\rm {\mathcal S}}_{I} \left(R_{j} \right)\right|=\left|{\rm {\mathcal S}}_{O} \left(R_{j} \right)\right|=N$ [@refL1; @refL3].)
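Since the maximization above runs over the $N!$ possible swapping operations, $\zeta ^{*} \left(\pi _{S} \right)$ is the solution of a maximum-weight assignment problem and can be computed in polynomial time with a linear sum assignment solver instead of an exhaustive search. A minimal Python sketch is given below; the coincidence-cardinality matrix `Z` is a hypothetical example value.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical coincidence-set cardinalities Z_{R_j}^{(pi_S)}((R_i, sigma_k)) for N = 5.
N = 5
rng = np.random.default_rng(7)
Z = rng.integers(0, 6, size=(N, N)).astype(float)

# zeta* = argmax over all N! pairings of <zeta, Z>: a maximum-weight perfect matching
# between the N incoming coincidence sets and the N outgoing density matrices.
rows, cols = linear_sum_assignment(Z, maximize=True)
zeta_star = np.zeros_like(Z)
zeta_star[rows, cols] = 1.0

# Maximized weight coefficient omega* = <zeta*, Z>.
omega_star = float(np.sum(zeta_star * Z))
print("optimal pairs (R_i -> sigma_k):", list(zip(rows.tolist(), cols.tolist())))
print("omega* =", omega_star)
```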
Then, let $\Delta _{{\rm {\mathcal L}}} $ be a difference of the Lyapunov functions ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)$ and ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)$, $$\label{49)}
{\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)=\sum _{i,k}\left(Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)\right)^{2}$$ and $$\label{50)}
{\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)=\sum _{i,k}\left(Z_{R_{j} }^{\left(\pi '_{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)\right)^{2} ,$$ where $\pi '_{S} $ is a next entanglement swapping period; as $$\label{51)}
\Delta _{{\rm {\mathcal L}}} ={\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)-{\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right).$$ Then, for any complete swapping set ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$, from , ${\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|Z_{R_{j} } \left(\pi _{S} \right)\right)$ at $\zeta ^{*} \left(\pi _{S} \right)$ is as $$\label{ZEqnNum805523}
{\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|Z_{R_{j} } \left(\pi _{S} \right)\right)\le -2C_{1} \omega ^{*} \left(\gamma \left(\pi _{S} \right)\right)+2N^{2} ,$$ where $C_{1} $ is set as in , and by some fundamentals of queueing theory [@refL1; @refL2; @refL3; @refD1], the condition in can be rewritten as $$\label{ZEqnNum748637}
{\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right)\le -\varepsilon \left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|,$$ where $\varepsilon $ is as given in , while $\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|$ is the cardinality of the coincidence sets of ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$ at a given $\pi _{S} $, as $$\label{54)}
\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|=\sum _{i,k}Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right) =\left|{\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)\right|.$$ Thus, can be rewritten as $$\label{ZEqnNum511923}
{\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right)\le -\varepsilon \left|{\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)\right|.$$ By similar assumptions, for any $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$ perfect entanglement swapping with cardinality $\left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right|$ of the coincidence sets of $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$ at a given $\pi _{S} $, the condition in can be rewritten as $$\label{ZEqnNum182627}
{\mathbb{E}}\left(\left. \Delta _{{\rm {\mathcal L}}} \right|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right)\le -\varepsilon \left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right|.$$ Thus, from and , it follows that $\zeta ^{*} \left(\pi _{S} \right)$ is strongly stable for any complete and perfect entanglement swapping set, which concludes the proof.
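The Lyapunov quantities used in these proofs can also be evaluated numerically. The sketch below is illustrative only: it assumes input-queued-switch style dynamics in which the arrivals increment the coincidence sets and the selected swapping removes at most one density matrix per chosen $\left(R_{i} ,\sigma _{k} \right)$ pair, and all matrices are hypothetical example values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def lyapunov(Z):
    # L(Z_{R_j}(pi_S)) = sum_{i,k} (Z_{R_j}^{(pi_S)}((R_i, sigma_k)))^2
    return float(np.sum(Z ** 2))

# Hypothetical single-period update (assumption: input-queued-switch style dynamics).
N = 4
rng = np.random.default_rng(3)
Z = rng.integers(0, 5, size=(N, N)).astype(float)          # Z_{R_j}(pi_S)
arrivals = rng.integers(0, 2, size=(N, N)).astype(float)   # |B(R_i(pi_S), sigma_k)|

# Optimal swapping zeta*(pi_S): maximum-weight pairing with respect to Z.
rows, cols = linear_sum_assignment(Z, maximize=True)
served = np.zeros_like(Z)
served[rows, cols] = 1.0

# Next-period coincidence cardinalities Z_{R_j}(pi'_S), clamped at zero.
Z_next = np.maximum(Z + arrivals - served, 0.0)

# Difference of the Lyapunov functions, Delta_L.
delta_L = lyapunov(Z_next) - lyapunov(Z)
print("L(Z) =", lyapunov(Z), " L(Z') =", lyapunov(Z_next), " Delta_L =", delta_L)
```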
Noise-Scaled Entanglement Rate Maximization {#sec4}
===========================================
This section proposes the entanglement rate maximization procedure for the different entanglement swapping sets.
Since the entanglement swapping procedure is stable for both complete and non-complete entanglement swapping sets, further results can be derived for the noise-scaled entanglement rate. The proposed derivations utilize the fundamentals of queueing theory (Note: in queueing theory, Little’s law defines a connection between the $L$ average queue length and the $W$ average delay as $L=\lambda W$, where $\lambda $ is the arrival rate. The stability property is a required preliminary condition for the relation.) The derivations of the maximized noise-scaled entanglement rate assume that an incoming density matrix $\rho $ chooses a particular output density matrix $\sigma $ for the entanglement swapping with probability $\Pr \left(\rho ,\sigma \right)=x\ge 0$.
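A short numerical illustration of the Little's law relation mentioned above is given below; this is a generic queueing identity, and the arrival rate and delay values are hypothetical.

```python
# Little's law: L = lambda * W, with L the average queue length,
# lambda the arrival rate and W the average delay.
arrival_rate = 8.0   # hypothetical: density matrices per swapping period
average_delay = 1.5  # hypothetical: swapping periods spent waiting

average_queue_length = arrival_rate * average_delay
print("average queue length L =", average_queue_length)
```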
Preliminaries
-------------
Let $\left|Z_{R_{j} } \left(\pi _{S} \right)\right|$ be the cardinality of the coincidence sets at a given $\pi _{S} $, as $$\label{ZEqnNum450845}
\left|Z_{R_{j} } \left(\pi _{S} \right)\right|=\sum _{i,k}Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right) ,$$ and let $\left|B_{R_{j} } \left(\pi _{S} \right)\right|$ be the total number of incoming density matrices in $R_{j} $ per a given $\pi _{S} $, as $$\label{ZEqnNum505402}
\left|B_{R_{j} } \left(\pi _{S} \right)\right|=\sum _{i,k}\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right| ,$$ where $\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|$ refers to the number of density matrices that arrive from $R_{i} $ for swapping with $\sigma _{k} $ per $\pi _{S} $.
From and , let $D\left(\pi _{S} \right)$ be the delay measured in entanglement swapping periods, as $$\label{59)}
D\left(\pi _{S} \right)={\tfrac{\left|Z_{R_{j} } \left(\pi _{S} \right)\right|}{\left|B_{R_{j} } \left(\pi _{S} \right)\right|}} =\sum _{i,k}D_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right) ,$$ where $D_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)$ is the delay for a given $R_{i} $ at a given $\pi _{S} $, as $$\label{60)}
D_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)={\tfrac{Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)}{\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|}} .$$ At delay $D\left(\pi _{S} \right)\ge 0$, the $B'_{R_{j} } \left(\pi _{S} \right)$ number of swapped density matrices per $\pi _{S} $ is $$\label{ZEqnNum517324}
\begin{split}
\left| {{{{B}'_{{{R}_{j}}}}}}\left( {{\pi }_{S}} \right) \right|&=\left( 1-\tfrac{L}{N} \right)\tfrac{{{\pi }_{S}}}{{{{\tilde{\pi }}}_{S}}}\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \\
& =\left( 1-\tfrac{L}{N} \right)\tfrac{x{{t}_{C}}}{x{{t}_{C}}+D\left( {{\pi }_{S}} \right)x{{t}_{C}}}\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \\
& =\left( 1-\tfrac{L}{N} \right)\tfrac{1}{1+D\left( {{\pi }_{S}} \right)}\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \\
& =\left( 1-\tfrac{L}{N} \right)\tfrac{{{\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right|}^{2}}}{\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right|+\left( D\left( {{\pi }_{S}} \right)\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \right)},
\end{split}$$ where $0<L\le N$ is the number of lost density matrices in ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ of $R_{j} $ per $\pi _{S} $ at a non-zero noise $\gamma \left(\pi _{S} \right)>0$, $L=0$ if $\gamma \left(\pi _{S} \right)=0$, and $\tilde{\pi }_{S} $ is the extended period, defined as $$\label{62)}
\tilde{\pi }_{S} =\pi _{S} +D\left(\pi _{S} \right),$$ with ${\pi _{S} \mathord{\left/ {\vphantom {\pi _{S} \left(\tilde{\pi }_{S} \right)}} \right. \kern-\nulldelimiterspace} \left(\tilde{\pi }_{S} \right)} \le 1$; thus, identifies the $B'_{R_{j} } \left(\pi _{S} \right)$ outgoing entanglement rate per $\pi _{S} $ for a particular entanglement swapping set.
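The delay and outgoing-rate formulas above can be evaluated directly. The following sketch uses hypothetical per-period counts for the coincidence-set cardinalities and the arrivals; it is an illustration of the formulas in this subsection, not a simulation of the physical procedure.

```python
import numpy as np

# Hypothetical per-period counts for N = 4 input/output connections and L = 1 loss.
N, L = 4, 1
rng = np.random.default_rng(5)
Z = rng.integers(0, 5, size=(N, N)).astype(float)  # coincidence-set cardinalities
B = rng.integers(1, 4, size=(N, N)).astype(float)  # arrivals |B(R_i(pi_S), sigma_k)|

Z_total = Z.sum()   # |Z_{R_j}(pi_S)|
B_total = B.sum()   # |B_{R_j}(pi_S)|

# Delay measured in swapping periods, D(pi_S) = |Z_{R_j}(pi_S)| / |B_{R_j}(pi_S)|.
D = Z_total / B_total

# Outgoing entanglement rate per pi_S: |B'| = (1 - L/N) * |B| / (1 + D).
B_out = (1.0 - L / N) * B_total / (1.0 + D)
print("D(pi_S) =", D, " outgoing rate |B'_{R_j}(pi_S)| =", B_out)
```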
Non-Complete Swapping Sets
---------------------------
(Entanglement rate decrement at non-complete swapping sets). For a non-complete entanglement swapping set ${\rm {\mathcal S}}\left(R_{j} \right)$, $\gamma >0$, the $B'_{R_{j} } \left(\pi _{S} \right)$ outgoing entanglement rate is $\left|B'_{R_{j} } \left(\pi _{S} \right)\right|=\left(1-{\tfrac{L}{N}} \right){\tfrac{1}{1+D\left(\pi _{S} \right)}} \left(\left|B_{R_{j} } \left(\pi _{S} \right)\right|\right)$ per $\pi _{S} $, where $\left|B_{R_{j} } \left(\pi _{S} \right)\right|$ is the total incoming entanglement throughput at a given $\pi _{S} $, and $D\left(\pi _{S} \right)\le {\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\mathord{\left/ {\vphantom {\left|Z_{R_{j} } \left(\pi _{S} \right)\right| \left|B_{R_{j} } \left(\pi _{S} \right)\right|}} \right. \kern-\nulldelimiterspace} \left|B_{R_{j} } \left(\pi _{S} \right)\right|} $, where $\mathop{\lim }\limits_{\pi _{S} \to \infty } {\mathbb{E}}\left[\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right]\le {\tfrac{\left(N-L\right)\beta }{C_{1} }} +\xi \left(\gamma \right)$, where $\beta =\sum _{i,k}\left(\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|-\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|^{2} \right) $ and $\xi \left(\gamma \right)={\tfrac{\left(N-L\right)}{2C_{1} }} f\left(\gamma \left(\pi _{S} \right)\right)$, where $C_{1} >0$ is a constant.
The entanglement rate decrement for a non-complete entanglement swapping set is as follows.
After some calculations, the result in can be rewritten as $$\label{ZEqnNum461144}
\begin{split}
& \mathbb{E}\left( \left. {{\Delta }_{\mathcal{L}}} \right|{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right) \\
& \le -2\tfrac{{{C}_{1}}}{N-L}\left| {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right|+2f\left( \gamma \left( {{\pi }_{S}} \right) \right)+2\beta ,
\end{split}$$ where $$\label{64)}
C_{1} =1-\mathop{\max }\limits_{i} \left(\sum _{k}\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right| \right),$$ where $\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|$ refers to the normalized number of density matrices arriving from $R_{i} $ for swapping with $\sigma _{k} $ at $\pi _{S} $, as $$\label{65)}
\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|={\tfrac{\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|}{\sum _{i}\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right| }} ,$$ while $\beta $ is defined as [@refL1; @refD1] $$\label{66)}
\beta =\sum _{i,k}\left(\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|-\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|^{2} \right) .$$ From , ${\mathbb{E}}\left({\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)\right)$ can be evaluated as $$\label{67)}
\begin{split}
\mathbb{E}\left( \mathcal{L}\left( {{Z}_{{{R}_{j}}}}\left( {{{{\pi }'_{S}}}} \right) \right) \right)&=\mathbb{E}\left( \mathcal{L}\left( {{Z}_{{{R}_{j}}}}\left( {{{{\pi }'_{S}}}} \right) \right)-\mathcal{L}\left( {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right)+\mathcal{L}\left( {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right) \right) \\
& =\mathbb{E}\left( \mathbb{E}\left( \mathcal{L}\left( {{Z}_{{{R}_{j}}}}\left( {{{{\pi }'_{S}}}} \right) \right)-\mathcal{L}\left. \left( {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right) \right|{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right) \right)+\mathbb{E}\left( \mathcal{L}\left( {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right) \right) \\
& \le -2\tfrac{{{C}_{1}}}{N-L}\mathbb{E}\left( \left| {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \right)+2f\left( \gamma \left( {{\pi }_{S}} \right) \right)+2\beta +\mathbb{E}\left( \mathcal{L}\left( {{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right) \right),
\end{split}$$ thus, after $P$ entanglement swapping periods $\pi _{S} =0,\ldots ,P-1$, ${\mathbb{E}}\left({\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \left(P\right)\right)\right)\right)$ is obtained as $$\label{68)}
{\mathbb{E}}\left({\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \left(P\right)\right)\right)\right)\le P\left(2f\left(\gamma \left(\pi _{S} \right)\right)+2\beta \right)+{\mathbb{E}}\left({\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \left(0\right)\right)\right)\right)-2{\tfrac{C_{1} }{N-L}} \sum _{\pi _{S} =0}^{P-1}{\mathbb{E}}\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right) ,$$ where $\pi _{S} \left(P\right)$ is the $P$-th entanglement swapping, while ${\mathbb{E}}\left({\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \left(0\right)\right)\right)\right)$ identifies the initial system state, thus $$\label{69)}
{\mathbb{E}}\left({\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \left(0\right)\right)\right)\right)=0,$$ by theory [@refD1].
Therefore, after $P$ entanglement swapping periods, the expected value of $\left|Z_{R_{j} } \left(\pi _{S} \right)\right|$ can be evaluated as $$\label{ZEqnNum472017}
{\tfrac{1}{P}} \sum _{\pi _{S} =0}^{P-1}{\mathbb{E}}\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right) \le {\tfrac{N-L}{2C_{1} }} \left(2f\left(\gamma \left(\pi _{S} \right)\right)+2\beta \right)-{\tfrac{1}{P}} {\mathbb{E}}\left({\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \left(P\right)\right)\right)\right),$$ where ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \left(P\right)\right)\right)\ge 0$, thus, assuming that the arrival of the density matrices can be modeled as an i.i.d. arrival process, the result in can be rewritten as $$\label{ZEqnNum474049}
\mathop{\lim }\limits_{\pi _{S} \to \infty } {\tfrac{1}{P}} \sum _{\pi _{S} =0}^{P-1}{\mathbb{E}}\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right)=\mathop{\lim }\limits_{\pi _{S} \to \infty } {\mathbb{E}}\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right) \le {\tfrac{N-L}{2C_{1} }} \left(2f\left(\gamma \left(\pi _{S} \right)\right)+2\beta \right).$$ Then, since for any noise $\gamma \left(\pi _{S} \right)$ at $\pi _{S} $, the relation $$\label{72)}
\gamma \left(\pi _{S} \right)\le \left|Z_{R_{j} } \left(\pi _{S} \right)\right|$$ holds, for any entanglement swapping period $\pi _{S} $, from the sub-linear property of $f\left(\cdot \right)$, the relation $$\label{73)}
f\left(\gamma \left(\pi _{S} \right)\right)\le f\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right)$$ follows for $f\left(\gamma \left(\pi _{S} \right)\right)$.
Therefore, $\omega \left(\gamma \left(\pi _{S} \right)\right)$ from can be rewritten as $$\label{ZEqnNum446340}
\omega \left(\gamma \left(\pi _{S} \right)\right)\ge \omega ^{*} \left(\gamma \left(\pi _{S} \right)=0\right)-f\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right),$$ such that for $f\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right)$, the relation $$\label{75)}
\mathop{\lim }\limits_{\pi _{S} \to \infty } {\tfrac{1}{P}} \sum _{\pi _{S} =0}^{P-1}f\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right)=\mathop{\lim }\limits_{\pi _{S} \to \infty } {\mathbb{E}}\left(f\left(\left|Z_{R_{j} } \left(\pi _{S} \left(P\right)\right)\right|\right)\right) ,$$ holds, which allows us to rewrite in the following manner: $$\label{ZEqnNum512518}
\mathop{\lim }\limits_{\pi _{S} \to \infty } {\mathbb{E}}\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right)\le {\tfrac{\left(N-L\right)\beta }{C_{1} }} +{\tfrac{N-L}{2C_{1} }} \mathop{\lim }\limits_{\pi _{S} \to \infty } {\mathbb{E}}\left(f\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right)\right),$$ where $$\label{ZEqnNum482205}
\mathop{\lim }\limits_{\pi _{S} \to \infty } {\mathbb{E}}\left(f\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right)\right)=f\left(\gamma \left(\pi _{S} \right)\right),$$ thus from , is as $$\label{ZEqnNum557989}
\mathop{\lim }\limits_{\pi _{S} \to \infty } {\mathbb{E}}\left(\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\right)\le {\tfrac{\left(N-L\right)\beta }{C_{1} }} +{\tfrac{N-L}{2C_{1} }} f\left(\gamma \left(\pi _{S} \right)\right)={\tfrac{\left(N-L\right)\beta }{C_{1} }} +\xi \left(\gamma \right),$$ where $$\label{79)}
\xi \left(\gamma \right)={\tfrac{\left(N-L\right)}{2C_{1} }} f\left(\gamma \left(\pi _{S} \right)\right).$$ Therefore, the $D\left(\pi _{S} \right)$ delay for any non-complete swapping set is as $$\label{ZEqnNum102271}
D\left(\pi _{S} \right)\le {\tfrac{\left|Z_{R_{j} } \left(\pi _{S} \right)\right|}{\left|B_{R_{j} } \left(\pi _{S} \right)\right|}} ={\tfrac{1}{\left|B_{R_{j} } \left(\pi _{S} \right)\right|}} \left({\tfrac{\left(N-L\right)\beta }{C_{1} }} +{\tfrac{N-L}{2C_{1} }} f\left(\gamma \left(\pi _{S} \right)\right)\right).$$ As a corollary, the $B'_{R_{j} } \left(\pi _{S} \right)$ outgoing entanglement rate per $\pi _{S} $ at , is as $$\label{ZEqnNum901920}
\begin{split}
\left| {{{{B}'_{{{R}_{j}}}}}}\left( {{\pi }_{S}} \right) \right|&=\left( 1-\tfrac{L}{N} \right)\tfrac{1}{1+D\left( {{\pi }_{S}} \right)}\left( \left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \right) \\
& =\left( 1-\tfrac{L}{N} \right)\tfrac{{{\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right|}^{2}}}{\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right|+\left( D\left( {{\pi }_{S}} \right)\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \right)},
\end{split}$$ which concludes the proof.
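To illustrate the derived bound, the following sketch evaluates $C_{1} $, $\beta $, $\xi \left(\gamma \right)$ and the resulting delay and outgoing-rate values. All numerical values (the normalized load, the value of $f\left(\gamma \left(\pi _{S} \right)\right)$ and the incoming rate) are assumptions chosen for illustration so that $C_{1} >0$ holds.

```python
import numpy as np

# Hypothetical, illustrative arrival statistics for a non-complete swapping set.
N, L = 4, 1
rho = 0.8                                    # assumed normalized load per input, < 1

# |B_bar(R_i(pi_S), sigma_k)|: normalized arrivals, here a uniform matrix so that
# every row sum equals rho < 1 and C_1 = 1 - rho stays positive.
B_bar = np.full((N, N), rho / N)

C1 = 1.0 - float(np.max(B_bar.sum(axis=1)))  # C_1 = 1 - max_i sum_k |B_bar(R_i, sigma_k)|
beta = float(np.sum(B_bar - B_bar ** 2))     # beta = sum_{i,k} (|B_bar| - |B_bar|^2)
f_gamma = 2.0                                # hypothetical value of f(gamma(pi_S)) > 0
xi = (N - L) / (2.0 * C1) * f_gamma          # xi(gamma)

# Asymptotic bound on E(|Z_{R_j}(pi_S)|), and the resulting delay and rate values.
Z_bound = (N - L) * beta / C1 + xi
B_total = 8.0                                # hypothetical incoming rate |B_{R_j}(pi_S)|
D_bound = Z_bound / B_total
B_out = (1.0 - L / N) * B_total / (1.0 + D_bound)

print("C1 =", C1, " beta =", beta, " bound on E|Z| =", Z_bound)
print("delay bound D(pi_S) =", D_bound, " outgoing rate |B'| =", B_out)
```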
Complete Swapping Sets
-----------------------
(Entanglement rate decrement at complete swapping sets). For a complete entanglement swapping set ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$, $\gamma =0$, the $B'_{R_{j} } \left(\pi _{S} \right)$ outgoing entanglement rate is $\left|B'_{R_{j} } \left(\pi _{S} \right)\right|={\tfrac{1}{1+D^{*} \left(\pi _{S} \right)}} \left|B_{R_{j} } \left(\pi _{S} \right)\right|$ per $\pi _{S} $, where $D^{*} \left(\pi _{S} \right)={\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|\mathord{\left/ {\vphantom {\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right| \left|B_{R_{j} } \left(\pi _{S} \right)\right|}} \right. \kern-\nulldelimiterspace} \left|B_{R_{j} } \left(\pi _{S} \right)\right|} <D\left(\pi _{S} \right)$, with $\mathop{\lim }\limits_{\pi _{S} \to \infty } {\mathbb{E}}\left[\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|\right]\le {\tfrac{N\beta }{C_{1} }} $.
Since for any complete swapping set, $$\label{82)}
f\left(\gamma \left(\pi _{S} \right)\right)=0,$$ it follows that $$\label{83)}
\xi \left(\gamma \right)=0,$$ therefore can be rewritten for a complete swapping set as $$\label{84)}
\mathop{\lim }\limits_{\pi _{S} \to \infty } {\mathbb{E}}\left(\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|\right)\le {\tfrac{N\beta }{C_{1} }} .$$ Therefore, the $D^{*} \left(\pi _{S} \right)$ delay for any complete swapping set is as $$\label{ZEqnNum531673}
D^{*} \left(\pi _{S} \right)\le {\tfrac{\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|}{\left|B_{R_{j} } \left(\pi _{S} \right)\right|}} ={\tfrac{N\beta }{C_{1} \left|B_{R_{j} } \left(\pi _{S} \right)\right|}} ,$$ with relation $D^{*} \left(\pi _{S} \right)<D\left(\pi _{S} \right)$, where $D\left(\pi _{S} \right)$ is as in .
As a corollary, the $B'_{R_{j} } \left(\pi _{S} \right)$ entanglement rate at , is as $$\label{86)}
\begin{split}
\left| {{{{B}'_{{{R}_{j}}}}}}\left( {{\pi }_{S}} \right) \right|&=\tfrac{1}{1+{{D}^{*}}\left( {{\pi }_{S}} \right)}\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \\
& =\tfrac{{{\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right|}^{2}}}{\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right|+\left( {{D}^{*}}\left( {{\pi }_{S}} \right)\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \right)},
\end{split}$$ which concludes the proof.
### Perfect Swapping Sets
(Entanglement rate decrement at perfect swapping sets). For a perfect entanglement swapping set $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$, $\gamma =0$, the $B'_{R_{j} } \left(\pi _{S} \right)$ outgoing entanglement rate is $\left|B'_{R_{j} } \left(\pi _{S} \right)\right|={\tfrac{1}{1+\hat{D}\left(\pi _{S} \right)}} \left|B_{R_{j} } \left(\pi _{S} \right)\right|$ per $\pi _{S} $, where $\hat{D}\left(\pi _{S} \right)={\left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right|\mathord{\left/ {\vphantom {\left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right| \left|B_{R_{j} } \left(\pi _{S} \right)\right|}} \right. \kern-\nulldelimiterspace} \left|B_{R_{j} } \left(\pi _{S} \right)\right|} $, with ${\mathbb{E}}\left[\left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right|\right]=N$.
The proof trivially follows from the fact that for a $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$ perfect entanglement swapping set, the $Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)$ cardinality of all $N$ coincidence sets ${\rm {\mathcal S}}_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)$, $i=1,\ldots ,N$, $k=1,\ldots ,N$, is $Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)=1$, thus $$\label{87)}
{\mathbb{E}}\left[\left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right|\right]=N.$$ It follows that $$\label{ZEqnNum669177}
\hat{D}\left(\pi _{S} \right)\le {\tfrac{\left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right|}{\left|B_{R_{j} } \left(\pi _{S} \right)\right|}} ={\tfrac{N}{\left|B_{R_{j} } \left(\pi _{S} \right)\right|}} ,$$ and the $B'_{R_{j} } \left(\pi _{S} \right)$ entanglement rate at , is as $$\label{89)}
\begin{split}
\left| {{{{B}'_{{{R}_{j}}}}}}\left( {{\pi }_{S}} \right) \right|&=\tfrac{1}{1+\hat{D}\left( {{\pi }_{S}} \right)}\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \\
& =\tfrac{{{\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right|}^{2}}}{\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right|+\left( \hat{D}\left( {{\pi }_{S}} \right)\left| {{B}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right| \right)}.
\end{split}$$ The proof is concluded here.
Sub-Linear Function at a Swapping Period
----------------------------------------
For any $\gamma \left(\pi _{S} \right)>0$, at a given $\pi _{S}^{*} =\left(1+h\right)\pi _{S} $ entanglement swapping period, $h>0$, $f\left(\gamma \left(\pi _{S} \right)\right)$ can be evaluated as $f\left(\gamma \left(\pi _{S} \right)\right)=2\pi _{S}^{*} N$.
Let $\zeta ^{*} \left(\pi _{S} \right)$ be the optimal entanglement swapping at $\gamma \left(\pi _{S} \right)=0$ with a maximized weight coefficient $\omega ^{*} \left(\pi _{S} \right)$, and let $\chi \left(\pi _{S} \right)$ be an arbitrary entanglement swapping, defined as $$\label{ZEqnNum702785}
\chi \left(\pi _{S} \right)=\zeta ^{*} \left(\pi _{S} -\pi _{S}^{*} \right),$$ where $\zeta ^{*} \left(\pi _{S} -\pi _{S}^{*} \right)$ is an optimal entanglement swapping at the $\left(\pi _{S} -\pi _{S}^{*} \right)$-th entanglement swapping period, while $\pi _{S}^{*} $ is an entanglement swapping period, defined as $$\label{91)}
\pi _{S}^{*} =\left(1+h\right)\pi _{S} =\left(1+h\right)xt_{C} ,$$ where $h>0$.
Then, the $\omega \left(\pi _{S} -\pi _{S}^{*} \right)$ weight coefficient of the entanglement swapping $\chi \left(\pi _{S} \right)$ at the $\left(\pi _{S} -\pi _{S}^{*} \right)$-th entanglement swapping period is as $$\label{ZEqnNum223298}
\begin{split}
\omega \left( {{\pi }_{S}}-\pi _{S}^{*} \right)&=\sum\limits_{i,k}{{{\chi }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right)Z_{{{R}_{j}}}^{\left( {{\pi }_{S}}-\pi _{S}^{*} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)} \\
& =\left\langle \chi \left( {{\pi }_{S}} \right),{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}}-\pi _{S}^{*} \right) \right\rangle ,
\end{split}$$ while $\omega \left(\pi _{S} \right)$ at $\pi _{S} $ is as
$$\label{ZEqnNum179592}
\begin{split}
\omega \left( {{\pi }_{S}} \right)&=\sum\limits_{i,k}{{{\chi }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right)Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)} \\
& =\left\langle \chi \left( {{\pi }_{S}} \right),{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right\rangle .
\end{split}$$
It can be concluded that the difference of and is as $$\label{ZEqnNum295151}
\omega \left(\pi _{S} -\pi _{S}^{*} \right)-\omega \left(\pi _{S} \right)\le \pi _{S}^{*} N,$$ thus is at most $\pi _{S}^{*} N$ more than , since the weight coefficient of $\chi \left(\pi _{S} \right)$ can at most decrease by $N$ every period (i.e., at a given entanglement swapping period at most $N$ density matrix pairs can be swapped by $\chi \left(\pi _{S} \right)$).
On the other hand, let $$\label{ZEqnNum671774}
\begin{split}
{{\omega }^{*}}\left( {{\pi }_{S}} \right)&=\omega \left( {{\pi }_{S}}\left( \gamma =0 \right) \right) \\
& =\sum\limits_{i,k}{\zeta _{ik}^{*}\left( {{\rho }_{A}},{{\sigma }_{k}} \right)Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)} \\
& =\left\langle {{\zeta }^{*}}\left( {{\pi }_{S}} \right),{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right\rangle
\end{split}$$ be the weight coefficient of $\zeta ^{*} \left(\pi _{S} \right)$ at $\pi _{S} $. It can also be verified that the corresponding relation for the difference of and is as $$\label{96)}
\omega ^{*} \left(\pi _{S} \right)-\omega \left(\pi _{S} -\pi _{S}^{*} \right)\le \pi _{S}^{*} N.$$ From and , for the $\omega \left(\pi _{S} \right)$ coefficient of $\chi \left(\pi _{S} \right)$ at $\pi _{S} $, it follows that $$\label{97)}
\omega \left(\pi _{S} \right)\ge \omega ^{*} \left(\pi _{S} \right)-2\pi _{S}^{*} N,$$ therefore, function $f\left(\gamma \left(\pi _{S} \right)\right)$ for any non-zero noise, $\gamma \left(\pi _{S} \right)>0$, is evaluated as (i.e., $\chi \left(\pi _{S} \right)\ne \zeta ^{*} \left(\pi _{S} \right)$) $$\label{98)}
f\left(\gamma \left(\pi _{S} \right)\right)=2\pi _{S}^{*} N,$$ while for any $\gamma \left(\pi _{S} \right)=0$ (i.e., $\chi \left(\pi _{S} \right)=\zeta ^{*} \left(\pi _{S} \right)$) $$\label{99)}
f\left(\gamma \left(\pi _{S} \right)\right)=0,$$ which concludes the proof.
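A short numerical sketch of the evaluation above is given below; the cycle parameters $x$, $t_{C} $, $h$ and the number of connections $N$ are hypothetical values used only for illustration.

```python
def f_of_gamma(gamma, pi_S_star, N):
    # f = 2 * pi_S* * N for any non-zero noise, and f = 0 at gamma(pi_S) = 0.
    return 0.0 if gamma == 0 else 2.0 * pi_S_star * N

# Hypothetical parameters: x cycles per swapping period, cycle time t_C, and h > 0.
x, t_C, h, N = 10, 0.1, 0.5, 4

pi_S = x * t_C                 # pi_S = x * t_C
pi_S_star = (1 + h) * pi_S     # pi_S* = (1 + h) * pi_S

print("pi_S* =", pi_S_star)
print("f at gamma > 0:", f_of_gamma(0.1, pi_S_star, N))
print("f at gamma = 0:", f_of_gamma(0.0, pi_S_star, N))
```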
Performance Evaluation {#sec5}
======================
In this section, a numerical performance evaluation is proposed to study the delay and the entanglement rate at the different entanglement swapping sets.
Entanglement Swapping Period
----------------------------
In [Fig. \[fig3\]]{}(a), the values of $\pi _{S}^{*} $ are depicted as a function of $h$ and $\pi _{S} $, for $\pi _{S} \in \left[1,10\right]$ and $h\in \left[0,1\right]$. In [Fig. \[fig3\]]{}(b), the values of $f\left(\gamma \left(\pi _{S} \right)\right)=2\pi _{S}^{*} N$ are depicted as a function of $h$ and $N$.
Delay and Entanglement Rate Ratio
---------------------------------
In [Fig. \[fig4\]]{}(a), the values of $D\left(\pi _{S} \right)$ are depicted as a function of the incoming entanglement rate $\left|B_{R_{j} } \left(\pi _{S} \right)\right|$ for the non-complete ${\rm {\mathcal S}}\left(R_{j} \right)$, complete ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$ and perfect $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$ entanglement swapping sets. The $D\left(\pi _{S} \right)$ delay values are evaluated as $D\left(\pi _{S} \right)={\left|Z_{R_{j} } \left(\pi _{S} \right)\right|\mathord{\left/ {\vphantom {\left|Z_{R_{j} } \left(\pi _{S} \right)\right| \left|B_{R_{j} } \left(\pi _{S} \right)\right|}} \right. \kern-\nulldelimiterspace} \left|B_{R_{j} } \left(\pi _{S} \right)\right|} $ via the maximized values of , and , i.e., the delay depends on the cardinality of the coincidence set and the incoming entanglement rate (This relation also can be derived from Little’s law; for details, see the fundamentals of queueing theory [@refL1; @refL2; @refL3; @refD1]).
In [Fig. \[fig4\]]{}(b), the ratio $r$ of the $\left|B'_{R_{j} } \left(\pi _{S} \right)\right|$ outgoing and $\left|B_{R_{j} } \left(\pi _{S} \right)\right|$ incoming entanglement rates is depicted as a function of the incoming entanglement rate. For the non-complete entanglement swapping set, the loss is set as $\tilde{L}=0.2N$.
The highest $D\left(\pi _{S} \right)$ delay values can be obtained for non-complete entanglement swapping set ${\rm {\mathcal S}}\left(R_{j} \right)$, while the lowest delays can be found for the perfect entanglement swapping set $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$. For a complete entanglement swapping set ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$, the delay values are between the non-complete and perfect sets. This is because for a non-complete set, the losses due to the $\gamma \left(\pi _{S} \right)>0$ non-zero noise allow only an approximation of the delay of a complete set; thus, $D_{R_{j} } \left(\pi _{S} \right)>D_{R_{j} }^{*} \left(\pi _{S} \right)$. For a complete entanglement swapping set, while the noise is zero, $\gamma \left(\pi _{S} \right)=0$, the $\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|$ cardinality of the coincidence set is high, $\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|>\left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right|$; thus, the $D_{R_{j} }^{*} \left(\pi _{S} \right)$ delay is higher than the $\hat{D}_{R_{j} } \left(\pi _{S} \right)$ delay of a perfect entanglement swapping set, $D_{R_{j} }^{*} \left(\pi _{S} \right)>\hat{D}_{R_{j} } \left(\pi _{S} \right)$. For a perfect set, the noise is zero, $\gamma \left(\pi _{S} \right)=0$ and the cardinality of the coincidence sets is one for all inputs; thus, $\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|>\left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right|=N$. Therefore, the $\hat{D}_{R_{j} } \left(\pi _{S} \right)$ delay is minimal for a perfect set.
From the relation of $D_{R_{j} } \left(\pi _{S} \right)>D_{R_{j} }^{*} \left(\pi _{S} \right)>\hat{D}_{R_{j} } \left(\pi _{S} \right)$, the corresponding delays of the entanglement swapping sets, the relation for the decrease in the outgoing entanglement rates is straightforward, as follows. As $r\to 1$ holds for the ratio $r$ of the $\left|B'_{R_{j} } \left(\pi _{S} \right)\right|$ outgoing and $\left|B_{R_{j} } \left(\pi _{S} \right)\right|$ incoming entanglement rates, then $\left|B'_{R_{j} } \left(\pi _{S} \right)\right|\to \left|B_{R_{j} } \left(\pi _{S} \right)\right|$, i.e., no significant decrease is caused by the entanglement swapping operation.
The highest outgoing entanglement rates are obtained for a perfect entanglement swapping set, which is followed by the outgoing rates at a complete entanglement swapping set. For a non-complete set, the outgoing rate is significantly lower due to the losses caused by the non-zero noise in comparison with the perfect and complete sets.
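The qualitative ordering discussed above can be reproduced by evaluating the delay and rate-ratio formulas of the three swapping sets over a range of incoming entanglement rates. The sketch below is illustrative only: the load, the loss and the value of $f\left(\gamma \left(\pi _{S} \right)\right)$ are assumptions and not the parameters behind the figures.

```python
import numpy as np

# Assumed, illustrative parameters.
N = 4
L = 0.2 * N                        # loss for the non-complete set
rho = 0.8                          # assumed normalized load per input
B_bar = np.full((N, N), rho / N)
C1 = 1.0 - float(np.max(B_bar.sum(axis=1)))
beta = float(np.sum(B_bar - B_bar ** 2))
f_gamma = 2.0                      # hypothetical f(gamma(pi_S)) for the non-complete set

B_in = np.linspace(2.0, 20.0, 10)  # incoming entanglement rates |B_{R_j}(pi_S)|

# Delays of the non-complete, complete and perfect swapping sets.
D_nc = ((N - L) * beta / C1 + (N - L) / (2 * C1) * f_gamma) / B_in
D_c = (N * beta / C1) / B_in
D_p = N / B_in

# Ratio r = |B'_{R_j}(pi_S)| / |B_{R_j}(pi_S)| of outgoing to incoming rates.
r_nc = (1 - L / N) / (1 + D_nc)
r_c = 1.0 / (1 + D_c)
r_p = 1.0 / (1 + D_p)

for Bi, a, b, c in zip(B_in, r_nc, r_c, r_p):
    print(f"|B| = {Bi:5.1f}  r_non-complete = {a:.3f}  r_complete = {b:.3f}  r_perfect = {c:.3f}")
```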
Conclusions {#sec6}
===========
Quantum repeaters determine the structure and performance attributes of the quantum Internet. Here, we defined a theory of noise-scaled stability for quantum repeaters and methods of entanglement rate maximization for the quantum Internet. The framework characterized the stability conditions of entanglement swapping and introduced the terms of non-complete, complete and perfect entanglement swapping sets to model the status of the quantum memory of the repeater nodes. The defined quantities were evaluated as a function of the noise level of the quantum repeaters to describe their physical procedures. We derived the conditions for an optimal entanglement swapping at a particular noise level to maximize the entanglement throughput of the quantum repeaters. The results are applicable to the experimental quantum Internet.
Acknowledgements {#acknowledgements .unnumbered}
================
The research reported in this paper has been supported by the Hungarian Academy of Sciences (MTA Premium Postdoctoral Research Program 2019), by the National Research, Development and Innovation Fund (TUDFO/51757/2019-ITM, Thematic Excellence Program), by the National Research Development and Innovation Office of Hungary (Project No. 2017-1.2.1-NKP-2017-00001), by the Hungarian Scientific Research Fund - OTKA K-112125 and in part by the BME Artificial Intelligence FIKP grant of EMMI (Budapest University of Technology, BME FIKP-MI/SC).
[10]{} Pirandola, S. and Braunstein, S. L. Unite to build a quantum internet. *Nature* 532, 169–171 (2016).
Pirandola, S. End-to-end capacities of a quantum communication network, *Commun. Phys.* 2, 51 (2019).
Wehner, S., Elkouss, D. and Hanson, R. Quantum internet: A vision for the road ahead, *Science* 362, 6412, (2018).
Pirandola, S., Laurenza, R., Ottaviani, C. and Banchi, L. Fundamental limits of repeaterless quantum communications, *Nature Communications* 8, 15043, DOI:10.1038/ncomms15043 (2017).
Pirandola, S., Braunstein, S. L., Laurenza, R., Ottaviani, C., Cope, T. P. W., Spedalieri, G. and Banchi, L. Theory of channel simulation and bounds for private communication, *Quantum Sci. Technol*. 3, 035009 (2018).
Pirandola, S. Bounds for multi-end communication over quantum networks, *Quantum Sci. Technol.* 4, 045006 (2019).
Pirandola, S. Capacities of repeater-assisted quantum communications, *arXiv:1601.00966* (2016).
Pirandola, S. et al. Advances in Quantum Cryptography, *arXiv:1906.01645* (2019).
Laurenza, R. and Pirandola, S. General bounds for sender-receiver capacities in multipoint quantum communications, *Phys. Rev. A* 96, 032318 (2017).
Van Meter, R. *Quantum Networking*. ISBN 1118648927, 9781118648926, John Wiley and Sons Ltd (2014).
Lloyd, S., Shapiro, J. H., Wong, F. N. C., Kumar, P., Shahriar, S. M. and Yuen, H. P. Infrastructure for the quantum Internet. *ACM SIGCOMM* *Computer* *Communication Review*, 34, 9–20 (2004).
Kimble, H. J. The quantum Internet. *Nature*, 453:1023–1030 (2008).
Gyongyosi, L. and Imre, S. Optimizing High-Efficiency Quantum Memory with Quantum Machine Learning for Near-Term Quantum Devices, *Scientific Reports*, Nature, DOI: 10.1038/s41598-019-56689-0 (2019).
Van Meter, R., Ladd, T. D., Munro, W. J. and Nemoto, K. System Design for a Long-Line Quantum Repeater, *IEEE/ACM Transactions on Networking* 17(3), 1002-1013, (2009).
Van Meter, R., Satoh, T., Ladd, T. D., Munro, W. J. and Nemoto, K. Path selection for quantum repeater networks, *Networking Science*, Volume 3, Issue 1–4, pp 82–95, (2013).
Van Meter, R. and Devitt, S. J. Local and Distributed Quantum Computation, *IEEE Computer* 49(9), 31-42 (2016).
Gyongyosi, L., Imre, S. and Nguyen, H. V. A Survey on Quantum Channel Capacities, *IEEE Communications Surveys and Tutorials*, DOI: 10.1109/COMST.2017.2786748 (2018).
Preskill, J. Quantum Computing in the NISQ era and beyond, *Quantum* 2, 79 (2018).
Arute, F. et al. Quantum supremacy using a programmable superconducting processor, *Nature*, Vol 574, DOI:10.1038/s41586-019-1666-5 (2019).
Harrow, A. W. and Montanaro, A. Quantum Computational Supremacy, *Nature*, vol 549, pages 203-209 (2017).
Aaronson, S. and Chen, L. Complexity-theoretic foundations of quantum supremacy experiments. *Proceedings of the 32nd Computational Complexity Conference*, CCC ’17, pages 22:1-22:67, (2017).
Farhi, E., Goldstone, J., Gutmann, S. and Neven, H. Quantum Algorithms for Fixed Qubit Architectures. *arXiv:1703.06199v1* (2017).
Farhi, E. and Neven, H. Classification with Quantum Neural Networks on Near Term Processors, *arXiv:1802.06002v1* (2018).
Alexeev, Y. et al. Quantum Computer Systems for Scientific Discovery, *arXiv:1912.07577* (2019).
Loncar, M. et al. Development of Quantum InterConnects for Next-Generation Information Technologies, *arXiv:1912.06642* (2019).
Ajagekar, A., Humble, T. and You, F. Quantum Computing based Hybrid Solution Strategies for Large-scale Discrete-Continuous Optimization Problems. *Computers and Chemical Engineering* Vol 132, 106630 (2020).
Foxen, B. et al. Demonstrating a Continuous Set of Two-qubit Gates for Near-term Quantum Algorithms, *arXiv:2001.08343* (2020).
Shor, P. W. Algorithms for quantum computation: discrete logarithms and factoring. In: *Proceedings of the 35th Annual Symposium on Foundations of Computer Science* (1994).
IBM. *A new way of thinking: The IBM quantum experience*. URL: http://www.research.ibm.com/quantum. (2017).
Chakraborty, K., Rozpedeky, F., Dahlbergz, A. and Wehner, S. Distributed Routing in a Quantum Internet, *arXiv:1907.11630v1* (2019).
Khatri, S., Matyas, C. T., Siddiqui, A. U. and Dowling, J. P. Practical figures of merit and thresholds for entanglement distribution in quantum networks, *Phys. Rev. Research* 1, 023032 (2019).
Kozlowski, W. and Wehner, S. Towards Large-Scale Quantum Networks, *Proc. of the Sixth Annual ACM International Conference on Nanoscale Computing and Communication*, Dublin, Ireland, *arXiv:1909.08396* (2019).
Pathumsoot, P., Matsuo, T., Satoh, T., Hajdusek, M., Suwanna, S. and Van Meter, R. Modeling of Measurement-based Quantum Network Coding on IBMQ Devices, *arXiv:1910.00815v1* (2019).
Pal, S., Batra, P., Paterek, T. and Mahesh, T. S. Experimental localisation of quantum entanglement through monitored classical mediator, *arXiv:1909.11030v1* (2019).
Miguel-Ramiro, J. and Dur, W. Delocalized information in quantum networks, *arXiv:1912.12935v1* (2019).
Pirker, A. and Dur, W. A quantum network stack and protocols for reliable entanglement-based networks, *arXiv:1810.03556v1* (2018).
Tanjung, K. et al. Probing quantum features of photosynthetic organisms. *npj Quantum Information*, 2056-6387 4 (2018).
Tanjung, K. et al. Revealing Nonclassicality of Inaccessible Objects. *Phys. Rev. Lett.*, 1079-7114 119 12 (2017).
Gyongyosi, L. and Imre, S. Decentralized Base-Graph Routing for the Quantum Internet, *Physical Review A*, DOI: 10.1103/PhysRevA.98.022310, https://link.aps.org/doi/10.1103/PhysRevA.98.022310 (2018).
Gyongyosi, L. and Imre, S. Dynamic topology resilience for quantum networks, *Proc. SPIE 10547*, Advances in Photonics of Quantum Computing, Memory, and Communication XI, 105470Z; DOI: 10.1117/12.2288707 (2018).
Gyongyosi, L. and Imre, S. Topology Adaption for the Quantum Internet, *Quantum Information Processing*, Springer Nature, DOI: 10.1007/s11128-018-2064-x, (2018).
Gyongyosi, L. and Imre, S. Entanglement Access Control for the Quantum Internet, *Quantum Information Processing*, Springer Nature, DOI: 10.1007/s11128-019-2226-5, (2019).
Gyongyosi, L. and Imre, S. Opportunistic Entanglement Distribution for the Quantum Internet, *Scientific Reports*, Nature, DOI:10.1038/s41598-019-38495-w, (2019).
Gyongyosi, L. and Imre, S. Adaptive Routing for Quantum Memory Failures in the Quantum Internet, *Quantum Information Processing*, Springer Nature, DOI: 10.1007/s11128-018-2153-x, (2018).
Quantum Internet Research Group (QIRG), web: https://datatracker.ietf.org/rg/qirg/about/ (2018).
Gyongyosi, L. and Imre, S. Multilayer Optimization for the Quantum Internet, *Scientific Reports*, Nature, DOI: 10.1038/s41598-018-30957-x, (2018).
Gyongyosi, L. and Imre, S. Entanglement Availability Differentiation Service for the Quantum Internet, *Scientific Reports*, Nature, DOI:10.1038/s41598-018-28801-3, https://www.nature.com/articles/s41598-018-28801-3 (2018).
Gyongyosi, L. and Imre, S. Entanglement-Gradient Routing for Quantum Networks, *Scientific Reports*, Nature, DOI: 10.1038/s41598-017-14394-w, https://www.nature.com/articles/s41598-017-14394-w (2017).
Gyongyosi, L. and Imre, S. A Survey on Quantum Computing Technology, *Computer Science Review*, DOI: 10.1016/j.cosrev.2018.11.002, https://doi.org/10.1016/j.cosrev.2018.11.002 (2018).
Rozpedek, F., Schiet, T., Thinh, L., Elkouss, D., Doherty, A., and S. Wehner, Optimizing practical entanglement distillation, *Phys. Rev. A* 97, 062333 (2018).
Humphreys, P. et al., Deterministic delivery of remote entanglement on a quantum network, *Nature* 558, (2018).
Liao, S.-K. et al. Satellite-to-ground quantum key distribution, *Nature* 549, pages 43–47, (2017).
Ren, J.-G. et al. Ground-to-satellite quantum teleportation, *Nature* 549, pages 70–73, (2017).
Hensen, B. et al., Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres, *Nature* 526, (2015).
Hucul, D. et al., Modular entanglement of atomic qubits using photons and phonons, *Nature Physics* 11(1), (2015).
Noelleke, C. et al, Efficient Teleportation Between Remote Single-Atom Quantum Memories, *Physical Review Letters* 110, 140403, (2013).
Sangouard, N. et al., Quantum repeaters based on atomic ensembles and linear optics, *Reviews of Modern Physics* 83, 33, (2011).
Caleffi, M. End-to-End Entanglement Rate: Toward a Quantum Route Metric, 2017 *IEEE Globecom*, DOI: 10.1109/GLOCOMW.2017.8269080, (2018).
Caleffi, M. Optimal Routing for Quantum Networks, *IEEE Access*, Vol 5, DOI: 10.1109/ACCESS.2017.2763325 (2017).
Caleffi, M., Cacciapuoti, A. S. and Bianchi, G. Quantum Internet: from Communication to Distributed Computing, *arXiv:1805.04360* (2018).
Castelvecchi, D. The quantum internet has arrived, *Nature*, News and Comment, https://www.nature.com/articles/d41586-018-01835-3, (2018).
Cacciapuoti, A. S., Caleffi, M., Tafuri, F., Cataliotti, F. S., Gherardini, S. and Bianchi, G. Quantum Internet: Networking Challenges in Distributed Quantum Computing, *arXiv:1810.08421* (2018).
Imre, S. and Gyongyosi, L. *Advanced Quantum Communications - An Engineering Approach*. New Jersey, Wiley-IEEE Press (2013).
Kok, P., Munro, W. J., Nemoto, K., Ralph, T. C., Dowling, J. P. and Milburn, G. J., Linear optical quantum computing with photonic qubits, *Rev. Mod. Phys*. 79, 135-174 (2007).
Petz, D. *Quantum Information Theory and Quantum Statistics*, Springer-Verlag, Heidelberg, Hiv: 6. (2008).
Bacsardi, L. On the Way to Quantum-Based Satellite Communication, *IEEE Comm. Mag.* 51:(08) pp. 50-55. (2013).
Biamonte, J. et al. Quantum Machine Learning. *Nature*, 549, 195-202 (2017).
Lloyd, S., Mohseni, M. and Rebentrost, P. Quantum algorithms for supervised and unsupervised machine learning. *arXiv:1307.0411* (2013).
Lloyd, S., Mohseni, M. and Rebentrost, P. Quantum principal component analysis. *Nature Physics*, 10, 631 (2014).
Lloyd, S. Capacity of the noisy quantum channel. *Physical Rev. A*, 55:1613–1622 (1997).
Lloyd, S. The Universe as Quantum Computer, *A Computable Universe: Understanding and exploring Nature as computation*, Zenil, H. ed., World Scientific, Singapore, *arXiv:1312.4455v1* (2013).
Shor, P. W. Scheme for reducing decoherence in quantum computer memory. *Phys. Rev. A*, 52, R2493-R2496 (1995).
Chou, C., Laurat, J., Deng, H., Choi, K. S., de Riedmatten, H., Felinto, D. and Kimble, H. J. Functional quantum nodes for entanglement distribution over scalable quantum networks. *Science*, 316(5829):1316–1320 (2007).
Muralidharan, S., Kim, J., Lutkenhaus, N., Lukin, M. D. and Jiang. L. Ultrafast and Fault-Tolerant Quantum Communication across Long Distances, *Phys. Rev. Lett*. 112, 250501 (2014).
Yuan, Z., Chen, Y., Zhao, B., Chen, S., Schmiedmayer, J. and Pan, J. W. *Nature* 454, 1098-1101 (2008).
Kobayashi, H., Le Gall, F., Nishimura, H. and Rotteler, M. General scheme for perfect quantum network coding with free classical communication, *Lecture Notes in Computer Science* (Automata, Languages and Programming SE-52 vol. 5555), Springer) pp 622-633 (2009).
Hayashi, M. Prior entanglement between senders enables perfect quantum network coding with modification, *Physical Review A*, Vol.76, 040301(R) (2007).
Hayashi, M., Iwama, K., Nishimura, H., Raymond, R. and Yamashita, S, Quantum network coding, *Lecture Notes in Computer Science* (STACS 2007 SE52 vol. 4393) ed Thomas, W. and Weil, P. (Berlin Heidelberg: Springer) (2007).
Chen, L. and Hayashi, M. Multicopy and stochastic transformation of multipartite pure states, *Physical Review A*, Vol.83, No.2, 022331, (2011).
Schoute, E., Mancinska, L., Islam, T., Kerenidis, I. and Wehner, S. Shortcuts to quantum network routing, *arXiv:1610.05238* (2016).
Lloyd, S. and Weedbrook, C. Quantum generative adversarial learning. *Phys. Rev. Lett*., 121, arXiv:1804.09139 (2018).
Gisin, N. and Thew, R. Quantum Communication. *Nature Photon.* 1, 165-171 (2007).
Xiao, Y. F., Gong, Q. Optical microcavity: from fundamental physics to functional photonics devices. *Science Bulletin*, 61, 185-186 (2016).
Zhang, W. et al. Quantum Secure Direct Communication with Quantum Memory. *Phys. Rev. Lett.* 118, 220501 (2017).
Enk, S. J., Cirac, J. I. and Zoller, P. Photonic channels for quantum communication. *Science*, 279, 205-208 (1998).
Briegel, H. J., Dur, W., Cirac, J. I. and Zoller, P. Quantum repeaters: the role of imperfect local operations in quantum communication. *Phys. Rev. Lett.* 81, 5932-5935 (1998).
Dur, W., Briegel, H. J., Cirac, J. I. and Zoller, P. Quantum repeaters based on entanglement purification. *Phys. Rev. A*, 59, 169-181 (1999).
Dehaene, J. et al. Local permutations of products of Bell states and entanglement distillation, 67, 022310 (2003).
Dur, W. and Briegel, H. J. Entanglement purification and quantum error correction, *Rep. Prog. Phys*. 70 1381 (2007).
Munro, W. J., et al. Inside quantum repeaters. *IEEE Journal of Selected Topics in Quantum Electronics* 78-90 (2015).
Duan, L. M., Lukin, M. D., Cirac, J. I. and Zoller, P. Long-distance quantum communication with atomic ensembles and linear optics. *Nature*, 414, 413-418 (2001).
Van Loock, P., Ladd, T. D., Sanaka, K., Yamaguchi, F., Nemoto, K., Munro, W. J. and Yamamoto, Y. Hybrid quantum repeater using bright coherent light. *Phys. Rev. Lett.*, 96, 240501 (2006).
Zhao, B., Chen, Z. B., Chen, Y. A., Schmiedmayer, J. and Pan, J. W. Robust creation of entanglement between remote memory qubits. *Phys. Rev. Lett.* 98, 240502 (2007).
Goebel, A. M., Wagenknecht, G., Zhang, Q., Chen, Y., Chen, K., Schmiedmayer, J. and Pan, J. W. Multistage Entanglement Swapping. *Phys. Rev. Lett.* 101, 080403 (2008).
Simon, C., de Riedmatten, H., Afzelius, M., Sangouard, N., Zbinden, H. and Gisin N. Quantum Repeaters with Photon Pair Sources and Multimode Memories. *Phys. Rev. Lett*. 98, 190503 (2007).
Tittel, W., Afzelius, M., Chaneliere, T., Cone, R. L., Kroll, S., Moiseev, S. A. and Sellars, M. Photon-echo quantum memory in solid state systems. *Laser Photon. Rev.* 4, 244-267 (2009).
Sangouard, N., Dubessy, R. and Simon, C. Quantum repeaters based on single trapped ions. *Phys. Rev. A*, 79, 042340 (2009).
Sheng, Y. B., Zhou, L. Distributed secure quantum machine learning. *Science Bulletin*, 62, 1025-1019 (2017).
Leung, D., Oppenheim, J. and Winter, A. *IEEE Trans. Inf. Theory* 56, 3478-90. (2010).
Kobayashi, H., Le Gall, F., Nishimura, H. and Rotteler, M. Perfect quantum network communication protocol based on classical network coding, *Proceedings of 2010 IEEE International Symposium on Information Theory* (ISIT) pp 2686-90. (2010).
Bianco, A., Giaccone, P., Leonardi, E., Mellia, M. and Neri, F. Theoretical Performance of Input-queued Switches using Lyapunov Methodology, in: Elhanany, I., and Hamdi, M. (eds.) *High-performance Packet Switching Architectures*, Springer (2007).
Leonardi, E., Mellia, M., Neri, F., Ajmone Marsan, M. On the stability of Input-Queued Switches with Speed-up. *IEEE/ACM Transactions on Networking* 9:104-118 (2001).
Leonardi, E., Mellia, M., Neri F., Ajmone Marsan, M. Bounds on delays and queue lengths in input-queued cell switches. *Journal of the ACM* 50:520-550 23 (2003).
Ajmone Marsan, M., Bianco, A., Giaccone, P., Leonardi, E. and Neri, F. Packet-mode scheduling in input-queued cell-based switches. *IEEE/ACM Transactions on Networking* 10:666-678 (2002).
Shah, D. and Kopikare, M. Delay Bounds for Approximate Maximum Weight Matching Algorithms for Input Queued Switches. *Proc. of IEEE INFOCOM 2002* (2002).
Mitzenmacher, N. and Upfal, E. *Probability and computing: Randomized algorithms and probabilistic analysis*. Cambridge University Press (2005).
Appendix
========
Notations
---------
The notations of the manuscript are summarized in [Table \[tab2\]]{}.
*Notation* *Description*
------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$N$ An entangled quantum network, $N=\left(V,E\right)$, where $V$ is a set of nodes, $E$ is a set of entangled connections.
$A$ A source user (quantum node) in the quantum network.
$B$ A destination user (quantum node).
$R_{j} $ A current $j$-th quantum repeater, $j=1,\ldots ,q$, where $q$ is the total number of quantum repeaters.
$R_{i} $                                                                                                A previous neighbor of $R_{j} $.
$R_{k} $                                                                                                A next neighbor of $R_{j} $ (towards destination).
$l$ Level of entanglement.
${\rm L}_{l} \left(x,y\right)$ An $l$-level entangled connection between quantum nodes $x$ and $y$.
$d\left(x,y\right)_{{\rm L}_{l} } $ Hop-distance at an ${\rm L}_{l} $-level entangled connection between quantum nodes $x$ and $y$, $d\left(x,y\right)_{{\rm L}_{l} } =2^{l-1} $.
$O_{C} $ An oscillator with frequency $f_{C} $, $f_{C} ={1\mathord{\left/ {\vphantom {1 t_{C} }} \right. \kern-\nulldelimiterspace} t_{C} } $, serves as a reference clock.
$C$ A cycle, with $t_{C} ={1\mathord{\left/ {\vphantom {1 f_{C} }} \right. \kern-\nulldelimiterspace} f_{C} } $.
$\pi _{S} $ An entanglement swapping period, in which the set ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ of density matrices are swapped via the $U_{S} $ entanglement swapping operator with the ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ of density matrices, defined as $\pi _{S} =xt_{C} $, where $x$ is the number of $C$.
$\pi '_{S} $ A next entanglement swapping period after $\pi _{S} $.
$q$ Total number of quantum repeaters in an entangled path ${\rm {\mathcal P}}\left(A_{i} \to B_{i} \right)$, $q=d\left(A,B\right)_{{\rm L}_{l} } -1$.
$B_{F} $ Entanglement throughput \[Bell states per $\pi _{S} $\].
$\left|B_{F} \right|$ Number of entangled states \[Number of Bell states\].
${\rm L}_{l} \left(k\right)$ A $k$-th entangled connection.
$B_{F} \left({\rm L}_{l} \left(k\right)\right)$ Entanglement throughput of the entangled connection ${\rm L}_{l} \left(k\right)$ \[Bell states per $\pi _{S} $\].
$\rho $ In a $j$-th quantum repeater $R_{j} $, an $\rho $ incoming density matrix is a half of an entangled state ${\left| \beta _{00} \right\rangle} $ received from a previous neighbor node $R_{j-1} $.
$\sigma $ The $\sigma $ outgoing density matrix in $R_{j} $ is a half of an entangled state ${\left| \beta _{00} \right\rangle} $ shared with a next neighbor node $R_{j+1} $.
$U_{S} $ Entanglement swapping operation is a local transformation that swaps an incoming density matrix $\rho $ with an outgoing density matrix $\sigma $ in a quantum repeater $R$.
${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ Set of incoming density matrices stored in the quantum memory of $R_{j} $, ${\rm {\mathcal S}}_{I} \left(R_{j} \right)=\bigcup _{i}\rho _{i} $, where $\rho _{i} $ is an $i$-th density matrix.
${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ Set of outgoing density matrices stored in the quantum memory of $R_{j} $, ${\rm {\mathcal S}}_{O} \left(R_{j} \right)=\bigcup _{i}\sigma _{i} $, where $\sigma _{i} $ is an $i$-th density matrix.
${\rm {\mathcal S}}_{I}^{*} \left(R_{j} \right)$ A complete set of incoming density matrices. Set ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ formulates a ${\rm {\mathcal S}}_{I}^{*} \left(R_{j} \right)$ complete set if ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ contains all the $Q=\sum _{i=1}^{N}\left|B_{i} \right| $ incoming density matrices per $\pi _{S} $ that is received by $R_{j} $ in a swapping period, where $N$ is the number of input entangled connections of $R_{j} $, $\left|B_{i} \right|$ is the number of incoming densities of the $i$-th input connection per $\pi _{S} $, thus ${\rm {\mathcal S}}_{I} \left(R_{j} \right)=\bigcup _{i=1}^{Q}\rho _{i} $ and $\left|{\rm {\mathcal S}}_{I} \left(R_{j} \right)\right|=Q$.
${\rm {\mathcal S}}_{O}^{*} \left(R_{j} \right)$ A complete set of outgoing density matrices. An ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ set formulates a ${\rm {\mathcal S}}_{O}^{*} \left(R_{j} \right)$ complete set, if ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ contains all the $N$ outgoing density matrices that is shared by $R_{j} $ during a swapping period $\pi _{S} $, thus ${\rm {\mathcal S}}_{O} \left(R_{j} \right)=\bigcup _{i=1}^{N}\sigma _{i} $ and $\left|{\rm {\mathcal S}}_{O} \left(R_{j} \right)\right|=N$.
${\rm {\mathcal S}}\left(R_{j} \right)$ An entanglement swapping set of $R_{j} $, ${\rm {\mathcal S}}\left(R_{j} \right)={\rm {\mathcal S}}_{I} \left(R_{j} \right)\bigcup {\rm {\mathcal S}}_{O} \left(R_{j} \right) $ that describes the status of the quantum memory in $R_{j} $.
${\rm {\mathcal S}}^{*} \left(R_{j} \right)$ A complete entanglement swapping set. A ${\rm {\mathcal S}}\left(R_{j} \right)$ is a ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$ complete swapping set, if ${\rm {\mathcal S}}^{*} \left(R_{j} \right)={\rm {\mathcal S}}_{I}^{*} \left(R_{j} \right)\bigcup {\rm {\mathcal S}}_{O}^{*} \left(R_{j} \right) $, with cardinality $\left|{\rm {\mathcal S}}^{*} \left(R_{j} \right)\right|=Q+N$.
$\hat{{\rm {\mathcal S}}}\left(R_{j} \right)$                                                           A perfect entanglement swapping set. A ${\rm {\mathcal S}}^{*} \left(R_{j} \right)$ complete swapping set is a $\hat{{\rm {\mathcal S}}}\left(R_{j} \right)=\hat{{\rm {\mathcal S}}}_{I} \left(R_{j} \right)\bigcup \hat{{\rm {\mathcal S}}}_{O} \left(R_{j} \right) $ perfect swapping set at a given $\pi _{S} $, if $\left|\hat{{\rm {\mathcal S}}}\left(R_{j} \right)\right|=N+N$.
${\rm {\mathcal S}}_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)$ A coincidence set, a subset of incoming density matrices in ${\rm {\mathcal S}}_{I} \left(R_{j} \right)$ of $R_{j} $ received from $R_{i} $ that requires the same outgoing density matrix $\sigma _{k} $ from ${\rm {\mathcal S}}_{O} \left(R_{j} \right)$ for the entanglement swapping.
$Z_{R_{j} }^{\left(\pi '_{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)$ Cardinality of the coincidence set ${\rm {\mathcal S}}_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)$ \[Number of Bell states\].
$\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|$ Number of density matrices arrive from $R_{i} $ to $R_{j} $ for swapping with $\sigma _{k} $ at $\pi _{S} $. It increments the cardinality of the coincidence set as $Z_{R_{j} }^{\left(\pi '_{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)=Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)+\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|$, where $\pi '_{S} $ is a next entanglement swapping period \[Number of Bell states\].
$\left|B_{R_{j} } \left(\pi _{S} \right)\right|$                                                        Incoming entanglement rate of $R_{j} $ per a $\pi _{S} $, defined as $\left|B_{R_{j} } \left(\pi _{S} \right)\right|=\sum _{i,k}\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right| $, where $\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|$ refers to the number of density matrices that arrive from $R_{i} $ for swapping with $\sigma _{k} $ per $\pi _{S} $ \[Bell states per $\pi _{S} $\].
$D\left(\pi _{S} \right)$ Delay, measured in entanglement swapping periods $\pi _{S} $ \[Number of $\pi _{S} $ periods\].
$\left|B'_{R_{j} } \left(\pi _{S} \right)\right|$ Outgoing entanglement rate of $R_{j} $, defined as $\left|B'_{R_{j} } \left(\pi _{S} \right)\right|=\left(1-{\tfrac{L}{N}} \right){\tfrac{1}{1+D\left(\pi _{S} \right)}} \left(\left|B_{R_{j} } \left(\pi _{S} \right)\right|\right)$, where $L$ is the loss, $0<L\le N$ \[Bell states per $\pi _{S} $\].
$\zeta \left(\pi _{S} \right)$ Entanglement swapping procedure at a given $\pi _{S} $.
${\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)$ Set of incoming densities of $R_{j} $ at $\pi _{S} $.
$\left|{\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)\right|$ Cardinality of the set ${\rm {\mathcal S}}_{I}^{\left(\pi _{S} \right)} \left(R_{j} \right)$ \[Number of Bell states\].
$\gamma $ Noise coefficient, models the noise of the local quantum memory and the local operations, $0\le \gamma \le 1$.
$N$ Number of coincidence sets of $R_{j} $, and number of outgoing connections of $R_{j} $.
$L$ Number of losses, $0\le L\le N$.
$M$ Reduced number of swapped incoming and outgoing density matrices per $\pi _{S} $ at $L$ losses, $M=N-L$.
$\gamma \left(\pi _{S} \right)$ Noise at a given $\pi _{S} $.
$Z_{R_{j} } \left(\pi _{S} \right)$ A matrix of all coincidence set cardinalities for all input and output connections at $\pi _{S} $, defined as $Z_{R_{j} } \left(\pi _{S} \right)=Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)_{i\le N,k\le N} $.
$\omega \left(\pi _{S} \right)$ Weight coefficient, for a given entanglement swapping $\zeta \left(\pi _{S} \right)$ at a given $\pi _{S} $, defined as $\omega \left( {{\pi }_{S}} \right)=\sum\limits_{i,k}{{{\zeta }_{ik}}\left( {{\rho }_{A}},{{\sigma }_{k}} \right)Z_{{{R}_{j}}}^{\left( {{\pi }_{S}} \right)}\left( \left( {{R}_{i}},{{\sigma }_{k}} \right) \right)}=\left\langle \zeta \left( {{\pi }_{S}} \right),{{Z}_{{{R}_{j}}}}\left( {{\pi }_{S}} \right) \right\rangle ,$ where $\left\langle \cdot \right\rangle $ is the inner product.
$\omega ^{*} \left(\pi _{S} \right)$ Maximized weight coefficient.
$\zeta ^{*} \left(\pi _{S} \right)$ Optimal entanglement swapping method at $\gamma \left(\pi _{S} \right)=0$.
$\left|\chi \left(\pi _{S} \right)\right|$ Norm, defined for an entanglement swapping $\chi \left(\pi _{S} \right)$.
${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)$ Lyapunov function of $Z_{R_{j} } \left(\pi _{S} \right)$, as ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)=\sum _{i,k}\left(Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)\right)^{2} $.
$C_{1} $, $C_{2} $ Constants, $C_{1} >0$, $C_{2} >0$.
$\Delta _{{\rm {\mathcal L}}} $ Difference of Lyapunov functions ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)$ and ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)$, where $\pi '_{S} $ is a next entanglement swapping period, defined as $\Delta _{{\rm {\mathcal L}}} ={\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi '_{S} \right)\right)-{\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)$.
$\left|\bar{B}\left(R_{i} \left(\pi '_{S} \right),\sigma _{k} \right)\right|$ A normalized number of arriving density matrices, $\left|\bar{B}\left(R_{i} \left(\pi '_{S} \right),\sigma _{k} \right)\right|\le 1,$ from $R_{i} $ for swapping with $\sigma _{k} $ at a next entanglement swapping period $\pi '_{S} $, defined as $\left|\bar{B}\left(R_{i} \left(\pi '_{S} \right),\sigma _{k} \right)\right|={\tfrac{\left|B\left(R_{i} \left(\pi '_{S} \right),\sigma _{k} \right)\right|}{\left|B_{R_{j} } \left(\pi _{S} \right)\right|}} $, where $\left|B_{R_{j} } \left(\pi _{S} \right)\right|=\sum _{i,k}\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right| $ is the total number of incoming density matrices of $R_{j} $ from the $N$ quantum repeaters.
${\mathbb{E}}\left(\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|\right)$ Expected normalized number of density matrices arriving from $R_{i} $ for swapping with $\sigma _{k} $ at $\pi _{S} $.
$\alpha _{ik} $ Parameter, defined as $\alpha _{ik} =Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right)\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)$.
$\nu _{z} $ A constant, $\nu _{z} \ge 0$.
$C_{1} $ Constant, $C_{1} =1-\sum _{z}\nu _{z} $.
$\left|Z_{R_{j} } \left(\pi _{S} \right)\right|$ Cardinality of the coincidence sets at a given $\pi _{S} $, as $\left|Z_{R_{j} } \left(\pi _{S} \right)\right|=\sum _{i,k}Z_{R_{j} }^{\left(\pi _{S} \right)} \left(\left(R_{i} ,\sigma _{k} \right)\right) =\left|{\rm {\mathcal S}}_{I} \left(R_{j} \right)\right|$.
$\left|B_{R_{j} } \left(\pi _{S} \right)\right|$ Total number of incoming density matrices in $R_{j} $ per a given $\pi _{S} $, as $\left|B_{R_{j} } \left(\pi _{S} \right)\right|=\sum _{i,k}\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right| $.
$\tilde{\pi }_{S} $ An extended entanglement swapping period, defined as $\tilde{\pi }_{S} =\pi _{S} +D\left(\pi _{S} \right)$, with $\pi _{S} / \tilde{\pi }_{S} \le 1$ \[Number of $\pi _{S} $ periods\].
$B'_{R_{j} } \left(\pi _{S} \right)$ Outgoing entanglement rate per $\pi _{S} $ for a particular entanglement swapping set \[Bell states per $\pi _{S} $\].
$f\left(\cdot \right)$ Sub-linear function.
$\xi \left(\gamma \right)$ Parameter, defined as $\xi \left(\gamma \right)={\tfrac{\left(N-L\right)}{2C_{1} }} f\left(\gamma \left(\pi _{S} \right)\right)$.
$\beta $ Parameter, defined as $\beta =\sum _{i,k}\left(\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|-\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|^{2} \right) $, where $\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|$ refers to the normalized number of density matrices arriving from $R_{i} $ for swapping with $\sigma _{k} $ at $\pi _{S} $ as $\left|\bar{B}\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|={\tfrac{\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right|}{\sum _{i}\left|B\left(R_{i} \left(\pi _{S} \right),\sigma _{k} \right)\right| }} $.
$P$ Number of entanglement swapping periods.
$D\left(\pi _{S} \right)$ Delay per $\pi _{S} $ at a non-complete entanglement swapping set.
$D^{*} \left(\pi _{S} \right)$ Delay per $\pi _{S} $ at a complete entanglement swapping set.
$\hat{D}\left(\pi _{S} \right)$ Delay per $\pi _{S} $ at a perfect entanglement swapping set.
$\left|Z_{R_{j} } \left(\pi _{S} \right)\right|$ Cardinality of the coincidence sets at a given $\pi _{S} $, for a non-complete entanglement swapping set.
$\left|Z_{R_{j} }^{*} \left(\pi _{S} \right)\right|$ Cardinality of the coincidence sets at a given $\pi _{S} $, for a complete entanglement swapping set.
$\left|\hat{Z}_{R_{j} } \left(\pi _{S} \right)\right|$ Cardinality of the coincidence sets at a given $\pi _{S} $, for a perfect entanglement swapping set.
$\pi _{S}^{*} $ An extended entanglement swapping period, defined as $\pi _{S}^{*} =\left(1+h\right)\pi _{S} $, where $h>0$ \[Number of $\pi _{S} $ periods\].
: Summary of notations.[]{data-label="tab2"}
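For concreteness, the short sketch below evaluates a few of the quantities defined in the table on made-up numbers: the incoming and outgoing entanglement rates, the weight coefficient $\omega \left(\pi _{S} \right)=\left\langle \zeta \left(\pi _{S} \right),Z_{R_{j} } \left(\pi _{S} \right)\right\rangle $ and the Lyapunov function ${\rm {\mathcal L}}\left(Z_{R_{j} } \left(\pi _{S} \right)\right)$. All concrete values ($N$, $L$, $D\left(\pi _{S} \right)$ and the matrices `Z` and `zeta`) are illustrative assumptions, not data from the text.

```python
import numpy as np

# All concrete numbers below are illustrative assumptions, not values from the text.
N, L, D = 4, 1, 2                      # connections, losses, delay D(pi_S)

# Z[i, k]: cardinality of the coincidence set for input R_i and output density sigma_k
Z = np.array([[3, 0, 1, 0],
              [0, 2, 0, 1],
              [1, 0, 4, 0],
              [0, 1, 0, 2]])
zeta = np.eye(N, dtype=int)            # zeta[i, k] = 1 if the swapping serves (R_i, sigma_k)

B_in = Z.sum()                                     # incoming rate |B_{R_j}(pi_S)|
B_out = (1 - L / N) * (1 / (1 + D)) * B_in         # outgoing rate |B'_{R_j}(pi_S)|
omega = int((zeta * Z).sum())                      # weight coefficient <zeta, Z>
lyapunov = int((Z ** 2).sum())                     # Lyapunov function L(Z)
print(B_in, B_out, omega, lyapunov)
```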
[^1]: School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, U.K., and Department of Networked Systems and Services, Budapest University of Technology and Economics, 1117 Budapest, Hungary, and MTA-BME Information Systems Research Group, Hungarian Academy of Sciences, 1051 Budapest, Hungary.
[^2]: Department of Networked Systems and Services, Budapest University of Technology and Economics, 1117 Budapest, Hungary.
---
author:
- 'Mrinal Kumar[^1]'
- 'Shubhangi Saraf[^2]'
bibliography:
- 'refs.bib'
title: Superpolynomial lower bounds for general homogeneous depth 4 arithmetic circuits
---
Introduction
============
Proving lower bounds for explicit polynomials is one of the most important open problems in the area of algebraic complexity theory. Valiant [@Valiant79] defined the classes $\VP$ and $\VNP$ as the algebraic analog of the classes $\P$ and $\NP$, and showed that proving superpolynomial lower bounds for the Permanent would suffice in separating $\VP$ from $\VNP$. Despite the amount of attention received by the problem, we still do not know any superpolynomial (or even [*quadratic*]{}) lower bounds for general arithmetic circuits. This absence of progress on the general problem has led to a lot of attention on the problem of proving lower bounds for restricted classes of arithmetic circuits. The hope is that an understanding of restricted classes might lead to a better understanding of the nature of the more general problem, and the techniques developed in this process could possibly be adapted to understand general circuits better. Among the many restricted classes of arithmetic circuits that have been studied with this motivation, [*bounded depth*]{} circuits have received a lot of attention. In a striking result, Valiant et al [@VSBR83] showed that any $n$ variate polynomial of degree $\text{poly}(n)$ which can be computed by a polynomial sized arithmetic circuit of arbitrary depth can also be computed by an arithmetic circuit of depth $O(\log^2 n)$ and size poly$(n)$. Hence, proving superpolynomial lower bounds for circuits of depth $\log^2 n$ is as hard as proving lower bounds for general arithmetic circuits. In a series of recent works, Agrawal-Vinay [@AV08], Koiran [@koiran] and Tavenas [@Tavenas13] showed that the depth reduction techniques of Valiant et al [@VSBR83] can in fact be extended much further. They essentially showed that in order to prove superpolynomial lower bounds for general arithmetic circuits, it suffices to prove strong enough lower bounds for just [*homogeneous depth 4*]{} circuits. In particular, to separate $\VNP$ from $\VP$, it would suffice to focus our attention on proving strong enough lower bounds for homogeneous depth 4 circuits.
The first superpolynomial lower bounds for homogeneous circuits of depth 3 were proved by Nisan and Wigderson [@NW95]. Their main technical tool was the use of the [*dimension of partial derivatives*]{} of the underlying polynomials as a complexity measure. For many years thereafter, progress on the question of improved lower bounds stalled. In a recent breakthrough result on this problem, Gupta, Kamath, Kayal and Saptharishi [@GKKS12] proved the first superpolynomial ($2^{\Omega(\sqrt n)}$) lower bounds for homogeneous depth 4 circuits when the fan-in of the product gates at the bottom level is bounded (by $\sqrt n$). This result was all the more remarkable in light of the results by Koiran [@koiran] and Tavenas [@Tavenas13] which showed that $2^{\omega(\sqrt n\log n)}$ lower bounds for this model would suffice in separating $\VP$ from $\VNP$. The results of Gupta et al were further improved upon by Kayal, Saha and Saptharishi [@KSS13] who showed $2^{\Omega(\sqrt n\log n)}$ lower bounds for the model of homogeneous depth 4 circuits when the fan-in of the product gates at the bottom level is bounded (by $\sqrt n$). Thus even a slight asymptotic improvement in the exponent of either of these bounds would imply lower bounds for general arithmetic circuits!
The main tool used in both the papers [@GKKS12] and [@KSS13] was the notion of the dimension of [*shifted partial derivatives*]{} as a complexity measure, a refinement of the Nisan-Wigderson complexity measure of dimension of partial derivatives.
In spite of all this exciting progress on homogeneous depth 4 circuits with bounded bottom fanin (which suggests that possibly we might be within reach of lower bounds for much more general classes of circuits) these results give almost no nontrivial (not even superlinear) lower bounds for general homogeneous depth 4 circuits (with no bound on bottom fanin). Indeed the only lower bounds we know for general homogeneous depth 4 circuits are the slightly superlinear lower bounds by Raz using the notion of elusive functions [@Raz10b].
Thus nontrivial lower bounds for the class of general depth 4 homogeneous circuits seems like a natural and basic question left open by these works, and strong enough lower bounds for this model seems to be an important barrier to overcome before proving lower bounds for more general classes of circuits.
In this direction, building upon the work in [@GKKS12; @KSS13], Kumar and Saraf [@KS-depth4; @KS-formula] proved superpolynomial lower bounds for depth 4 circuits with unbounded bottom fan-in but [*bounded top fan-in*]{}. For the case of [*multilinear*]{} depth 4 circuits, superpolynomial lower bounds were first proved by Raz and Yehudayoff [@RY08b]. These lower bounds were recently improved in a paper by Fournier, Limaye, Malod and Srinivasan [@FLMS13]. The main technical tool in the work of Fournier et al was the use of the technique of [*random restrictions*]{} before using shifted partial derivatives as a complexity measure. By setting a large collection of variables at random to zero, all the product gates with high bottom fan-in got set to zero. Thus the resulting circuit had bounded bottom fanin and then known techniques of shifted partial derivatives could be applied. This idea of random restrictions crucially uses the multilinearity of the circuits, since in multilinear circuits high bottom fanin means [*many*]{} distinct variables feeding into a gate, and thus if a large collection of variables is set at random to zero, then with high probability that gate is also set to zero.
[**Our Results:** ]{} In this paper, we prove the first superpolynomial lower bounds for general homogeneous depth 4 circuits with no restriction on the fan-in, either top or bottom. The main ingredient in our proof is a new complexity measure of [*bounded support*]{} shifted partial derivatives. This measure allows us to prove exponential lower bounds for homogeneous depth 4 circuits where all the monomials computed at the bottom layer have only few variables (but possibly large degree/fan-in). This exponential lower bound combined with a careful “random restriction" procedure that allows us to transform general depth 4 homogeneous circuits to this form gives us our final result. We will now formally state our results.
Our main theorem is stated below.
\[thm:main\] There is an explicit family of homogeneous polynomials of degree $n$ in $n^2$ variables in $\VNP$ which requires homogeneous $\spsp$ circuits of size $n^{\Omega(\log\log n)}$ to compute it.
We prove our lower bound for the family of Nisan-Wigderson polynomials $NW_d$ which is based upon the idea of Nisan-Wigderson designs. We give the formal definition in Section \[sec:prelims\].
As a first step in the proof of Theorem \[thm:main\], we prove an exponential lower bound on the top fan-in of any homogeneous $\spsp$ circuit where every product gate at the bottom level has at most $O(\log n)$ distinct variables feeding into it. Let homogeneous $\spsp^{\{s\}}$ circuits denote the class of homogeneous $\spsp$ circuits where every product gate at the bottom level has at most $s$ distinct variables feeding into it (i.e. has support at most $s$).
\[thm:main2\] There exists a constant $\beta > 0$, and an explicit family of homogeneous polynomials of degree $n$ in $n^2$ variables in $\VNP$ such that any homogeneous $\spsp^{\{\beta\log n\}}$ circuit computing it must have top fan-in at least $2^{\Omega(n)}$.
Observe that since homogeneous $\spsp^{\{s\}}$ circuits are a more general class of circuits than homogeneous $\spsp$ circuits with bottom fan-in at most $s$, our result strengthens the results of Gupta et al and Kayal et al [@GKKS12; @KSS13] when $s = O(\log n)$.
We prove Theorem \[thm:main\] by applying carefully chosen random restrictions to both the polynomial family and to any arbitrary homogeneous $\spsp$ circuit and showing that with high probability the circuit simplifies into a homogeneous $\spsp$ circuit with bounded bottom support while the polynomial (even after the restriction) is still rich enough for Theorem \[thm:main2\] to hold. Our results hold over every field.
[**Organization of the paper :**]{} The rest of the paper is organized as follows. In Section \[sec:overview\], we provide a high level overview of the proof. In Section \[sec:prelims\], we introduce some notations and preliminary notions used in the paper. In Section \[sec:small-support-lb\], we give a proof of Theorem \[thm:main2\]. In Section \[sec:rand-res\], we describe the random restriction procedure and analyze its effect on the circuit and the polynomial. In Section \[sec:lowerbounds\], we prove Theorem \[thm:main\]. We conclude with some open problems in Section \[sec:conclusion\].
Proof Overview
==============
\[sec:overview\] Our proof is divided into two parts. In the first part we show a $2^{\Omega(n)}$ lower bound for homogeneous $\spsp$ circuits whose [*bottom support*]{} is at most $O(\log n)$. To the best of our knowledge, even when the bottom support is $1$, none of the earlier lower bound techniques sufficed for showing nontrivial lower bounds for this model. Thus a new complexity measure was needed. We consider the measure of [*bounded support*]{} shifted partial derivatives, a refinement of the measure of shifted partial derivatives used in several recent works [@GKKS12; @KSS13; @KS-depth4; @KS-formula; @FLMS13]. For this measure, we show that the complexity of the $NW_d$ polynomial (an explicit polynomial in VNP) is [*high*]{} whereas any subexponential sized homogeneous depth 4 circuit with bounded bottom support has a much smaller complexity measure. Thus for any depth 4 circuit to compute the $NW_d$ polynomial, it must be large – we show that it must have exponential top fan-in. Thus we get an exponential lower bound for bounded bottom support homogeneous $\spsp$ circuits. We believe this result might be of independent interest.
In the second part we show how to “reduce" any $\spsp$ circuit that is not too large to a $\spsp$ circuit with bounded bottom support. This reduction basically follows from a random restriction procedure that sets some of the variables feeding into the circuit to zero. At the same time we ensure that when this random restriction procedure is applied to $NW_d$, the polynomial does not get affected very much, and still has large complexity.
We could have set variables to zero by picking the variables to set to zero independently at random. For instance consider the following process: Independently keep each variable alive (i.e. nonzero) with probability $1/n^\epsilon$. Then any monomial with $\Omega(\log n)$ distinct variables is set to the zero polynomial with probability at least $1 - 1/n^{\Omega(\log n)}$. Since any circuit of size $n^{o(\log n)}$ has only $n^{o(\log n)}$ monomials computed at the bottom layer, by the union bound each such monomial with $\Omega(\log n)$ distinct variables will be set to zero. Thus the resulting circuit will have bounded bottom support. The problem with this approach is that we do not know how to analyze the effect of this simple randomized procedure on $NW_d$. Thus we define a slightly more refined random restriction procedure which keeps the $NW_d$ polynomial hard and at the same time makes the $\spsp$ circuit one of bounded bottom support. We describe the details of this procedure in Section \[sec:rralgorithm\].
Preliminaries and Notations
===========================
\[sec:prelims\] [**Arithmetic Circuits:** ]{} An arithmetic circuit over a field $\F$ and a set of variables $x_1, x_2, \ldots, x_{N}$ is a directed acyclic graph whose internal nodes are labelled by the field operations and the leaf nodes are labelled by the variables or field elements. The nodes with fan-out zero are called the output gates and the nodes with fan-in zero are called the leaves. In this paper, we will always assume that there is a unique output gate in the circuit. The [*size*]{} of the circuit is the number of nodes in the underlying graph and the [*depth*]{} of the circuit is the length of the longest path from the root to a leaf. We will call a circuit [*homogeneous*]{} if the polynomial computed at every node is a homogeneous polynomial. By a $\spsp$ circuit or a depth 4 circuit, we mean a circuit of depth 4 in which the top layer and the third layer have only sum gates and the second layer and the bottom layer have only product gates. In this paper, we will confine ourselves to working with homogeneous depth 4 circuits. A homogeneous polynomial $P$ of degree $n$ in $N$ variables, which is computed by a homogeneous $\spsp$ circuit can be written as
$$\label{def:model}
P(x_1, x_2, \ldots, x_{N}) = \sum_{i=1}^{T}\prod_{j=1}^{d_i}{Q_{i,j}(x_1, x_2, \ldots, x_{N})}$$
Here, $T$ is the top fan-in of the circuit. Since the circuit is homogeneous, we know that for every $i \in \{1, 2, 3, \ldots, T\}$, $$\sum_{j = 1}^{d_i} \text{deg}(Q_{i,j}) = n$$ By the support of a monomial $\alpha$, we will refer to the set of variables which have a positive degree in $\alpha$. In this paper, we will also study the class of homogeneous $\spsp$ circuits such that for every $i, j$, every monomial in $Q_{i,j}$ has bounded support. We will now formally define this class.
[**Homogeneous $\Sigma\Pi\Sigma\Pi^{\{s\}}$ Circuits:**]{} A homogeneous $\spsp$ circuit in Equation \[def:model\], is said to be a $\Sigma\Pi\Sigma\Pi^{\{s\}}$ circuit if every product gate at the bottom level has support at most $s$. Observe that there is no restriction on the bottom fan-in except that implied by the restriction of homogeneity.
[**Shifted Partial Derivatives:** ]{} In this paper we will use a variant of the notion of [*shifted partial derivatives*]{} which was introduced in [@Kayal12] and has subsequently been the complexity measure used to prove lower bounds for various restricted classes of depth four circuits and formulas [@FLMS13; @GKKS12; @KSS13; @KS-depth4; @KS-formula]. For a field $\F$, an $N$ variate polynomial $P \in {{{\F}}}[x_1, \ldots, x_{N}]$ and a positive integer $r$, we denote by $\partial^{r} P$, the set of all partial derivatives of order equal to $r$ of $P$. For a polynomial $P$ and a monomial $\gamma$, we denote by ${\partial_{\gamma} (P)}$ the partial derivative of $P$ with respect to $\gamma$. We now reproduce the formal definition from [@GKKS12].
\[def:shiftedderivative\] For an $N$ variate polynomial $P \in {{\mathbb{F}}}[x_1, x_2, \ldots, x_{N}]$ and positive integers $r, \ell \geq 0$, the space of order-$r$ $\ell$-shifted partial derivatives of $P$ is defined as $$\begin{aligned}
\langle \partial^{r} P\rangle_{ \ell} \stackrel{def}{=} {\mathbb{F}}\mhyphen span\{\prod_{i\in [N]}{x_i}^{j_i}\cdot g : \sum_{i\in [N]}j_i = \ell, g \in \partial^{r} P\}\end{aligned}$$
In this paper, we introduce the variation of [*bounded support*]{} shifted partial derivatives as a complexity measure. The basic difference is that instead of shifting the partial derivatives by all monomials of degree $\ell$, we will shift the partial derivatives only by those monomials of degree $\ell$ which have support (the number of distinct variables which have non-zero degree in the monomial) exactly equal to $m$. We now formally define the notion.
\[def:restshiftedderivative\] For an $N$ variate polynomial $P \in {{\mathbb{F}}}[x_1, x_2, \ldots, x_{N}]$ and positive integers $r, \ell, m \geq 0$, the space of support-m degree-$\ell$ shifted partial derivatives of order-$r$ of $P$ is defined as $$\begin{aligned}
\langle \partial^{r} P\rangle_{(\ell, m)} \stackrel{def}{=} {\mathbb{F}}\mhyphen span\{\prod_{\substack{i\in S \\ S\subseteq [N] \\ |S| = m}}{x_i}^{j_i}\cdot g : \sum_{i\in S}j_i = \ell, j_i \geq 1, g \in \partial^{r} P\}\end{aligned}$$
The following property follows from the definition above.
\[subadditive\] For any two multivariate polynomials $P$ and $Q$ in $\F[x_1, x_2, \ldots, x_{N}]$ and any positive integers $r, \ell, m$, and scalars $\alpha$ and $\beta$ $$\dim(\langle \partial^{r} (\alpha P + \beta Q)\rangle_{(\ell, m)}) \leq \dim(\langle \partial^{r} P\rangle_{(\ell, m)}) + \dim(\langle \partial^{r} Q\rangle_{(\ell, m)})$$
In the rest of the paper, we will use the term $(m, \ell, r)$-shifted partial derivatives to refer to support-m degree-$\ell$ shifted partial derivatives of order-r of a polynomial. For any linear or affine space $V$ over a field $\F$, we will use $\dim(V)$ to represent the dimension of $V$ over $\F$. We will use the dimension of the space $\langle\partial^{r} P\rangle_{(\ell, m)}$ which we denote by $\dim(\langle\partial^{r} P\rangle_{(\ell, m)})$ as the measure of complexity of a polynomial.
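As an illustration of this measure, the following brute-force sketch computes $\dim(\langle\partial^{r} P\rangle_{(\ell, m)})$ for a toy polynomial: it lists all order-$r$ partial derivatives, multiplies them by every monomial of degree $\ell$ and support exactly $m$, and takes the rank of the resulting coefficient matrix. The helper name `shifted_partial_dim` and the example polynomial are our own choices; the computation is only feasible for very small instances and plays no role in the proofs.

```python
from itertools import combinations, combinations_with_replacement
from sympy import symbols, diff, Poly, Mul, zeros

def shifted_partial_dim(P, variables, r, ell, m):
    """Brute-force dim of the span of support-m, degree-ell shifts of the
    order-r partial derivatives of P; only feasible for tiny instances."""
    # All distinct, nonzero order-r partial derivatives of P.
    derivs = set()
    for multiset in combinations_with_replacement(variables, r):
        d = P
        for v in multiset:
            d = diff(d, v)
        if d != 0:
            derivs.add(d)
    # All shift monomials of degree ell with support exactly m (each chosen
    # variable gets degree >= 1, the remaining ell - m degree is distributed freely).
    shifts = []
    for support in combinations(variables, m):
        for extra in combinations_with_replacement(support, ell - m):
            shifts.append(Mul(*support) * Mul(*extra))
    # Write every shifted derivative in the monomial basis and take the rank.
    polys = [Poly(s * d, *variables) for d in derivs for s in shifts]
    monoms = sorted({mon for q in polys for mon in q.monoms()})
    index = {mon: i for i, mon in enumerate(monoms)}
    M = zeros(len(polys), len(monoms))
    for row, q in enumerate(polys):
        for mon, coeff in q.terms():
            M[row, index[mon]] = coeff
    return M.rank()

x = symbols('x1:5')                      # x1, ..., x4
P = x[0]*x[1]*x[2] + x[1]*x[2]*x[3]      # a toy homogeneous polynomial
print(shifted_partial_dim(P, x, r=1, ell=2, m=1))
```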
[**Nisan-Wigderson Polynomials:**]{} We will show our lower bounds for a family of polynomials in $\VNP$ which were used for the first time in the context of lower bounds in [@KSS13]. The construction is based upon the intuition that over any finite field, any two distinct low degree polynomials do not agree at too many points. For the rest of this paper, we will assume $n$ to be of the form $2^k$ for some positive integer $k$. Let $\F_n$ be a field of size $n$. For the set of $N = n^2$ variables $\{x_{i,j} : i, j \in [n]\} $ and $d < n$, we define the degree $n$ homogeneous polynomial $NW_{d}$ as
$$NW_d = \sum_{\substack{f(z) \in \F_n[z] \\
deg(f) \leq d-1}} \prod_{i \in [n]} x_{i,f(i)}$$
From the definition, we can observe the following properties of $NW_d$.
1. The number of monomials in $NW_d$ is exactly $n^d$.
2. Each of the monomials in $NW_d$ is multilinear.
3. Each monomial corresponds to evaluations of a univariate polynomial of degree at most $d-1$ at all points of $\F_n$. Thus, any two distinct monomials agree in at most $d-1$ variables in their support.
For any $S \subseteq [n]$ and each $f \in \F_n[z]$, we define the monomial $${m_f^S} = \prod_{i\in S} x_{i, f(i)}$$ and $${m_f} = \prod_{i\in[n]} x_{i, f(i)}$$ We also define the set ${\cal M}^S$ to represent the set $\{\prod_{i \in S}\prod_{j \in [n]} x_{i,j}\}$. Clearly, $$NW_d = \sum_{\substack{f(z) \in \F_n[z] \\
deg(f) \leq d-1}} m_f$$
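The sketch below enumerates the monomials of $NW_d$ and checks the design property that any two distinct monomials agree in at most $d-1$ variables. For simplicity it takes $n$ prime, so that $\F_n$ is just the integers modulo $n$; the paper works with $n = 2^k$, but the property only uses that $\F_n$ is a field.

```python
from itertools import product

def nw_monomials(n, d):
    """Monomials of NW_d over F_n for n prime (the paper takes n = 2^k; a prime n
    keeps the field arithmetic trivial).  Each monomial is returned as a frozenset
    of variable indices (i, f(i))."""
    monomials = []
    for coeffs in product(range(n), repeat=d):                    # f = a_0 + a_1 z + ...
        f = lambda z: sum(a * pow(z, j, n) for j, a in enumerate(coeffs)) % n
        monomials.append(frozenset((i, f(i)) for i in range(n)))
    return monomials

mons = nw_monomials(n=5, d=2)
print(len(mons))                                                  # n^d = 25 monomials
overlaps = [len(a & b) for a in mons for b in mons if a != b]
print(max(overlaps))                                              # any two agree in <= d-1 = 1 place
```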
[**Monomial Ordering and Distance:** ]{} We will also use the notion of a monomial being an extension of another as defined below.
A monomial $\theta$ is said to be an extension of a monomial $\tilde{\theta}$, if $\tilde{\theta}$ divides $\theta$.
In this paper, we will imagine our variables to be coming from a $n\times n$ matrix $\{x_{i,j}\}_{i, j \in [n]}$. We will also consider the following total order on the variables. $x_{i_1, j_1} > x_{i_2, j_2}$ if either $i_1 < i_2$ or $i_1 = i_2$ and $j_1 < j_2$. This total order induces a lexicographic order on the monomials. For a polynomial $P$, we will use the notation $\lm(P)$ to indicate the leading monomial of $P$ under this monomial ordering.
We will use the following notion of distance between two monomials which was also used in [@CM13].
Let $m_1$ and $m_2$ be two monomials over a set of variables. Let $S_1$ and $S_2$ be the multisets of variables in $m_1$ and $m_2$ respectively, then the distance $\Delta(m_1, m_2)$ between $m_1$ and $m_2$ is the min$\{|S_1| - |S_1\cap S_2|, |S_2| - |S_1\cap S_2|\}$ where the cardinalities are the sizes of the multisets.
In this paper, we will invoke this definition only for multilinear monomials of the same degree. In this special case, we have the following crucial observation.
\[obs:multilinear-dist\] Let $\alpha$ and $\beta$ be two multilinear monomials of the same degree which are at a distance $\Delta$ from each other. If $\text{Supp}(\alpha)$ and $\text{Supp}(\beta)$ are the supports of $\alpha$ and $\beta$ respectively, then $$|\text{Supp}(\alpha)| - |\text{Supp}(\alpha)\cap \text{Supp}(\beta)| = |\text{Supp}(\beta)| - |\text{Supp}(\alpha)\cap \text{Supp}(\beta)| = \Delta$$
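A direct way to compute $\Delta(m_1, m_2)$ is via multiset intersections; the short helper below is our own illustration of the definition.

```python
from collections import Counter

def monomial_distance(m1, m2):
    """Distance Delta(m1, m2) between two monomials given as multisets of variables."""
    s1, s2 = Counter(m1), Counter(m2)
    common = sum((s1 & s2).values())          # size of the multiset intersection
    return min(sum(s1.values()) - common, sum(s2.values()) - common)

# Two multilinear monomials of the same degree at distance 2:
print(monomial_distance(['x1', 'x2', 'x3'], ['x1', 'x4', 'x5']))   # 2
```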
[**Approximations:** ]{} We will repeatedly refer to the following lemma to approximate expressions during our calculations.
\[lem:approx\] Let $a(n), f(n), g(n) : \Z_{>0}\rightarrow \Z_{>0}$ be integer valued functions such that $(f+g) = o(a)$. Then, $$\log \frac{(a+f)!}{(a-g)!} = (f+g)\log a \pm O\left( \frac{(f+g)^2}{a}\right)$$
In our setup, very often $(f+g)^2$ will be $\theta(a)$. In this case, the error term will be an absolute constant. Hence, up to multiplication by constants, $\frac{(a+f)!}{(a-g)!} = a^{(f+g)}$.
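A quick numerical sanity check of Lemma \[lem:approx\] (using natural logarithms and the exact value of $\log\frac{(a+f)!}{(a-g)!}$ via `lgamma`; the concrete values of $a, f, g$ below are arbitrary choices of ours):

```python
from math import lgamma, log

def log_factorial_ratio(a, f, g):
    """Exact natural log of (a+f)! / (a-g)! via lgamma."""
    return lgamma(a + f + 1) - lgamma(a - g + 1)

a, f, g = 10 ** 6, 1000, 0
exact = log_factorial_ratio(a, f, g)
approx = (f + g) * log(a)
print(exact - approx)      # about 0.5 here, consistent with the O((f+g)^2 / a) error term
```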
We will also use the following basic fact in our proof.
\[fact:numsolutions\] The number of [*positive*]{} integral solutions of the equation $$\sum_{i = 1}^t y_i = k$$ equals ${k-1 \choose t-1}$.
As a last piece of notation, for any $i\times j$ matrix $H$ over $\F_2$ and a vector $\alpha \in \F^{i}_2$, we denote by $H||\alpha$ to be the $i\times (j+1)$ matrix which when restricted to the first $j$ columns is equal to $H$ and whose last column is $\alpha$. Similarly, for any vector $\alpha \in \F^i_2$ and any $b\in\F_2$, $\alpha||b$ is the $i+1$ dimensional vector where $b$ is appended to $\alpha$.
Lower bounds for $\spsp^{\{O(\log n)\}}$ circuits
=================================================
\[sec:small-support-lb\] In this section, we will prove Theorem \[thm:main2\]. We will prove an exponential lower bound on the top fan-in for homogeneous $\spsp$ circuits such that every product gate at the bottom has a bounded number of variables feeding into it. We will use the dimension of the span of $(m, \ell, r)$-shifted partial derivatives as the complexity measure. We will prove our lower bound for the $NW_d$ polynomial. The proof will be in two parts. In the first part, we will prove an upper bound on the complexity of the circuit. Then, we will prove a lower bound on the complexity of the $NW_d$ polynomial. Comparing the two will then imply our lower bound. The bound holds for $NW_d$ for any $d = \delta n$, where $\delta$ is a constant such that $0 < \delta < 1$.
Complexity of homogeneous depth 4 $\spsp^{\{s\}}$ circuits
----------------------------------------------------------
Let $C$ be a homogeneous $\spsp^{\{s\}}$ circuit computing the $NW_d$ polynomial. We will now prove an upper bound on the complexity of a product gate in such a circuit. The bound on the complexity of the circuit follows from the subadditivity of the complexity measure.
\[lem:product-gate-bound1\] Let $Q = \prod_{i = 1}^n Q_i$ be a product gate at the second layer from the top in a homogeneous $\spsp^{\{s\}}$ circuit computing a homogeneous degree $n$ polynomial in $N$ variables. For any positive integers $m, r, s, \ell$ satisfying $m+rs \leq \frac{N}{2}$ and $m+rs \leq \frac{\ell}{2}$,
$$\dim(\langle\partial^{r} Q\rangle_{(\ell, m)}) \leq \text{poly}(nrs){n+r \choose r}{N \choose m+rs}{\ell+n-r \choose m+rs}$$
By an application of the chain rule, any partial derivative of order $r$ of $Q$ is a linear combination of a number of product terms. Each of these product terms is of the form $\prod_{i \in S}\partial_{\gamma_i}(Q_i)\prod_{j\in [n]\setminus S}Q_j$, where $S$ is a subset of $\{1, 2, \ldots, n\}$ of size at most $r$ and $\gamma_i$ are monomials such that $\sum_{i \in S}\text{deg}(\gamma_i) = r$. Also, observe that $\prod_{i \in S}\partial_{\gamma_i}(Q_i)$ is of degree at most $n-r$. In this particular special case every monomial in each $Q_i$ has support at most $s$, so every monomial in $\prod_{i \in S}\partial_{\gamma_i}(Q_i)$ has support at most $rs$. Shifting these derivatives is the same as multiplying them with monomials of degree $\ell$ and support equal to $m$. So, any $(m, \ell, r)$-shifted partial derivative of $Q$ can be expressed as a sum of terms, each the product of $\prod_{j\in [n]\setminus S}Q_j$ for some $S\subseteq [n]$ of size at most $r$ and a monomial of support between $m$ and $m+rs$ and degree between $\ell$ and $\ell+n-r$.
We can choose the set $S$ in ${n+r \choose r}$ ways. The second part in each term is a monomial of degree between $\ell$ and $\ell+n-r$ and support between $m$ and $m+rs$. The number of monomials over $N$ variables of support between $m$ and $m+rs$ and degree between $\ell$ and $\ell+n-r$ equals $$\sum_{i = 0}^{n-r} {\sum_{j = 0}^{rs} {N \choose m+j}{\ell+i -1\choose m+j-1}}$$ Now, in the range of choice of our parameters $m, r, s, \ell$, the binomial coefficients increase monotonically with $i$ and $j$. Hence, we can upper bound the dimension by $\text{poly}(nrs){n+r \choose r}{N \choose m+rs}{\ell+n-r-1 \choose m+rs-1}$.
For a homogeneous $\spsp$ circuit where each of the bottom level product gates is of support at most $s$, Lemma \[lem:product-gate-bound1\] immediately implies the following upper bound on the complexity of the circuit due to subadditivity from Lemma \[subadditive\].
\[cor:circuit-complexity-bound\] Let $C = \sum_{j = 1}^{T}\prod_{i = 1}^n Q_{i,j}$ be a homogeneous $\spsp^{\{s\}}$ circuit computing a homogeneous degree $n$ polynomial in $N$ variables. For any $m, r, s, \ell$ satisfying $m+rs \leq \frac{N}{2}$ and $m+rs \leq \frac{\ell}{2}$, $$\dim(\langle\partial^{r} C\rangle_{(\ell, m)}) \leq T\times \text{poly}(nrs){n+r \choose r}{N \choose m+rs}{\ell+n-r-1 \choose m+rs-1}$$
Lower bound on the complexity of the $NW_d$ polynomial
------------------------------------------------------
We will now prove a lower bound on the complexity of the $NW_d$ polynomial. For this, we will first observe that distinct partial derivatives of the $NW_d$ polynomial are [*far*]{} from each other in some sense and then show that shifting such partial derivatives gives us a lot of distinct shifted partial derivatives. Recall that we defined the set ${\cal M}^S$ to represent the set $\{\prod_{i \in S}\prod_{j \in [n]} x_{i,j}\}$. We start with the following observation.
\[lem:many-partial-derivatives\] For any positive integer $r$ such that $n-r > d$ and $r < d-1$, the set $\{\partial_{\alpha}(NW_d) : \alpha \in {\cal M}^{[r]}\}$ consists of $|{\cal M}^{[r]}| = n^r$ nonzero distinct polynomials.
We need to show the following two statements.
- $\forall \alpha \in {\cal M}^{[r]}$, $\partial_{\alpha}(NW_d)$ is a non zero polynomial.
- $\forall \alpha \neq \beta \in {\cal M}^{[r]}$, $\partial_{\alpha}(NW_d) \neq \partial_{\beta}(NW_d)$.
For the first item, observe that, since $r < d-1$, for every $\alpha \in {\cal M}^{[r]}$, there is a polynomial $f$ of degree at most $d-1$ in $\F_n[z]$ such that $\alpha = \prod_{i = 1}^r x_{i, f(i)}$. So, $\partial_{\alpha}(m_f) \neq 0$ since $m_f$ is an extension of $\alpha$, in fact, there are many such extensions. Also, observe for any two extensions $m_f$ and $m_g$, $\partial_{\alpha}(m_f)$ and $\partial_{\alpha}(m_g)$ are multilinear monomials at a distance at least $n-r-d > 0$ from each other. Hence, $\partial_{\alpha}(NW_d) = \sum_{g} \partial_{\alpha}(m_g)$ is a non zero polynomial, where the sum is over all $g \in \F_n[z]$ of degree $\leq d-1$ such that $m_g$ is an extension of $\alpha$.
For the second item, let us now consider the leading monomials of $\partial_{\alpha}(NW_d)$ and $\partial_{\beta}(NW_d)$. These leading monomials each come from some distinct polynomials $f, g \in \F_n[z]$ of degree at most $d-1$. Also, since $\alpha \neq \beta$ and $n-r > d$, $\partial_{\alpha}(m_f) \neq \partial_{\beta}(m_g)$. In fact, $\partial_{\alpha}(NW_d) \text{ and } \partial_{\beta}(NW_d)$ do not have a common monomial. Therefore, $\partial_{\alpha}(NW_d) \neq \partial_{\beta}(NW_d)$.
\[rmk2\] Observe that there is nothing special about the set ${\cal M}^{[r]}$ and Lemma \[lem:many-partial-derivatives\] holds for ${\cal M}^{S}$ for any set $S$, such that $S \subseteq [n]$ and $|S| < d-1$.
In the proof above, we observed that for any $\alpha \neq \beta \in {\cal M}^{[r]}$, the leading monomials of $\partial_{\alpha}(NW_d)$ and $ \partial_{\beta}(NW_d)$ are multilinear monomials at a distance at least $n-r-d$ from each other. We will exploit this structure to show that shifting the polynomials in the set $\{\partial_{\alpha}(NW_d) : \alpha \in {\cal M}^{[r]}\}$ by monomials of support $m$ and degree $\ell$ results in many linearly independent shifted partial derivatives. We will first prove the following lemma.
\[clm:manyshifts1\] Let $\alpha$ and $\beta$ be two distinct multilinear monomials of equal degree such that the distance between them is $\Delta$. Let $S_{\alpha}$ and $S_{\beta}$ be the set of all monomials obtained by shifting $\alpha$ and $\beta$ respectively with monomials of degree $\ell$ and support exactly $m$ over $N$ variables. Then $|S_{\alpha}\cap S_{\beta}| \leq {N-\Delta \choose m-\Delta}{\ell-1 \choose m-1}$.
From the distance property, we know that there is a unique monomial $\gamma$ of degree $\Delta$ and support $\Delta$ such that $\alpha\gamma$ is the lowest degree extension of $\alpha$ which is divisible by $\beta$. Therefore, any extension of $\alpha$ which is also an extension of $\beta$ must have the support of $\alpha\gamma$ as a subset. In particular, for a shift of $\alpha$ to lie in $S_{\beta}$, $\alpha$ must be shifted by a monomial of degree $\ell$ and support $m$ which is an extension of $\gamma$. Hence, the freedom in picking the support is restricted to picking some $m-\Delta$ variables from the remaining $N - \Delta$ variables. Once the support is chosen, the number of possible degree $\ell$ shifts on this support equals ${\ell-1 \choose m-1}$ by Fact \[fact:numsolutions\]. Hence, the number of shifts of degree equal to $\ell$ and support equal to $m$ of $\alpha$ which equal some degree $\ell$ and support $m$ shift of $\beta$ is at most ${N-\Delta \choose m-\Delta}{\ell-1 \choose m-1}$.
We will now prove the following lemma, which is essentially an application of Claim \[clm:manyshifts1\] to the $NW_d$ polynomial. For any monomial $\alpha$ and positive integers $\ell, m$, we will denote by $S_{\ell, m}(\alpha)$ the set of all shifts of $\partial_{\alpha}NW_d$ by monomials of degree $\ell$ and support m. More formally, $$S_{\ell, m}(\alpha) = \{\gamma\cdot\partial_{\alpha}(NW_d) : \gamma = \prod_{\substack{i\in U \\ U\subseteq [N] \\ |U| = m}}{x_i}^{j_i} , \sum_{i\in U}j_i = \ell, j_i \geq 1\}$$ also, let $$LM_{\ell, m}(\alpha) = \{{\lm}(f) : f \in S_{\ell, m}(\alpha)\}$$
\[lem:manyshifts2\] For any positive integers $r$, $m$ and $\ell$ such that $n-r > d$ and $r < d-1$, let $\alpha$ and $\beta$ be two distinct monomials in ${\cal M}^{[r]}$. Then $|S_{\ell, m}(\alpha)\cap S_{\ell, m}(\beta)| \leq {N-(n-d-r) \choose m- (n-d-r)}{\ell-1 \choose m-1}$.
In the proof of Lemma \[lem:many-partial-derivatives\], we have observed that the leading monomials of $\partial_{\alpha}(NW_d)$ and $\partial_{\beta}(NW_d)$ are equal to $\partial_{\alpha}(m_f)$ and $\partial_{\beta}(m_g)$ for two distinct polynomials $f, g \in \F_n[z]$ of degree at most $d-1$. Hence, $\partial_{\alpha}(m_f)$ and $\partial_{\beta}(m_g)$ are multilinear monomials at a distance at least $\Delta = n-r-d$ from each other.
Since monomial orderings respect multiplication by the same polynomial, we know that the leading monomial of a shift equals the shift of the leading monomial. Therefore, if $\gamma_{\alpha}$ and $\gamma_{\beta}$ are two monomials of degree $\ell$ and support equal to $m$ such that $\gamma_{\alpha}\partial_{\alpha}(NW_d) = \gamma_{\beta}\partial_{\beta}(NW_d)$, then $\gamma_{\alpha}\partial_{\alpha}(m_f) = \gamma_{\beta}\partial_{\beta}(m_g)$. Hence, $|S_{\ell, m}(\alpha)\cap S_{\ell, m}(\beta)|$ is at most the number of shifts of $\partial_{\alpha}(m_f)$ which are also shifts of $\partial_{\beta}(m_g)$. By Lemma \[clm:manyshifts1\], this is at most ${N-(n-d-r) \choose m- (n-d-r)}{\ell-1 \choose m-1}$.
We will now prove a lower bound on the dimension of the span of $(m, \ell, r)$-shifted partial derivatives of the $NW_d$ polynomial. For this, we will use the following proposition from [@GKKS12], the proof of which is a simple application of Gaussian elimination.
For any field $\F$, let ${\cal P} \subseteq \F[z]$ be any finite set of polynomials. Then, $$\dim(\F\mbox{-}span({\cal P})) = |\{\lm(f) : f \in \F\mbox{-}span({\cal P})\}|$$
Therefore, in order to lower bound $\dim(\langle\partial^{r} NW_d\rangle_{(\ell, m)})$, it would suffice to obtain a lower bound on the size of the set $\bigcup_{\alpha} LM_{ \ell, m}(\alpha)$, where the union is over all monomials $\alpha$ of degree equal to $r$. To obtain this lower bound, we will show a lower bound on the size of the set $\bigcup_{\alpha \in {\cal M}^{[r]}} LM_{\ell, m}(\alpha)$.
\[lem:manyshiftsNW\] Let $d = \delta n$ for any constant $0<\delta<1$. Let $\ell, m, r$ be positive integers such that $n-r > d$, $r < d-1$, $m \leq N$, $m = \theta(N)$ and for $\phi = {\frac{N}{m}}$, $r$ satisfies $ r \leq \frac{ (n-d)\log{ \phi} \pm O(\phi\frac{(n-d-r)^2}{N})}{\log n + \log{ \phi}}$. Then, $$\dim(\langle\partial^{r} NW_d\rangle_{(\ell, m)}) \geq 0.5n^r{N \choose m}{\ell-1 \choose m-1}$$
Recall that ${\cal M}^{[r]} = \{\prod_{i = 1}^r\prod_{j \in [n]} x_{i,j}\}$. We have argued in Lemma \[lem:many-partial-derivatives\] that for each $\alpha, \beta \in {\cal M}^{[r]}$, such that $\alpha \neq \beta$, $\partial_{\alpha}(NW_d) \neq \partial_{\beta}(NW_d)$ and both of these are non zero polynomials. As discussed above, we will prove a lower bound on the size of the set $\bigcup_{\alpha \in {\cal M}^{[r]}} LM_{\ell, m}(\alpha)$. From the principle of inclusion-exclusion, we know $$|\bigcup_{\alpha \in {\cal M}^{[r]}} LM_{ \ell, m}(\alpha)| \geq \sum_{\alpha \in {\cal M}^{[r]}}|LM_{\ell, m}(\alpha)| -\sum_{\alpha \neq \beta \in {\cal M}^{[r]}} |LM_{ \ell, m}(\alpha) \cap LM_{\ell, m}(\beta)|$$ Let us now bound both these terms separately.
- Since shifting preserves monomial orderings, therefore for any $\gamma \neq \tilde{\gamma}$ of degree $\ell$ and support $m$, and for any $\alpha \in {\cal M}^{[r]}, \lm(\gamma\partial_{\alpha}(NW_d)) \neq \lm(\tilde{\gamma}\partial_{\alpha}(NW_d))$. Hence, for each $\alpha \in {\cal M}^{[r]}$, $|LM_{ \ell, m}(\alpha)|$ is the number of different shifts possible, which is equal to the number of distinct monomials of degree $\ell$ and support $m$ over $N$ variables. Hence, $$|LM_{ \ell, m}(\alpha)| = {N \choose m}{\ell-1 \choose m-1}$$.
- For any two distinct $\alpha, \beta \in {\cal M}^{[r]}$, from Lemma \[lem:manyshifts2\], $$|LM_{\ell, m}(\alpha) \cap LM_{\ell, m}(\beta)| \leq {N-(n-d-r) \choose m- (n-d-r)}{\ell-1 \choose m-1}$$
Therefore, $$|\bigcup_{\alpha \in {\cal M}^{[r]}} LM_{ \ell, m}(\alpha)| \geq |{\cal M}^{[r]}|{N \choose m}{\ell-1 \choose m-1} - {|{\cal M}^{[r]}| \choose 2}{N-(n-d-r) \choose m- (n-d-r)}{\ell-1 \choose m-1}$$ To simplify this bound, we will show that for the choice of our parameters, the second term is at most half the first term. In this case, we have $$|\bigcup_{\alpha \in {\cal M}^{[r]}} LM_{\ell, m}(\alpha)| \geq 0.5|{\cal M}^{[r]}|{N \choose m}{\ell-1 \choose m-1}$$
We need to ensure, $$\frac{{|{\cal M}^{[r]}| \choose 2}{N-(n-d-r) \choose m- (n-d-r)}{\ell-1 \choose m-1}}{|{\cal M}^{[r]}|{N \choose m}{\ell-1 \choose m-1} } \leq 0.5$$ It suffices to ensure
$$\frac{|{\cal M}^{[r]}|{N-(n-d-r) \choose m- (n-d-r)}}{{N \choose m}} \leq 1$$ which is the same as ensuring that $$|{\cal M}^{[r]}|\times \frac{(N-(n-d-r))!}{N!}\times \frac{m!}{(m-(n-d-r))!} \leq 1$$
Now, using the approximation from Lemma \[lem:approx\], $$\begin{aligned}
\log\frac{N!}{(N-(n-d-r))!} &=& (n-d-r)\log N \pm O\left(\frac{(n-d-r)^2}{N}\right) \text{and} \\
\log\frac{m!}{(m-(n-d-r))!} &=& (n-d-r)\log m \pm O\left(\frac{(n-d-r)^2}{m}\right)\end{aligned}$$
Thus we need to ensure that
$$\log |{\cal M}^{[r]}| \leq \log\left(\frac{N}{m}\right)^{n-d-r} \pm O\left(\frac{(n-d-r)^2}{N}\right) \pm O\left(\frac{(n-d-r)^2}{m}\right)$$ Substituting $|{\cal M}^{[r]}| = n^r$, we need $$r\log n \leq \log\left(\frac{N}{m}\right)^{n-d-r} \pm O\left(\frac{(n-d-r)^2}{N} + \frac{(n-d-r)^2}{m}\right)$$
Substituting $m = \frac{N}{\phi}$ (and noting that $\phi > 1$), we require $$r\log n \leq (n-d-r)\log{\phi} \pm O\left(\phi\frac{(n-d-r)^2}{N}\right).$$ Thus we require $$r \leq \frac{ (n-d)\log{ \phi} \pm O(\phi\frac{(n-d-r)^2}{N})}{\log n + \log{ \phi}}$$
Observe that for any constant $0< \delta < 1$ such that $d = \delta n$, $r$ can be chosen any constant times $\frac{n}{\log n}$ by choosing $\phi$ to be an appropriately large constant. So, for such a choice of $r$, $$\dim(\langle\partial^{r} NW_d\rangle_{(\ell, m)}) \geq 0.5|{\cal M}^{[r]}|{N \choose m}{\ell-1 \choose m-1}$$ For $|{\cal M}^{[r]}| = n^r$, we have $$\dim(\langle\partial^{r} NW_d\rangle_{(\ell, m)}) \geq 0.5n^r{N \choose m}{\ell-1 \choose m-1}$$
\[rmk:bounded-sup-lb\] The proof above shows something slightly more general than a lower bound on just the complexity of the $NW_d$ polynomial. The only property of the $NW_d$ polynomial that we used here was that the leading monomials of any two distinct partial derivatives of it were [*far*]{} from each other. We will crucially use this observation in the proof of our main theorem. Also, there is nothing special about using the set ${\cal M}^{[r]}$. The proof works for any set of monomials ${\cal M}^{S} = \{\prod_{i \in S}\prod_{j \in [n]} x_{i,j}\}$, where $S$ is a subset of $\{1, 2, 3,\ldots, n\}$ of size exactly $r$.
Top fan-in lower bound
----------------------
We are now ready to prove our lower bound on the top fan-in of any homogeneous $\spsp^{\{\beta\log n\}}$ circuit (for some constant $\beta$) which computes the $NW_d$ polynomial, where $d = \delta n$ for some constant $\delta$ between $0$ and $1$.
\[thm:bounded-support-lower-bound\] Let $d = \delta n$ for any constant $0<\delta<1$. There exists a constant $\beta$ such that all homogeneous $\spsp^{\{\beta\log n\}}$ circuits which compute the $NW_d$ polynomial have top fan-in at least $2^{\Omega(n)}$.
By comparing the complexities of the circuit and the polynomial as given by Corollary \[cor:circuit-complexity-bound\] and Lemma \[lem:manyshiftsNW\], the top fan-in of the circuit must be at least
$$~\label{eqn:top-fan-in-1}
\frac{0.5n^{r}{N\choose m}{\ell-1 \choose m-1}}{\text{poly}(nrs){n+r \choose r}{N \choose m+rs}{\ell+n-r \choose m+rs}}$$
This bound holds for any choice of positive integers $\ell, m, r$, a constant $\beta$ such that $s = \beta\log n$ which satisfy the constraints in the hypothesis of Corollary \[cor:circuit-complexity-bound\] and Lemma \[lem:manyshiftsNW\]. In other words, we want these parameters to satisfy
- $m+rs \leq \frac{N}{2}$
- $m+rs \leq \frac{\ell}{2}$
- $m = \theta(N)$
- $n-r > d$
- $r < d-1$
- For $\phi = {\frac{N}{m}}$, $ r \leq \frac{ (n-d)\log{ \phi} \pm O\left(\phi \frac{(n-d-r)^2}{N}\right)}{\log n + \log{ \phi}}$
In the rest of the proof, we will show that there exists a choice of these parameters such that we get a bound of $2^{\Omega(n)}$ from Expression \[eqn:top-fan-in-1\]. We will show the existence of such parameters satisfying the asymptotics $\ell = \theta(N)$, $r = \theta\left( \frac{n}{\log n} \right)$ and $s = \theta(\log n)$. In the rest of the proof, we will crucially use these asymptotic bounds for various approximations.
For this, we will group together and approximate the terms in the ratio $\frac{0.5n^{r}{N\choose m}{\ell-1 \choose m-1}}{\text{poly}(nrs){n+r \choose r}{N \choose m+rs}{\ell+n-r \choose m+rs}}$
- $\frac{{N \choose m}}{{N \choose m+rs}} = \frac{(N-m-rs)!(m+rs)!}{(N-m)!m!} = (\frac{m}{N-m})^{rs}$ up to some constant factors, as long as $(rs)^2 = \theta (N) = \theta(m)$.
- $\frac{{\ell-1 \choose m-1}}{{\ell+n-r \choose m+rs}} = {\frac{(\ell-1)!}{(m-1)!(\ell-m)!} \times \frac{(m+rs)!(\ell-m+n-r-rs)!}{(\ell+n-r)!}}$. We now pair up things we know how to approximate within constant factors. $\frac{{\ell-1 \choose m-1}}{{\ell+n-r \choose m+rs}} = \frac{(\ell-1)!}{(\ell+n-r)!} \times \frac{(m+rs)!}{(m-1)!} \times \frac{(\ell-m+n-r-rs)!}{(\ell-m)!} = \text{poly(n)} \times {\frac{1}{\ell^{n-r}}}\times m^{rs} \times {\frac{(\ell-m)^{n-r}}{(\ell-m)^{rs}}}$. This simplifies to $\text{poly(n)} \times {\left(\frac{m}{\ell-m}\right)}^{rs} \times {\left(\frac{\ell-m}{\ell}\right)}^{n-r}$.
- $\frac{n^{r}}{{n+r \choose r}} \geq \frac{n^{r}}{{\left(\frac{2(n+r)}{r}\right)}^{r}}$. We just used Stirling’s approximation here.
In the range of our parameters, the approximations above imply that the top fan-in, up to polynomial factors is at least
$${\left(\frac{r}{3}\right)}^r\times{\left(\frac{m}{\ell-m}\right)}^{rs} \times {\left(\frac{\ell-m}{\ell}\right)}^{n-r} \times \left(\frac{m}{N-m}\right)^{rs}$$
Simplifying further, this is at least
$$2^{\Omega(r\log r - rs\log\frac{\ell-m}{m} - (n-r)\log\frac{\ell}{\ell-m}-rs\log\frac{N-m}{m})}$$
Recall that we will set $m$ and $\ell$ to be $\theta(N)$ and $r$ to be $\theta(\frac{n}{\log n})$. The constants have to be chosen carefully in order to satisfy the constraints. We will choose constants $\alpha, \beta$ and $\eta$ such that $s = \beta\log n$, $r = \alpha\cdot n/\log n$ and $m= \eta \ell$. First choose $\eta$ to be any small constant $> 0$ (for instance $\eta= 1/4$). Now, choose $\alpha$ to be a constant much larger than $\log\frac{1}{1-\eta}$. This makes sure that $r\log r$ dominates $(n-r)\log\frac{\ell}{\ell-m}$. Recall that $\alpha$ can be chosen to be any large constant by choosing $\phi$ to be an appropriately large constant (by the constraint between $r$ and $\phi$ in the fifth bullet). Notice that this sets $m$ to be a small constant factor of $N$. Fix these choices of $\eta$ and $\alpha$. Now, we choose the term $\beta$ to be a small positive constant such that $rs\log\frac{1-\eta}{\eta}$ and $rs\log\frac{N-m}{m}$ are much less than $r\log r$. Observe that this choice of parameters satisfies all the constraints imposed in the calculations above, and the top fan-in is at least $2^{\Omega(r\log r)} = 2^{\Omega(n)}$.
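The parameter balancing above can be checked numerically. The sketch below evaluates the four terms of the exponent for one illustrative choice of the constants $\alpha, \beta, \eta, \phi$ (our choice, not a verified admissible one: the side constraints of the lemmas are not re-checked here, and $s = \beta\log n$ is treated as a real parameter), showing the $r\log r$ term outweighing the others as $n$ grows.

```python
from math import log2

def exponent_terms(n, alpha=2.0, beta=0.02, eta=0.25, phi=64.0):
    """Evaluate the four terms of the exponent
        r*log r - r*s*log((l-m)/m) - (n-r)*log(l/(l-m)) - r*s*log((N-m)/m)
    for s = beta*log n, r = alpha*n/log n, m = N/phi, l = m/eta, N = n^2.
    Illustrative constants only; the constraints of the lemmas are not re-checked."""
    N = n * n
    r = alpha * n / log2(n)
    s = beta * log2(n)
    m = N / phi
    l = m / eta
    t1 = r * log2(r)
    t2 = r * s * log2((l - m) / m)
    t3 = (n - r) * log2(l / (l - m))
    t4 = r * s * log2((N - m) / m)
    return t1 - t2 - t3 - t4, (t1, t2, t3, t4)

for n in (2 ** 10, 2 ** 14, 2 ** 18):
    total, terms = exponent_terms(n)
    print(n, round(total), [round(t) for t in terms])
```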
Random Restrictions
===================
\[sec:rand-res\]
In this section, we will describe our random restriction algorithm and analyze the effect of random restrictions on $\spsp$ circuits as well as the $NW_d$ polynomial.
Let $n = 2^k$. We identify elements of $[n]$ with elements of $\F_{2^k}$. We view $\F_{2^k}$ as a $k$-dimensional vector space over $\F_2$. Let $\phi: \F_{2^k} \to \F_2^k$ be an $\F_2$-linear isomorphism between $\F_{2^k}$ and $\F_2^k$. Thus $\phi(\alpha + \beta) = \phi(\alpha) + \phi(\beta)$. Let $M: \F_{2^k} \to \F_2^{k\times k}$, map $\alpha \in \F_{2^k}$ to the matrix $M(\alpha)$, which represents the linear transformation over $\F_2^k$ that is given by multiplication by $\alpha$ in $\F_{2^k}$. Thus it follows that $M(\alpha \times \beta) = M(\alpha) \times M(\beta)$, and $M(\alpha + \beta) = M(\alpha) + M(\beta)$. Moreover it is not hard to see that $\phi(\alpha \times \beta) = M(\alpha) \times \phi(\beta)$.
Since $n = 2^k$, thus $\F_n \equiv \F_{2^k}$. Let $\F_n[Z]$ denote the space of univariate polynomials over $\F_n$. For $f\in \F_n[Z]$ of degree $\leq d-1$, $f$ is of the form $\sum_{i = 0}^{d-1} a_i Z^i$, for $a_i \in \F_n$. Thus we can represent $f$ as a vector of coefficients $(a_0,a_1, \ldots a_{d-1})$, and hence view $f$ as an element of $\F_n^d$. For ease of notation, for $\alpha \in \F_n$ we will let $[\alpha]$ represent $\phi(\alpha)$. Also, for $f \in \F_n[Z]$ of degree at most $d-1$, we let $[f] \in \F_2^{kd}$ represent the concatenation of $\phi$ applied to each of the coefficients of $f$.
Let $\eval_\alpha$ be the $dk \times k$ matrix obtained by stacking the matrices $M(\alpha^0)$, $M(\alpha^1)$, ..., $M(\alpha^{d-1})$ one below the other. In other words, the first $k$ rows are the rows of $M(\alpha^0)$, the second $k$ rows are the rows of $M(\alpha^1)$ and so on. The following claim follows easily from the definitions.
Let $f \in \F_n[Z]$ be of degree at most $d-1$, and let $\alpha \in \F_n$. Then $$[f(\alpha)] = [f] \times \eval_\alpha.$$
In the rest of the discussion we will identify the elements of $\F_n$ with $\{1, 2, \ldots, n\}$. Let $\overline {\eval_i}$ be the $dk \times 2^k$ matrix obtained by adding a column for each of the $2^k$ linear combinations of the columns of $\eval_i$. Let $\eval$ be the $dk \times nk$ matrix obtained by concatenating $\eval_{i}$ for all $i \in [n]$. Let $\overline {\eval}$ be the $dk \times n2^k$ matrix obtained by concatenating $\overline {\eval_{i}}$ for all $i \in [n]$.
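A small sketch of this setup for $k = 3$: we fix the irreducible polynomial $z^3 + z + 1$ (our choice; the text leaves the representation of $\F_{2^k}$ abstract), build the multiplication matrices $M(\alpha)$ and the matrices $\eval_\alpha$, and verify the claim $[f(\alpha)] = [f]\times \eval_\alpha$ on a random $f$. We use the row-vector convention $\phi(\alpha\beta) = \phi(\beta)M(\alpha)$, which is an implementation choice.

```python
import numpy as np

K = 3
MOD_POLY = 0b1011          # z^3 + z + 1, irreducible over F_2 (our choice of representation)

def gf_mul(a, b):
    """Carry-less multiplication in F_{2^K}; elements are K-bit integers."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & (1 << K):
            a ^= MOD_POLY
    return res

def phi(a):
    """Coefficient (row) vector of a in F_2^K."""
    return np.array([(a >> t) & 1 for t in range(K)], dtype=np.uint8)

def mult_matrix(alpha):
    """K x K matrix M(alpha) over F_2 with phi(alpha * b) = phi(b) @ M(alpha) mod 2."""
    return np.array([phi(gf_mul(alpha, 1 << j)) for j in range(K)], dtype=np.uint8)

def eval_matrix(alpha, d):
    """dK x K matrix eval_alpha: M(alpha^0), ..., M(alpha^{d-1}) stacked vertically."""
    blocks, power = [], 1
    for _ in range(d):
        blocks.append(mult_matrix(power))
        power = gf_mul(power, alpha)
    return np.vstack(blocks)

# Sanity check of the claim [f(alpha)] = [f] x eval_alpha for a random f and alpha = z^2 + 1.
rng = np.random.default_rng(0)
d, alpha = 4, 0b101
coeffs = [int(c) for c in rng.integers(0, 2 ** K, size=d)]    # f = sum_i a_i z^i
f_of_alpha, power = 0, 1
for a in coeffs:
    f_of_alpha ^= gf_mul(a, power)                            # addition in F_{2^K} is XOR
    power = gf_mul(power, alpha)
f_vec = np.concatenate([phi(a) for a in coeffs])              # [f] in F_2^{dK}
print(np.array_equal(phi(f_of_alpha), (f_vec @ eval_matrix(alpha, d)) % 2))   # True
```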
In order to restrict the variables in the circuit, we will first “randomly restrict" the space of polynomials in $\F_n[Z]$ of degree at most $d-1$. We present the random restriction procedure in the next section.
Random Restriction Algorithm {#sec:rralgorithm}
----------------------------
Let $\epsilon > 0$ be any constant. We will define a randomized procedure $R_\epsilon$ which selects a subset of the variables $\{x_{i,j}\mid i,j \in [n]\}$ to set to zero.
The restriction proceeds by first restricting the space of polynomials $f \in \F_n[Z]$ of degree at most $d-1$. This restriction then naturally induces a restriction on the space of variables by selecting only those variables $x_{i,j}$ such that there is some polynomial $f$ in the restricted space for which $f(i) = j$.
We restrict the space of polynomials by iteratively restricting the values the polynomials can take at points in $\F_{2^k}$. For each $i \in \F_{2^k}$, we restrict the values $f$ can take at $i$ to a random affine subspace of codimension $\epsilon k$ (when we view $\F_{2^k}$ as a $k$ dimensional vector space over $\F_2$). We do this by sampling $\epsilon k$ random and independent columns from $\overline{\eval_i}$ and restricting the inner product of $[f]$ with these columns to be randomly chosen values. Each column that we pick in this manner imposes an $\F_2$-affine constraint on $[f]$, and restricts $[f]$ to vary in an affine subspace of codimension $1$. Since these random constraints for the various values of $i$ might not be linearly independent, it is possible that at the end of the process no polynomial $f$ satisfies the constraints. Thus we need to be more careful. We iteratively impose these random constraints for various values of $i$, but at the same time ensure that each new constraint that is imposed on $f$ is linearly independent of the old constraints. We do this by making sure that each new column that is sampled is linearly independent of the old columns.
[**Random restriction procedure $R_\epsilon$**]{}\
[**Output:**]{} The set of variables that are set to zero.
1. Initialize $A_0 = \F_2^{kd}$, $\cal B$ to be a $0$ dimensional vector, $\cal M$ to be an empty matrix over $\F_2$.
2. [**Outer Loop :** ]{}For $i$ from $1$ to $n$, do the following:
- [**Inner Loop :** ]{}For $j$ going from $1$ to $\epsilon k$, do the following:
1. If all the columns of $\overline {\eval_{i}}$ have been spanned by the columns in $\cal M$, then do nothing
2. Else pick a uniformly random column $C$ of $\overline {\eval_{i}}$ that has not been spanned by the columns of $\cal M$, and pick a uniformly random element $b$ of $\F_2$.
3. Set ${\cal M} = {\cal M} \| C $ (appending $C$ as a new column of ${\cal M}$) and set ${\cal B} = {\cal B} \| b$ (appending $b$ to the vector ${\cal B}$).
- Set $A_i = \{[f] \mid [f] \times {\cal M} = {\cal B}; [f] \in \F_2^{kd}\}$
3. Let $S_0 = \{x_{i,j} \mid j \neq f(i) ~\forall ~ [f] \in A_n\}$. Set all the variables $x_{i,j}\in S_0$ to $0$.
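The following sketch implements the procedure $R_\epsilon$ for tiny parameters, taking the matrices $\eval_i$ and $\overline{\eval_i}$ as inputs (they can be built as in the previous sketch). The linear-independence tests and the final computation of $S_0$ are done by Gaussian elimination over $\F_2$; the function names and the consistency-check formulation of Step 3 are our own.

```python
import numpy as np

def gf2_rank(A):
    """Rank of a 0/1 matrix over F_2 (Gaussian elimination on a copy)."""
    A = np.array(A, dtype=np.uint8) % 2
    rank = 0
    for c in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def random_restriction(Ebar, Eval, k, eps, rng):
    """Sketch of R_eps.  Ebar[i]: dk x 2^k matrix whose columns are all F_2-linear
    combinations of the columns of eval_i; Eval[i]: the dk x k matrix eval_i.
    Returns S0, the set of variables (i, j) that are set to zero (0-indexed)."""
    n, dk = len(Ebar), Ebar[0].shape[0]
    constraints, values = [], []      # affine constraints c . [f] = value over F_2
    for i in range(n):
        for _ in range(int(eps * k)):
            base = np.array(constraints, dtype=np.uint8).reshape(len(constraints), dk)
            fresh = [c for c in Ebar[i].T
                     if gf2_rank(np.vstack([base, c])) > gf2_rank(base)]
            if not fresh:             # every column of Ebar[i] is already spanned
                continue
            constraints.append(np.array(fresh[rng.integers(len(fresh))]) % 2)
            values.append(int(rng.integers(2)))
    S0 = set()
    for i in range(n):
        for j in range(2 ** k):
            bits = [(j >> t) & 1 for t in range(k)]          # phi(j)
            A = np.array(constraints + list(Eval[i].T), dtype=np.uint8)
            b = np.array(values + bits, dtype=np.uint8)
            if gf2_rank(A) != gf2_rank(np.column_stack([A, b])):
                S0.add((i, j))        # no f in A_n has f(i) = j, so x_{i,j} is zeroed
    return S0

# Example wiring, reusing eval_matrix and K from the previous sketch (d = 2 here):
#   Eval = [eval_matrix(alpha, 2) % 2 for alpha in range(2 ** K)]
#   Ebar = [np.array([(E @ np.array([(s >> t) & 1 for t in range(K)])) % 2
#                     for s in range(2 ** K)]).T for E in Eval]
#   S0 = random_restriction(Ebar, Eval, k=K, eps=1/3, rng=np.random.default_rng(0))
```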
The above random restriction procedure imposes at most $\epsilon k \times n$ independent $\F_2$-affine constraints on $[f]$. Each constraint restricts the space of possible $[f]$ by codimension $1$. Thus in the end $A_n$ is an affine subspace of $\F_2^{kd}$ of codimension at most $\epsilon k \times n$. This immediately implies the claim below which shows that the size of $A_n$ is large. This in turn will imply that many of the monomials in $NW_d$ will survive after the random restriction.
\[claim:anlarge\] $|A_n| \geq n^d/2^{\epsilon kn} = n^{d-\epsilon n}. $
The main observation is that each time we are in Step (b) of the inner loop, we impose an [*independent*]{} $\F_2$-affine constraint on the possible choices of $[f]$. Thus the space of possible $[f]$ reduces by codimension exactly $1$. Thus we never impose conflicting constraints on $[f]$ and we ensure that at each step the number of $[f]$ satisfying all constraints is large.
Effect of random restriction on $NW_d$
--------------------------------------
Let $S_0$ be the set of variables output by the random restriction procedure $R_\epsilon$. Let $R_\epsilon(NW_d)$ be the polynomial obtained from $NW_d$ after setting the variables in $S_0$ to $0$. In this section we will show that $R_\epsilon(NW_d)$ continues to remain hard in some sense. More precisely, we will show that for any $S_0$ output by $R_\epsilon$, and for $r<d$, many distinct $r^{th}$ order partial derivatives of $R_\epsilon(NW_d)$ are nonzero. Let $r< d-1$. Let $S \subset [n]$ be a set of size $r$. Let $T_S = \{\prod_{i \in S} x_{i,j_i} \mid (j_i)_{i \in S} \in [n]^r \}$ be a set of $n^r$ monomials. We will consider partial derivatives of $NW_d$ with respect to monomials in $T_S$ for some choice of $S$.
\[lem:many-derivatives\] For every $\epsilon > 0$ and every set $S_0$ output by the random restriction procedure $R_\epsilon$, there is a set $S \subset [n]$ of size $r$ and at least $n^{r(1-\epsilon n/d)}$ monomials in $T_S$ such that the partial derivatives of $R_\epsilon(NW_d)$ with respect to these monomials are nonzero and pairwise distinct.
Observe that any polynomial of degree at most $d-1$ is uniquely determined by its evaluations at $d$ distinct points. Let $S_i \subset [n]$ be the set $\{(i-1)r +1, (i-1)r +2, \ldots, ir\}$. We consider the evaluations, at the points of $S_i$ for various $i$, of the polynomials $f$ with $[f] \in A_n$, and show that for some choice of $i$ the number of distinct tuples of evaluations on $S_i$ is large. Let $m_i$ be the number of distinct $r$-tuples of evaluations on $S_i$ as $[f]$ varies in $A_n$. The total number of distinct $d$-tuples of evaluations on $[d]$ as $[f]$ varies in $A_n$ is at most $\prod_{i=1}^{d/r} m_i$, and each $d$-tuple of evaluations on $[d]$ uniquely identifies $[f] \in A_n$. Thus $|A_n| \leq \prod_{i=1}^{d/r} m_i$. By Claim \[claim:anlarge\] we know that $|A_n| \geq n^d/2^{\epsilon kn} = n^{d-\epsilon n}$, so there exists $i \leq d/r$ such that $m_i \geq n^{r(1 - \epsilon n/d)}$. Hence there are at least $n^{r(1 - \epsilon n/d)}$ monomials in $T_{S_i}$, each of which is consistent with some polynomial $f$ such that $[f] \in A_n$. For each such monomial there exists a monomial in $R_\epsilon(NW_d)$ extending it, and hence the corresponding partial derivative is nonzero. From Remark \[rmk2\] it follows that these partial derivatives are pairwise distinct.
Effect of random restriction on $\spsp$ circuit
-----------------------------------------------
Let $C$ be a homogeneous $\spsp$ circuit of size at most $n^{\rho\log\log n}$ for some very small constant $\rho$ that we will choose later. We will use $R_\epsilon(C)$ to refer to the $\spsp$ circuit obtained from $C$ after setting the variables in $S_0$ to $0$. This operation simply eliminates those monomials computed at the bottom layer of $C$ which contain at least one variable that is set to $0$. Observe that homogeneity is preserved in this process. We will now show that with very high probability over the random restrictions, no bottom-layer product gate of $C$ whose support contains $\Omega(\log n)$ distinct variables survives.
\[lem:ckt-simplifies\] Let $\epsilon > 0$ and $\beta > 0$ be constants. Then there exists $\rho > 0$ such that if $C$ is a $\spsp$ circuit of size at most $n^{\rho\log\log n}$, then with probability $> 9/10$, all the monomials computed at the bottom layer which have support at least $\beta\log n$ have some variable set to $0$ by $R_\epsilon$.
Before we prove this lemma, we will first prove some simple results about affine subspaces and the probabilities of variables surviving the random restriction process.
\[lem:random-subspace-inclusion\] Let $V$ and $W$ be fixed subspaces of $\F^k_2$ such that $W$ is a subspace of $V$. Let $U$ be a subspace of $V$ which is chosen uniformly at random among all subspaces of $V$ of dimension $\dim(U)$. Then, the probability that $W$ is a subspace of $U$ is at most $\prod_{j = 0}^{(\dim(W)-1)}\frac{2^{\dim(U)} - 2^j}{2^{\dim(V)} - 2^j} \leq 2^{-(\dim(V)-\dim(U))\dim(W)}$.
Let $Y$ be a fixed subspace of $V$ of dimension $\dim(U)$, and let $A_U$ be an invertible linear transformation mapping $U$ to $Y$. Since $U$ is chosen uniformly at random, $A_U$ is a uniformly random invertible matrix, and $W$ is a subspace of $U$ if and only if $A_UW$ is a subspace of $Y$. Moreover, since $A_U$ is uniformly random, $A_UW$ is a uniformly random subspace of $\F^k_2$ of dimension $\dim(W)$. Hence the desired probability equals the probability that a uniformly random subspace of dimension $\dim(W)$ lies in the fixed subspace $Y$ of dimension $\dim(U)$. Observe that a uniformly random subspace can be sampled by greedily choosing linearly independent basis vectors uniformly at random, and $W$ is contained in $Y$ if and only if all of the $\dim(W)$ basis vectors chosen in this way lie in $Y$. This probability is at most $\prod_{j = 0}^{(\dim(W)-1)}\frac{2^{\dim(U)} - 2^j}{2^{\dim(V)} - 2^j}$. Since $\dim(U) \leq \dim(V)$, this is upper bounded by $2^{-(\dim(V)-\dim(U))\dim(W)}$.
We will now visualize our variables to be arranged in an $n\times n$ [*variable matrix*]{}, where the $(i,j)^{th}$ entry of this matrix is the variable $x_{i,j}$. We say that a monomial [*survives*]{} the random restriction procedure given by $R_\epsilon$ if no variable in the monomial is set to zero.
We say that the $i^{th}$ row in the variable matrix is [*compact*]{} if the columns of $\cal M$ sampled by the random restriction algorithm span every column of $\eval_{i}$. Thus $\cal M$ and $\cal B$ uniquely determine the value of $f(\alpha_i)$. We say a row is non-compact otherwise.
\[lem:compact\] Suppose that the $i^{th}$ row of the variable matrix is compact. Then, for every $j \in \F_n$, the probability that a variable $x_{i,j}$ survives $R_\epsilon$ is at most $\frac{1}{n}$.
Since the columns of $\cal M$ sampled by the random restriction algorithm span every column of $\eval_{i}$, the value of $\cal B$ uniquely determines the value of $[f]\times \eval_{i}$. Moreover, the columns of $\eval_{i}$ are linearly independent (since for every $j \in [n]$ there exists an $f$ such that $f(i) = j$) and $\cal B$ is chosen uniformly at random, so the value of $[f]\times \eval_{i}$ is a uniformly random element of $\F_2^k$. This implies that the value of $f(i)$ is uniquely determined and is a uniformly random element of $\F_n$. Thus the probability that $f(i) = j$ equals $1/n$, and the result follows.
\[lem:noncompact\] Suppose that the $i^{th}$ row of the variable matrix is non-compact. Then, for every $j \in \{1, 2, \ldots, n\}$, the probability that $x_{i,j}$ survives is at most $\frac{1}{n^{\epsilon}}$. In fact this holds even after conditioning on any choice of $A_{i-1}$, the affine subspace in which $[f]$ is allowed to vary after $i-1$ stages of the random restriction algorithm.
In the random restriction algorithm, since $i$ is a non-compact row, in stage $i$, we picked $\epsilon k$ independent columns of $\overline{\eval_{i}}$. At the end of stage $i-1$, $[f]$ was restricted to vary in some affine subspace $A_{i-1}$. Thus the possible values of $f(i)$ also varied in some affine subspace $V$. At the end of stage $i$, $[f]$ was restricted to vary in some affine subspace of codimension $\epsilon k$ of $A_{i-1}$. This affine subspace was chosen by restricting the values of $f$ at $i$. Thus $[f(i)]$ was allowed to vary in a random affine subspace of codimension $\epsilon k$ in $V$. Call this subspace $U$. Thus the probability that $x_{i,j}$ survives is at most the probability that $j$ lies in the subspace $U$, which is at most $|U|/|V| = \frac{1}{n^{\epsilon}}$.
We will now prove that any monomial which has a large support in any row of the variable matrix survives the random restriction procedure with only a very small probability.
\[lem:flat-monomial-die\] Any monomial which has a support larger than $t$ in a row in the variable matrix survives $R_\epsilon$ with probability at most $\frac{1}{n^{\epsilon \log t}}$.
Let $\alpha$ be a monomial which has support $\geq t$ in row $i$ of the variable matrix. Let $S = \{x_{i,j_1}, x_{i,j_2}, \ldots, x_{i,j_t}\}$ be any subset of size $t$ of the variables in this support. For $t = 1$, the lemma trivially holds. If $t>1$ and the row $i$ is compact, then this monomial survives with probability $0$, so we may assume that row $i$ is non-compact. Since we identified $\F_n$ with $\F_2^k$, we have $\{j_1, j_2, \ldots, j_t\} \subset \F_2^k$, and at least $\log t$ of these elements must be linearly independent. Let this set of independent elements be $\beta_1, \beta_2,\ldots, \beta_{\log t}$. Thus $\alpha$ survives only if for each $j$ there is an $f$ such that $[f] \in A_n$ and $f(i) = \beta_j$.
Recall that in the random restriction algorithm, in stage $i$, we picked $\epsilon k$ independent columns of $\overline{\eval_{i}}$. At the end of stage $i-1$, $[f]$ was restricted to vary in some affine subspace $A_{i-1}$, so the possible values of $[f(i)]$ also varied in some affine subspace $V$. If any of $\beta_1, \beta_2,\ldots, \beta_{\log t}$ is not contained in $V$, then $\alpha$ does not survive, so let us assume that $\beta_1, \beta_2,\ldots, \beta_{\log t} \in V$.
At the end of stage $i$, $[f]$ was restricted to vary in some affine subspace of codimension $\epsilon k$ of $A_{i-1}$. This affine subspace was chosen by restricting the values of $f$ at $i$. Thus $[f(i)]$ was allowed to vary in a random affine subspace of codimension $\epsilon k$ in $V$. Call this subspace $U$. Let $W$ be the subspace given by the span of $\beta_1, \beta_2,\ldots, \beta_{\log t}$. Then $\beta_1, \beta_2,\ldots, \beta_{\log t} \in U$ if and only if $W \subseteq U$. By Lemma \[lem:random-subspace-inclusion\], the probability of this happening is at most $\frac{1}{n^{\epsilon \log t}}$.
Now, let us consider a monomial which has a large number of variables from different rows. We will now estimate the probability that this monomial survives.
\[lem:tall-monomial-noncompact-die\] Let $t < d-1$. Any monomial which has support in $t$ non-compact rows survives $R_\epsilon$ with probability at most $\frac{1}{n^{\epsilon t}}$.
Let $\alpha$ be a monomial which has at least one variable in each of $t$ distinct non-compact rows, say $i_1, i_2, i_3, \ldots, i_t$. From Lemma \[lem:noncompact\], we know that a variable in row $i_j$, $j\in [t]$, survives with probability at most $\frac{1}{n^{\epsilon}}$. In fact, conditioned on the variables in rows $i_1, i_2, \ldots, i_j$ surviving, the probability that the variable in row $i_{j+1}$ survives is still at most $\frac{1}{n^{\epsilon}}$. Hence, all of them survive with probability at most $\frac{1}{n^{\epsilon t}}$.
We will now show that monomials which have nonzero support in many compact rows survive with very low probability.
\[lem:tall-monomial-compact-die\] Let $t < d-1$. Any monomial which has nonzero support in $t$ compact rows survives $R_\epsilon$ with probability at most $\frac{1}{n^{t}}$.
Let $i_1, i_2, \ldots, i_t$ be some $t$ distinct compact rows. It is easy to see that the columns of the matrices $\eval_{i_1}, \eval_{i_2}, \ldots, \eval_{i_t}$ are all linearly independent, since $f$ can take all possible values at the points $i_1, i_2, \ldots, i_t$. Therefore, the probability that some variable survives in one of these rows is independent of the probability that some variable in another row survives. From Lemma \[lem:compact\], we know that any variable in any of these rows survives with probability at most $\frac{1}{n}$. From the above two observations, the probability that any monomial with support in these rows survives is at most $\frac{1}{n^t}$.
Together, Lemma \[lem:flat-monomial-die\], Lemma \[lem:tall-monomial-noncompact-die\] and Lemma \[lem:tall-monomial-compact-die\] show that any monomial with large support survives only with a very small probability, which completes the proof of Lemma \[lem:ckt-simplifies\]. We formally prove this below.\
[*Proof of Lemma \[lem:ckt-simplifies\]:*]{} From Lemma \[lem:flat-monomial-die\], we know that any monomial which has at least $\frac{\frac{\beta}{100}\log n}{\log\log n}$ variables in some row survives with probability at most $\frac{1}{n^{\epsilon(\log {\frac{\beta}{100}} + 0.9\log\log n)}}$ (for $n$ large enough). Hence, for any circuit of size at most $n^{\rho\log\log n}$, where $\rho < \epsilon/2$, by the union bound, with high probability none of the monomials with at least $\frac{\frac{\beta}{100}\log n}{\log\log n}$ variables in some row survives. Similarly, by Lemma \[lem:tall-monomial-noncompact-die\], a monomial with nonzero support in at least ${\log\log n}$ non-compact rows survives with probability at most $\frac{1}{n^{\epsilon \log\log n}}$. Hence, for circuits of size $n^{\rho\log\log n}$, where $\rho < \epsilon/2$, with high probability none of these monomials survives.
Similarly, monomials with nonzero support in $\log\log n$ compact rows are eliminated with a very high probability if $\rho < 1/2$. Hence, at the end of any such random restriction process, with probability very close to $1$, none of the surviving monomials has support larger than $\beta\log n$ if $\rho < \epsilon/2$.
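For concreteness, the first of these union bounds can be written out as follows (a sketch of the arithmetic, using the same constants as in the proof): $$n^{\rho\log\log n}\cdot n^{-\epsilon\left(\log\frac{\beta}{100} + 0.9\log\log n\right)} = n^{(\rho - 0.9\epsilon)\log\log n + O(1)} = o(1) \quad \text{whenever } \rho < \epsilon/2,$$ and the bounds for monomials touching many non-compact (respectively compact) rows are handled identically, with per-monomial survival probabilities $n^{-\epsilon\log\log n}$ and $n^{-\log\log n}$.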
Lower Bounds for $NW_d$
=======================
\[sec:lowerbounds\] In this section, we give a proof of our main theorem. We will heavily borrow from the proof of Theorem \[thm:bounded-support-lower-bound\] in Section \[sec:small-support-lb\]. The following lemma provides a lower bound on the complexity of the $NW_d$ polynomial after restricting it via $R_{\epsilon}$.
\[lem:manyshifts-restricted-NW\] Let $\delta$ and $\epsilon$ be any constants such that $0<\epsilon, \delta < 1$. Let $d = \delta n$. Let $\ell, m, r$ be positive integers such that $n-r > d$, $r < d-1$, $m \leq N$, $m = \theta(N)$ and for $\phi = {\frac{N}{m}}$, $r$ satisfies $ r \leq \frac{ (n-d)\log{ \phi} \pm O\left(\phi \frac{(n-d-r)^2}{N}\right)}{(1-\epsilon n/d)\log n + \log{ \phi}}$. Then, for every random restriction $R_{\epsilon}$, $$\dim(\langle\partial^{r} R_{\epsilon}(NW_d)\rangle_{(\ell, m)}) \geq 0.5n^{(1-\epsilon n/d)r}{N \choose m}{\ell-1 \choose m-1}$$
The proof is analogous to that of Lemma \[lem:manyshiftsNW\], up to the point where the value of ${\cal M}^{[r]}$ is substituted in the calculations. For $R_{\epsilon}(NW_d)$, the value to be substituted is now $n^{r(1-\epsilon n/d)}$, as shown in Lemma \[lem:many-derivatives\]. So, we know that
$$\dim(\langle\partial^{r} R_{\epsilon}(NW_d)\rangle_{(\ell, m)}) \geq 0.5n^{(1-\epsilon n/d)r}{N \choose m}{\ell-1 \choose m-1}$$
as long as the parameters satisfy
$$~\label{eqn1}
n^{r(1-\epsilon n/d)}\times \frac{(N-(n-d-r))!}{N!}\times \frac{m!}{(m-(n-d-r))!} \leq 1$$
Now, using the approximation from Lemma \[lem:approx\], $$\begin{aligned}
\log\frac{N!}{(N-(n-d-r))!} &=& (n-d-r)\log N \pm O\left(\frac{(n-d-r)^2}{N}\right) \text{and} \\
\log\frac{m!}{(m-(n-d-r))!} &=& (n-d-r)\log m \pm O\left(\frac{(n-d-r)^2}{m}\right)\end{aligned}$$
Now, taking logarithms on both sides in Equation \[eqn1\] and substituting these approximations, we get $$(1-\epsilon n/d)r\log n \leq \log\left(\frac{N}{m}\right)^{n-d-r} \pm O\left(\frac{(n-d-r)^2}{N} + \frac{(n-d-r)^2}{m}\right)$$
Substituting $m = \frac{N}{\phi}$ and noting that $\phi > 1$, we require $$(1-\epsilon n/d)r\log n \leq (n-d-r)\log{ \frac{N}{m}} \pm O\left(\phi\frac{(n-d-r)^2}{N}\right)$$ and $$r \leq \frac{ (n-d)\log{ \phi} \pm O(\phi\frac{(n-d-r)^2}{N})}{(1-\epsilon n/d)\log n + \log{ \phi}}$$
Observe that for any constant $0< \delta < 1$ such that $d = \delta n$, $r$ can be chosen to be any constant times $\frac{n}{\log n}$ by choosing $\phi$ to be an appropriately large constant. So, for such a choice of $r$, we get $$\dim(\langle\partial^{r} R_{\epsilon}(NW_d)\rangle_{(\ell, m)}) \geq 0.5n^{(1-\epsilon n/d)r}{N \choose m}{\ell-1 \choose m-1}$$
The following lemma proves a lower bound on the top fan-in of any homogeneous $\spsp^{\{\beta\log n\}}$ circuit for the $R_{\epsilon}(NW_d)$ polynomial for a constant $\beta$. The proof of the lemma is essentially the same as the proof of Theorem \[thm:bounded-support-lower-bound\].
\[lem:bounded-support-lb-restricedNW\] Let $d = \delta n$ for any constant $\delta$ such that $0 < \delta < 1$. Then, there exist constants $\epsilon, \beta$ such that any homogeneous $\spsp^{\{\beta\log n\}}$ circuit computing the $R_{\epsilon}(NW_d)$ polynomial for any random restriction $R_{\epsilon}$ has top fan-in at least $2^{\Omega(n)}$.
By comparing the complexities of the circuit and the polynomial as given by Corollary \[cor:circuit-complexity-bound\] and Lemma \[lem:manyshifts-restricted-NW\], the top fan-in of the circuit must be at least
$$~\label{eqn:top-fan-in-2}
\frac{0.5n^{(1-\epsilon n/d)r}{N\choose m}{\ell-1 \choose m-1}}{\text{poly}(nrs){n+r \choose r}{N \choose m+rs}{\ell+n-r \choose m+rs}}$$
This bound holds for any choice of positive integers $\ell, m, r$ and a constant $\beta$ with $s = \beta\log n$ that satisfy the constraints in the hypotheses of Corollary \[cor:circuit-complexity-bound\] and Lemma \[lem:manyshifts-restricted-NW\]. In other words, we want these parameters to satisfy
- $m+rs \leq \frac{N}{2}$
- $m+rs \leq \frac{\ell}{2}$
- $n-r > d$
- $r < d-1$
- For $\phi = {\frac{N}{m}}$, $ r \leq \frac{ (n-d)\log{ \phi} \pm O\left(\phi\frac{(n-d-r)^2}{N}\right)}{(1-\epsilon n/d)\log n + \log{ \phi}}$
In the rest of the proof, we will show that there exists a choice of these parameters for which the expression above gives a bound of $2^{\Omega(n)}$. We will show the existence of such parameters satisfying the asymptotics $\ell = \theta(N)$, $r = \theta\left( \frac{n}{\log n} \right)$ and $s = \theta(\log n)$. In the rest of the proof, we will crucially use these asymptotic bounds for various approximations.
Let us now estimate this ratio term by term. We will invoke Lemma \[lem:approx\] for approximations.
- $\frac{{N \choose m}}{{N \choose m+rs}} = \frac{(N-m-rs)!(m+rs)!}{(N-m)!m!} = (\frac{m}{N-m})^{rs}$ up to some constant factors, as long as $(rs)^2 = \theta (N) = \theta(m)$.
- $\frac{{\ell-1 \choose m-1}}{{\ell+n-r \choose m+rs}} = {\frac{(\ell-1)!}{(m-1)!(\ell-m)!} \times \frac{(m+rs)!(\ell-m+n-r-rs)!}{(\ell+n-r)!}}$. Let us now pair up terms that we know how to approximate within constant factors: $\frac{{\ell-1 \choose m-1}}{{\ell+n-r \choose m+rs}} = \frac{(\ell-1)!}{(\ell+n-r)!} \times \frac{(m+rs)!}{(m-1)!} \times \frac{(\ell-m+n-r-rs)!}{(\ell-m)!} = \text{poly(n)} \times {\frac{1}{\ell^{n-r}}}\times m^{rs} \times {\frac{(\ell-m)^{n-r}}{(\ell-m)^{rs}}}$. This simplifies to $\text{poly(n)} \times {\left(\frac{m}{\ell-m}\right)}^{rs} \times {\left(\frac{\ell-m}{\ell}\right)}^{n-r}$.
- $\frac{n^{(1-\epsilon n/d)r}}{{n+r \choose r}} \geq \frac{n^{(1-\epsilon n/d)r}}{{\left(\frac{2(n+r)}{r}\right)}^{r}}$. We just used Stirling’s approximation here.
In the asymptotic range of our parameters, the approximations above imply that the top fan-in, up to polynomial factors is at least $${\left(\frac{r}{3}\right)}^r\times{\left(\frac{m}{\ell-m}\right)}^{rs} \times {\left(\frac{\ell-m}{\ell}\right)}^{n-r} \times \frac{1}{n^{(\epsilon n/d)r}} \times \left(\frac{m}{N-m}\right)^{rs}$$
Simplifying further, this is at least
$$2^{\Omega(r\log r - rs\log\frac{\ell-m}{m} - (n-r)\log\frac{\ell}{\ell-m} - (\epsilon n/d)r\log n - rs\log\frac{N-m}{m})}$$
We will set $m$ and $\ell$ to be $\theta(N)$ and $r$ to be $\theta(\frac{n}{\log n})$. The constants have to be chosen carefully in order to satisfy the constraints. We will choose constants $\alpha, \beta$ and $\eta$ such that $s = \beta\log n$, $r = \alpha\cdot n/\log n$ and $m= \eta \ell$. First, let us choose $\epsilon$ to be a very small positive constant such that $\epsilon n/d = \epsilon/\delta \ll 0.1$. Next, choose $\eta$ to be any small constant $> 0$ (for instance $\eta= 1/4$). Now, choose $\alpha$ to be a constant much larger than $\log\frac{1}{1-\eta}$ and $\epsilon/\delta$. This makes sure that $r\log r$ dominates $(n-r)\log\frac{\ell}{\ell-m}$ and $(\epsilon n/d)r\log n$. Recall that $\alpha$ can be chosen to be any large constant by choosing $\phi$ to be an appropriately large constant (by the constraint between $r$ and $\phi$ in the fifth bullet). Notice that this sets $m$ to be a small constant factor of $N$. Fix these choices of $\eta$ and $\alpha$. Finally, we choose $\beta$ to be a small constant such that $rs\log\frac{1-\eta}{\eta}$ and $rs\log\frac{N-m}{m}$ are much less than $r\log r$. Observe that this choice of parameters satisfies all the constraints imposed in the calculations above. Hence, the top fan-in must be at least $2^{\Omega(r\log r)} = 2^{\Omega(n)}$.
We now have all the ingredients to prove our main theorem.
\[thm:main3\] Let $d = \delta n$ for any constant $\delta$ such that $0 < \delta < 1$. Any homogeneous $\spsp$ circuit computing the $NW_d$ polynomial must have size at least $n^{\Omega(\log\log n)}$.
For every value of $\delta$ such that $0 < \delta < 1$, choose the parameters $\epsilon = \tilde{\epsilon}, \beta = \tilde{\beta}$ such that Lemma \[lem:bounded-support-lb-restricedNW\] holds for $\tilde{d} = \delta n$. Then choose a constant $\rho = \tilde{\rho}$ such that Lemma \[lem:ckt-simplifies\] holds. Now, let $C$ be a homogeneous $\spsp$ circuit computing the $NW_{\tilde{d}}$ polynomial. If the number of bottom product gates of $C$ is at least $n^{\tilde{\rho}{\log\log n}}$, then $C$ has large size and we are done. Else, we apply a random restriction $R_{\tilde{\epsilon}}$ to the circuit. By the choice of parameters, Lemma \[lem:ckt-simplifies\] holds, and so with probability at least $9/10$ every bottom product gate in $C$ with support larger than $\tilde{\beta}\log n$ is set to zero. After the restriction, the circuit computes $R_{\tilde{\epsilon}}(NW_{\tilde{d}})$. We are thus left with a small-support homogeneous depth four circuit computing some random restriction of the $NW_{\tilde{d}}$ polynomial, and then, by Lemma \[lem:bounded-support-lb-restricedNW\] above, the top fan-in of $R_{\tilde{\epsilon}}(C)$ must be at least $2^{\Omega(n)}$. Hence, any homogeneous $\spsp$ circuit computing $NW_{\tilde{d}}$ must have size at least $n^{\Omega(\log\log n)}$.
Open Problems
=============
\[sec:conclusion\] The main question left open by this work is to prove much stronger, possibly exponential lower bounds for homogeneous $\spsp$ circuits. Given the earlier related works and the results of this paper, this question might be well within reach. It would also be very interesting to understand the limits of the new complexity measure of bounded support shifted partial derivatives introduced in this paper (as well as other variants), and to investigate whether they can be used to prove lower bounds for other interesting classes of circuits.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Mike Saks and Avi Wigderson for many helpful discussions and much encouragement. We are also thankful to Amey Bhangale, Ben Lund and Nitin Saurabh for carefully sitting through a presentation on an earlier draft of the proof.
[^1]: Department of Computer Science, Rutgers University. Email: `mrinal.kumar@rutgers.edu`.
[^2]: Department of Computer Science and Department of Mathematics, Rutgers University. Email: `shubhangi.saraf@gmail.com`.
---
abstract: 'Second order cone programs (SOCPs) are a class of structured convex optimization problems that generalize linear programs. We present a quantum algorithm for SOCPs based on a quantum variant of the interior point method. Our algorithm outputs a classical solution to the SOCP with objective value $\epsilon$ close to the optimal in time $\widetilde{O} \left( n\sqrt{r} \frac{\zeta \kappa}{\delta^2} \log \left(1/\epsilon\right) \right)$ where $r$ is the rank and $n$ the dimension of the SOCP, $\delta$ bounds the distance from strict feasibility for the intermediate solutions, $\zeta$ is a parameter bounded by $\sqrt{n}$, and $\kappa$ is an upper bound on the condition number of matrices arising in the classical interior point method for SOCPs. We present applications to the support vector machine (SVM) problem in machine learning that reduces to SOCPs. We provide experimental evidence that the quantum algorithm achieves an asymptotic speedup over classical SVM algorithms with a running time $\widetilde{O}(n^{2.557})$ for random SVM instances. The best known classical algorithms for such instances have complexity $\widetilde{O} \left( n^{\omega+0.5}\log(1/\epsilon) \right)$, where $\omega$ is the matrix multiplication exponent that has a theoretical value of around $2.373$, but is closer to $3$ in practice.'
author:
- Iordanis Kerenidis
- Anupam Prakash
- Dániel Szilágyi
bibliography:
- 'bibliography.bib'
title: 'Quantum algorithms for Second-Order Cone Programming and Support Vector Machines'
---
Introduction
============
Convex optimization is one of the central areas of study in computer science and mathematical optimization. The reason for the great importance of convex optimization is twofold. Firstly, starting with the seminal works of Khachiyan [@khachiyan1980polynomial] and Karmarkar [@karmarkar1984new], efficient algorithms have been developed for a large family of convex optimization problems over the last few decades. Secondly, convex optimization has many real world applications and many optimization problems that arise in practice can be reduced to convex optimization [@boyd2004convex].
There are three main classes of structured convex optimization problems: linear programs (LP), semidefinite programs (SDP), and second-order conic programs (SOCP). The fastest (classical) algorithms for these problems belong to the family of interior-point methods (IPM). Interior point methods are iterative algorithms where the main computation in each step is the solution of a system of linear equations whose size depends on the dimension of the optimization problem. The size of structured optimization problems that can be solved in practice is therefore limited by the efficiency of linear system solvers – on a single computer, most open-source and commercial solvers can handle dense problems with up to tens of thousands of constraints and variables, or sparse problems with the same number of nonzero entries [@mittelmann2019lp; @mittelmann2019socp].
In recent years, there has been a tremendous interest in quantum linear algebra algorithms following the breakthrough algorithm of Harrow, Hassidim and Lloyd [@harrow2009quantum]. Quantum computers are known to offer significant, even exponential speedups [@harrow2009quantum] for problems related to linear algebra and machine learning, including applications to principal components analysis [@lloyd2013quantum], clustering [@lloyd2013quantum; @kerenidis2018q], classification [@li2019sublinear; @kerenidis2018sfa], least squares regression [@kerenidis2017quantum; @chakraborty2018power] and recommendation systems [@kerenidis2016quantum].
This raises the natural question of whether quantum linear algebra can be used to speed up interior point algorithms for convex optimization. The recent work [@kerenidis2018quantum] proposed a quantum interior point method for LPs and SDPs and developed a framework in which the classical linear system solvers in interior point methods can be replaced by quantum linear algebra algorithms. In this work, we extend the results of [@kerenidis2018quantum] and develop a quantum interior point method for second order cone programs (SOCPs).
Second order cone programs are a family of structured optimization problems that have complexity intermediate between LPs and SDPs. They offer an algorithmic advantage over SDPs as the linear systems that arise in the interior point method for SOCPs are of smaller size than those for general SDPs. In fact, the classical complexity for SOCP algorithms is close to that for LP algorithms. Our results indicate that this remains true in the quantum setting, that is, the quantum linear systems arising in the interior point method for SOCPs are easier for quantum algorithms than the linear systems for general SDPs considered in [@kerenidis2018quantum].
SOCPs are also interesting from a theoretical perspective, as the interior point method for LPs, SOCPs and SDPs can be analyzed in a unified manner using the machinery of Euclidean Jordan algebras [@faybusovich1997linear]. An important contribution of our work is to present a similar unified analysis for quantum interior point methods, which is equivalent to analyzing the classical interior point method where the linear systems are only solved approximately with an $\ell_{2}$ norm error. Our analysis is not purely Jordan-algebraic like that of [@faybusovich1997linear]; however, it can still be adapted to analyze quantum interior point methods for LPs and SDPs.
The main advantage of SOCPs from the applications perspective is their high expressive power. Many problems that are traditionally formulated as SDPs can in fact be reduced to SOCPs, an extensive list of such problems can be found in [@alizadeh2003second]. An important special case of SOCPs is the convex quadratic programming (QP) problem, which in turn includes the support vector machine (SVM) problem in machine learning [@cortes1995support] and the portfolio optimization problem in computational finance [@markowitz1952portfolio; @boyd2004convex] as special cases. The SVM and portfolio optimization problems are important in practice and have recently been proposed as applications for quantum computers [@kerenidis2019quantum; @rebentrost2018quantum; @rebentrost2014quantum].
However, these applications consider the special cases of the $\ell_{2}$-(or least-squares-)SVM [@suykens1999least] and the unconstrained portfolio optimization problem, which reduce to a single system of linear equations. The $\ell_{1}$-regularized SVM algorithm is widely used in machine learning as it finds a robust classifier that maximizes the error margin. Similarly, constrained portfolio optimization is widely applicable in computational finance as it is able to find optimal portfolios subject to complex budget constraints. The ($\ell_{1}$-)SVM and constrained portfolio optimization problems reduce to SOCPs; our algorithm can therefore be regarded as the first specialized quantum algorithm for these problems.
We provide experimental evidence to demonstrate that our quantum SVM algorithm indeed achieves an asymptotic speedup. For suitably constructed 'random' SVM instances, our experiments indicate that the quantum SVM algorithm has running time $\widetilde{O}(n^{2.557})$, which is an asymptotic speedup over classical SVM algorithms, which have complexity $\Omega(n^{3})$ for such instances.
The significance of these experimental results is two-fold. First, they demonstrate that quantum interior point methods can achieve significant speedups for optimization problems arising in practice; this was not clear a priori from the results in [@kerenidis2018q], where the running time depends on the condition number of intermediate matrices arising in the interior point method, for which it is hard to prove worst case upper bounds. It suggests that one may hope to obtain significant polynomial speedups for other real-world optimization problems using quantum optimization methods. Second, support vector machines are one of the most well studied classifiers in machine learning; it is therefore significant to have an asymptotically faster quantum SVM algorithm. Moreover, our SVM algorithm has the same inputs and outputs as classical SVM algorithms and is therefore directly comparable to them. This can lead to further developments at the interface of quantum computing and machine learning, for example that of quantum kernel methods.
Our Results
-----------
In order to state our results more precisely, we first introduce second order cone programs (SOCPs). An SOCP is an optimization problem over a product of Lorentz cones ${\mathcal{L}}^k$, which are defined as $${\mathcal{L}}^k = \left\lbrace\left. {\bm{x}} = (x_0; {\widetilde{{\bm{x}}}}) \in {\mathbb{R}}^{k} \;\right\rvert\; \norm{{\widetilde{{\bm{x}}}}} \leq x_0 \right\rbrace.$$ The SOCP in the standard form is the following optimization problem: $$\begin{array}{ll}
\min\limits_{{\bm{x}}_1, \dots, {\bm{x}}_r} & {\bm{c}}_1^T {\bm{x}}_1+\cdots {\bm{c}}_r^T {\bm{x}}_r\\
\text{s.t.}& A^{(1)} {\bm{x}}_1 + \cdots + A^{(r)}{\bm{x}}_r = {\bm{b}} \\
& {\bm{x}}_i \in {\mathcal{L}}^{n_i},\; \forall i \in [r],
\end{array} \label{prob:SOCP primal verbose}$$ The inputs for the problem are the $r$ constraint matrices $A^{(i)} \in {\mathbb{R}}^{m \times n_i}, i \in [r]$, vectors ${\bm{b}} \in {\mathbb{R}}^m$ and ${\bm{c}}_i \in {\mathbb{R}}^{n_i}, i\in [r]$. The output is a solution vector ${\bm{x}}_i \in {\mathbb{R}}^{n_i}, i\in [r]$. The running time for SOCP algorithms is stated in terms of the parameters $r$ and $n : = \sum_{ i \in [r]} n_{i}$. The rank $r$ determines the number of SOCP constraints while $n$ is the dimension of the solution vector.
The best known classical algorithm for SOCP is based on the interior-point method [@ben2001lectures], and has complexity $O\left( \sqrt{r} n^{\omega} \log(n / \epsilon) \right)$, where $\omega$ is the matrix multiplication exponent (approximately equal to $2.373$ in theory [@le2014powers], and equal to $3$ in practical implementations), and the solutions obtained are within $\epsilon$ of the optimal value. It is worth noting that very recently an optimal $O\left(n^{\omega} \log(n / \epsilon) \right)$ algorithm has been found for linear programs [@cohen2018solving], which are a special case of SOCPs.
Our main result is a quantum interior point method for SOCPs with the following running time and approximation guarantees.
\[thm:runtime in introduction\] Consider an SOCP in the standard form with $A^{(i)} \in {\mathbb{R}}^{m\times n_{i}}$, ${\bm{b}} \in {\mathbb{R}}^m$ and ${\bm{c}}_i \in {\mathbb{R}}^{n_i}$ for $i\in [r]$, and $m \leq n$. Then, there is a quantum algorithm that outputs a solution ${\bm{x}}_{i} \in {\mathcal{L}}^{n_i}$ achieving an objective value within $\epsilon$ of the optimal value in time $$T = \widetilde{O} \left( n\sqrt{r} \cdot \frac{ \kappa \zeta}{\delta^2} \log\left( 1 / \epsilon \right)\log\left( \frac{\kappa \zeta}{\delta} \right) \right).$$ Here $\zeta \leq \sqrt{n}$ is a factor that appears in quantum linear system solvers and $\kappa$ is an upper bound on the condition number of the matrices arising in the interior point method for SOCPs. Similarly, the parameter $\delta$ is a lower bound on the distance of the intermediate iterates to the boundary of the respective cones.
Let us make some observations about the running time of the quantum algorithm. Unlike the classical algorithm, which has a logarithmic dependence on the precision, the quantum algorithm scales as $O(1/\delta^{2})$. This is due to the use of quantum tomography to recover classical data; the quantum algorithm is better suited to recovering low accuracy solutions. Since the factor $\zeta$ is bounded by $\sqrt{n}$ (and is much less in practice), the quantum algorithm, with a worst case running time of $\widetilde{O} \left( n^{1.5}\sqrt{r} \cdot \frac{ \kappa}{\delta^2} \right)$, can achieve a significant speedup over the best known classical algorithms if the condition number $\kappa$ is bounded and the required precision is not too high. Furthermore, theoretical work on optimization [@dollar2005iterative] suggests that $\kappa = O(1/\delta)$, thus the quantum algorithm can be viewed as having worst case running time $\widetilde{O} \left( \frac{ n^{1.5}\sqrt{r}}{\delta^3} \right)$.
We simulated our algorithm on random SVM instances in order to assess its performance and running time. It turns out that for random SVMs trained using the interior point method, the best classification accuracy and generalization errors are achieved for a modest value of $\epsilon \approx 0.1$ over a large range of problem sizes. For this optimal value $\epsilon =0.1$, the quantity $\kappa\zeta / \delta^2$ that determines the running time grows only slightly faster than $O(n)$ on these random instances. The estimated running time for the quantum SVM algorithm is $O(n^{2.557})$, where the exponent was obtained by fitting a curve to running times for a large number of random SVMs of varying size. Further, a standard statistical analysis shows that the 95% confidence interval for the exponent is $[2.503, 2.610]$, supporting the claim that our quantum SVM algorithm achieves an asymptotic speedup over classical SVM algorithms.
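The exponent and confidence interval can be estimated by an ordinary least-squares fit on a log-log scale. The sketch below illustrates the procedure on synthetic placeholder data; the actual measurements would be the per-instance costs $n\sqrt{r}\,\kappa\zeta/\delta^{2}$ from the simulations, which are not reproduced here.

```python
import numpy as np
from scipy import stats

# Placeholder data: n_vals are instance sizes and cost stands in for the measured
# per-instance quantity; here we generate values with exponent 2.55 and small
# multiplicative noise purely to illustrate the fitting procedure.
rng = np.random.default_rng(0)
n_vals = np.array([500, 1000, 2000, 4000, 8000], dtype=float)
cost = 3.0 * n_vals**2.55 * np.exp(rng.normal(0.0, 0.05, n_vals.size))

# Fit log(cost) = slope * log(n) + intercept; the slope estimates the exponent.
fit = stats.linregress(np.log(n_vals), np.log(cost))
half_width = stats.t.ppf(0.975, df=n_vals.size - 2) * fit.stderr
print(f"exponent ~ {fit.slope:.3f}, 95% CI ~ [{fit.slope - half_width:.3f}, "
      f"{fit.slope + half_width:.3f}]")
```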
Techniques
----------
Our quantum interior-point method for SOCP is based both on the techniques from [@kerenidis2018quantum] and on classical SOCP IPMs from [@monteiro2000polynomial; @alizadeh2003second]. In general, structured optimization problems (LPs, SOCPs and SDPs) can be viewed as having the following form: $$\begin{array}{ll}
\min\limits_{{\bm{x}}} & \langle {\bm{c}}, {\bm{x}} \rangle\\
\text{s.t.}& \mathcal{A} {\bm{x}} = {\bm{b}} \\
& {\bm{x}} \in {\mathcal{K}},
\end{array} \label{prob:conic opt primal}$$ where ${\mathcal{K}}$ is some “efficiently representable” cone, $\langle \cdot , \cdot \rangle$ the corresponding inner product, and $\mathcal A$ is a linear operator on ${\mathcal{K}}$. In order to be “efficiently representable”, the cone ${\mathcal{K}}$ is considered to be a direct product ${\mathcal{K}}= {\mathcal{K}}_1\times\cdots\times{\mathcal{K}}_r$ of $r$ basic cones. A basic cone is often taken to be of one of the following three types:
1. ${\mathbb{R}}_+ = \{ x\;|\;x\geq 0 \}$, the cone of nonnegative real numbers,
2. ${\mathcal{L}}^n = \{ {\bm{x}} = (x_0, {\widetilde{{\bm{x}}}}) \in {\mathbb{R}}^{n+1} \;|\; \norm{{\widetilde{{\bm{x}}}}} \leq x_0 \}$, the Lorentz cone of dimension $n$,
3. $\mathcal{S}_+^{n\times n} = \{ X \in {\mathbb{R}}^{n\times n}\;|\; X\text{ symmetric positive semidefinite} \},$ the cone of $n\times n$ positive semidefinite matrices.
If all cones ${\mathcal{K}}_i, i\in [r]$ are of the same type, the optimization problem is a linear program (LP), second-order cone program (SOCP), and semidefinite program (SDP), respectively. In particular, we get the following three problems corresponding to the case of LPs, SOCPs and SDPs respectively. $$\begin{array}{lcr}
\begin{array}{ll}
\min\limits_{{\bm{x}}} & {\bm{c}}^T {\bm{x}}\\
\text{s.t.}& A {\bm{x}} = {\bm{b}} \\
& x_i \geq 0,\; \forall i \in [r]
\end{array}
&
\begin{array}{ll}
\min\limits_{{\bm{x}}} & {\bm{c}}^T{\bm{x}} \\
\text{s.t.}& A {\bm{x}} = {\bm{b}} \\
& {\bm{x}} = [{\bm{x}}_1; \dots; {\bm{x}}_r ]\\
& {\bm{x}}_i \in {\mathcal{L}}^{n_i},\; \forall i \in [r]
\end{array}
&
\begin{array}{ll}
\min\limits_{X} & {\operatorname{tr}}(CX)\\
\text{s.t.}& {\operatorname{tr}}(A_j X) = b_j,\; \forall j \in [r] \\
& X \succeq 0.
\end{array}
\end{array}$$ Here, the notation ${\bm{x}} = [{\bm{x}}_1; \dots; {\bm{x}}_r ]$ means that the vector ${\bm{x}}$ is obtained by (vertically) concatenating the vectors ${\bm{x}}_i$ – we say that ${\bm{x}}$ is a block-vector with blocks ${\bm{x}}_i$, $i \in [r]$. It is well-known [@boyd2004convex] that every LP can be expressed as an SOCP, and every SOCP can be expressed as an SDP. Thus, SOCPs are a sort of “middle ground” between LPs and SDPs.
The fastest known algorithms for these problems are based on a family of algorithms known as interior point methods, which are used in both commercial [@andersen2000mosek] and open-source solvers [@borchers1999csdp; @domahidi2013ecos; @tutuncu2003solving]. Interior point algorithms can be viewed as an application of Newton’s method; the main computation in each iteration is spent on the solution of linear systems that are obtained by linearizing certain non-linear optimality constraints. This suggests that quantum linear algebra algorithms [@harrow2009quantum] can be used to obtain speedups for optimization using interior point methods.
However, solving a quantum linear system differs from a classical solution as it is not possible to write down all $n$ coordinates of the solution vector ${\bm{x}} \in {\mathbb{R}}^{n}$ for linear system $A{\bm{x}} = {\bm{b}}$ in time $o(n)$. Instead, these algorithms encode vectors as quantum states, so that $$\label{eq:quantum vector notation}
{\bm{z}} \in {\mathbb{R}}^n \text{ (with } \norm{{\bm{z}}}=1 \text{) is encoded as } \ket{{\bm{z}}} = \sum_{i=1}^{n} z_i \ket{i},$$ where we write $\ket{i}$ for the joint $\left\lceil \log_2(n) \right\rceil$-qubit state corresponding to $\left\lceil \log_2(n) \right\rceil$-bit binary expansion of $i$. Then, the solution they output is a quantum state $\ket{\phi}$ close to $\ket{A^{-1}{\bm{b}}}$. In order to obtain a classical solution, we perform *tomography* on $\ket{\phi}$, and obtain a classical vector ${\overline{{\bm{x}}}}$ that is close to $\ket{\phi}$, so that finally we have a guarantee $\norm{{\bm{x}} - {\overline{{\bm{x}}}}} \leq \epsilon \norm{{\bm{x}}}$ for some $\epsilon > 0$. A quantum interior point method for LPs and SDPs based on quantum linear system solvers and an efficient tomography algorithm was given in [@kerenidis2018quantum].
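As a toy illustration of the encoding just described, and of why recovering a classical description scales with the dimension, the following sketch samples computational-basis measurement outcomes of $\ket{{\bm{z}}}$ and recovers the magnitudes $|z_i|$ from the empirical frequencies; the actual tomography procedure used below also recovers the signs, which this toy version does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# A classical vector, normalized so it can be read as the amplitude vector of |z>.
z = rng.normal(size=8)
z /= np.linalg.norm(z)

# Measuring |z> in the computational basis returns outcome i with probability z_i^2.
# Empirical frequencies over N repetitions estimate the magnitudes |z_i|.
N = 100_000
counts = np.bincount(rng.choice(z.size, size=N, p=z**2), minlength=z.size)
z_abs_estimate = np.sqrt(counts / N)
print(np.max(np.abs(z_abs_estimate - np.abs(z))))   # small for large N
```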
The main technical contribution of this paper is the analysis of an approximate IPM algorithm for SOCP, which assumes that all linear systems are solved up to a relative error $\epsilon$. Although a similar analysis has been done by [@kerenidis2018quantum] for LPs and SDPs, we feel that our analysis of the SOCP case is especially interesting, since it uses Euclidean Jordan algebras to underscore the similarities between SOCPs on one hand, and LPs and SDPs on the other. Apart from [@kerenidis2018quantum], our analysis is inspired by the analysis of a classical SOCP IPM from [@monteiro2000polynomial], and uses [@alizadeh2003second] as a dictionary for translating concepts between the algebra of Hermitian matrices and the Jordan algebra of the second-order cone.
In the classical case, it has been shown in [@faybusovich1997linear] that Euclidean Jordan algebras capture all important features of conic programs, and thus a general IPM can be expressed in purely Jordan-algebraic terms. Here, we do not strive for such generality, and focus on the case when the cone under consideration is a product of second-order cones. Still, by comparing this analysis with classical [@ben2001lectures] and quantum [@kerenidis2018quantum] analyses of LP and SDP interior point methods, we see that only minimal changes are required to cover the LP and SDP cases. Indeed, we point out the appropriate analogies throughout the paper.
The rest of the paper is organized as follows. First, in Section \[sec:Preliminaries\], we introduce the necessary background on Jordan algebras and quantum linear algebra algorithms. Then, in Section \[sec:SOCP\] we present second-order conic programming, and a classical IPM for solving it. The main technical results are contained in Section \[sec:Quantum IPM\], where we present our quantum IPM for SOCP, and analyze its runtime and convergence guarantees. In Section \[sec:Applications to SVMs\], we use our algorithm to train a Support Vector Machine (SVM) binary classifier. We present some numerical results that demonstrate the performance of our algorithm when applied to real-world data. Finally, in Section \[sec:Conclusion\], we present some concluding remarks as well as possible directions for future research.
Preliminaries {#sec:Preliminaries}
=============
The goal of this section is to introduce the technical framework necessary to follow the results in Sections \[sec:SOCP\] and \[sec:Quantum IPM\]. In particular, the definitions of the Jordan product $\circ$ and the matrix representation ${\operatorname{Arw}}({\bm{x}})$ (and their block extensions) are necessary for understanding the algorithm itself, whereas the rest is used only for the analysis in Section \[sec:Quantum IPM\]. On the quantum linear algebra side, we give precise meaning to the statement “linear systems can be solved quantumly in polylogarithmic time”, and present the performance and correctness guarantees of the relevant algorithms.
Euclidean Jordan algebras
-------------------------
Jordan algebras were originally developed to formalize the notion of an algebra of observables in quantum mechanics [@jordan1933verallgemeinerungsmoglichkeiten; @albert1946jordan], but they have since been applied to many other areas, most interestingly to provide a unified theory of IPMs for representable symmetric cones [@schmieta2001associative]. In this report, we will not strive for such generality, and will instead focus on SOCPs and the Lorentz cone. Still, most of the results in this section have obvious counterparts in the algebra of Hermitian matrices, to the point that the corresponding claim can be obtained using word-by-word translation.
The main object under consideration is the $n$-dimensional Lorentz cone, defined for $n\geq 0$ as $${\mathcal{L}}^n := \{ {\bm{x}} = (x_0; {\widetilde{{\bm{x}}}}) \in {\mathbb{R}}^{n+1} \;|\; \norm{{\widetilde{{\bm{x}}}}} \leq x_0 \}.$$ We think of the elements of ${\mathcal{L}}^n$ as being “positive” (just like positive semidefinite matrices), since for $n=0$, ${\mathcal{L}}^0 = {\mathbb{R}}_+$ is exactly the set of nonnegative real numbers. The Jordan product of two vectors $(x_0, {\widetilde{{\bm{x}}}}) \in {\mathbb{R}}^{n+1}$ and $(y_0, {\widetilde{{\bm{y}}}}) \in {\mathbb{R}}^{n+1}$ is defined as $${\bm{x}} \circ {\bm{y}} := \begin{bmatrix}
{\bm{x}}^T {\bm{y}} \\
x_0{\widetilde{{\bm{y}}}} + y_0{\widetilde{{\bm{x}}}}
\end{bmatrix}, \text{ and has the identity element }{\bm{e}} = \begin{bmatrix}
1 \\
0^n
\end{bmatrix},$$ where $0^n$ is the column vector of $n$ zeros. This product is the analogue to the matrix product $X\cdot Y$, however, it is commutative and non-associative. In the special case of ${\bm{x}} \circ \cdots \circ {\bm{x}}$, the order of operations does not matter, so we can unambiguously define $
{\bm{x}}^k = \underbrace{{\bm{x}} \circ \cdots \circ {\bm{x}}}_{k \text{ times}},
$ and we even have ${\bm{x}}^p \circ {\bm{x}}^q = {\bm{x}}^{p+q}$, so $\circ$ is *power-associative*. For every vector ${\bm{x}}$, we can define the matrix (or *linear*) representation of ${\bm{x}}$, ${\operatorname{Arw}}({\bm{x}})$ (“arrowhead matrix”) as ${\operatorname{Arw}}({\bm{x}}) := \begin{bmatrix}
x_0 & {\widetilde{{\bm{x}}}}^T \\
{\widetilde{{\bm{x}}}} & x_0 I_n
\end{bmatrix}$, so we have ${\operatorname{Arw}}({\bm{e}}) = I$, as well as an alternative definition of ${\bm{x}} \circ {\bm{y}}$: $${\bm{x}} \circ {\bm{y}} = {\operatorname{Arw}}({\bm{x}}) {\bm{y}} = {\operatorname{Arw}}({\bm{x}}) {\operatorname{Arw}}({\bm{y}}) {\bm{e}}.$$ What makes the structure above particularly interesting is the fact that for any vector, we can define its spectral decomposition in a way that agrees with our intuition from the algebra of Hermitian matrices: we do this by noting that for all ${\bm{x}}$, we have $$\label{eq:preliminary spectral decomposition}
{\bm{x}} = \frac12 \left(x_0 + \norm{{\widetilde{{\bm{x}}}}}\right) \begin{bmatrix}
1 \\
\frac{{\widetilde{{\bm{x}}}}}{\norm{{\widetilde{{\bm{x}}}}}}
\end{bmatrix} +
\frac12 \left(x_0 - \norm{{\widetilde{{\bm{x}}}}}\right) \begin{bmatrix}
1 \\
-\frac{{\widetilde{{\bm{x}}}}}{\norm{{\widetilde{{\bm{x}}}}}}
\end{bmatrix},$$ so we can define the two eigenvalues and eigenvectors of ${\bm{x}}$ as $$\begin{aligned}
\label{eq:jordan eigenvalues and eigenvectors}
\lambda_1 &:= \lambda_1({\bm{x}}) = x_0 + \norm{{\widetilde{{\bm{x}}}}}, \quad
\lambda_2 := \lambda_2({\bm{x}}) = x_0 - \norm{{\widetilde{{\bm{x}}}}} \\
{\bm{c}}_1 &:= {\bm{c}}_1({\bm{x}}) = \frac12 \begin{bmatrix}
1 \\
\frac{{\widetilde{{\bm{x}}}}}{\norm{{\widetilde{{\bm{x}}}}}}
\end{bmatrix}, \quad
{\bm{c}}_2 := {\bm{c}}_2({\bm{x}}) = \frac12 \begin{bmatrix}
1 \\
-\frac{{\widetilde{{\bm{x}}}}}{\norm{{\widetilde{{\bm{x}}}}}}
\end{bmatrix}.
\end{aligned}$$ Thus, using the notation from , we can rewrite as ${\bm{x}} = \lambda_1 {\bm{c}}_1 + \lambda_2 {\bm{c}}_2$. The set of eigenvectors $\{{\bm{c}}_1, {\bm{c}}_2 \}$ is called the *Jordan frame* of ${\bm{x}}$, and satisfies several properties:
Let ${\bm{x}} \in {\mathbb{R}}^{n+1}$ and let $\{ {\bm{c}}_1, {\bm{c}}_2 \}$ be its Jordan frame. Then, the following holds:
1. ${\bm{c}}_1 \circ {\bm{c}}_2 = 0$ (the eigenvectors are “orthogonal”)
2. ${\bm{c}}_1\circ {\bm{c}}_1 = {\bm{c}}_1$ and ${\bm{c}}_2 \circ {\bm{c}}_2 = {\bm{c}}_2$
3. ${\bm{c}}_1$, ${\bm{c}}_2$ are of the form $\left( \frac12; \pm {\widetilde{{\bm{c}}}} \right)$ with $\norm{{\widetilde{{\bm{c}}}}} = \frac12$
On the other hand, just like a given matrix is positive (semi)definite if and only if all of its eigenvalues are positive (nonnegative), a similar result holds for ${\mathcal{L}}^n$ and ${\operatorname{int}}{\mathcal{L}}^n$ (the Lorentz cone and its interior):
Let ${\bm{x}} \in {\mathbb{R}}^{n+1}$ have eigenvalues $\lambda_1, \lambda_2$. Then, the following holds:
1. ${\bm{x}} \in {\mathcal{L}}^n$ if and only if $\lambda_1\geq 0$ and $\lambda_2 \geq 0$.
2. ${\bm{x}} \in {\operatorname{int}}{\mathcal{L}}^n$ if and only if $\lambda_1 > 0$ and $\lambda_2 > 0$.
Now, using this decomposition, we can define arbitrary real powers ${\bm{x}}^p$ for $p \in {\mathbb{R}}$ as ${\bm{x}}^p := \lambda_1^p{\bm{c}}_1 + \lambda_2^p {\bm{c}}_2$, and in particular the “inverse” and the “square root” $$\begin{aligned}
{\bm{x}}^{-1} &= \frac{1}{\lambda_1} {\bm{c}}_1 + \frac{1}{\lambda_2} {\bm{c}}_2, \text{ if } \lambda_1\lambda_2 \neq 0, \\
{\bm{x}}^{1/2} &= \sqrt{\lambda_1} {\bm{c}}_1 + \sqrt{\lambda_2} {\bm{c}}_2, \text{ if } {\bm{x}} \in {\mathcal{L}}^n.
\end{aligned}$$ Moreover, we can also define some operator norms, namely the Frobenius and the spectral one: $$\begin{aligned}
\norm{{\bm{x}}}_F &= \sqrt{\lambda_1^2 + \lambda_2^2} = \sqrt{2}\norm{{\bm{x}}}, \\
\norm{{\bm{x}}}_2 &= \max\{ |\lambda_1|, |\lambda_2| \} = |x_0| + \norm{{\widetilde{{\bm{x}}}}}.
\end{aligned}$$ It is worth noting that both these norms and the powers ${\bm{x}}^p$ can be computed in exactly the same way for matrices, given their eigenvalue decompositions.
Finally, the analysis in Section \[sec:Quantum IPM\] requires an analogue to the operation $Y \mapsto XYX$. It turns out that for this we need another matrix representation (*quadratic representation*) $Q_{{\bm{x}}}$, defined as $$\label{qrep}
Q_{{\bm{x}}} := 2{\operatorname{Arw}}^2({\bm{x}}) - {\operatorname{Arw}}({\bm{x}}^2) = \begin{bmatrix}
\norm{{\bm{x}}}^2 & 2x_0{\widetilde{{\bm{x}}}}^T \\
2x_0{\widetilde{{\bm{x}}}} & \lambda_1\lambda_2 I_n + 2{\widetilde{{\bm{x}}}}{\widetilde{{\bm{x}}}}^T
\end{bmatrix}.$$ Now, the matrix-vector product $Q_{{\bm{x}}}{\bm{y}}$ will behave as the quantity $XYX$. To simplify the notation, we also define the matrix $T_{{\bm{x}}} := Q_{{\bm{x}}^{1/2}}$.
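To make these definitions concrete, here is a small numpy sketch (purely classical, and not part of the quantum algorithm) that computes ${\operatorname{Arw}}({\bm{x}})$, the spectral decomposition and $Q_{{\bm{x}}}$ for a single Lorentz-cone block, and checks the identities ${\bm{x}} = \lambda_1 {\bm{c}}_1 + \lambda_2 {\bm{c}}_2$ and $Q_{{\bm{x}}}{\bm{e}} = {\bm{x}}^2 = {\operatorname{Arw}}({\bm{x}}){\bm{x}}$.

```python
import numpy as np

def arw(x):
    """Arrowhead (linear) representation Arw(x) of a Lorentz-cone vector x = (x0; x_tilde)."""
    x0, xt = x[0], x[1:]
    A = x0 * np.eye(xt.size + 1)
    A[0, 1:] = xt
    A[1:, 0] = xt
    return A

def jordan_eig(x):
    """Eigenvalues and Jordan frame of x, as in the spectral decomposition above."""
    x0, xt = x[0], x[1:]
    nrm = np.linalg.norm(xt)
    lam1, lam2 = x0 + nrm, x0 - nrm
    u = xt / nrm if nrm > 0 else np.zeros_like(xt)  # degenerate case: frame not unique
    c1 = 0.5 * np.concatenate(([1.0], u))
    c2 = 0.5 * np.concatenate(([1.0], -u))
    return lam1, lam2, c1, c2

def quad_rep(x):
    """Quadratic representation Q_x = 2 Arw(x)^2 - Arw(x o x)."""
    A = arw(x)
    x_sq = A @ x            # x o x = Arw(x) x
    return 2 * A @ A - arw(x_sq)

# Sanity checks on a vector in the interior of L^2 (||(0.6, -0.8)|| = 1 < 2).
x = np.array([2.0, 0.6, -0.8])
lam1, lam2, c1, c2 = jordan_eig(x)
e = np.array([1.0, 0.0, 0.0])
assert np.allclose(lam1 * c1 + lam2 * c2, x)      # spectral decomposition
assert np.allclose(quad_rep(x) @ e, arw(x) @ x)   # Q_x e = x^2
```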
The definitions that we introduced so far are suitable for dealing with a single constraint ${\bm{x}} \in {\mathcal{L}}^n$. For dealing with multiple constraints ${\bm{x}}_1 \in {\mathcal{L}}^{n_1}, \dots, {\bm{x}}_r \in {\mathcal{L}}^{n_r}$, we need to deal with block-vectors ${\bm{x}} = ({\bm{x}}_1;{\bm{x}}_2;\dots;{\bm{x}}_r)$ and ${\bm{y}} = ({\bm{y}}_1;{\bm{y}}_2;\dots;{\bm{y}}_r)$. We call the number of blocks $r$ the *rank* of the vector (thus, up to now, we were only considering rank-1 vectors). Now, we extend all our definitions to rank-$r$ vectors.
1. ${\bm{x}} \circ {\bm{y}} := ({\bm{x}}_1 \circ {\bm{y}}_1; \dots; {\bm{x}}_r \circ {\bm{y}}_r)$
2. The matrix representations ${\operatorname{Arw}}({\bm{x}})$ and $Q_{{\bm{x}}}$ are the block-diagonal matrices containing the representations of the blocks: $${\operatorname{Arw}}({\bm{x}}) := {\operatorname{Arw}}({\bm{x}}_1) \oplus \cdots \oplus {\operatorname{Arw}}({\bm{x}}_r) \text{ and } Q_{{\bm{x}}} := Q_{{\bm{x}}_1} \oplus \cdots \oplus Q_{{\bm{x}}_r}$$
3. ${\bm{x}}$ has $2r$ eigenvalues (with multiplicities) – the union of the eigenvalues of the blocks ${\bm{x}}_i$. The eigenvectors of ${\bm{x}}$ corresponding to block $i$ contain the eigenvectors of ${\bm{x}}_i$ as block $i$, and are zero everywhere else.
4. The identity element is ${\bm{e}} = ({\bm{e}}_1; \dots ;{\bm{e}}_r)$, where ${\bm{e}}_i$’s are the identity elements for the corresponding blocks.
Thus, all things defined using eigenvalues can also be defined for rank-$r$ vectors:
1. The norms are extended as $\norm{{\bm{x}}}^2_F := \sum_{i=1}^r \norm{{\bm{x}}_i}_F^2$ and $\norm{{\bm{x}}}_2 := \max_i \norm{{\bm{x}}_i}_2$, and
2. Powers are computed blockwise as ${\bm{x}}^p := ({\bm{x}}_1^p;\dots;{\bm{x}}_r^p)$ whenever the corresponding blocks are defined.
Quantum linear algebra
----------------------
As touched upon in the introduction, in order to “solve linear systems in polylogarithmic time”, we need to change our computational model and restate the problem in a way that makes this possible. Namely, given a matrix $A$ and a vector ${\bm{b}}$, we construct the quantum state $\ket{A^{-1}{\bm{b}}}$ (using the notation from ). Once we have the state $\ket{A^{-1} {\bm{b}}}$, we can use it in further computations, sample from the corresponding discrete distribution (this forms the basis of the quantum recommendation system algorithm [@kerenidis2016quantum]), or perform *tomography* on it to recover the underlying classical vector. Here, we clarify how we prepare classical vectors and matrices for use within a quantum algorithm, as well as how we get classical data back, once the quantum computation is done.
In this section, we assume that we want to obtain a (classical) vector ${\overline{{\bm{x}}}} \in {\mathbb{R}}^n$ that satisfies $\norm{{\bm{x}} - {\overline{{\bm{x}}}}} \leq \delta \norm{{\bm{x}}}$, where $A{\bm{x}} = {\bm{b}}$ and $A \in {\mathbb{R}}^{n \times n}$, ${\bm{b}} \in {\mathbb{R}}^n$. We also assume that $A$ is symmetric, since otherwise we can work with its symmetrized version $\operatorname{sym}(A) = \begin{bmatrix}
0 & A \\
A^T & 0
\end{bmatrix}$. The quantum linear system solvers from [@chakraborty2018power; @gilyen2018quantum] require access to an efficient *block encoding* of $A$, which is defined as follows:
Let $A \in {\mathbb{R}}^{n\times n}$ be a matrix. Then, the $\ell$-qubit unitary matrix $U \in {\mathbb{C}}^{2^\ell \times 2^\ell}$ is a $(\zeta, \ell)$ block encoding of $A$ if $U = \begin{bmatrix}
A / \zeta & \cdot \\
\cdot & \cdot
\end{bmatrix}$.
Furthermore, $U$ needs to be implemented efficiently, i.e. using an $\ell$-qubit quantum circuit of depth (poly)logarithmic in $n$. Such a circuit would allow us to efficiently create states $\ket{A_i}$ corresponding to rows or columns of $A$. Moreover, we need to be able to construct such a data structure efficiently from the classical description of $A$. It turns out that we are able to fulfill both of these requirements using a data structure built on top of QRAM [@kerenidis2016quantum].
\[qbe\] There exist QRAM data structures for storing vectors ${\bm{v}}_i \in {\mathbb{R}}^n$, $i \in [m]$ and matrices $A \in {\mathbb{R}}^{n\times n}$ such that with access to these data structures one can do the following:
1. Given $i\in [m]$, prepare the state $\ket{{\bm{v}}_i}$ in time $\widetilde{O}(1)$. In other words, the unitary $\ket{i}\ket{0} \mapsto \ket{i}\ket{{\bm{v}}_i}$ can be implemented efficiently.
2. A $(\zeta(A), 2 \log n)$ unitary block encoding for $A$ with $\zeta(A) = \min( \norm{A}_{F}/\norm{A}_{2}, s_{1}(A)/\norm{A}_{2})$, where $s_1(A) = \max_i \sum_j |A_{i, j}|$ can be implemented in time $\widetilde{O}(\log n)$. Moreover, this block encoding can be constructed in a single pass over the matrix $A$, and it can be updated in $O(\log^2 n)$ time per entry.
Here $\ket{i}$ is the notation for the $\left\lceil \log(m) \right\rceil$ qubit state corresponding to the binary expansion of $i$. The QRAM can be thought of as the quantum analogue to RAM, i.e. an array $[b^{(1)}, \dots, b^{(m)}]$ of $w$-bit bitstrings, whose elements we can access in poly-logarithmic time given their address (position in the array). More precisely, QRAM is just an efficient implementation of the unitary transformation $$\ket{i}\ket{0}^{\otimes w} \mapsto \ket{i} \ket{b^{(i)}_1\dots b^{(i)}_w}, \text{ for } i \in [m].$$ Nevertheless, from now on, we will also refer to storing vectors and matrices in QRAM, meaning that we use the data structure from Theorem \[qbe\]. Note that this is the same quantum oracle model that has been used to solve SDPs in [@kerenidis2018quantum] and [@AG19]. Once we have these block encodings, we may use them to perform linear algebra:
\[qlsa\] (Quantum linear algebra with block encodings) [@chakraborty2018power; @gilyen2018quantum] Let $A \in {\mathbb{R}}^{n\times n}$ be a matrix with non-zero eigenvalues in the interval $[-1, -1/\kappa] \cup [1/\kappa, 1]$, and let $\epsilon > 0$. Given an implementation of an $(\zeta, O(\log n))$ block encoding for $A$ in time $T_{U}$ and a procedure for preparing state $\ket{b}$ in time $T_{b}$,
1. A state $\epsilon$-close to $\ket{A^{-1} b}$ can be generated in time $O((T_{U} \kappa \zeta+ T_{b} \kappa) {\operatorname{polylog}}(\kappa \zeta /\epsilon))$.
2. A state $\epsilon$-close to $\ket{A b}$ can be generated in time $O((T_{U} \kappa \zeta+ T_{b} \kappa) {\operatorname{polylog}}(\kappa \zeta /\epsilon))$.
3. For $\mathcal{A} \in \{ A, A^{-1} \}$, an estimate $\Lambda$ such that $\Lambda \in (1\pm \epsilon) \norm{ \mathcal{A} b}$ can be generated in time $O((T_{U}+T_{b}) \frac{ \kappa \zeta}{ \epsilon} {\operatorname{polylog}}(\kappa \zeta/\epsilon))$.
Finally, in order to recover classical information from the outputs of a linear system solver, we require an efficient procedure for quantum state tomography. The tomography procedure is linear in the dimension of the quantum state.
\[vector state tomography\] There exists an algorithm that given a procedure for constructing $\ket{{\bm{x}}}$ (i.e. a unitary mapping $U:\ket{0} \mapsto \ket{{\bm{x}}}$ in time $T_U$) and precision $\delta > 0$ produces an estimate ${\overline{{\bm{x}}}} \in {\mathbb{R}}^d$ with $\norm{{\overline{{\bm{x}}}}} = 1$ such that $\norm{{\bm{x}} - {\overline{{\bm{x}}}}} \leq \sqrt{7} \delta$ with probability at least $(1 - 1/d^{0.83})$. The algorithm runs in time $O\left( T_U \frac{d \log d}{\delta^2}\right)$.
Of course, repeating this algorithm $\widetilde{O}(1)$ times allows us to increase the success probability to at least $1 - 1/{\operatorname{poly}}(n)$. Putting Theorems \[qbe\], \[qlsa\] and \[vector state tomography\] together, assuming that $A$ and ${\bm{b}}$ are already in QRAM, we obtain that the complexity for getting a classical solution of the linear system $A{\bm{x}} = {\bm{b}}$ with error $\delta$ is $\widetilde{O} \left( n \cdot \frac{\kappa \zeta}{\delta^2} \right)$. For well-conditioned matrices, this presents a significant improvement over $O(n^\omega)$ (or, in practice, $O(n^3)$) needed for solving linear systems classically, especially when $n$ is large and the desired precision is not too high.
Second-order conic programming {#sec:SOCP}
==============================
We introduce the SOCP in this section, and describe the basic classical IPM that we use in order to develop its quantum counterpart. Formally, a SOCP in its standard form is the following optimization problem: $$\begin{array}{ll}
\min & \sum_{i=1}^r {\bm{c}}_i^T {\bm{x}}_i\\
\text{s.t.}& \sum_{i=1}^r A^{(i)} {\bm{x}}_i = {\bm{b}} \\
& {\bm{x}}_i \in {\mathcal{L}}^{n_i}, \forall i \in [r].
\end{array}$$ Concatenating the vectors $({\bm{x}}_1; \dots; {\bm{x}}_r) =: {\bm{x}}$, $({\bm{c}}_1; \dots; {\bm{c}}_r) =: {\bm{c}}$, and concatenating the matrices $[A^{(1)} \cdots A^{(r)}] =: A$ horizontally, we can write the SOCP more compactly as an optimization problem over the product of the corresponding Lorentz cones, ${\mathcal{L}}:= {\mathcal{L}}^{n_1} \times \cdots \times {\mathcal{L}}^{n_r}$:
$$\begin{array}{ll}
\min & {\bm{c}}^T {\bm{x}}\\
\text{s.t.}& A {\bm{x}} = {\bm{b}} \\
& {\bm{x}} \in {\mathcal{L}},
\end{array} \label{prob:SOCP primal}$$
$$\begin{array}{ll}
\max & {\bm{b}}^T {\bm{y}}\\
\text{s.t.}& A^T {\bm{y}} + {\bm{s}} = {\bm{c}}\\
& {\bm{s}} \in {\mathcal{L}}.
\end{array} \label{prob:SOCP dual}$$
The first of these problems is the SOCP *primal*, and the second is its corresponding *dual*. A solution $({\bm{x}}, {\bm{y}}, {\bm{s}})$ satisfying the constraints of both the primal and the dual is *feasible*, and if in addition it satisfies ${\bm{x}} \in {\operatorname{int}}{\mathcal{L}}$ and ${\bm{s}} \in {\operatorname{int}}{\mathcal{L}}$, it is *strictly feasible*. If at least one constraint of either problem is violated, the solution is *infeasible*. Our goal is to work only with strictly feasible solutions, as they are less likely to become infeasible if corrupted by noise. As is often the case when describing IPMs, we assume that there exists a strictly feasible solution, since there are well-known methods for reducing a feasible but not strictly feasible problem to one with a strictly feasible solution [@boyd2004convex].
Just like the gradient of a function vanishes at its (unconstrained) local optimum, similar optimality conditions exist for SOCP as well – at the optimal solution $({\bm{x}}^*, {\bm{y}}^*, {\bm{s}}^*)$, the Karush-Kuhn-Tucker (KKT) conditions are satisfied: $$\begin{aligned}
A{\bm{x}}^* &= {\bm{b}} \quad\text{(primal feasibility)}, \nonumber \\
A^T {\bm{y}}^* + {\bm{s}}^* &= {\bm{c}} \quad\text{(dual feasibility)}, \label{eq:KKT} \\
{{\bm{x}}^*}^T {\bm{s}}^* &= 0 \quad\text{(complementary slackness)}. \nonumber
\end{aligned}$$ Since SOCPs are convex, these conditions are both necessary and sufficient for optimality. Convex duality theory tells us that *weak duality* holds, i.e. that at a feasible solution $({\bm{x}}, {\bm{y}}, {\bm{s}})$ the dual objective is bounded by the primal one: ${\bm{c}}^T {\bm{x}} \geq {\bm{b}}^T {\bm{y}}$. The difference between these two values is called the *duality gap*, it is denoted by $\mu$, and usually normalized by a factor of $\frac1r$: $$\mu := \frac1r \left( {\bm{x}}^{T} {\bm{c}} - {\bm{b}}^{T} {\bm{y}}\right) = \frac1r\left({\bm{x}}^{T} ( {\bm{c}} - A^{T} {\bm{y}})\right)= \frac1r{\bm{x}}^{T} {\bm{s}}.$$ Therefore, at the optimum, by the complementary slackness condition, the duality gap is 0, and thus *strong duality* holds. Furthermore, note that together with ${\bm{x}}^*, {\bm{s}}^* \in {\mathcal{L}}$, the complementary slackness is equivalent to ${\bm{x}}^* \circ {\bm{s}}^* = 0$.
Unfortunately, there is no fast and easy method for solving the KKT system directly (indeed, the complementary slackness condition prevents it from being strictly feasible). What makes an IPM work is that it solves a series of strictly feasible problems of increasing difficulty, so that their solutions converge to a solution of the original problem. These problems are designed so that at the beginning, “the only thing that matters” is that they are strictly feasible no matter the objective value, and as the algorithm progresses, the importance of the objective increases, while the importance of strict feasibility shrinks. This is done by defining a sequence of *barrier problems* $$\label{prob:central path barrier}
\begin{array}{ll}
\arg\min & {\bm{c}}^T {\bm{x}} + \nu {L}({\bm{x}})\\
\text{s.t.}& A {\bm{x}} = {\bm{b}} \\
& {\bm{x}} \in {\mathcal{L}}\end{array} \text{ and }
\begin{array}{ll}
\arg\max & {\bm{b}}^T {\bm{y}} - \nu {L}({\bm{s}})\\
\text{s.t.}& A^T {\bm{y}} + {\bm{s}} = {\bm{c}} \\
&{\bm{s}} \in {\mathcal{L}}\end{array},$$ for all $\nu > 0$. Their name comes from the *barrier function* ${L}({\bm{z}})$, whose purpose is to approach $\infty$ as ${\bm{z}}$ approaches the boundary of the cone ${\mathcal{L}}$. For a single Lorentz cone ${\mathcal{L}}^n$, it is defined as $${L}_n({\bm{z}}) = -\log\left(z_0^2 - \norm{{\widetilde{{\bm{z}}}}}^2\right),$$ and for the product cone ${\mathcal{L}}= {\mathcal{L}}^{n_1} \times \cdots \times {\mathcal{L}}^{n_r}$, it is the sum of the blockwise barriers $${L}({\bm{x}}) = \sum_{i=1}^r {L}_{n_i}({\bm{x}}_i).$$ The primal-dual pair also has KKT optimality conditions similar to the ones above: $$\begin{aligned}
A{\bm{x}} &= {\bm{b}} \nonumber \\
A^T {\bm{y}} + {\bm{s}} &= {\bm{c}} \label{eq:central path} \\
{\bm{x}} \circ {\bm{s}} &= \nu {\bm{e}}. \nonumber
\end{aligned}$$ We refer to the set of solutions of this system, for $\nu \geq 0$, as the *central path*. The goal of all IPMs is to trace the central path in the direction $\nu \to 0$. Although this cannot be done exactly, it turns out that good theoretical guarantees can be proved as long as the iterates stay sufficiently close to the central path. We define the distance from the central path as $d({\bm{x}}, {\bm{s}}, \nu) = \norm{T_{{\bm{x}}} {\bm{s}} - \nu {\bm{e}}}_F$, so the corresponding $\eta$-neighborhood is given by $$\mathcal{N}_\eta(\nu) = \{ ({\bm{x}}, {\bm{y}}, {\bm{s}})\;|\;({\bm{x}}, {\bm{y}}, {\bm{s}}) \;\text{strictly feasible and } d({\bm{x}}, {\bm{s}}, \nu) \leq \eta \nu \}.$$ This set can be thought of as the set of all points that are $\eta$-close to the central path at parameter $\nu$. Intuitively, the elements of $\mathcal{N}_\eta(\nu)$ have duality gap close to $\nu$, so decreasing the central path parameter by a certain factor is equivalent to decreasing the duality gap by the same factor.
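As a small illustration of the barrier defined above, here is a minimal numpy sketch (ours, not part of the algorithm) that evaluates ${L}_n$ on a single Lorentz-cone block and sums the blockwise barriers over a product cone.

```python
import numpy as np

def lorentz_barrier(z):
    """Barrier L_n(z) = -log(z_0^2 - ||z~||^2) for a single Lorentz cone."""
    z0, z_tail = z[0], z[1:]
    det = z0**2 - np.dot(z_tail, z_tail)   # must be > 0 for z in int(L^n)
    assert z0 > 0 and det > 0, "z is not in the interior of the cone"
    return -np.log(det)

def product_barrier(blocks):
    """L(x) = sum_i L_{n_i}(x_i) for the product cone L^{n_1} x ... x L^{n_r}."""
    return sum(lorentz_barrier(zi) for zi in blocks)

# Example: two strictly feasible Lorentz-cone blocks.
x_blocks = [np.array([2.0, 0.5, 0.3]), np.array([1.0, 0.2])]
print(product_barrier(x_blocks))
```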
In the classical short-step IPM [@monteiro2000polynomial], we move along the central path by iteratively decreasing the duality gap by a factor of $\sigma := 1 - \frac{\chi}{\sqrt{r}}$ (for some constant $\chi > 0$) at each step. More precisely, in each step we apply to our current iterate $({\bm{x}}, {\bm{y}}, {\bm{s}})$ one round of Newton’s method for solving the central path system with $\nu = \sigma \mu$. By linearizing its last constraint, we obtain the following linear system (*Newton system*) for computing the update $({\Delta{\bm{x}}}, {\Delta{\bm{y}}}, {\Delta{\bm{s}}})$: $$\begin{aligned}
\begin{bmatrix}
A & 0 & 0 \\
0 & A^T & I \\
{\operatorname{Arw}}({\bm{s}}) & 0 & {\operatorname{Arw}}({\bm{x}})
\end{bmatrix}
\begin{bmatrix}
{\Delta{\bm{x}}}\\
{\Delta{\bm{y}}}\\
{\Delta{\bm{s}}}\end{bmatrix} =
\begin{bmatrix}
{\bm{b}} - A {\bm{x}} \\
{\bm{c}} - {\bm{s}} - A^T {\bm{y}} \\
\sigma \mu {\bm{e}} - {\bm{x}} \circ {\bm{s}}
\end{bmatrix}.
\label{eq:Newton system}
\end{aligned}$$ Since the system above is just a linearization of the true central path condition, we do not expect to follow the central path exactly. Nevertheless, it can be shown that the solutions generated by the interior point method remain in an $\eta$-neighborhood of the central path, for some constant $\eta > 0$. The classical interior point method for SOCP is summarized in Algorithm \[alg:ipm\].
Matrix $A$ and vectors ${\bm{b}}, {\bm{c}}$ in memory, precision $\epsilon>0$.\
1. Find feasible initial point $({\bm{x}}, {\bm{y}}, {\bm{s}}, \mu):=({\bm{x}}, {\bm{y}}, {\bm{s}}, \mu_0)$.
2. Repeat the following steps for $O(\sqrt{r} \log(\mu_0 / \epsilon))$ iterations.
1. Solve the Newton system to get ${\Delta{\bm{x}}}, {\Delta{\bm{y}}}, {\Delta{\bm{s}}}$.
2. Update ${\bm{x}} \gets {\bm{x}} + {\Delta{\bm{x}}}$, ${\bm{y}} \gets {\bm{y}} + {\Delta{\bm{y}}}$, ${\bm{s}} \gets {\bm{s}} + {\Delta{\bm{s}}}$ and $\mu = \frac1r{\bm{x}}^T {\bm{s}}$.
3. Output $({\bm{x}}, {\bm{y}}, {\bm{s}})$.
It can be shown that this algorithm halves the duality gap every $O(\sqrt{r})$ iterations, so indeed, after $O(\sqrt{r} \log(\mu_0 / \epsilon))$ iterations it will converge to a (feasible) solution with duality gap at most $\epsilon$ (given that the initial duality gap was $\mu_0$).
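For concreteness, the following is a schematic numpy sketch of one Newton step of Algorithm \[alg:ipm\] for a single Lorentz-cone block ($r = 1$), assuming the usual arrowhead matrix ${\operatorname{Arw}}({\bm{z}})$ with first row and column $(z_0, {\widetilde{{\bm{z}}}}^T)$ and $z_0 I$ on the remaining diagonal, so that ${\bm{x}} \circ {\bm{s}} = {\operatorname{Arw}}({\bm{x}}){\bm{s}}$; the function names are ours.

```python
import numpy as np

def arw(z):
    """Arrowhead matrix Arw(z) for a single Lorentz-cone vector z = (z_0; z~)."""
    n = len(z) - 1
    M = z[0] * np.eye(n + 1)
    M[0, 1:] = z[1:]
    M[1:, 0] = z[1:]
    return M

def newton_step(A, b, c, x, y, s, sigma):
    """Assemble and solve the Newton system for a single cone block (r = 1)."""
    m, n = A.shape
    mu = x @ s                             # duality gap, r = 1
    e = np.zeros(n); e[0] = 1.0            # identity element of the Jordan algebra
    M = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [arw(s),           np.zeros((n, m)), arw(x)]
    ])
    rhs = np.concatenate([b - A @ x,
                          c - s - A.T @ y,
                          sigma * mu * e - arw(x) @ s])   # x o s = Arw(x) s
    d = np.linalg.solve(M, rhs)
    return d[:n], d[n:n + m], d[n + m:]    # (dx, dy, ds)
```

The updates ${\bm{x}} \gets {\bm{x}} + {\Delta{\bm{x}}}$, ${\bm{y}} \gets {\bm{y}} + {\Delta{\bm{y}}}$, ${\bm{s}} \gets {\bm{s}} + {\Delta{\bm{s}}}$ and the recomputation of $\mu$ then proceed exactly as in step 2 of Algorithm \[alg:ipm\].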
A quantum interior-point method {#sec:Quantum IPM}
===============================
Matrix $A$ and vectors ${\bm{b}}, {\bm{c}}$ in QRAM, parameters $T, \delta>0$.\
1. Find feasible initial point $({\bm{x}}, {\bm{y}}, {\bm{s}}, \mu)=({\bm{x}}, {\bm{y}}, {\bm{s}}, \mu_0)$ and store the solution in the QRAM.
2. Repeat the following steps for $T$ iterations.
1. Compute the vector $\sigma \mu {\bm{e}} - {\bm{x}} \circ {\bm{s}}$ classically and store it in QRAM.
\
2. [*Estimate norm of* $\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)$.]{}
Solve the Newton linear system using block encoding of the Newton matrix (Theorem \[qlsa\] ) to find estimate ${\overline{\norm{\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)}}}$ such that with probability $1-1/{\operatorname{poly}}(n)$, $$\left| {\overline{ \norm{\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)} }} - \norm{\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)} \right| \leq \delta \norm{\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)}.$$
3. [*Estimate $\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)$.*]{}
Let $U_N$ be the procedure that solves the Newton linear system to produce states $\ket{\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)}$ to accuracy $\delta^2/n^3$.\
Perform vector state tomography with $U_N$ (Theorem \[vector state tomography\]) and use the norm estimate from (b) to obtain the classical estimate ${\overline{\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)}}$ such that with probability $1-1/{\operatorname{poly}}(n)$, $$\norm{ {\overline{\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)}} - \left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)} \leq 2\delta \norm{\left( {\Delta{\bm{x}}}; {\Delta{\bm{y}}}; {\Delta{\bm{s}}}\right)}.$$\
4. Update ${\bm{x}} \gets {\bm{x}} + {{\overline{{\Delta{\bm{x}}}}}}$, ${\bm{s}} \gets {\bm{s}} + {{\overline{{\Delta{\bm{s}}}}}}$ and store in QRAM.\
Update $\mu \gets \frac1r {\bm{x}}^T {\bm{s}}$.
3. Output $({\bm{x}}, {\bm{y}}, {\bm{s}})$.
Having introduced the classical IPM for SOCP, we can finally introduce our quantum IPM (Algorithm \[alg:qipm\]). In a way, it is similar to the SDP solver from [@kerenidis2018quantum] since we apply a quantum linear system solver to get the solutions of the Newton system as quantum states and then perform tomography to recover the solutions. However, it differs from the SDP solver as the construction of block encodings for the Newton matrix for the SOCP case is much simpler than that for the general SDP case. A procedure for constructing these block encodings is presented in Appendix \[sec:Block encodings\]. We now state our main result about a single iteration of an approximate IPM, from which the runtime and convergence results follow as consequences:
\[thm:main\] Let $\chi = \eta = 0.01$ and $\xi = 0.001$ be positive constants and let $({\bm{x}}, {\bm{y}}, {\bm{s}})$ be solutions of the primal and the dual with $\mu = \frac1r {\bm{x}}^T {\bm{s}}$ and $d({\bm{x}}, {\bm{s}}, \mu) \leq \eta\mu$. Then, the Newton system has a unique solution $({\Delta{\bm{x}}}, {\Delta{\bm{y}}}, {\Delta{\bm{s}}})$. Let ${{\overline{{\Delta{\bm{x}}}}}}, {{\overline{{\Delta{\bm{s}}}}}}$ be approximate solutions of the Newton system that satisfy $$\norm{{\Delta{\bm{x}}}- {{\overline{{\Delta{\bm{x}}}}}}}_F \leq \frac{\xi}{\norm{T_{{\bm{x}}^{-1}}}} \text{ and }
\norm{{\Delta{\bm{s}}}- {{\overline{{\Delta{\bm{s}}}}}}}_F \leq \frac{\xi}{2\norm{T_{{\bm{s}}^{-1}}}},$$ where $T_{{\bm{x}}}$ and $T_{{\bm{s}}}$ are the square roots of the quadratic representation matrices $Q_{{\bm{x}}}$ and $Q_{{\bm{s}}}$. If we let ${{\bm{x}}_\text{next}}:= {\bm{x}} + {{\overline{{\Delta{\bm{x}}}}}}$ and ${{\bm{s}}_\text{next}}:= {\bm{s}} + {{\overline{{\Delta{\bm{s}}}}}}$, the following holds:
1. The updated solution is strictly feasible, i.e. ${{\bm{x}}_\text{next}}\in {\operatorname{int}}{\mathcal{L}}$ and ${{\bm{s}}_\text{next}}\in {\operatorname{int}}{\mathcal{L}}$.
2. The updated solution satisfies $d({{\bm{x}}_\text{next}}, {{\bm{s}}_\text{next}}, {\overline{\mu}}) \leq \eta{\overline{\mu}}$ and $\frac1r {{\bm{x}}_\text{next}}^T {{\bm{s}}_\text{next}}= {\overline{\mu}}$ for ${\overline{\mu}} = {\overline{\sigma}}\mu$, ${\overline{\sigma}} = 1 - \frac{\alpha}{\sqrt{r}}$ and a constant $0 < \alpha \leq \chi$.
Since the Newton system is the same as in the classical case, we can reuse Theorem 1 from [@monteiro2000polynomial] for the uniqueness part of Theorem \[thm:main\]. Therefore, we just need to prove the two parts about strict feasibility and improving the duality gap.
Technical results
-----------------
Before proving Theorem \[thm:main\], however, we need several technical results that will be used throughout its proof in section \[subsec:A single IPM iteration\]. We start with a few general facts about vectors, matrices, and their relationship with the Jordan product $\circ$.
\[claim:algebraic properties\] Let ${\bm{x}}, {\bm{y}}$ be two arbitrary block-vectors. Then, the following holds:
1. The spectral norm is subadditive: $\norm{{\bm{x}} + {\bm{y}}}_2 \leq \norm{{\bm{x}}}_2 + \norm{{\bm{y}}}_2$.
2. The spectral norm is less than the Frobenius norm: $\norm{{\bm{x}}}_2 \leq \norm{{\bm{x}}}_F$.
3. If $A$ is a matrix with minimum and maximum singular values $\sigma_{\text{min}}$ and $\sigma_{\text{max}}$ respectively, then the norm $\norm{A{\bm{x}}}$ is bounded as $\sigma_{\text{min}} \norm{{\bm{x}}} \leq \norm{A{\bm{x}}} \leq \sigma_{\text{max}} \norm{{\bm{x}}}$.
4. The minimum eigenvalue of ${\bm{x}} + {\bm{y}}$ is bounded as $\lambda_\text{min}({\bm{x}} + {\bm{y}}) \geq \lambda_\text{min}({\bm{x}}) - \norm{{\bm{y}}}_2$.
5. The following submultiplicativity property holds: $\norm{{\bm{x}} \circ {\bm{y}}}_F \leq \norm{{\bm{x}}}_2 \cdot \norm{{\bm{y}}}_F$.
The proofs of these statements are analogous to those for the corresponding statements for matrices. Nevertheless, a few remarks are in order on how these statements differ from the matrix case. First, the vector spectral norm $\norm{\cdot}_2$ is not actually a norm, since there exist nonzero vectors outside ${\mathcal{L}}$ which have zero norm. It is, however, still bounded by the Frobenius norm (just like in the matrix case), which is in fact a proper norm. Secondly, the minimum eigenvalue bound also holds for matrix spectral norms, with the exact same statement. Finally, the last property is reminiscent of the matrix submultiplicativity property $\norm{A \cdot B}_F \leq \norm{A}_2 \norm{B}_F$.
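A quick numerical sanity check of the last property is sketched below, assuming the standard Jordan-algebra eigenvalues $z_0 \pm \norm{{\widetilde{{\bm{z}}}}}$ of a Lorentz-cone vector, so that $\norm{{\bm{z}}}_2$ and $\norm{{\bm{z}}}_F$ are the maximum absolute eigenvalue and the Euclidean norm of the eigenvalue vector, respectively; the helper names are ours.

```python
import numpy as np

def eigs(z):
    """Jordan-algebra eigenvalues of a single Lorentz-cone vector: z_0 +/- ||z~||."""
    return np.array([z[0] + np.linalg.norm(z[1:]), z[0] - np.linalg.norm(z[1:])])

def spec_norm(z): return np.max(np.abs(eigs(z)))        # ||z||_2
def frob_norm(z): return np.sqrt(np.sum(eigs(z)**2))    # ||z||_F

def jordan(x, y):
    """Jordan product x o y = (x^T y; x_0 y~ + y_0 x~)."""
    return np.concatenate([[x @ y], x[0] * y[1:] + y[0] * x[1:]])

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)
# Check the submultiplicativity property ||x o y||_F <= ||x||_2 * ||y||_F.
assert frob_norm(jordan(x, y)) <= spec_norm(x) * frob_norm(y) + 1e-12
```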
We now proceed with several well-known properties of the quadratic representation $Q_{{\bm{x}}}$ and $T_{{\bm{x}}} = Q_{{\bm{x}}^{1/2}}$.
\[claim:Properties of Qx\] [@alizadeh2003second] Let ${\bm{x}} \in {\operatorname{int}}{\mathcal{L}}$. Then, the following holds:
1. $Q_{{\bm{x}}} {\bm{e}} = {\bm{x}}^2$, and thus $T_{{\bm{x}}} {\bm{e}} = {\bm{x}}$.
2. $Q_{{\bm{x}}^{-1}} = Q_{{\bm{x}}}^{-1}$, and more generally $Q_{{\bm{x}}^p} = Q_{{\bm{x}}}^p$ for all $p \in {\mathbb{R}}$.
3. $\norm{Q_{{\bm{x}}}}_2 = \norm{{\bm{x}}}_2^2$, and thus $\norm{T_{{\bm{x}}}}_2 = \norm{{\bm{x}}}_2$.
4. $Q_{{\bm{x}}}$ preserves ${\mathcal{L}}$, i.e. $Q_{{\bm{x}}}({\mathcal{L}}) = {\mathcal{L}}$ and $Q_{{\bm{x}}}({\operatorname{int}}{\mathcal{L}}) = {\operatorname{int}}{\mathcal{L}}$
As a consequence of the third part of this claim and the fact that $\norm{{\bm{x}}}_2 \leq \norm{{\bm{x}}}_F$, we also have $\norm{Q_{{\bm{x}}}}_2 \leq \norm{{\bm{x}}}_F^2$ as well as $\norm{T_{{\bm{x}}}}_2 \leq \norm{{\bm{x}}}_F$. Finally, we state a lemma about various properties that hold in a neighborhood of the central path.
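The following sketch verifies parts 1 and 3 of Claim \[claim:Properties of Qx\] numerically for a single cone, assuming the standard identity $Q_{{\bm{x}}} = 2{\operatorname{Arw}}({\bm{x}})^2 - {\operatorname{Arw}}({\bm{x}}^2)$ for the quadratic representation; as before, the helper functions are ours.

```python
import numpy as np

def arw(z):
    n = len(z) - 1
    M = z[0] * np.eye(n + 1)
    M[0, 1:] = z[1:]; M[1:, 0] = z[1:]
    return M

def jordan(x, y):
    return np.concatenate([[x @ y], x[0] * y[1:] + y[0] * x[1:]])

def Q(x):
    """Quadratic representation Q_x = 2 Arw(x)^2 - Arw(x o x)."""
    return 2 * arw(x) @ arw(x) - arw(jordan(x, x))

x = np.array([3.0, 1.0, 0.5])            # x in int(L^2): x_0 > ||x~||
e = np.array([1.0, 0.0, 0.0])
assert np.allclose(Q(x) @ e, jordan(x, x))                 # Q_x e = x^2
lam_max = x[0] + np.linalg.norm(x[1:])                     # ||x||_2
assert np.isclose(np.linalg.norm(Q(x), 2), lam_max**2)     # ||Q_x||_2 = ||x||_2^2
```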
\[lemma:Properties of the central path\] Let $\nu > 0$ be arbitrary and let ${\bm{x}}, {\bm{s}} \in {\operatorname{int}}{\mathcal{L}}$. Then, ${\bm{x}}$ and ${\bm{s}}$ satisfy the following properties:
1. For all $\nu > 0$, the duality gap and distance from the central path are related as $$r\nu - \sqrt{\frac{r}{2}} \cdot d({\bm{x}}, {\bm{s}}, \nu) \leq {\bm{x}}^T {\bm{s}} \leq r\nu + \sqrt{\frac{r}{2}} \cdot d({\bm{x}}, {\bm{s}}, \nu).$$
2. The distance from the central path is symmetric in its arguments i.e. $d({\bm{x}}, {\bm{s}}, \nu) = d({\bm{s}}, {\bm{x}}, \nu)$.
3. Let $\mu = \frac1r {\bm{x}}^T {\bm{s}}$. If $d({\bm{x}}, {\bm{s}}, \mu) \leq \eta \mu$, then $(1+\eta) \norm{{\bm{s}}^{-1}}_2 \geq \norm{\mu^{-1} {\bm{x}}}_2$.
A proof of Lemma \[lemma:Properties of the central path\] is included in Appendix \[b\] for completeness. The first and the third parts of this lemma have nice intuitive interpretations: The former asserts that the duality gap is roughly equal to the central path parameter, and the latter would be trivially true if $\circ$ was associative and we had ${\bm{x}} \circ {\bm{s}} = \mu {\bm{e}}$.
A single IPM iteration {#subsec:A single IPM iteration}
----------------------
The results above allow us to introduce *commutative scalings*, a method commonly used for proving IPM convergence. Our analysis is inspired by the general case analysis from [@ben2001lectures], the derived SDP analysis from [@kerenidis2018quantum], and uses some technical results from the SOCP analysis in [@monteiro2000polynomial]. The proof of Theorem \[thm:main\] consists of three main steps:
1. Rescaling ${\bm{x}}$ and ${\bm{s}}$ so that they share the same Jordan frame.
2. Bounding the norms of ${\Delta{\bm{x}}}$ and ${\Delta{\bm{s}}}$, and proving that ${\bm{x}} + {\Delta{\bm{x}}}$ and ${\bm{s}} + {\Delta{\bm{s}}}$ are still strictly feasible (in the sense of belonging to ${\operatorname{int}}{\mathcal{L}}$).
3. Proving that the new solution $({\bm{x}} + {\Delta{\bm{x}}}, {\bm{y}} + {\Delta{\bm{y}}}, {\bm{s}} + {\Delta{\bm{s}}})$ is in the $\eta$-neighborhood of the central path, and that the duality gap/central path parameter has decreased by a factor of $1 - \alpha/\sqrt{r}$, where $\alpha$ is a constant.
### Rescaling ${\bm{x}}$ and ${\bm{s}}$ {#rescaling-bmx-and-bms .unnumbered}
As in the case of SDPs, the first step of the proof uses the symmetries of the Lorentz cone to perform a commutative scaling, that is to reduce the analysis to the case when ${\bm{x}}$ and ${\bm{s}}$ share the same Jordan frame. Although $\circ$ is commutative by definition, two vectors sharing a Jordan frame are akin to two matrices sharing a system of eigenvectors, and thus commuting (some authors [@alizadeh2003second] say that the vectors *operator commute* in this case). The easiest way to achieve this is to scale by $T_{{\bm{x}}} = Q_{{\bm{x}}^{1/2}}$ and $\mu^{-1}$, i.e. to change our variables as $${\bm{x}} \mapsto {{\bm{x}}'} := T_{{\bm{x}}}^{-1} {\bm{x}} = {\bm{e}} \text{ and } {\bm{s}} \mapsto {{\bm{s}}'} := \mu^{-1}T_{{\bm{x}}} {\bm{s}}.$$ Note that for convenience, we have also rescaled the duality gap to 1. Recall also that in the matrix case, the equivalent of this scaling was $X \mapsto X^{-1/2} X X^{-1/2} = I$ and $S \mapsto \mu^{-1}X^{1/2} S X^{1/2}$. We use the notation ${{\bm{z}}'}$ to denote the appropriately-scaled vector ${\bm{z}}$, so that we have $${{{\Delta{\bm{x}}}'}}:= T_{{\bm{x}}}^{-1} {\Delta{\bm{x}}},\quad {{{\Delta{\bm{s}}}'}}:= \mu^{-1}T_{{\bm{x}}} {\Delta{\bm{s}}}$$ For approximate quantities (e.g. the ones obtained using tomography, or any other approximate linear system solver), we use the notation ${\overline{\phantom{i}\cdot\phantom{i}}}$, so that the increments become ${{\overline{{\Delta{\bm{x}}}}}}$ and ${{\overline{{\Delta{\bm{s}}}}}}$, and their scaled counterparts are ${{{{\overline{{\Delta{\bm{x}}}}}'}}}:= T_{{\bm{x}}}^{-1} {{\overline{{\Delta{\bm{x}}}}}}$ and ${{{{\overline{{\Delta{\bm{s}}}}}'}}}:= \mu^{-1}T_{{\bm{x}}} {{\overline{{\Delta{\bm{s}}}}}}$. Finally, we denote the scaled version of the next iterate as ${{{{\bm{x}}_\text{next}}'}}:= {\bm{e}} + {{{{\overline{{\Delta{\bm{x}}}}}'}}}$ and ${{{{\bm{s}}_\text{next}}'}}:= {{{\bm{s}}'}}+ {{{{\overline{{\Delta{\bm{s}}}}}'}}}$. Now, we see that the statement of Theorem \[thm:main\] implies the following bounds on $\norm{ {{{\Delta{\bm{x}}}'}}- {{{{\overline{{\Delta{\bm{x}}}}}'}}}}_F$ and $\norm{{{{\Delta{\bm{s}}}'}}- {{{{\overline{{\Delta{\bm{s}}}}}'}}}}_F$: $$\begin{aligned}
\norm{{{{\Delta{\bm{x}}}'}}- {{{{\overline{{\Delta{\bm{x}}}}}'}}}}_F & = \norm{T_{{\bm{x}}^{-1}}{\Delta{\bm{x}}}- T_{{\bm{x}}^{-1}} {{\overline{{\Delta{\bm{x}}}}}}}_F \\
&\leq \norm{T_{{\bm{x}}^{-1}}} \cdot \norm{{\Delta{\bm{x}}}- {{\overline{{\Delta{\bm{x}}}}}}}_F \leq \xi, \text{ and } \\
\norm{{{{\Delta{\bm{s}}}'}}- {{{{\overline{{\Delta{\bm{s}}}}}'}}}}_F &= \mu^{-1}\norm{T_{{\bm{x}}} {\Delta{\bm{s}}}- T_{{\bm{x}}} {{\overline{{\Delta{\bm{s}}}}}}}_F \\
&\leq \mu^{-1}\norm{T_{{\bm{x}}}} \norm{{\Delta{\bm{s}}}- {{\overline{{\Delta{\bm{s}}}}}}}_F \\
&= \mu^{-1} \norm{{\bm{x}}}_2 \norm{{\Delta{\bm{s}}}- {{\overline{{\Delta{\bm{s}}}}}}}_F \\
&\leq (1+\eta) \norm{{\bm{s}}^{-1}}_2 \norm{{\Delta{\bm{s}}}- {{\overline{{\Delta{\bm{s}}}}}}}_F \text{ by Lemma \ref{lemma:Properties of the central path}} \\
&\leq 2 \norm{T_{{\bm{s}}^{-1}}} \norm{{\Delta{\bm{s}}}- {{\overline{{\Delta{\bm{s}}}}}}}_F \leq \xi.
\end{aligned}$$
Throughout the analysis, we will make use of several constants: $\eta > 0$ bounds the distance from the central path, i.e. we ensure that our iterates stay in the $\eta$-neighborhood $\mathcal{N}_\eta$ of the central path. The constant $\sigma = 1 - \chi / \sqrt{r}$ is the factor by which we aim to decrease our duality gap, for some constant $\chi > 0$. Finally, the constant $\xi > 0$ is the approximation error for the scaled increments ${{{{\overline{{\Delta{\bm{x}}}}}'}}}, {{{{\overline{{\Delta{\bm{s}}}}}'}}}$. Having this notation in mind, we can state several facts about the relation between the duality gap and the central path distance for the original and scaled vectors.
\[claim:stuff preserved under scaling\] The following holds for the scaled vectors ${{{\bm{x}}'}}$ and ${{{\bm{s}}'}}$:
1. The scaled duality gap is $\frac1r {{{\bm{x}}'}}^T {{{\bm{s}}'}}= 1$.
2. $d({\bm{x}}, {\bm{s}}, \mu) \leq \eta \mu$ is equivalent to $\norm{{{{\bm{s}}'}}- {\bm{e}}}_F \leq \eta$.
3. $d({\bm{x}}, {\bm{s}}, \mu\sigma) = \mu \cdot d({{{\bm{x}}'}}, {{{\bm{s}}'}}, \sigma)$, for all $\sigma > 0$.
Moreover, if ${{\overline{{\bm{x}}}}}, {{\overline{{\bm{s}}}}}$ are some vectors scaled using the same scaling, the duality gap between their unscaled counterparts can be recovered as $\frac{\mu}{r} {{\overline{{\bm{x}}}}}^T {{\overline{{\bm{s}}}}}$.
At this point, we claim that it suffices to prove the two parts of Theorem \[thm:main\] in the scaled case. Namely, assuming that ${{{{\bm{x}}_\text{next}}'}}\in {\operatorname{int}}{\mathcal{L}}$ and ${{{{\bm{s}}_\text{next}}'}}\in {\operatorname{int}}{\mathcal{L}}$, by construction and Claim \[claim:Properties of Qx\], we get $${{\bm{x}}_\text{next}}= T_{{\bm{x}}} {{{{\bm{x}}_\text{next}}'}}\text{ and } {{\bm{s}}_\text{next}}= T_{{\bm{x}}^{-1}} {{{{\bm{s}}_\text{next}}'}}$$ and thus ${{\bm{x}}_\text{next}}, {{\bm{s}}_\text{next}}\in {\operatorname{int}}{\mathcal{L}}$. On the other hand, if $\mu d({{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}, {\overline{\sigma}}) \leq \eta {\overline{\mu}}$, then $d({{\bm{x}}_\text{next}}, {{\bm{s}}_\text{next}}, {\overline{\mu}}) \leq \eta {\overline{\mu}}$ follows by Claim \[claim:stuff preserved under scaling\]. Similarly, from $\frac1r {{{{\bm{x}}_\text{next}}'}}^T {{{{\bm{s}}_\text{next}}'}}= {\overline{\sigma}}$, we also get $\frac1r {\bm{x}}_\text{next}^T {\bm{s}}_\text{next} = {\overline{\mu}}$.
We conclude this part with two technical results from [@monteiro2000polynomial], that use the auxiliary matrix $R_{xs}$ defined as $R_{xs} := T_{{\bm{x}}} {\operatorname{Arw}}({\bm{x}})^{-1} {\operatorname{Arw}}({\bm{s}}) T_{{\bm{x}}}$. These results are useful for the later parts of the proof of Theorem \[thm:main\].
\[claim:R\_xs bound\] Let $\eta$ be the distance from the central path, and let $\nu > 0$ be arbitrary. Then, $R_{xs}$ is bounded as $$\norm{R_{xs} - \nu I} \leq 3 \eta \nu.$$
\[claim:dss expression\] Let $\mu$ be the duality gap. Then, the scaled increment ${{{\Delta{\bm{s}}}'}}$ is $${{{\Delta{\bm{s}}}'}}= \sigma {\bm{e}} - {{{\bm{s}}'}}- \mu^{-1}R_{xs} {{{\Delta{\bm{x}}}'}}.$$
### Maintaining strict feasibility {#maintaining-strict-feasibility .unnumbered}
The main tool for showing that strict feasibility is conserved is the following bound on the increments ${{{\Delta{\bm{x}}}'}}$ and ${{{\Delta{\bm{s}}}'}}$:
\[lemma:bounds for increment\] Let $\eta$ be the distance from the central path and let $\mu$ be the duality gap. Then, we have the following bounds for the scaled direction: $$\begin{array}{rl}
\norm{{{{\Delta{\bm{x}}}'}}}_F &\leq \frac{\Theta}{\sqrt{2}} \\
\norm{{{{\Delta{\bm{s}}}'}}}_F &\leq \Theta \sqrt{2}
\end{array}
, \quad\text{where}\quad \Theta = \frac{2\sqrt{\eta^2 / 2+ (1-\sigma)^2 r}}{1 - 3\eta}$$
Moreover, if we substitute $\sigma$ with its actual value $1 - \chi/\sqrt{r}$, we get $\Theta = \frac{\sqrt{2\eta^2 + 4\chi^2}}{1-3\eta}$, which we can make arbitrarily small by tuning the constants. Now, we can immediately use this result to prove ${{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}\in {\operatorname{int}}{\mathcal{L}}$.
\[l1\] Let $\eta = \chi = 0.01$ and $\xi = 0.001$. Then, ${{{{\bm{x}}_\text{next}}'}}$ and ${{{{\bm{s}}_\text{next}}'}}$ are strictly feasible, i.e. ${{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}\in {\operatorname{int}}{\mathcal{L}}$.
By Claim \[claim:algebraic properties\] and Lemma \[lemma:bounds for increment\], $\lambda_\text{min}({{\overline{{\bm{x}}}}}) \geq 1 - \norm{{{{{\overline{{\Delta{\bm{x}}}}}'}}}}_F \geq 1 - \frac{\Theta}{\sqrt{2}} - \xi$. On the other hand, since $d({\bm{x}}, {\bm{s}}, \mu) \leq \eta \mu$, we have $d({{{\bm{x}}'}}, {{{\bm{s}}'}}, 1) \leq \eta$, and thus $$\begin{aligned}
\eta^2 &\geq \norm{{{{\bm{s}}'}}- e}_F^2 = \sum_{i=1}^{2r} (\lambda_i({{{\bm{s}}'}})-1)^2
\end{aligned}$$ The above equation implies that $\lambda_i({{{\bm{s}}'}}) \in \left[ 1-\eta, 1+\eta \right], \forall i \in [2r]$. Now, using Claim \[claim:algebraic properties\] twice (together with the fact that $\norm{{\bm{z}}}_2\leq \norm{{\bm{z}}}_F$), $$\begin{aligned}
\lambda_\text{min}({\overline{{\bm{s}}}}) &\geq \lambda_\text{min}({{{\bm{s}}'}}+ {{{\Delta{\bm{s}}}'}}) - \norm{{{{{\overline{{\Delta{\bm{s}}}}}'}}}- {{{\Delta{\bm{s}}}'}}}_F \\
&\geq \lambda_\text{min}({{{\bm{s}}'}}) - \norm{{{{\Delta{\bm{s}}}'}}}_F - \norm{{{{{\overline{{\Delta{\bm{s}}}}}'}}}- {{{\Delta{\bm{s}}}'}}}_F \\
&\geq 1-\eta - \Theta \sqrt{2} - \xi,
\end{aligned}$$ where we used Lemma \[lemma:bounds for increment\] for the last inequality. Substituting $\eta = \chi = 0.01$ and $\xi = 0.001$, we get that $\lambda_\text{min}({{\overline{{\bm{x}}}}}) \geq 0.8$ and $\lambda_\text{min}({\overline{{\bm{s}}}}) \geq 0.8$.
### Maintaining closeness to central path {#maintaining-closeness-to-central-path .unnumbered}
Finally, we move on to the most technical part of the proof of Theorem \[thm:main\], where we prove that ${{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}$ is still close to the central path, and the duality gap has decreased by a constant factor. We split this into two lemmas.
\[lemma:we stay on central path\] Let $\eta = \chi = 0.01$, $\xi=0.001$, and let $\alpha$ be any value satisfying $0 < \alpha \leq \chi$. Then, for ${\overline{\sigma}} = 1 - \alpha / \sqrt{r}$, the distance to the central path is maintained, that is, $d({{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}, {\overline{\sigma}}) < \eta{\overline{\sigma}}$.
By Claim \[claim:stuff preserved under scaling\], the distance of the next iterate from the central path is $d({{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}, {\overline{\sigma}}) = \norm{T_{{{{{\bm{x}}_\text{next}}'}}} {{{{\bm{s}}_\text{next}}'}}- {\overline{\sigma}}{\bm{e}}}_F$, and we can bound it from above as $$\begin{aligned}
d({{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}, {\overline{\sigma}}) &= \norm{T_{{{{{\bm{x}}_\text{next}}'}}} {{{{\bm{s}}_\text{next}}'}}- {\overline{\sigma}}{\bm{e}}}_F \\
&= \norm{T_{{{{{\bm{x}}_\text{next}}'}}} {{{{\bm{s}}_\text{next}}'}}- {\overline{\sigma}} T_{{{{{\bm{x}}_\text{next}}'}}}T_{{{{{\bm{x}}_\text{next}}'}}^{-1}} {\bm{e}}}_F \\
&\leq \norm{T_{{{{{\bm{x}}_\text{next}}'}}}} \cdot \norm{{{{{\bm{s}}_\text{next}}'}}- {\overline{\sigma}}\cdot ({{{{\bm{x}}_\text{next}}'}})^{-1}}.
\end{aligned}$$ So, it is enough to bound $\norm{{\bm{z}}}_F := \norm{{{{{\bm{s}}_\text{next}}'}}- {\overline{\sigma}}\cdot {{{{\bm{x}}_\text{next}}'}}^{-1}}_F$ from above, since $$\norm{T_{{{{{\bm{x}}_\text{next}}'}}}} = \norm{{{{{\bm{x}}_\text{next}}'}}}_2 \leq 1 + \norm{{{{{\overline{{\Delta{\bm{x}}}}}'}}}}_2 \leq 1 + \norm{{{{\Delta{\bm{x}}}'}}}_2 + \xi \leq 1 + \frac{\Theta}{\sqrt{2}} + \xi.$$ We split ${\bm{z}}$ as $$\begin{aligned}
{\bm{z}} &= \underbrace{\left( {{{\bm{s}}'}}+ {{{{\overline{{\Delta{\bm{s}}}}}'}}}- {\overline{\sigma}}e + {{{{\overline{{\Delta{\bm{x}}}}}'}}}\right)}_{{\bm{z}}_1}
+ \underbrace{({\overline{\sigma}} - 1){{{{\overline{{\Delta{\bm{x}}}}}'}}}}_{{\bm{z}}_2}
+ \underbrace{{\overline{\sigma}} \left( e - {{{{\overline{{\Delta{\bm{x}}}}}'}}}- (e + {{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1} \right) }_{{\bm{z}}_3},
\end{aligned}$$ and we bound $\norm{{\bm{z}}_1}_F, \norm{{\bm{z}}_2}_F$, and $\norm{{\bm{z}}_3}_F$ separately.
1. By the triangle inequality, $\norm{{\bm{z}}_1}_F \leq \norm{{{{\bm{s}}'}}+ {{{\Delta{\bm{s}}}'}}- {\overline{\sigma}} e + {{{\Delta{\bm{x}}}'}}}_F + 2\xi$. Furthermore, after substituting ${{{\Delta{\bm{s}}}'}}$ from Claim \[claim:dss expression\], we get $$\begin{aligned}
{{{\bm{s}}'}}+ {{{\Delta{\bm{s}}}'}}- {\overline{\sigma}} e + {{{\Delta{\bm{x}}}'}}&= \sigma e - \mu^{-1}R_{xs} {{{\Delta{\bm{x}}}'}}- {\overline{\sigma}} e + {{{\Delta{\bm{x}}}'}}\\
&= \frac{\alpha - \chi}{\sqrt{r}} e + \mu^{-1}(\mu I - R_{xs}){{{\Delta{\bm{x}}}'}}.
\end{aligned}$$ Using the bound for $\norm{\mu I - R_{xs}}$ from Claim \[claim:R\_xs bound\] as well as the bound for $\norm{{{{\Delta{\bm{x}}}'}}}_F$ from Lemma \[lemma:bounds for increment\], we obtain $$\begin{aligned}
\norm{{\bm{z}}_1}_F \leq 2\xi + \frac{\chi}{\sqrt{r}} + \frac{3}{\sqrt{2}}\eta\Theta.
\end{aligned}$$
2. $\norm{{\bm{z}}_2}_F \leq \frac{\chi}{\sqrt{r}} \left( \frac{\Theta}{\sqrt{2}} + \xi \right)$, where we used the bound from Lemma \[lemma:bounds for increment\] again.
3. Here, we first need to bound $\norm{ (e + {{{\Delta{\bm{x}}}'}})^{-1} - (e + {{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1} }_F$. For this, we use the submultiplicativity of $\norm{\cdot}_F$ from Claim \[claim:algebraic properties\]: $$\begin{aligned}
\norm{ (e + {{{\Delta{\bm{x}}}'}})^{-1} - (e + {{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1} }_F &= \norm{ (e+{{{\Delta{\bm{x}}}'}})^{-1} \circ \left( e - (e+{{{\Delta{\bm{x}}}'}}) \circ (e + {{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1} \right) }_F\\
&\leq \norm{ (e+{{{\Delta{\bm{x}}}'}})^{-1} }_2 \cdot \norm{ e - (e+{{{{\overline{{\Delta{\bm{x}}}}}'}}}+ {{{\Delta{\bm{x}}}'}}- {{{{\overline{{\Delta{\bm{x}}}}}'}}}) \circ (e + {{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1} }_F \\
&= \norm{ (e+{{{\Delta{\bm{x}}}'}})^{-1} }_2 \cdot \norm{ ({{{\Delta{\bm{x}}}'}}- {{{{\overline{{\Delta{\bm{x}}}}}'}}}) \circ (e + {{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1} }_F \\
&\leq \norm{ (e+{{{\Delta{\bm{x}}}'}})^{-1} }_2 \cdot \norm{ {{{\Delta{\bm{x}}}'}}- {{{{\overline{{\Delta{\bm{x}}}}}'}}}}_F \cdot \norm{ (e + {{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1} }_2 \\
&\leq \xi \cdot \norm{ (e+{{{\Delta{\bm{x}}}'}})^{-1} }_2 \cdot \norm{ (e + {{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1} }_2.
\end{aligned}$$ Now, we have the bound $\norm{(e+{{{\Delta{\bm{x}}}'}})^{-1}}_2 \leq \frac{1}{1 - \norm{{{{\Delta{\bm{x}}}'}}}_F}$ and similarly $\norm{(e+{{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1}}_2 \leq \frac{1}{1 - \norm{{{{\Delta{\bm{x}}}'}}}_F - \xi}$, so we get $$\norm{ (e + {{{\Delta{\bm{x}}}'}})^{-1} - (e + {{{{\overline{{\Delta{\bm{x}}}}}'}}})^{-1} }_F \leq \frac{\xi}{(1 - \norm{{{{\Delta{\bm{x}}}'}}}_F - \xi)^2}.$$ Using this, we can bound $\norm{ {\bm{z}}_3 }_F$: $$\begin{aligned}
\norm{ {\bm{z}}_3 }_F &\leq {\overline{\sigma}}\left( \norm{ e - {{{\Delta{\bm{x}}}'}}- (e+{{{\Delta{\bm{x}}}'}})^{-1} }_F + \xi + \frac{\xi}{(1 - \norm{{{{\Delta{\bm{x}}}'}}}_F - \xi)^2} \right).
\end{aligned}$$ If we let $\lambda_i$ be the eigenvalues of ${{{\Delta{\bm{x}}}'}}$, then by Lemma \[lemma:bounds for increment\], we have $$\begin{aligned}
\norm{ e - {{{\Delta{\bm{x}}}'}}- (e+{{{\Delta{\bm{x}}}'}})^{-1} }_F &= \sqrt{ \sum_{i=1}^{2r} \left( (1-\lambda_{i})-\frac{1}{1+\lambda_{i}} \right)^2 } \\
&= \sqrt{\sum_{i=1}^{2r} \frac{\lambda_{i}^4}{(1+\lambda_{i})^2}}
\leq \frac{\Theta}{\sqrt2 - \Theta} \sqrt{\sum_{i=1}^{2r} \lambda_{i}^2} \\
&\leq \frac{\Theta^2}{2 - \sqrt{2}\Theta}.
\end{aligned}$$
Combining all bound from above, we obtain $$\begin{aligned}
d({{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}, {\overline{\sigma}}) \leq \left( 1 + \frac{\Theta}{\sqrt{2}} + \xi \right) \cdot
&\left(2\xi + \frac{\chi}{\sqrt{r}} + \frac{3}{\sqrt{2}}\eta\Theta\right. \\
&+\frac{\chi}{\sqrt{r}} \left( \frac{\Theta}{\sqrt{2}} + \xi \right) \\
&+\left. {\overline{\sigma}}\left( \frac{\Theta^2}{2 - \sqrt{2}\Theta} + \xi + \frac{\xi}{(1 - \Theta/\sqrt{2} - \xi)^2} \right) \right).
\end{aligned}$$ Finally, if we plug in $\chi=0.01$, $\eta=0.01$, $\xi= 0.001$, we get $$d({{\overline{{\bm{x}}}}}, {{\overline{{\bm{s}}}}}, {\overline{\sigma}}) \leq 0.005{\overline{\sigma}} \leq \eta{\overline{\sigma}}.$$
Now, we prove that the duality gap decreases.
\[gap\] For the same constants, the updated solution satisfies $\frac1r {{{{\bm{x}}_\text{next}}'}}^T {{{{\bm{s}}_\text{next}}'}}= \left( 1-\frac{\alpha}{\sqrt{r}} \right)$ for $\alpha = 0.005$.
Since ${{{{\bm{x}}_\text{next}}'}}$ and ${{{{\bm{s}}_\text{next}}'}}$ are scaled quantities, the duality gap between their unscaled counterparts is $\frac{\mu}{r}{{{{\bm{x}}_\text{next}}'}}^T {{{{\bm{s}}_\text{next}}'}}$. Applying Lemma \[lemma:Properties of the central path\] (and Claim \[claim:stuff preserved under scaling\]) with $\nu=\sigma\mu$ and ${{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}$, we obtain the upper bound $$\begin{aligned}
\mu{{{{\bm{x}}_\text{next}}'}}^T {{{{\bm{s}}_\text{next}}'}}\leq r \sigma\mu + \sqrt{\frac{r}{2}} \mu d({{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}, \sigma),
\end{aligned}$$ which in turn implies $$\frac1r {{{{\bm{x}}_\text{next}}'}}^T {{{{\bm{s}}_\text{next}}'}}\leq \left( 1 - \frac{0.01}{\sqrt{r}} \right) \left( 1 + \frac{d({{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}, \sigma)}{\sigma \sqrt{2r}} \right).$$ By instantiating Lemma \[lemma:we stay on central path\] with $\alpha = \chi$, we obtain from its proof that $d({{{{\bm{x}}_\text{next}}'}}, {{{{\bm{s}}_\text{next}}'}}, \sigma) \leq 0.005 \sigma$, and thus $$\frac1r {{{{\bm{x}}_\text{next}}'}}^T {{{{\bm{s}}_\text{next}}'}}\leq 1 - \frac{0.005}{\sqrt{r}}.$$ Therefore, the final $\alpha$ for this lemma is $0.005$.
Running time and convergence
----------------------------
By Theorem \[thm:main\], each iteration of Algorithm \[alg:qipm\] decreases the duality gap by a factor of $1 - \alpha/\sqrt{r}$, so we need $O(\sqrt{r})$ iterations to decrease the duality gap by a factor of 2. Therefore, to achieve a duality gap of $\epsilon$, Algorithm \[alg:qipm\] requires $$T = O \left( \sqrt{r} \log\left( \mu_0 / \epsilon \right) \right)$$ iterations, which is the same as the bound we had for the classical IPM in Algorithm \[alg:ipm\]. Here, $n$ is the dimension of ${\bm{x}}$, or, equivalently, $n = r + \sum_{i=1}^r n_i$.
By contrast, while each iteration of Algorithm \[alg:ipm\] amounts to classical linear algebra with complexity $O(n^\omega)$ (or $O(n^3)$ in practice), in Algorithm \[alg:qipm\] we need to solve the Newton system to a precision that depends on the norms of $T_{{\bm{x}}^{-1}}$ and $T_{{\bm{s}}^{-1}}$. Thus, to bound the running time of the algorithm (since the runtime of Theorem \[vector state tomography\] depends on the desired precision), we need to bound $\norm{T_{{\bm{x}}^{-1}}}$ and $\norm{T_{{\bm{s}}^{-1}}}$. Indeed, by Claim \[claim:Properties of Qx\], we get $$\norm{T_{{\bm{x}}^{-1}}} = \norm{{\bm{x}}^{-1}} = \lambda_\text{min}({\bm{x}})^{-1} \text{ and } \norm{T_{{\bm{s}}^{-1}}} = \norm{{\bm{s}}^{-1}} = \lambda_\text{min}({\bm{s}})^{-1}.$$ If the tomography precision for iteration $i$ is chosen to be at least as good as (i.e. smaller than) $$\label{eq:definition of delta}
\delta_i := \frac{\xi}{4} \min \left\{ \lambda_\text{min}({\bm{x}}_i), \lambda_\text{min}({\bm{s}}_i) \right\},$$ then the premises of Theorem \[thm:main\] are satisfied. The tomography precision for the entire algorithm can therefore be chosen to be $\delta := \min_i \delta_i$. Note that these minimum eigenvalues are related to how close the current iterate is to the boundary of ${\mathcal{L}}$ – as long as ${\bm{x}}_i, {\bm{s}}_i$ are not “too close” to the boundary of ${\mathcal{L}}$, their minimal eigenvalues should not be “too small”.
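In code, the per-iteration precision of equation \[eq:definition of delta\] can be computed blockwise from the iterates, assuming the minimum Jordan eigenvalue of a block vector is the smallest value of $z_0 - \norm{{\widetilde{{\bm{z}}}}}$ over its blocks; this is only an illustrative sketch with our own function names.

```python
import numpy as np

def lambda_min(z_blocks):
    """Minimum Jordan eigenvalue, min over blocks of z_0 - ||z~||."""
    return min(z[0] - np.linalg.norm(z[1:]) for z in z_blocks)

def tomography_precision(x_blocks, s_blocks, xi=0.001):
    """delta_i = (xi / 4) * min(lambda_min(x), lambda_min(s))."""
    return (xi / 4.0) * min(lambda_min(x_blocks), lambda_min(s_blocks))
```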
There are two more parameters that impact the runtime of Theorem \[vector state tomography\]: the condition number of the Newton matrix $\kappa_i$ and the matrix parameter $\zeta_i$ of the QRAM encoding in iteration $i$. For both of these quantities we define their global versions as $\kappa = \max_i \kappa_i$ and $\zeta = \max_i \zeta_i$. Finally, we can state the theorem about the entire running time of Algorithm \[alg:qipm\]:
\[thm:runtime\] Consider a SOCP with $A \in {\mathbb{R}}^{m\times n}$, $m \leq n$, and ${\mathcal{L}}= {\mathcal{L}}^{n_1} \times \cdots \times {\mathcal{L}}^{n_r}$. Then, Algorithm \[alg:qipm\] achieves duality gap $\epsilon$ in time $$T = \widetilde{O} \left( \sqrt{r} \log\left( \mu_0 / \epsilon \right) \cdot \frac{n \kappa \zeta}{\delta^2}\log\left( \frac{\kappa \zeta}{\delta} \right) \right).$$
This complexity can be easily interpreted as the product of the number of iterations and the cost of $n$-dimensional vector tomography with error $\delta$. So, improving the complexity of the tomography algorithm would improve the running time of Algorithm \[alg:qipm\] as well.
Note that up to now, we cared mostly about strict (conic) feasibility of ${\bm{x}}$ and ${\bm{s}}$. Now, we address the fact that the linear constraints $A{\bm{x}} = {\bm{b}}$ and $A^T {\bm{y}} + {\bm{s}} = {\bm{c}}$ are not exactly satisfied during the execution of the algorithm. Luckily, it turns out that this error is not accumulated, but is instead determined just by the final tomography precision:
Consider a SOCP as in Theorem \[thm:runtime\]. Then, after $T$ iterations, the (linear) infeasibility of the final iterate $({\bm{x}}_T, {\bm{y}}_T, {\bm{s}}_T)$ is bounded as $$\begin{aligned}
\norm{A{\bm{x}}_T - {\bm{b}}} &\leq \delta\norm{A} , \\
\norm{A^T {\bm{y}}_T + {\bm{s}}_T - {\bm{c}}} &\leq \delta \left( \norm{A} + 1 \right).
\end{aligned}$$
Let $({\bm{x}}_T, {\bm{y}}_T, {\bm{s}}_T)$ be the $T$-th iterate. Then, the following holds for $A{\bm{x}}_T - {\bm{b}}$: $$\label{eq:eq11}
A {\bm{x}}_T - {\bm{b}} = A{\bm{x}}_0 + A\sum_{t=1}^{T} {{\overline{{\Delta{\bm{x}}}}}}_t - {\bm{b}} = A \sum_{t=1}^T {{\overline{{\Delta{\bm{x}}}}}}_t.$$ On the other hand, the Newton system at iteration $T$ has the constraint $A{\Delta{\bm{x}}}_T = {\bm{b}} - A{\bm{x}}_{T-1}$, which we can further recursively transform as, $$\begin{aligned}
A{\Delta{\bm{x}}}_T &= {\bm{b}} - A{\bm{x}}_{T-1} = {\bm{b}} - A\left( {\bm{x}}_{T-2} + {{\overline{{\Delta{\bm{x}}}}}}_{T-1} \right) \\
&= {\bm{b}} - A{\bm{x}}_0 - A\sum_{t=1}^{T-1} {{\overline{{\Delta{\bm{x}}}}}}_t = - A\sum_{t=1}^{T-1} {{\overline{{\Delta{\bm{x}}}}}}_t.
\end{aligned}$$ Substituting this into equation , we get $$A{\bm{x}}_T - {\bm{b}} = A \left( {{\overline{{\Delta{\bm{x}}}}}}_T - {\Delta{\bm{x}}}_T \right).$$ Similarly, using the constraint $A^T{\Delta{\bm{y}}}_T + {\Delta{\bm{s}}}_{T} = {\bm{c}} - {\bm{s}}_{T-1} - A^{T} {\bm{y}}_{T-1}$ we obtain that $$A^T {\bm{y}}_T + {\bm{s}}_T - {\bm{c}} = A^T \left( {{\overline{{\Delta{\bm{y}}}}}}_T - {\Delta{\bm{y}}}_T \right) + \left( {{\overline{{\Delta{\bm{s}}}}}}_T - {\Delta{\bm{s}}}_T \right).$$ Finally, we can bound the norms of these two quantities, $$\begin{aligned}
\norm{A{\bm{x}}_T - {\bm{b}}} &\leq \delta\norm{A} , \\
\norm{A^T {\bm{y}}_T + {\bm{s}}_T - {\bm{c}}} &\leq \delta \left( \norm{A} +1 \right).
\end{aligned}$$
Quantum support-vector machines {#sec:Applications to SVMs}
===============================
Reducing SVMs to SOCP
---------------------
In this section we present applications of our quantum SOCP algorithm to the support vector machine (SVM) problem in machine learning. Given a set of vectors $\mathcal{X} = \{ {\bm{x}}^{(i)} \in {\mathbb{R}}^n\;|\;i \in [m] \}$ (*training examples*) and their *labels* $y^{(i)} \in \{-1, 1\}$, the objective of the SVM training process is to find the “best” hyperplane that separates training examples with label $1$ from those with label $-1$. More precisely, when there exists a hyperplane separating the examples with label $1$ from those with label $-1$ (i.e. $\mathcal X$ is *linearly separable*), the SVM solution is the separating hyperplane that maximizes the distance from the closest point with either label – this is the *maximum margin* hyperplane.
It has been shown [@cortes1995support] that if we parameterize the hyperplane with its normal vector ${\bm{w}}$ and displacement (or *bias*) $b$, we can obtain the maximum margin hyperplane by solving the following optimization problem: $$\begin{array}{ll}
\min\limits_{{\bm{w}}, b} & \norm{{\bm{w}}}^2 \\
\text{s.t.}& y^{(i)}({\bm{w}}^T{\bm{x}}^{(i)}+b) \geq 1, \;\forall i \in [m]. \\
\end{array} \label{prob:hard-margin SVM}$$ This problem is known as the *hard-margin* SVM (training) problem. On the other hand, if $\mathcal X$ is not linearly separable, clearly for some $i$, the constraints $y^{(i)}({\bm{w}}^T{\bm{x}}^{(i)}+b) \geq 1$ will be violated no matter which hyperplane we choose. So, we define the *soft-margin* SVM problem by modifying the hard-margin problem to allow a small $\xi_i \geq 0$ violation in each of these constraints. Geometrically, this means that the points on the correct side of the hyperplane have $\xi_i = 0$, whereas the others have $\xi_i > 0$. Thus, in order to minimize the number of misclassified points, we add $\norm{{\bm{\xi}}}_1 = \sum_i \xi_i$ to the objective, to get the following: $$\begin{array}{ll}
\min\limits_{{\bm{w}}, b, {\bm{\xi}}} & \norm{{\bm{w}}}^2 + C\norm{{\bm{\xi}}}_1 \\
\text{s.t.}& y^{(i)}({\bm{w}}^T{\bm{x}}^{(i)}+b) \geq 1 - \xi_i, \;\forall i \in [m] \\
&{\bm{\xi}} \geq 0.
\end{array} \label{prob:SVM}$$ Here, the constant $C > 0$ is just a hyperparameter that quantifies the tradeoff between maximizing the margin and minimizing the constraint violations. Finally, it is worth noting that there exists another way of removing the assumption of linear separability from the hard-margin problem: the $\ell_2$-SVM (or least-squares SVM, LS-SVM), introduced by [@suykens1999least]. Namely, we can treat the SVM problem as a least-squares regression problem with binary targets $\{-1, 1\}$, by trying to solve for $y^{(i)}({\bm{w}}^T{\bm{x}}^{(i)}+b) = 1$, and minimizing the squared $2$-norm of the residuals: $$\begin{array}{ll}
\min\limits_{{\bm{w}}, b, {\bm{\xi}}} & \norm{{\bm{w}}}^2 + C \norm{{\bm{\xi}}}^2 \\
\text{s.t.}& y^{(i)}({\bm{w}}^T{\bm{x}}^{(i)}+b) = 1 - \xi_i, \;\forall i \in [m]
\end{array} \label{prob:LS-SVM}$$ Since this is a least-squares problem, the optimal ${\bm{w}}, b$ and ${\bm{\xi}}$ can be obtained by solving a linear system. In [@rebentrost2014quantum], a quantum algorithm for LS-SVM is presented, which uses quantum linear algebra to construct and solve this system. Unfortunately, replacing the $\ell_1$-norm with $\ell_2$ in the objective of leads to the loss of a key property of ($\ell_1$-)SVM – weight sparsity [@suykens2002weighted].
Finally, we are going to reduce the SVM problem to SOCP. In order to do that, we define an auxiliary vector ${\bm{t}} = \left( t+1; t; {\bm{w}} \right)$, where $t \in {\mathbb{R}}$ – this allows us to “compute” $\norm{{\bm{w}}}^2$ using the constraint ${\bm{t}} \in {\mathcal{L}}^{n+1}$ since $${\bm{t}} \in {\mathcal{L}}^{n+1} \Leftrightarrow (t+1)^2 \geq t^2 + \norm{{\bm{w}}}^2 \Leftrightarrow 2t+1 \geq \norm{{\bm{w}}}^2.$$ Thus, minimizing $\norm{{\bm{w}}}^2$ is equivalent to minimizing $t$. Also, we note that we can restrict our bias $b$ to be nonnegative without any loss in generality, since the case $b < 0$ can be equivalently described by a bias $-b > 0$ and weights $-{\bm{w}}$. Using these transformations, we can restate the soft-margin problem as the following SOCP: $$\begin{array}{ll}
\min\limits_{{\bm{t}}, b, {\bm{\xi}}} & \begin{bmatrix}
0 & 1 & 0^n & 0 & C^m
\end{bmatrix} \begin{bmatrix}
{\bm{t}} & b & {\bm{\xi}}
\end{bmatrix}^T \\
\text{s.t.}&
\begin{bmatrix}
0 & 0 & & 1 & \\
\vdots & \vdots& X^T & \vdots & \operatorname{diag}({\bm{y}}) \\
0 & 0 & & 1 & \\
1 & -1 & 0^n & 0 & 0^m
\end{bmatrix} \begin{bmatrix}
{\bm{t}} \\
b \\
{\bm{\xi}}
\end{bmatrix} = \begin{bmatrix}
{\bm{y}} \\
1
\end{bmatrix}\\
& {\bm{t}} \in {\mathcal{L}}^{n+1},\; b \in {\mathcal{L}}^0,\; \xi_i \in {\mathcal{L}}^0\quad \forall i \in [m]
\end{array} \label{prob:SVM SOCP primal}$$ Here, we use the notation $X \in {\mathbb{R}}^{n\times m}$ for the matrix whose columns are the training examples ${\bm{x}}^{(i)}$, and ${\bm{y}} \in {\mathbb{R}}^m$ for the vector of labels. This problem has $O(n+m)$ variables, and $O(m)$ conic constraints (i.e. its rank is $r = O(m)$). Therefore, in the interesting case of $m = \Theta(n)$, it can be solved in $\widetilde{O}(\sqrt{n})$ iterations. More precisely, if we consider both the primal and the dual, in total they have $3m+2n+7$ scalar variables and $2m+4$ conic constraints. Additionally, it turns out that it is possible to find an initial strictly-feasible primal-dual solution of this SOCP without introducing more slack variables, which will be useful for the numerical experiments later on.
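To make the reduction concrete, the following sketch (with our own function name and a variable layout matching the block matrix above) assembles the data $({\bm{c}}, A, {\bm{b}})$ of this SOCP from $X$, ${\bm{y}}$ and $C$; the conic constraints ${\bm{t}} \in {\mathcal{L}}^{n+1}$, $b \in {\mathcal{L}}^0$, $\xi_i \in {\mathcal{L}}^0$ are then handled separately by the solver.

```python
import numpy as np

def svm_to_socp(X, y, C):
    """Build (c, A, b) of the SVM SOCP from data X (n x m), labels y, and C.
    Variable layout: (t; b; xi) with t = (t+1; t; w) of dimension n+2, b scalar, xi in R^m."""
    n, m = X.shape
    c = np.concatenate([[0.0, 1.0], np.zeros(n), [0.0], C * np.ones(m)])
    # m rows encode y_i (w^T x_i + b) = 1 - xi_i; the last row encodes (t+1) - t = 1.
    A = np.zeros((m + 1, n + m + 3))
    A[:m, 2:n + 2] = X.T               # coefficients of w
    A[:m, n + 2] = 1.0                 # coefficient of b
    A[:m, n + 3:] = np.diag(y)         # coefficients of xi
    A[m, 0], A[m, 1] = 1.0, -1.0       # (t+1) - t = 1
    b = np.concatenate([y, [1.0]])
    return c, A, b
```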
It is also worth noting that if we wanted to solve the LS-SVM problem , we could have also reduced it to a SOCP in a similar manner. In fact, this would have resulted in just $O(1)$ conic constraints, so an IPM would converge to a solution in $\widetilde{O}(1)$ iterations, which is comparable with the result from [@rebentrost2014quantum].
Experimental Results {#sec:Conclusion}
--------------------
We next present some experimental results to assess the running time parameters and the performance of our algorithm for random instances of SVM. A random SVM instance is parametrized by two integers $n$ and $m$, specifying the dimension and the number of training examples, respectively. Given these parameters, we construct the SVM instance as follows:
1. Generate $m$ points $\left\{ {\bm{x}}^{(i)} \in {\mathbb{R}}^n \;|\;i \in [m] \right\}$ in the unit hypercube $[-1, 1]^n$.
2. Generate a random unit vector ${\bm{w}} \in {\mathbb{R}}^n$ and assign labels to the points as $y^{(i)} = \operatorname{sgn}({\bm{w}}^T {\bm{x}}^{(i)})$.
3. Corrupt some fixed proportion $p$ of the labels, by flipping the sign of each $y^{(i)}$ with probability $p$.
4. Shift the entire dataset by a vector ${\bm{d}} \sim \mathcal{N}(0, 2I)$.
Denote the distribution of such SVMs by $\mathcal{SVM}(n, m, p)$, and let $N(n, m, p)$ be the dimension of the Newton system arising from solving $\mathcal{SVM}(n, m, p)$ using the SOCP reduction .
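A minimal sketch of the sampling procedure described above (the function name is ours):

```python
import numpy as np

def random_svm_instance(n, m, p, rng=np.random.default_rng()):
    """Sample an instance from SVM(n, m, p) following steps 1-4 above."""
    X = rng.uniform(-1.0, 1.0, size=(n, m))            # m points in [-1, 1]^n (as columns)
    w = rng.normal(size=n); w /= np.linalg.norm(w)      # random unit normal vector
    y = np.sign(w @ X)                                  # y_i = sgn(w^T x_i)
    flip = rng.random(m) < p                            # corrupt a proportion p of the labels
    y[flip] *= -1
    X = X + rng.normal(0.0, np.sqrt(2.0), size=(n, 1))  # shift the dataset by d ~ N(0, 2I)
    return X, y
```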
We simulated the execution of Algorithm \[alg:qipm\] by implementing the classical Algorithm \[alg:ipm\] and adding noise to the solution of the Newton system. The noise added to each coordinate is uniform, from an interval selected so that the noisy increment $({\Delta{\bm{x}}}, {\Delta{\bm{y}}}, {\Delta{\bm{s}}})$ simulates the outputs of the tomography algorithm from Theorem \[vector state tomography\] with precision determined by Theorem \[thm:main\] and the definition of $\delta_i$ above. A sample run of Algorithms \[alg:ipm\] and \[alg:qipm\] applied to the same instance of $\mathcal{SVM}(50, 100, 0.2)$ with target duality gap $10^{-3}$ can be seen in Figure \[fig:mu evolution\].
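One simple way to inject such noise is sketched below, under the assumption that the per-coordinate noise is scaled so that the total Euclidean error stays within the target tolerance; the exact noise model used for the figures may differ in its constants.

```python
import numpy as np

def simulate_tomography(delta_exact, delta_tol, rng=np.random.default_rng()):
    """Add uniform coordinate-wise noise to the exact Newton increment so that
    the Euclidean error is at most delta_tol, mimicking the tomography output."""
    d = len(delta_exact)
    noise = rng.uniform(-delta_tol / np.sqrt(d), delta_tol / np.sqrt(d), size=d)
    return delta_exact + noise
```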
![Evolution of the parameters $\mu$ and $\delta$ over iterations for Algorithms \[alg:ipm\] and \[alg:qipm\].[]{data-label="fig:mu evolution"}](mu_delta){width="0.49\textwidth"}
![Evolution of the condition number $\kappa$ of the Newton system over iterations for Algorithms \[alg:ipm\] and \[alg:qipm\].[]{data-label="fig:kappa evolution"}](kappa){width="0.49\textwidth"}
From Figure \[fig:mu evolution\] we see that in terms of convergence, the quantum IPM converges as fast as its classical counterpart, i.e. the duality gap $\mu$ decreases exponentially in both cases. Since we know that the running time of the tomography algorithm has an inverse square dependence on $\delta = \min\left\{ \lambda_\text{min}({\bm{x}}), \lambda_\text{min}({\bm{s}}) \right\}$, it is worth noting that both of these minimum eigenvalues asymptotically seem to be $O(\mu)$. So, in the running time of Theorem \[thm:runtime\], we can think of $\delta$ and $\epsilon$ as quantities of the same order.
In any case, with the number of iterations being the same for Algorithms \[alg:ipm\] and \[alg:qipm\], it is worth comparing their per-iteration and total costs. In particular, since from Figure \[fig:mu evolution\] we see that the highest tomography precision is required at the end, and since we know that the condition number $\kappa$ of the Newton system behaves roughly as the inverse of the duality gap [@dollar2005iterative Chapter 2] (this can also be seen in Figure \[fig:kappa evolution\]), the cost of the last iteration is an upper bound for the cost of all previous iterations.
We next analyze the evolution of the SVM classification accuracy and the generalization error with the duality gap $\mu$ for the optimization algorithm. This leads us to the somewhat surprising conclusion that the optimal classification accuracy and generalization error are in fact achieved for a modest value of $\epsilon \sim 0.1$. Further, this value of $\epsilon$ does not depend on the size of the instances; we found it to be constant for random SVM instances of varying sizes.
![Evolution of the SVM classification accuracy with respect to duality gap $\mu$ for an instance of $\mathcal{SVM}(50, 100, 0.2)$.[]{data-label="fig:accuracy and duality gap"}](mu_acc){width="0.7\linewidth"}
We conclude from Figure \[fig:accuracy and duality gap\] that for training random SVM instances, it is enough to let Algorithm \[alg:qipm\] run until it reaches duality gap $\mu \leq \epsilon = 0.1$. So, in spite of Algorithm \[alg:qipm\] having running time that scales as $\epsilon^{-3}$, here we already achieve the best possible performance from the end product (a trained SVM) at the modest value of $\epsilon = 0.1$.
Finally, we estimate the computational cost needed to achieve these results. In particular, in Figure \[fig:complexity fit\] we show how the complexity from Theorem \[thm:runtime\] grows with the problem size. In Figure \[fig:complexity fit\] we considered a sample of 1200 instances of $\mathcal{SVM}(n, 2n, 0.2)$, where $n$ is sampled uniformly from $[2^2, 2^{10}]$, and the problem size on the $x$-axis is $8n+7$, the dimension of the Newton system.
![Total complexity of Algorithm \[alg:qipm\] according to Theorem \[thm:runtime\], without the logarithmic terms. The data points represent the observed values of the quantity $\frac{n^{1.5} \kappa \zeta}{\delta^2}$, while the line is the least-squares fit of the form $y = ax^b$.[]{data-label="fig:complexity fit"}](complexity){width="0.7\linewidth"}
By finding the least-squares fit of the power law $y = ax^b$ through the observed values of the quantity $\frac{n^{1.5} \kappa \zeta}{\delta^2}$, we obtain the exponent $b = 2.557$, and its 95% confidence interval $[2.503, 2.610]$ (this interval is computed in the standard way using Student’s $t$-distribution, as described in [@neter1996applied]). Thus, we can say that for random $\mathcal{SVM}(n, 2n, 0.2)$-instances, and fixed $\epsilon=0.1$, the complexity of Algorithm \[alg:qipm\] scales as $O(n^{2.557})$, which is a polynomial improvement over the classical runtimes of $O(n^{\omega + 0.5}) \approx O(n^{2.873})$ and $O(n^{3.5})$ in theory and practice, respectively.
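The fit itself is a standard log-log least-squares regression; a minimal sketch using scipy (ours) is given below, where the confidence interval on the exponent follows from the Student-$t$ quantile as in [@neter1996applied].

```python
import numpy as np
from scipy import stats

def power_law_fit(sizes, costs, conf=0.95):
    """Least-squares fit of y = a * x^b in log-log space, with a confidence interval on b."""
    lx, ly = np.log(sizes), np.log(costs)
    res = stats.linregress(lx, ly)                      # slope = b, intercept = log(a)
    t = stats.t.ppf(0.5 + conf / 2, len(sizes) - 2)     # two-sided Student-t quantile
    return res.slope, (res.slope - t * res.stderr, res.slope + t * res.stderr)
```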
The experiments also suggest that in practice “everything works out” much more easily than in theory. An example of this is the fact that both Algorithms \[alg:ipm\] and \[alg:qipm\] converge in the same number of iterations, which suggests that there is no loss in the speed of convergence (i.e. in practice, $\sigma \sim {\overline{\sigma}}$).
Conclusions
-----------
In this work we presented an approximate interior-point method for second-order conic programming, that converges in the same number of iterations as its exact counterpart (e.g. from [@monteiro2000polynomial; @ben2001lectures]), namely $O(\sqrt{r} \log(\mu_0 / \epsilon))$. By using a quantum algorithm for solving linear systems, the per-iteration complexity has been reduced from $O(n^3)$ to $\widetilde{O}\left( \frac{n \kappa \zeta}{\delta^2}\log\left( \frac{\kappa \zeta}{\delta} \right) \right)$, which opens the possibility of a significant speedup for large $n$. The presence of this speedup is further corroborated with numerical experiments, using random SVM instances as model problems. It also may be possible to further improve the running time for quantum interior point methods using the recently developed stochastic variants of the classical IPMs [@cohen2018solving].
We also note that the running time parameters for a quantum SVM algorithm using the best known multiplicative-weights-based quantum SDP solver [@AG19] were computed in [@arodz2019quantum], where it was found that while there is no quantum speedup for general SVMs, in the $m=O(n)$ regime the quantum algorithm has running time $\widetilde{O}(\sqrt{m} \frac{\log^{10}{m}}{\epsilon^{5}})$. This result also demonstrates the potential for quantum speedups for real world optimization problems.
The experimental results demonstrate the potential for quantum optimization to provide significant speedups over the state of the art classical algorithms. We are confident that, just like numerical linear algebra packages have evolved from naive implementations to their highly-optimized and robust state today, the quantum building blocks will also evolve – bringing more practical speedups for real-world problems, in the near future.
**Acknowledgments:** Part of this work was supported by the grants QuantERA QuantAlgo and ANR QuData.
Constructing block encodings for the Newton matrix {#sec:Block encodings}
==================================================
Recall that the Newton system at each iteration is determined by the Newton matrix $M$ and the right-hand side ${\bm{r}}$, defined as $$M := \begin{bmatrix}
A & 0 & 0 \\
0 & A^T & I \\
{\operatorname{Arw}}({\bm{s}}) & 0 & {\operatorname{Arw}}({\bm{x}})
\end{bmatrix} \text{ and }
{\bm{r}} := \begin{bmatrix}
{\bm{b}} - A {\bm{x}} \\
{\bm{c}} - {\bm{s}} - A^T {\bm{y}} \\
\sigma \mu {\bm{e}} - {\bm{x}} \circ {\bm{s}}
\end{bmatrix}.$$ In order to achieve the complexity from Theorem \[thm:runtime\], both need to be constructed in $\widetilde{O}(n)$ time at each iteration. Moreover, we allow ourselves a one-time cost of $\widetilde{O}(\operatorname{nnz}(A))$ (where $\operatorname{nnz}(A)$ is the number of nonzero entries in $A$) to construct a block encoding of $A$, which is reused in later iterations. Since the identity block is easy to construct, the only remaining task is to quickly construct block encodings for ${\operatorname{Arw}}({\bm{x}})$ and ${\operatorname{Arw}}({\bm{s}})$, as they change at every iteration. Fortunately, both of them are sparse, with $3n-2r$ nonzero entries, so they can be constructed entry-by-entry in $\widetilde{O}(n)$ time per iteration.
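To make this concrete, here is a sketch in Python/SciPy (our own illustration; it assumes cone block sizes $n_1, \dots, n_r$ with $\sum_i n_i = n$ and the usual arrowhead form per block, with $u_0$ on the diagonal and $\bar{{\bm{u}}}$ in the first row and column) that builds ${\operatorname{Arw}}({\bm{x}})$ block by block, each block contributing $3 n_i - 2$ nonzeros for a total of $3n - 2r$, and assembles the block structure of $M$ classically.

```python
import numpy as np
import scipy.sparse as sp

def arrow_block(u):
    """Arrowhead matrix of one cone block u = (u0, u_bar):
    [[u0, u_bar^T], [u_bar, u0 * I]]; it has 3*len(u) - 2 nonzero entries."""
    u = np.asarray(u, dtype=float)
    k = len(u)
    block = u[0] * np.eye(k)
    block[0, 1:] = u[1:]
    block[1:, 0] = u[1:]
    return sp.csr_matrix(block)

def arw(u, block_sizes):
    """Block-diagonal Arw(u) for a vector u split into second-order cone blocks."""
    blocks, start = [], 0
    for k in block_sizes:
        blocks.append(arrow_block(u[start:start + k]))
        start += k
    return sp.block_diag(blocks, format="csr")

def newton_matrix(A, x, s, block_sizes):
    """Assemble M = [[A, 0, 0], [0, A^T, I], [Arw(s), 0, Arw(x)]];
    the columns correspond to the directions (dx, dy, ds)."""
    n = A.shape[1]
    return sp.bmat([
        [A,                   None, None],
        [None,                A.T,  sp.identity(n)],
        [arw(s, block_sizes), None, arw(x, block_sizes)],
    ], format="csr")
```

On small instances, solving the resulting sparse Newton system with a classical direct solver can be used to cross-check the output of the quantum linear-system subroutine.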
On the other hand, we also need to explain how to efficiently construct $\ket{{\bm{r}}}$ at every iteration. First, note that given $\ket{{\bm{u}}}$ and $\ket{{\bm{v}}}$, we can construct $\ket{{\bm{u}} - {\bm{v}}}$ by multiplying $({\bm{u}}; {\bm{v}})$ by the matrix $[ I \;| -I ]$. Thus, to construct $\ket{{\bm{b}} - A{\bm{x}}}$, we exploit the fact that $\ket{A {\bm{x}}}$ can be constructed in $\widetilde{O}(\kappa \zeta)$ time, and then use this trick to perform the subtraction. Similarly, $\ket{{\bm{c}} - {\bm{s}} - A^T {\bm{y}}}$ can be constructed using three multiplications. Finally, since no dense matrix-vector multiplications are needed to compute $\sigma \mu {\bm{e}} - {\bm{x}} \circ {\bm{s}}$, we can compute it completely classically, in $O(n)$ time. In total, constructing $\ket{{\bm{r}}}$ takes $\widetilde{O}\left( n \kappa\zeta \right)$ time, which is still dominated by the time needed to perform tomography.
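As an illustration of the classical part of this computation, the sketch below (our own; it assumes the standard blockwise Jordan product ${\bm{x}} \circ {\bm{s}} = ({\bm{x}}_i^T {\bm{s}}_i;\, x_{i,0} \bar{{\bm{s}}}_i + s_{i,0} \bar{{\bm{x}}}_i)$ and takes $\mu = {\bm{x}}^T {\bm{s}} / r$, consistent with part 3 of Lemma 1 below) computes the last block $\sigma \mu {\bm{e}} - {\bm{x}} \circ {\bm{s}}$ in $O(n)$ time; the $[I \,|\, -I]$ subtraction trick for the other two blocks is only indicated in a comment, since it acts on quantum states.

```python
import numpy as np

def identity_element(block_sizes):
    """The Jordan identity e: (1, 0, ..., 0) in each cone block."""
    e, start = np.zeros(sum(block_sizes)), 0
    for k in block_sizes:
        e[start] = 1.0
        start += k
    return e

def jordan_product(x, s, block_sizes):
    """Blockwise Jordan product: (x o s)_i = (x_i^T s_i ; x_{i,0} s_bar_i + s_{i,0} x_bar_i)."""
    out, start = np.empty_like(x, dtype=float), 0
    for k in block_sizes:
        xi, si = x[start:start + k], s[start:start + k]
        out[start] = xi @ si
        out[start + 1:start + k] = xi[0] * si[1:] + si[0] * xi[1:]
        start += k
    return out

def rhs_last_block(x, s, sigma, block_sizes):
    """Classical O(n) evaluation of sigma * mu * e - x o s, with mu = x^T s / r."""
    r = len(block_sizes)
    mu = (x @ s) / r
    return sigma * mu * identity_element(block_sizes) - jordan_product(x, s, block_sizes)

# The subtractions in the first two blocks of r are performed on quantum states:
# conceptually, |u - v> is obtained by applying the block matrix [I | -I] to (u; v).
```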
Proof of Lemma 1 {#b}
================
We include a proof of Lemma 1 for completeness.
Let $\nu > 0$ be arbitrary and let ${\bm{x}}, {\bm{s}} \in {\operatorname{int}}{\mathcal{L}}$. Then, ${\bm{x}}$ and ${\bm{s}}$ satisfy the following properties:
1. For all $\nu > 0$, the duality gap and distance from the central path are related as $$r\nu - \sqrt{\frac{r}{2}} \cdot d({\bm{x}}, {\bm{s}}, \nu) \leq {\bm{x}}^T {\bm{s}} \leq r\nu + \sqrt{\frac{r}{2}} \cdot d({\bm{x}}, {\bm{s}}, \nu).$$
2. The distance from the central path is symmetric in its arguments i.e. $d({\bm{x}}, {\bm{s}}, \nu) = d({\bm{s}}, {\bm{x}}, \nu)$.
3. Let $\mu = \frac1r {\bm{x}}^T {\bm{s}}$. If $d({\bm{x}}, {\bm{s}}, \mu) \leq \eta \mu$, then $(1+\eta) \norm{{\bm{s}}^{-1}}_2 \geq \norm{\mu^{-1} {\bm{x}}}_2$.
For part 1, let $\{\lambda_i\}_{i=1}^{2r}$ be the eigenvalues of $T_{{\bm{x}}}{\bm{s}}$, and note that $T_{{\bm{x}}}$ is invertible since ${\bm{x}} \in {\operatorname{int}}{\mathcal{L}}$. Then, using part 1 of Claim \[claim:Properties of Qx\], we have $${\bm{x}}^T{\bm{s}} = {\bm{x}}^T T_{{\bm{x}}}^{-1} T_{{\bm{x}}} {\bm{s}} = (T_{{\bm{x}}^{-1}} {\bm{x}})^T T_{{\bm{x}}} {\bm{s}} = {\bm{e}}^T T_{{\bm{x}}} {\bm{s}} = \frac12 \sum_{i=1}^{2r} \lambda_i.$$ We can therefore bound the duality gap ${\bm{x}}^T{\bm{s}}$ as follows: $$\begin{aligned}
{\bm{x}}^T {\bm{s}} = \frac12 \sum_{i=1}^{2r} \lambda_i &\leq r\nu + \frac12 \sum_{i=1}^{2r} |\lambda_i - \nu| \\
&\leq r\nu + \sqrt{\frac{r}{2}}\sqrt{\sum_{i=1}^{2r} (\lambda_i - \nu)^2} \\
&= r\nu + \sqrt{\frac{r}{2}} \cdot d({\bm{x}}, {\bm{s}}, \nu).
\end{aligned}$$ The second step uses the Cauchy-Schwarz inequality, while the third follows from the definition $d({\bm{x}}, {\bm{s}}, \nu)^2 = \sum_{i=1}^{2r} (\lambda_i - \nu)^2$. The proof of the lower bound is similar, but starts instead from the inequality $$\frac12\sum_{i=1}^{2r} \lambda_i \geq r\nu - \frac12 \sum_{i=1}^{2r} |\nu - \lambda_i|.$$
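As a sanity check (ours, not part of the proof), the Cauchy-Schwarz step can be verified numerically: the sketch below checks that $\frac12 \sum_{i=1}^{2r} |\lambda_i - \nu| \leq \sqrt{r/2} \left( \sum_{i=1}^{2r} (\lambda_i - \nu)^2 \right)^{1/2}$ for random eigenvalue vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def check_cauchy_schwarz_step(r, nu, trials=1000):
    """Verify 0.5 * sum |lambda_i - nu| <= sqrt(r/2) * sqrt(sum (lambda_i - nu)^2)
    for random vectors of 2r eigenvalues (the Cauchy-Schwarz step used in part 1)."""
    for _ in range(trials):
        lam = rng.uniform(0.0, 10.0, size=2 * r)
        lhs = 0.5 * np.sum(np.abs(lam - nu))
        rhs = np.sqrt(r / 2.0) * np.sqrt(np.sum((lam - nu) ** 2))
        assert lhs <= rhs + 1e-12
    return True

# Example: check_cauchy_schwarz_step(r=5, nu=1.0) returns True.
```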
For part 2, it suffices to prove that $T_{{\bm{x}}} {\bm{s}}$ and $T_{{\bm{s}}} {\bm{x}}$ have the same eigenvalues. This follows from part 2 of Theorem 10 in [@alizadeh2003second].
Finally, for part 3, since $d({\bm{s}}, {\bm{x}}, \mu) = d({\bm{x}}, {\bm{s}}, \mu) \leq \eta \mu$ by the symmetry established in part 2, we have $$\begin{aligned}
\eta \mu &\geq \norm{T_{{\bm{s}}}{\bm{x}} - \mu {\bm{e}}}_{F} \\
&= \norm{T_{{\bm{s}}} {\bm{x}} - \mu \left( T_{{\bm{s}}} T_{{\bm{s}}^{-1}} \right) {\bm{e}} }_{F} \\
&= \norm{T_{{\bm{s}}} \left( {\bm{x}} - \mu T_{{\bm{s}}^{-1}}{\bm{e}} \right) }_{F} \\
&\geq \lambda_\text{min} (T_{{\bm{s}}}) \norm{{\bm{x}} - \mu T_{{\bm{s}}^{-1}}{\bm{e}}}_{F} \\
&\geq \lambda_\text{min} (T_{{\bm{s}}}) \norm{{\bm{x}} - \mu {\bm{s}}^{-1}}_{2} \\
&= \frac{1}{\norm{{\bm{s}}^{-1}}_{2}} \cdot \mu \cdot \norm{ \mu^{-1} {\bm{x}} - {\bm{s}}^{-1} }_{2}.
\end{aligned}$$ Therefore, $\eta \norm{{\bm{s}}^{-1}}_{2} \geq \norm{\mu^{-1} {\bm{x}} - {\bm{s}}^{-1}}_{2}$. Finally, by the triangle inequality for the spectral norm, $$\eta \norm{{\bm{s}}^{-1}}_{2} \geq \norm{\mu^{-1} {\bm{x}}}_{2} -\norm{{\bm{s}}^{-1}}_{2},$$ so we can conclude that $\norm{{\bm{s}}^{-1}}_{2} \geq \frac{1}{1+\eta} \norm{\mu^{-1} {\bm{x}}}_{2}$.