_id
stringlengths 36
36
| text
stringlengths 5
665k
| marker
stringlengths 3
6
| marker_offsets
sequence | label
stringlengths 28
32
|
---|---|---|---|---|
86be470c-beb6-42ee-82e7-83050d92298e | From lm:empiricalrepartitionlemma, we have \(\pi (y_{n+1}) \ge \alpha \) with probability larger than \(1 - \alpha \) . Whence one can define the OracleCP as \(\pi ^{-1}([\alpha , +\infty ))\) where \(\pi \) is obtained with a model fit optimized on the oracle data \(_{n+1}(y_{n+1})\) . In the case where the conformity function is the absolute value, we obtain the reference prediction set as in [1]}
\(\texttt {oracleCP: } [\mu _{y_{n+1}}(x_{n+1}) \;\pm \; Q_{1 - \alpha }(y_{n+1})] \hspace{5.0pt}.\)
| [1] | [
[
401,
404
]
] | https://openalex.org/W2974753703 |
416744fe-69ba-4283-aed6-f2f495527606 | We remind that the target variable \(y_{n+1}\) is not available in practice.
In the case of Ridge regression, exact conformal prediction sets can be computed by homotopy without data splitting and without additional assumptions [1]}. This allows us to finely assess the precisions of the proposed approaches and illustrate the speed up benefit in fig:benchmarkridge.
| [1] | [
[
229,
232
]
] | https://openalex.org/W176885573 |
5a4abbb8-4f1b-4298-96af-55c82e7666f0 | It is possible to distinguish fluctuation contribution from
non-fluctuating SSC. SSC always leads to excess conductance, but
fluctuation contribution to magnetoresistance can be both positive
and negative [1]}, [2]}, [3]}. In
particular, at low \(T\) and at field perpendicular to the film the
density-of-state contribution to fluctuations leads to excess
resistance, rather than excess conductance [3]}. We
clearly see such a contribution in our data. In Fig.
REF (c) we show high-field part of excess conductance
in the linear scale. It is seen that in parallel field there is
always an excess conductance \(\Delta S >0\) , which rapidly
decreases upon approaching the surface critical field
\(H_{c3}\simeq 1.7 H_{c2}^{\parallel }\) , but never really vanishes.
The remaining tail is a signature of fluctuations that persist at
any field. For perpendicular field, \(\Delta S\) at high fields
becomes negative, which is consistent with theoretical
expectations for fluctuation contribution at \(T\ll T_c\)
[3]}.
| [1] | [
[
205,
208
]
] | https://openalex.org/W4210745439 |
64a7adcc-d1bc-411f-bd48-7b0ad2341b5f | Our results suggest that surface superconductivity is the primary
cause of broadening of superconducting transition in magnetic
field. As indicated in Figs. REF (b) and (c),
\(H=H_{c2}\) corresponds to the bottom of transition, consistent
with earlier studies [1]}, and \(H_{c3}\) to the top
of the resistive transition. Thus the full width of the transition
is dominated by SSC. Although SSC is well known for carefully
polished single crystals [2]}, [3]}, it is usually
considered to be insignificant for disordered, rough or
inhomogeneous superconducting films because of its assumed
fragility and sensitivity to surface conditions
[4]}, [5]}, [6]}, [7]}.
Therefore, observation of a very robust SSC in our strongly
disordered polycrystalline films is rather surprising, especially
for field perpendicular to the film. In perfectly uniform films
SSC should not occur at perpendicular field orientation
[4]}, [9]}. Yet, SSC in perpendicular fields
has been directly visualized by scanning laser microscopy for
similar films [10]} and also reported for some
layered supercondcutors [11]} and sintered
polycrystalline MgB\(_2\) samples [12]}. Presumably
it is the polycrystallinity of our films that allows SSC at grain
boundaries even in perpendicular fields. Thus we conclude that
surface superconductivity is a robust phenomenon that should be
carefully considered in analysis of data close to superconducting
transition.
| [4] | [
[
638,
641
],
[
908,
911
]
] | https://openalex.org/W2005653406 |
b1c2f060-3b4a-451f-a443-5a9f15812904 | Domain Adaptation has been widely used to alleviate the performance degradation when the distribution of target data differs from that of source data [1]}, [2]}. DA methods mainly have three categorises based on the type of annotations. The first is fully-supervised DA, where fine-tuning is the most representative technique for adapting a trained model to the target domain with full annotations [3]}, [4]}. The second is weakly or semi-supervised DA where only coarse or partial annotations in the target domain are used for model adaptation [5]}. For example, Dorent et al. [6]} employed scribbles in the target domain to perform model adaptation for vestibular schwannoma segmentation. Li et al. [7]} used a dual-teacher semi-supervised domain adaptation method when a small ratio of the target domain images have annotations. Additionally, Unsupervised Domain Adaptation (UDA) does not require annotations in the target domain.
UDA methods usually use Generative Adversarial Networks (GAN) to achieve image-level or feature-level alignment between the source and target domains. For example, Zhu et al. [8]} used Cycle-GAN to convert source domain images to target-domain like images to reduce the domain gap. Kamnitsas et al. [9]} and Dou et al. [10]} used GAN to obtain domain invariant features for adaptation. The Simultaneous Image and Feature Alignment (SIFA) [11]} combined the advantage of these two categories and has achieved state-of-the-art performance for UDA task.
| [11] | [
[
1372,
1376
]
] | https://openalex.org/W2963797156 |
52754995-40be-4a5d-9d25-c3cf6928e7cb | To demonstrate the effectiveness of CS-CADA for solving cross-anatomy domain shift, we compared it with the baseline in two settings: 1) Using only \(S_L\) : A standard U-Net only learns from the source domain images, which is denoted as Baseline (source); 2) Using only \(T_L\) : A standard U-Net learns only from the labeled target domain images, which is denoted as Baseline (target). We also compared it with several state-of-the-art methods in four categories:
1) UDA methods that use \(S_L\) and unlabeled \(T_U\) : We investigated three UDA methods including ADDA [1]}, SIFA [2]}, and SC-GAN [3]}; 2) Supervised Domain Adaptation (SDA) [4]} methods that use \(S_L\) and \(T_L\) for training. We considered four methods, including Joint Training that takes \(S_L \cup T_L\) as a single uniform training set, Fine-tuning [4]} where the segmentation model is pre-trained on \(S_L\) and fine-tuned with \(T_L\) . Here, we consider two types of fine-tuning strategies: fine-tuning (last) means only updating parameters in last convolutional block of the decoder, and fine-tuning (all) means updating the whole set of parameters of the model. Both fine-tuning methods used 2000 iterations. X-shape [6]} that separately learns from \(S_L\) and \(T_L\) ,
and DSBN [7]} that uses domain-specific batch normalization for joint training;
3) SSL methods that use \(T_L\) and \(T_U\) for training, and we considered three state-of-the-art methods including SE-MT [8]}, UA-MT [9]} and Cross Pseudo Supervision (CPS) [10]};
and 4) Semi-Supervised Domain Adaptation (SSDA) method, i.e., Dual-teacher (Dual-T) [11]}. All the compared methods were quantitatively evaluated with Recall, Precision and Dice score.
| [4] | [
[
644,
647
],
[
830,
833
]
] | https://openalex.org/W2346062110 |
830b0c0d-d01f-4c58-9fcd-7a10614ae79f | To demonstrate the effectiveness of CS-CADA for solving cross-anatomy domain shift, we compared it with the baseline in two settings: 1) Using only \(S_L\) : A standard U-Net only learns from the source domain images, which is denoted as Baseline (source); 2) Using only \(T_L\) : A standard U-Net learns only from the labeled target domain images, which is denoted as Baseline (target). We also compared it with several state-of-the-art methods in four categories:
1) UDA methods that use \(S_L\) and unlabeled \(T_U\) : We investigated three UDA methods including ADDA [1]}, SIFA [2]}, and SC-GAN [3]}; 2) Supervised Domain Adaptation (SDA) [4]} methods that use \(S_L\) and \(T_L\) for training. We considered four methods, including Joint Training that takes \(S_L \cup T_L\) as a single uniform training set, Fine-tuning [4]} where the segmentation model is pre-trained on \(S_L\) and fine-tuned with \(T_L\) . Here, we consider two types of fine-tuning strategies: fine-tuning (last) means only updating parameters in last convolutional block of the decoder, and fine-tuning (all) means updating the whole set of parameters of the model. Both fine-tuning methods used 2000 iterations. X-shape [6]} that separately learns from \(S_L\) and \(T_L\) ,
and DSBN [7]} that uses domain-specific batch normalization for joint training;
3) SSL methods that use \(T_L\) and \(T_U\) for training, and we considered three state-of-the-art methods including SE-MT [8]}, UA-MT [9]} and Cross Pseudo Supervision (CPS) [10]};
and 4) Semi-Supervised Domain Adaptation (SSDA) method, i.e., Dual-teacher (Dual-T) [11]}. All the compared methods were quantitatively evaluated with Recall, Precision and Dice score.
| [7] | [
[
1269,
1272
]
] | https://openalex.org/W2949813473 |
28c67f3a-19ce-41f8-84d7-86a35b3236af | To demonstrate the effectiveness of CS-CADA for solving cross-anatomy domain shift, we compared it with the baseline in two settings: 1) Using only \(S_L\) : A standard U-Net only learns from the source domain images, which is denoted as Baseline (source); 2) Using only \(T_L\) : A standard U-Net learns only from the labeled target domain images, which is denoted as Baseline (target). We also compared it with several state-of-the-art methods in four categories:
1) UDA methods that use \(S_L\) and unlabeled \(T_U\) : We investigated three UDA methods including ADDA [1]}, SIFA [2]}, and SC-GAN [3]}; 2) Supervised Domain Adaptation (SDA) [4]} methods that use \(S_L\) and \(T_L\) for training. We considered four methods, including Joint Training that takes \(S_L \cup T_L\) as a single uniform training set, Fine-tuning [4]} where the segmentation model is pre-trained on \(S_L\) and fine-tuned with \(T_L\) . Here, we consider two types of fine-tuning strategies: fine-tuning (last) means only updating parameters in last convolutional block of the decoder, and fine-tuning (all) means updating the whole set of parameters of the model. Both fine-tuning methods used 2000 iterations. X-shape [6]} that separately learns from \(S_L\) and \(T_L\) ,
and DSBN [7]} that uses domain-specific batch normalization for joint training;
3) SSL methods that use \(T_L\) and \(T_U\) for training, and we considered three state-of-the-art methods including SE-MT [8]}, UA-MT [9]} and Cross Pseudo Supervision (CPS) [10]};
and 4) Semi-Supervised Domain Adaptation (SSDA) method, i.e., Dual-teacher (Dual-T) [11]}. All the compared methods were quantitatively evaluated with Recall, Precision and Dice score.
| [9] | [
[
1477,
1480
]
] | https://openalex.org/W2979907638 |
f9a6bb8d-9a03-4b0f-8ac1-c623638d83c8 | Comprehensive experimental results are shown in Table REF . It presented that Baseline (source) only achieved an average Dice of \(37.31\%\) , showing the large domain gap between FIs and XAs. Baseline (target) only obtained average Dice at \(69.81\%\) , indicating that using a small set of labeled XAs cannot lead to accurate results.
The UDA methods outperformed Baseline (source), and SC-GAN [1]} was better than ADDA [2]} and SIFA [3]}, but its performance is worse than baseline (target) due to the large domain gap and they do not use supervision from the target domain.
For SDA methods, Fine-tuning (all) [4]} achieved the highest Dice score of \(75.50\%\) . Joint Training, X-shape [5]} and DSBN [6]} based methods achieved better results than fine tuning the only last block of the decoder and UDA methods, demonstrating that small set of labeled XAs can provide effective supervision for bridging the cross-anatomy domain shift, in which the DSBN got the highest Dice (\(74.12\%\) ) among them. The SSL methods generally performed better than the SDA methods, showing the usefulness of unannotated images in the target domain. However, all these methods were inferior to our proposed CS-CADA that obtained an average Dice of \(79.28\%\) , which is a large improvement from \(75.94\%\) obtained by existing SSDA method Dual-T [7]}. Our method also has higher Precision and Recall than the other methods as shown in Table REF .
| [1] | [
[
396,
399
]
] | https://openalex.org/W2979420277 |
efc452ca-fccf-4619-afd1-224e52f574e3 | Comprehensive experimental results are shown in Table REF . It presented that Baseline (source) only achieved an average Dice of \(37.31\%\) , showing the large domain gap between FIs and XAs. Baseline (target) only obtained average Dice at \(69.81\%\) , indicating that using a small set of labeled XAs cannot lead to accurate results.
The UDA methods outperformed Baseline (source), and SC-GAN [1]} was better than ADDA [2]} and SIFA [3]}, but its performance is worse than baseline (target) due to the large domain gap and they do not use supervision from the target domain.
For SDA methods, Fine-tuning (all) [4]} achieved the highest Dice score of \(75.50\%\) . Joint Training, X-shape [5]} and DSBN [6]} based methods achieved better results than fine tuning the only last block of the decoder and UDA methods, demonstrating that small set of labeled XAs can provide effective supervision for bridging the cross-anatomy domain shift, in which the DSBN got the highest Dice (\(74.12\%\) ) among them. The SSL methods generally performed better than the SDA methods, showing the usefulness of unannotated images in the target domain. However, all these methods were inferior to our proposed CS-CADA that obtained an average Dice of \(79.28\%\) , which is a large improvement from \(75.94\%\) obtained by existing SSDA method Dual-T [7]}. Our method also has higher Precision and Recall than the other methods as shown in Table REF .
| [2] | [
[
422,
425
]
] | https://openalex.org/W2593768305 |
18f8133b-a009-4c5a-98cb-9fafa21518f8 | Comprehensive experimental results are shown in Table REF . It presented that Baseline (source) only achieved an average Dice of \(37.31\%\) , showing the large domain gap between FIs and XAs. Baseline (target) only obtained average Dice at \(69.81\%\) , indicating that using a small set of labeled XAs cannot lead to accurate results.
The UDA methods outperformed Baseline (source), and SC-GAN [1]} was better than ADDA [2]} and SIFA [3]}, but its performance is worse than baseline (target) due to the large domain gap and they do not use supervision from the target domain.
For SDA methods, Fine-tuning (all) [4]} achieved the highest Dice score of \(75.50\%\) . Joint Training, X-shape [5]} and DSBN [6]} based methods achieved better results than fine tuning the only last block of the decoder and UDA methods, demonstrating that small set of labeled XAs can provide effective supervision for bridging the cross-anatomy domain shift, in which the DSBN got the highest Dice (\(74.12\%\) ) among them. The SSL methods generally performed better than the SDA methods, showing the usefulness of unannotated images in the target domain. However, all these methods were inferior to our proposed CS-CADA that obtained an average Dice of \(79.28\%\) , which is a large improvement from \(75.94\%\) obtained by existing SSDA method Dual-T [7]}. Our method also has higher Precision and Recall than the other methods as shown in Table REF .
| [7] | [
[
1337,
1340
]
] | https://openalex.org/W3091785623 |
d0f1a834-1204-42b0-9a54-f29361755863 | Quantitative evaluation results of the compared methods are shown in Table REF . It can be observed that Baseline (source) obtained a very low average Dice of \(10.12\%\) for the LV and Myo segmentation, showing the large domain shift between retinal fundus images and CMR. Baseline (target) obtained an average Dice of \(66.21\%\) , which shows that only using the small number of annotated images in the target domain will limited the model's performance. For UDA methods, SIFA [1]} performed better than ADDA [2]} and CycleGAN [3]}, but its average Dice was only \(59.97\%\) , showing that classical UDA methods cannot be directly applied to cross-anatomy domain adaptation.
For SDA methods, Fine-tuning (last) [4]} had the worst performance with an average Dice of \(35.75\%\) while fine-tuning (all) achieved an average Dice score of \(78.63\%\) . This is very impressive and it shows that just fine-tuning is effective to achieve a better performance than learning directly from the small set of target images. However, its performance is lower than our CS-CADA that improved the average Dice to \(80.80\%\) . For the values for Joint Training, X-shape [5]} and DSBN [6]} were \(66.32\%\) , \(75.41\%\) and \(77.80\%\) , respectively. The results shows the effectiveness of DSBN to deal with the domain gap. The
SSL methods are generally better than the UDA and SDA methods, and the average Dice for SE-MT [7]}, UA-MT [8]} and CPS [9]} were \(75.65\%\) , \(76.76\%\) and \(74.96\%\) , respectively. The excising SSDA method Dual-T [10]} obtained an average Dice of \(76.22\%\) , which was similar to UA-MT and outperformed the other existing methods. In comparison, our proposed CS-CADA achieved higher performance than most above state-of-the-art methods, and it obtained an average Dice of \(80.80\%\) by combining DSBN and our proposed cross-domain contrastive learning in a semi-supervised framework. The ASSD values obtained by our CS-CADA for the LV and Myo were \(3.15mm\) and \(3.37mm\) , respectively, which were also superior in ASSD values of the other methods.
| [3] | [
[
531,
534
]
] | https://openalex.org/W2962793481 |
70730c4f-e475-4d63-81dc-8eac291765ff | Quantitative evaluation results of the compared methods are shown in Table REF . It can be observed that Baseline (source) obtained a very low average Dice of \(10.12\%\) for the LV and Myo segmentation, showing the large domain shift between retinal fundus images and CMR. Baseline (target) obtained an average Dice of \(66.21\%\) , which shows that only using the small number of annotated images in the target domain will limited the model's performance. For UDA methods, SIFA [1]} performed better than ADDA [2]} and CycleGAN [3]}, but its average Dice was only \(59.97\%\) , showing that classical UDA methods cannot be directly applied to cross-anatomy domain adaptation.
For SDA methods, Fine-tuning (last) [4]} had the worst performance with an average Dice of \(35.75\%\) while fine-tuning (all) achieved an average Dice score of \(78.63\%\) . This is very impressive and it shows that just fine-tuning is effective to achieve a better performance than learning directly from the small set of target images. However, its performance is lower than our CS-CADA that improved the average Dice to \(80.80\%\) . For the values for Joint Training, X-shape [5]} and DSBN [6]} were \(66.32\%\) , \(75.41\%\) and \(77.80\%\) , respectively. The results shows the effectiveness of DSBN to deal with the domain gap. The
SSL methods are generally better than the UDA and SDA methods, and the average Dice for SE-MT [7]}, UA-MT [8]} and CPS [9]} were \(75.65\%\) , \(76.76\%\) and \(74.96\%\) , respectively. The excising SSDA method Dual-T [10]} obtained an average Dice of \(76.22\%\) , which was similar to UA-MT and outperformed the other existing methods. In comparison, our proposed CS-CADA achieved higher performance than most above state-of-the-art methods, and it obtained an average Dice of \(80.80\%\) by combining DSBN and our proposed cross-domain contrastive learning in a semi-supervised framework. The ASSD values obtained by our CS-CADA for the LV and Myo were \(3.15mm\) and \(3.37mm\) , respectively, which were also superior in ASSD values of the other methods.
| [5] | [
[
1161,
1164
]
] | https://openalex.org/W2802798675 |
c0b1ba2d-d5fb-4029-9cac-af0379bfca3c | Quantitative evaluation results of the compared methods are shown in Table REF . It can be observed that Baseline (source) obtained a very low average Dice of \(10.12\%\) for the LV and Myo segmentation, showing the large domain shift between retinal fundus images and CMR. Baseline (target) obtained an average Dice of \(66.21\%\) , which shows that only using the small number of annotated images in the target domain will limited the model's performance. For UDA methods, SIFA [1]} performed better than ADDA [2]} and CycleGAN [3]}, but its average Dice was only \(59.97\%\) , showing that classical UDA methods cannot be directly applied to cross-anatomy domain adaptation.
For SDA methods, Fine-tuning (last) [4]} had the worst performance with an average Dice of \(35.75\%\) while fine-tuning (all) achieved an average Dice score of \(78.63\%\) . This is very impressive and it shows that just fine-tuning is effective to achieve a better performance than learning directly from the small set of target images. However, its performance is lower than our CS-CADA that improved the average Dice to \(80.80\%\) . For the values for Joint Training, X-shape [5]} and DSBN [6]} were \(66.32\%\) , \(75.41\%\) and \(77.80\%\) , respectively. The results shows the effectiveness of DSBN to deal with the domain gap. The
SSL methods are generally better than the UDA and SDA methods, and the average Dice for SE-MT [7]}, UA-MT [8]} and CPS [9]} were \(75.65\%\) , \(76.76\%\) and \(74.96\%\) , respectively. The excising SSDA method Dual-T [10]} obtained an average Dice of \(76.22\%\) , which was similar to UA-MT and outperformed the other existing methods. In comparison, our proposed CS-CADA achieved higher performance than most above state-of-the-art methods, and it obtained an average Dice of \(80.80\%\) by combining DSBN and our proposed cross-domain contrastive learning in a semi-supervised framework. The ASSD values obtained by our CS-CADA for the LV and Myo were \(3.15mm\) and \(3.37mm\) , respectively, which were also superior in ASSD values of the other methods.
| [7] | [
[
1415,
1418
]
] | https://openalex.org/W2963449430 |
42b4c509-3d9d-412f-ac87-c4e952d854e1 | Quantitative evaluation results of the compared methods are shown in Table REF . It can be observed that Baseline (source) obtained a very low average Dice of \(10.12\%\) for the LV and Myo segmentation, showing the large domain shift between retinal fundus images and CMR. Baseline (target) obtained an average Dice of \(66.21\%\) , which shows that only using the small number of annotated images in the target domain will limited the model's performance. For UDA methods, SIFA [1]} performed better than ADDA [2]} and CycleGAN [3]}, but its average Dice was only \(59.97\%\) , showing that classical UDA methods cannot be directly applied to cross-anatomy domain adaptation.
For SDA methods, Fine-tuning (last) [4]} had the worst performance with an average Dice of \(35.75\%\) while fine-tuning (all) achieved an average Dice score of \(78.63\%\) . This is very impressive and it shows that just fine-tuning is effective to achieve a better performance than learning directly from the small set of target images. However, its performance is lower than our CS-CADA that improved the average Dice to \(80.80\%\) . For the values for Joint Training, X-shape [5]} and DSBN [6]} were \(66.32\%\) , \(75.41\%\) and \(77.80\%\) , respectively. The results shows the effectiveness of DSBN to deal with the domain gap. The
SSL methods are generally better than the UDA and SDA methods, and the average Dice for SE-MT [7]}, UA-MT [8]} and CPS [9]} were \(75.65\%\) , \(76.76\%\) and \(74.96\%\) , respectively. The excising SSDA method Dual-T [10]} obtained an average Dice of \(76.22\%\) , which was similar to UA-MT and outperformed the other existing methods. In comparison, our proposed CS-CADA achieved higher performance than most above state-of-the-art methods, and it obtained an average Dice of \(80.80\%\) by combining DSBN and our proposed cross-domain contrastive learning in a semi-supervised framework. The ASSD values obtained by our CS-CADA for the LV and Myo were \(3.15mm\) and \(3.37mm\) , respectively, which were also superior in ASSD values of the other methods.
| [9] | [
[
1440,
1443
]
] | https://openalex.org/W3171581326 |
e6f156b8-4b26-4d1f-827c-4e02f930cfbb | where we have used \(J/\psi \) as an example, \(k_\gamma \) and \(k_\psi \) represent the momenta for incoming photon and outgoing \(J/\psi \) , \(p_1\) and \(p_2\) for incoming and outgoing nucleons. Similar diagrams have been considered in Ref. [1]} where it was argued that the three-gluon exchange diagrams dominate the near threshold production of \(J/\psi \) . However, from our analysis, the contribution from the three-gluon exchange diagrams vanishes due to \(C\) -parity conservation. Explicitly, the three gluons from the nucleon side carry symmetric color structure (such as \(d_{abc}\) ) [2]} while those from the heavy quarkonium (\(J/\psi \) ) side are antisymmetric (such as \(f_{abc}\) ). We notice that, however, \(\eta _c\) production will be dominated by the three-gluon exchange diagrams.
| [2] | [
[
606,
609
]
] | https://openalex.org/W3208861568 |
97c5ede2-0865-48fb-8780-658f3b1bab32 |
where \(\bar{P}=(p_1+p_2)/2\) , \(\lbrace x\rbrace =(x_1,x_2,x_3 )\) represent the momentum fractions carried by the three quarks, \([d x]= d x_1 d x_2 d x_3\delta (1-x_1-x_2-x_3)\) , and \( \Phi _3(x_i)\) is the twist-three distribution amplitude of the proton [1]}, [2]}. In the above equation, \({\cal M}_p\) and \({\cal M}_\psi ^{\mu \nu }\) contains contributions from the nucleon and photon-quarkonium sides, respectively. The spinor structure in Eq. (REF ) is a consequence of the leading-twist amplitude which conserves the nucleon helicity. This is similar to the \(A\) form factor calculation in Ref. [3]}. Furthermore, we find that \({\cal M}_p\) can be simplified as
\({\cal M}_p= \frac{ C_B^2 }{96} (4\pi \alpha _s)^2 \left(2{\cal H}_3+{\cal H^{\prime }}_3 \right),\quad \)
| [1] | [
[
265,
268
]
] | https://openalex.org/W2066711625 |
14d5ab86-012b-4d9f-8043-7e4dee0ea483 | where \(\Psi _4\) is one of the twist-four distribution amplitudes of the proton related to the three quark Fock state with one unit OAM [1]}, [2]}. Similar contribution can be obtained for another twist-four distribution amplitude \(\Phi _4\) . Here we emphasize a couple of important points. First, the factor \(M_p\) in Eq. (REF ) indicates it is a higher-twist effect. Explicitly, it comes from the parameterization of the twist-four distribution amplitude [3]}. Second, the nucleon helicity-flip is manifest in the spinor structure. This amplitude is negligible at high energy, but will be important at the threshold, because it is not suppressed in the limit of \(\chi \rightarrow 1\) . The amplitude squared can be written as
\(|\overline{{\cal A}_4}|^2=\widetilde{m}_t^2 G_\psi G_{p4}(t) G_{p4}^*(t) \ , \)
| [3] | [
[
463,
466
]
] | https://openalex.org/W2037449465 |
ad059539-2640-4cc1-9c59-1d854011eda3 | Recently, Ref. [1]} has suggested that the GPD formalism could be applied in near threshold heavy quarkonium production, see also the discussion in Ref. [2]}. It will be interesting to check this statement with our results, where the gluon GPDs at large momentum transfer can be calculated following the example of the quark GPDs in [3]}.
| [1] | [
[
15,
18
]
] | https://openalex.org/W3137434940 |
87f4f99c-296c-4024-9013-1f6da3502894 | It is important to note that the above power counting analysis was derived for large \((-t)\) differential cross sections. The consistency between our predictions and the GlueX data shall encourage further theoretical developments, in particular, in the lower momentum transfer region where one can study the interplay between the perturbative and non-perturbative physics. Regarding this point, the comparison between \(J/\psi \) and \(\Upsilon \) productions will play an important role, because they offer different kinematic coverage of momentum transfer due to their large mass difference. We expect these processes will be extensively investigated at the future EIC [1]}, [2]}.
| [1] | [
[
675,
678
]
] | https://openalex.org/W3133872066 |
d6afb9cb-e932-4fd4-8eaa-f4218e8108e4 | The mmWave channel has several characteristics that differentiate it from the traditional microwave channels, such as higher path loss (due to higher operating frequencies), the spatial selectivity (due to high path losses and beamforming), and increased correlation among antennas (due to densely collocated arrays). These distinctive characteristics imply that the statistical fading distributions such as the Rayleigh distribution used in traditional wireless channels become inaccurate, since the number of fading paths is small. Hence, the mmWave channel between two different nodes is likely modeled as a geometric wideband frequency-selective channel according to the extended Saleh-Valenzuela model, studied in [1]}, [2]}.
| [1] | [
[
719,
722
]
] | https://openalex.org/W2469174679 |
8744d65d-350c-4673-b706-2063eaf9e31c | The FD-IAB node comprises a transmit antenna array and a receive antenna array. In FD operations, an mmWave SI channel is defined as the mmWave channel between the transmit antenna and the receiver antenna at the IAB node. Through measurements, the mmWave SI channel is verified to have both line-of-sight (LoS) and non-line-of-sight (NLoS) components [1]}. The LoS component accounts for deterministic direct path loss. Its strength is very high due to a very short distance between the transceiver of the IAB node and is assumed to adopt a near-field model, since the distance between the transceivers is smaller than \(2D^2/\lambda \) , where \(D\) is the antenna aperture diameter, and \(\lambda \) is the wavelength [2]}. The coefficient of the LoS channel matrix depends on the distance between the individual elements of the transceiver. The NLoS component indicates random components caused by reflections from obstacles around the IAB node, where the general mmWave channel model may be accept- able, except with a smaller number of rays. A Rician-alike channel model could be utilized to model the SI channel due to a strong LoS path. A detailed hypothetical wideband mmWave SI channel model is formulated in our recent work [3]}. It is worth noting that there is still ambiguity in characterizing the mmWave SI channel model in the literature.
| [2] | [
[
723,
726
]
] | https://openalex.org/W2963903846 |
acf7a08a-b58b-40da-9250-584e6a20fc9a | The OV is
installed above the ID, IV and
15 cm of shielding steel. A lower outer veto is mounted
directly above the shielding and provides \((x,y)\) coordinate for muons
passing through
a 13 m \(\times \) 7 m area centered on the chimney; a 110 cm \(\times \) 30 cm
region
around the chimney is left open. The lower outer veto has been installed
for 68.9% of the data presented here, and is used to help reduce
background levels quoted in [1]}.
An upper outer veto, again measuring
\((x,y)\) coordinates, has been mounted above the chimney and glove box
used for source insertion, to cover this area. The upper
outer veto was not
present for this analysis.
| [1] | [
[
442,
445
]
] | https://openalex.org/W2018511639 |
6aa302e4-62a1-472e-82b1-9491e1bec1b1 | The complexity in evaluating Eq. (REF ) can be significantly reduced by using the Monte Carlo sampling.
Indeed, Metropolis et al. [1]} suggested an efficient Monte Carlo scheme to approximate the ratio in Eq. (REF ). Let us denote the probability density function in finding a microstate in the canonical ensemble in a configuration \(\textbf {r}\) by
\(P(\textbf {r})=\frac{\exp [-\beta E(\textbf {r})]}{\int d\textbf {r} \exp [-\beta E(\textbf {r})]}.\)
| [1] | [
[
130,
133
]
] | https://openalex.org/W2056760934 |
16cbbe3f-0752-444f-bf1a-f18f3bc33e0b | where \(\phi _0(\bf {r}_k)\) is the free space solution to the PB equation assuming no solvent-solute interface.
To solve the PB equation, we apply the accurate and robust 2nd order MIBPB solver [1]}, [2]} developed in our group, which applies rigorous treatment on geometric complexity, interface condition, and charge singularity. The \(\Delta G_{\rm elec}^{\rm PB}\) results generated by MIBPB solver for a set of macromolecules are used as the training labels in the representability hypothesis.
| [1] | [
[
196,
199
]
] | https://openalex.org/W2170802700 |
06c6d353-ad14-481d-82ff-5eb81374a79d | The structure of the manuscript is as follows. In Section we introduce the model and its assumptions, as well as the Multisplit method [1]} and the test based on sign-flipping score contributions [2]}, [3]}. Then we define the method and the approximate version in Sections and , respectively. Finally, in Sections ans we explore the behavior of the proposed methods on simulated and real data. Proofs, algorithmic implementation and some additional results are postponed to the appendix.
| [2] | [
[
197,
200
]
] | https://openalex.org/W3105821711 |
f2c0f394-3756-4384-8b34-8db19de561f5 | We have considered the problem of testing multiple hypotheses in high-dimensional linear regression. Our proposed approach provides asymptotically valid resampling-based tests for any subset of hypotheses, which can be employed within multiple testing procedures to make confidence statements on active predictor variables.
For instance, it can be used within the maxT-method [1]} and closed testing methods that give simultaneous confidence sets for the TDP of subsets [2]}, [3]}, [4]}.
| [3] | [
[
476,
479
]
] | https://openalex.org/W3099580112 |
efa9acc4-9e11-4c13-9fd9-7fb8f3b2f3c9 | The DIACR-Ita task definition is taken from SemEval-2020 Task 1 Subtask 1 (binary change detection): Given a list of target words and a diacronic corpus pair C\(_1\) and C\(_2\) , the task is to identify the target words which have changed their meanings between the respective time periods t\(_1\) and t\(_2\) [1]}, [2]}.The time periods t\(_1\) and t\(_2\) were not disclosed to participants. C\(_1\) and C\(_2\) have been extracted from Italian newspapers and books. Target words which have changed their meaning are labeled with the value `1', the remaining target words are labeled with `0'. Gold data for the 18 target words is semi-automatically generated from Italian online dictionaries. According to the gold data, 6 of the 18 target words are subject to semantic change between t\(_1\) and t\(_2\) . This gold data was only made public after the evaluation phase. During the evaluation phase each team was allowed to submit up to 4 predictions for the full list of target words, which were scored using classification accuracy between the predicted labels and the gold data. The final competition ranking compares only the highest of the scores achieved by each team.
| [2] | [
[
320,
323
]
] | https://openalex.org/W3115228514 |
59434417-0fff-4844-a008-75c9a45f04a3 | The choice of BERT layers and the measure used to compare the resulting vectors (e.g. APD, COS or clustering) strongly influence the performance [1]}. Hence, we tuned these parameters/modules on the English SemEval data [2]}. For the 40 English target words we had access to the sentences that were used for the human annotation (in contrast to task participants who had only access to the lemmatized larger corpora containing more target word uses than just the annotated ones).
| [1] | [
[
145,
148
]
] | https://openalex.org/W3115289550 |
bc8c51a4-c5d9-4436-8d40-8e612082d911 | The “Lottery-Ticket-Hypothesis" (LTH) [1]} states that within large randomly initialized neural networks there exist smaller sub-networks which, if trained from their initial weights, can perform just as well as the fully trained unpruned network from which they are extracted. This happens to be possible because the weights of these sub-networks seem to be particularly well initialized before training starts, therefore making these smaller architectures suitable for learning (see Fig REF for an illustration). These sub-networks, i.e., the pruned structure together with their initial weights, are called winning tickets, as they appear to have won the initialization lottery. Since winning tickets only contain a very limited amount of parameters, they yield faster training, inference, and sometimes even better final performance than their larger over-parametrized counterparts [1]}, [3]}. So far, winning tickets are typically identified by an iterative procedure that cycles through several steps of network training and weight pruning, starting from a randomly initialized unpruned network. While simple and intuitive, the resulting algorithm, has unfortunately a high computational cost. Despite the fact that the resulting sparse networks can be trained efficiently and in isolation from their initial weights, the LTH idea has not yet led to more efficient solutions for training a sparse network, than existing pruning algorithms that all also require to first fully train an unpruned network [4]}, [5]}, [6]}, [7]}, [8]}.
<FIGURE> | [3] | [
[
893,
896
]
] | https://openalex.org/W2948635472 |
145b0599-7a4e-4d7a-905c-3fa176e19b9c | In this paper, we build on top of this latter work. While Morcos et al. [1]} focused on the natural image domain, we investigate the possibility of transferring winning tickets obtained from the natural image domain to datasets in non natural image domains. This question has an important practical interest as datasets in non natural image domains are typically scarcer than datasets in natural image domains. They would therefore potentially benefit more from a successful transfer of sparse networks, since the latter can be expected to require less data for training than large over-parametrized networks. Furthermore, besides studying their generalization capabilities, we also focus on another interesting property that characterizes models that win the LTH, and which so far has received less research attention. As originally presented in [2]}, pruned models which are the winners of the LTH can yield a final performance which is better than the one obtained by larger over-parametrized networks. In this work we explore whether it is worth seeking for such pruned models when training data is scarce, a scenario that is well known to constraint the training of deep neural networks. To answer these two questions, we carried out experiments on several datasets from two very different non natural image domains: digital pathology and digital heritage.
| [1] | [
[
72,
75
]
] | https://openalex.org/W2970072941 |
56c31e84-73bd-4d8c-bc6b-d9d53dd9fe0d | In all of our experiments we have used a ResNet-50 convolutional neural network which has the same structure as the one presented in [1]}.
We have chosen this specific architecture since it has proven to be successful both when used on DP data [2]} as on DH datasets [3]}. Specifically when it comes to the amount of strides, the sizes of the filters, and the number of output channels, the residual blocks of the network come in the following form: (\(1\times 1\) , 64, 64, 256) \(\times \) 3, (2\(\times \) 2, 128, 128, 512) \(\times \) 4, (2\(\times \) 2, 256, 256, 1024) \(\times \) 6, (2\(\times \) 2, 512, 512, 2048) \(\times \) 3. The last convolution operation of the network is followed by an average pooling layer and a final linear classification layer which has as many output nodes as there is classes to classify in our datasets. Since we only considered classification problems, the model always minimizes the categorical-crossentropy loss function. When feeding the model with the images of the datasets presented in Table REF we extract a random crop of size \(224\times 224\) and used mini-batches of size 64. No data-augmentation was used. We train the neural network with the Stochastic Gradient Descent (SGD) algorithm with an initial learning rate of \(10^{-1}\) . SGD is used in combination with Nesterov Momentum \(\rho \) , set to 0.9, and a weight decay factor \(\alpha \) set to \(10^{-5}\) . Training is controlled by the early-stopping regularization method which stops the training process as soon as the validation loss does not decrease for five epochs in a row. When it comes to the parameters used for pruning we follow a magnitude pruning scheme as the one presented in [4]} which has a pruning-rate of \(20\%\) . In order to construct winning-tickets we have used the late-resetting procedure with \(k=2\) . We summarize all this information in Table REF .
<TABLE> | [1] | [
[
133,
136
]
] | https://openalex.org/W2964299589 |
a2c93d15-1c8b-4f29-bca0-8048f5c37426 | In all of our experiments we have used a ResNet-50 convolutional neural network which has the same structure as the one presented in [1]}.
We have chosen this specific architecture since it has proven to be successful both when used on DP data [2]} as on DH datasets [3]}. Specifically when it comes to the amount of strides, the sizes of the filters, and the number of output channels, the residual blocks of the network come in the following form: (\(1\times 1\) , 64, 64, 256) \(\times \) 3, (2\(\times \) 2, 128, 128, 512) \(\times \) 4, (2\(\times \) 2, 256, 256, 1024) \(\times \) 6, (2\(\times \) 2, 512, 512, 2048) \(\times \) 3. The last convolution operation of the network is followed by an average pooling layer and a final linear classification layer which has as many output nodes as there is classes to classify in our datasets. Since we only considered classification problems, the model always minimizes the categorical-crossentropy loss function. When feeding the model with the images of the datasets presented in Table REF we extract a random crop of size \(224\times 224\) and used mini-batches of size 64. No data-augmentation was used. We train the neural network with the Stochastic Gradient Descent (SGD) algorithm with an initial learning rate of \(10^{-1}\) . SGD is used in combination with Nesterov Momentum \(\rho \) , set to 0.9, and a weight decay factor \(\alpha \) set to \(10^{-5}\) . Training is controlled by the early-stopping regularization method which stops the training process as soon as the validation loss does not decrease for five epochs in a row. When it comes to the parameters used for pruning we follow a magnitude pruning scheme as the one presented in [4]} which has a pruning-rate of \(20\%\) . In order to construct winning-tickets we have used the late-resetting procedure with \(k=2\) . We summarize all this information in Table REF .
<TABLE> | [2] | [
[
244,
247
]
] | https://openalex.org/W2804905867 |
43ed8368-3319-4400-bdf7-403e9e2102d4 | In all of our experiments we have used a ResNet-50 convolutional neural network which has the same structure as the one presented in [1]}.
We have chosen this specific architecture since it has proven to be successful both when used on DP data [2]} as on DH datasets [3]}. Specifically when it comes to the amount of strides, the sizes of the filters, and the number of output channels, the residual blocks of the network come in the following form: (\(1\times 1\) , 64, 64, 256) \(\times \) 3, (2\(\times \) 2, 128, 128, 512) \(\times \) 4, (2\(\times \) 2, 256, 256, 1024) \(\times \) 6, (2\(\times \) 2, 512, 512, 2048) \(\times \) 3. The last convolution operation of the network is followed by an average pooling layer and a final linear classification layer which has as many output nodes as there is classes to classify in our datasets. Since we only considered classification problems, the model always minimizes the categorical-crossentropy loss function. When feeding the model with the images of the datasets presented in Table REF we extract a random crop of size \(224\times 224\) and used mini-batches of size 64. No data-augmentation was used. We train the neural network with the Stochastic Gradient Descent (SGD) algorithm with an initial learning rate of \(10^{-1}\) . SGD is used in combination with Nesterov Momentum \(\rho \) , set to 0.9, and a weight decay factor \(\alpha \) set to \(10^{-5}\) . Training is controlled by the early-stopping regularization method which stops the training process as soon as the validation loss does not decrease for five epochs in a row. When it comes to the parameters used for pruning we follow a magnitude pruning scheme as the one presented in [4]} which has a pruning-rate of \(20\%\) . In order to construct winning-tickets we have used the late-resetting procedure with \(k=2\) . We summarize all this information in Table REF .
<TABLE> | [3] | [
[
267,
270
]
] | https://openalex.org/W2911801153 |
4c709860-7d6c-4afa-98b9-04e6055a3afc | Neuromorphic computing has gained considerable attention as an energy-efficient alternative to conventional Artificial Neural Networks (ANNs) [1]}, [2]}, [3]}, [4]}.
Especially, Spiking Neural Networks (SNNs) process binary spikes through time like a human brain, resulting in 1-2 order of magnitude energy efficiency over ANNs on emerging neuromorphic hardware [5]}, [6]}, [7]}, [8]}.
Due to the energy advantages and neuroscientific interest, SNNs have made great strides on various applications such as image recognition [9]}, [10]}, [11]}, visualization [12]}, optimization [13]}, [14]}, and object detection [15]}.
Therefore, SNNs have a huge potential to be exploited on real-world edge devices.
<FIGURE> | [1] | [
[
142,
145
]
] | https://openalex.org/W2194775991 |
71564034-1720-4a61-9d00-57146e00447f | [t]ANN-SNN Conversion [1]}
Input: input set (\(X\) ); label set (\(Y\) ); max timestep (\(T\) ); pre-trained ANN model (\(ANN\) ); SNN model (\(SNN\) ); total layer number (L)
| [1] | [
[
22,
25
]
] | https://openalex.org/W2964338223 |
f30b4faf-38c2-4005-8f1b-f272bfb1fe41 | However, this technique is based on the assumption that we can compute the exact gradient for all layers.
It is difficult to compute gradient value of SNNs due to the non-differentiable nature of LIF neuron (Eq. REF and Eq. REF ).
Therefore, in order to generate proper class representation, the attacker should reconvert SNNs to ANNs.
The re-conversion process depends on the type of conversion technique; weight scaling or threshold scaling.
There are several conversion algorithms [1]}, [2]} that scales weight parameters of each layer.
In such cases, the attacker cannot directly recover original ANN weights.
However, each layer is scaled by a constant value, therefore the original ANN weights might be recovered by searching several combinations of layer-wise scaling factors.
Also, recent state-of-the-art conversion algorithms [3]}, [4]}, [5]} change the thresholds while maintaining the weight parameters (, threshold scaling) to obtain high performance.
Therefore, in our experiments, we use threshold scaling for ANN-SNN conversion and then, explore the class leakage issues.
In this case, the original ANN can be reconverted by simply changing LIF neuron to ReLU neuron.
With the reconverted ANN, the attacker can simply reconstruct class representation by backpropagation as shown in Algorithm 2.
Overall, a non-linear weight encryption technique is required to make SNNs robust to class leakage.
| [2] | [
[
491,
494
]
] | https://openalex.org/W1645800954 |
68688196-698c-4cdb-968a-a8a66412a36b | However, this technique is based on the assumption that we can compute the exact gradient for all layers.
It is difficult to compute gradient value of SNNs due to the non-differentiable nature of LIF neuron (Eq. REF and Eq. REF ).
Therefore, in order to generate proper class representation, the attacker should reconvert SNNs to ANNs.
The re-conversion process depends on the type of conversion technique; weight scaling or threshold scaling.
There are several conversion algorithms [1]}, [2]} that scales weight parameters of each layer.
In such cases, the attacker cannot directly recover original ANN weights.
However, each layer is scaled by a constant value, therefore the original ANN weights might be recovered by searching several combinations of layer-wise scaling factors.
Also, recent state-of-the-art conversion algorithms [3]}, [4]}, [5]} change the thresholds while maintaining the weight parameters (, threshold scaling) to obtain high performance.
Therefore, in our experiments, we use threshold scaling for ANN-SNN conversion and then, explore the class leakage issues.
In this case, the original ANN can be reconverted by simply changing LIF neuron to ReLU neuron.
With the reconverted ANN, the attacker can simply reconstruct class representation by backpropagation as shown in Algorithm 2.
Overall, a non-linear weight encryption technique is required to make SNNs robust to class leakage.
| [4] | [
[
843,
846
]
] | https://openalex.org/W3035644810 |
46f5f1b1-18cc-43aa-a550-7e63985135cf | Data Generation from a Pre-trained Model:
Without accessing real data, we generate synthetic images from a pre-trained model.
Conversion performance relies on the maximum activation value of features, therefore synthetic images have to carefully reflect underlying data distribution from a pre-trained ANN model.
Nayak [1]} take into account the relationship between classes in order to generate data, resulting in better performance on a distillation task.
Following this pioneering work, we generate synthetic images based on class relationships from the weights of the last fully-connected layer.
Specifically, we can define a weight vector \(w_{c}\) between the penultimate layer and the class logit \(c\) in the last layer. Then, we calculate the class similarity score between class \(i\) and \(j\) :
\(s_{ij} = \frac{w_i^T w_j}{\Vert w_i\Vert _2^2 \Vert w_j\Vert _2^2}.\)
| [1] | [
[
319,
322
]
] | https://openalex.org/W2945528222 |
05786950-7cb3-4059-86a9-805020807c0c | We quantify the security of the model by measuring how much generated images represent similar features with that of original images.
To this end, we use Fr̀echet inception distance (FID) metric [1]} that is widely used in GAN evaluation [2]}, [3]}.
The FID score compares the statistics of embedded features in a feature space of the pre-trained Inception V3 model.Thus, a lower FID score means that the generated images provide the original data-like feature.
In Table REF , the SNN model without encryption training on attack scenario 1 achieves a much lower FID score (, 354.8) compared to the others, which supports our visualization results.
<TABLE> | [1] | [
[
195,
198
]
] | https://openalex.org/W2963981733 |
eb7fd85a-aec8-4adf-842a-3e7ce99a0850 | In what follows, each time convergence holds in the Meyer-Zheng topology we will say so explicitly. If no topology is mentioned, then we mean convergence in \(\mathcal {C}_b([0,\infty ),[0,1])\) . In Appendix we collect some basic facts about the Meyer-Zheng topology taken from [1]} and [2]}.
| [1] | [
[
280,
283
]
] | https://openalex.org/W164858608 |
268e2d3b-4fd9-4c0f-9e7a-bc3d3debe574 | Existence and uniqueness of a strong solution is again standard (see e.g. [1]} and recall Remark REF ). We start by proving existence and uniqueness of the equilibrium. Afterwards we show that the solution converges to this equilibrium.
| [1] | [
[
74,
77
]
] | https://openalex.org/W1534354411 |
310ca645-5900-4b1b-bf2c-d65805e35377 | The propagation of chaos and the weak law of large numbers for \(t>0\) therefore follows from [1]}. Since the martingale problem is well-posed [1]}, the limiting process exists and is unique.
| [1] | [
[
95,
98
],
[
144,
147
]
] | https://openalex.org/W2119062437 |
c3ec3e85-f702-4e44-a83e-833a5feba7ba | we are left to show the tightness of \(\mathcal {L}[(X^{[N]}(sN+t),Y^{[N]}(sN+t))_{t\ge 0}]_{N\in \mathbb {N}}\) in the path space \(\mathcal {C}([0,\infty ), \left([0,1]\times [0,1] \right)^{\mathbb {N}_0})\) . Since \(([0,1]^2)^{\mathbb {N}_0}\) is endowed with the product topology, it is enough to show for all sequence components \((x^{[N]}_i(t))_{t\ge 0}\) and \((y^{[N]}_i(t))_{t\ge 0}\) that they are tight in path space (see [1]}).
| [1] | [
[
438,
441
]
] | https://openalex.org/W2802739963 |
e2605a71-cdae-40d4-a0bc-8791e934977c | To check that the sequence of component processes \(((x_i^{[N]}(t),y_i^{[N]}(t)))_{N\in \mathbb {N}}\) is tight, we need the local characteristics of the \(\mathcal {D}\) -semi-martingale, which are defined in [1]} as (recall (REF ))
\(\begin{aligned}b^{[N]}_1((x,y),t,\omega )&=G^{[N]}_\dagger (f_1,(x,y),t,\omega ),\\b^{[N]}_2((x,y),t,\omega )&=G^{[N]}_\dagger (f_2,(x,y),t,\omega ),\\a^{[N]}_{(1,1)}((x,y),t,\omega )&=G^{[N]}_\dagger (f_1 f_1,(x,y),t,\omega )- 2x\, b_1((x,y),t,\omega ),\\a^{[N]}_{(2,1)}((x,y),t,\omega )&=G^{[N]}_\dagger (f_1 f_2,(x,y), t,\omega )- x\, b_2((x,y),t,\omega )- y b_1((x,y),t,\omega ),\\a^{[N]}_{(1,2)}((x,y),t,\omega )&=a_{(2,1)}((x,y),t,\omega ),\\a^{[N]}_{(2,2)}((x,y),t,\omega )&=G^{[N]}_\dagger (f_2 f_2,(x,y),t,\omega )- 2y\, b_2((x,y),t,\omega ).\end{aligned}\)
| [1] | [
[
211,
214
]
] | https://openalex.org/W2314710522 |
89b2157b-12fd-4e36-8d16-7d422cb3ba4c | Moreover, [1]} states that convergence of measures in the Prohorov distance, \(\lim _{n\rightarrow \infty } d_P(\mathbb {P}_n, \mathbb {P})=0\) , is the same as weak convergence \(\mathbb {P}_n\Rightarrow \mathbb {P}\) . Hence, since convergence of pseudopaths is weak convergence, we can endow the space of pseudopaths \(\Psi \) with the metric \(d_P\) .
| [1] | [
[
10,
13
]
] | https://openalex.org/W2126794261 |
3fb81fa1-5355-4711-8ef5-50f48b06a0ac | Real applications motivate the community to derive algorithms in more adaptive environments. The first of which considers adversarial bandits with the classic EXP3 algorithm. The arms are assumed to be non-stationary but non-adaptive (which means that algorithms will adapt to the adversarial) [1]}. Despite that adversarial bandits do not fall into the scale of this paper's related work, it leads tremendous effort to the following topics in adaptive arms.
| [1] | [
[
294,
297
]
] | https://openalex.org/W2077902449 |
7df9a00d-3d54-432d-b954-238f2a1da000 | Armed with Lemma REF , we present one of our main theorems, Theorem REF , which gives the regret bound of \(O(\log T)\) of SCUCB. The outline of the proof follows that of CUCB by [1]}. To complete the proof, we carefully choose \(\psi _t\) , which controls the trade-off between the exploration and exploitation periods.
| [1] | [
[
180,
183
]
] | https://openalex.org/W35251828 |
99c91d2e-703e-4cbd-8636-ea1deb4cb832 | where \(G\) is the Newton's constant. We will take the region \(\Sigma \) to be a spherical cap on the boundary delimited by \(\theta \le \theta _0\)To avoid contamination of the entanglement entropy by the thermal entropy, we will choose a small entangling region as pointed out in [1]}, [2]}.. Then based on the definition of area and (REF ),
(REF ) can be rewritten as
\(S=\frac{ \pi }{2} \int _0 ^{\theta _0}r \sin \theta \sqrt{\frac{(r^{\prime })^2}{f(r)}+r^2},\)
| [1] | [
[
285,
288
]
] | https://openalex.org/W3098905534 |
ca58bfad-0876-4283-9cc6-6b539b9ffca0 | The first phase of our algorithm is similar to the first phase of the
algorithm of Lenzen et al. [1]}. It is a
preprocessing step that leaves us with only vertices whose
neighborhoods can be dominated by a few other vertices. Lenzen et al. proved that there exist less than \(3\gamma \)
many vertices \(v\) such
that the open neighborhood \(N(v)\) of \(v\) cannot be dominated by 6
vertices of \(V(G)\setminus \lbrace v\rbrace \) [1]}. The lemma can be generalized to more
general graphs, see [3]}). We prove the
following lemma, which is stronger in the sense that the number of
vertices required to dominate the open neighborhoods is smaller than
6, at the cost of having slightly more vertices with that property.
| [1] | [
[
97,
100
],
[
435,
438
]
] | https://openalex.org/W2048769918 |
18a3ece9-a111-4ba8-bd87-c2f3c04208c8 | We use the clustered results to train the Re-ID network in a supervised way after progressive viewpoint-aware clustering.
However, the clustering performance of different viewpoints significantly relies on the clustering results from the same viewpoint.
Therefore, we introduce the \(k\) -reciprocal encoding [1]}, [2]}, [3]} as the distance metric to feature comparison of the same viewpoint due to its powerful ability in mining similar samples.
| [3] | [
[
321,
324
]
] | https://openalex.org/W2999929549 |
acd4fdfc-ddbc-460e-b89a-c348265979f6 | In addition, recent methods [1]}, [2]}, [3]} achieve remarkable performance on target-only unsupervised person Re-ID.
However, they directly employ the prevalent DBSCAN [4]} to obtain pseudo labels while discarding all noisy samples (the hard positive and hard negative samples with pseudo labels assigned as -1) in the training stage.
We argue that it is more important to learn the discriminative embeddings by mining hard positive samples than naively learning from simple samples, which has been proven in a large number of machine learning tasks [5]}, [6]}, [7]}, [8]}, [9]}, [10]}.
To this end, we propose a noise selection method to classify each noise sample into a suitable cluster by the similarity between the noise sample and other clusters.
| [3] | [
[
40,
43
]
] | https://openalex.org/W2988852559 |
16f0ebcd-01bf-4439-8abb-6519a470d1cb | Along with the great achievement on person Re-ID, unsupervised person Re-ID offers more challenges, which has attracted more and more attention recently.
Recent advances of unsupervised person Re-ID methods generally fall into
two categories. 1) The domain adaption based methods [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, which aims to transfer the knowledge in the labeled source domain to the unlabeled target domain.
Although the domain adaption based methods make impressive achievement in unsupervised Re-ID by exploring domain-invariant features, they still require a large amount of label annotation in the source domain.
Furthermore, the huge diversity in different domains limits their transferring capabilities.
2) The target-only based methods [7]}, [8]}, [9]}, which fulfill the unsupervised task by dividing the unlabeled samples into different categories based on specific similarity.
Lin et al. [7]} treat each image as a single category and then gradually reduces the number of categories in subsequent clusters.
Lin et al. [9]} propose a framework that mines the similarity as a soft constraint and introduce camera information to encourage similar samples under different cameras to approach.
| [7] | [
[
749,
752
],
[
904,
907
]
] | https://openalex.org/W2904427185 |
ee8905c5-3146-46ad-8963-c7fc7599913e | Due to the extreme viewpoint changes in vehicles, there are relatively small inter-class differences between different vehicles. We argue that global comparison in previous unsupervised clustering methods [1]}, [2]}, [3]} tends to group the different vehicles with the same viewpoint into the same cluster.
Therefore, this global comparison scheme cannot guarantee the promising performance for target-only unsupervised vehicle Re-ID without any label supervision in network training.
To handle this problem, we propose to introduce a viewpoint prediction model to identify the vehicle's viewpoint information during the forthcoming clustering.
| [2] | [
[
211,
214
]
] | https://openalex.org/W3022591699 |
69c057f6-a947-4f0a-a727-9c50794685d4 | In specific, we use a viewpoint prediction network to predict the viewpoint of each unlabeled vehicle image \(x_i\) in training set \(\left\lbrace X\mid x_{1},x_{2},...,x_{N}\right\rbrace \) .
We train our viewpoint prediction model on VeRi-776 [1]}, which contains all the visible viewpoints of the vehicle.
Following the viewpoints annotation in previous work [2]}, we divide vehicle images into five viewpoints, e.g., \(v = \lbrace front, front\_side, side, rear\_side, rear\rbrace \) .
Furthermore, we have additionally labeled 3000 samples in VeRi-Wild [3]} data to fine-tune the model to improve the robustness of the viewpoint prediction.
We use the commonly used cross-entropy loss \(L_{\eta }\) to optimize the viewpoint classifier \(W\left(x_{i}\mid \theta \right)\) ,
\(L_{\eta }=-\Sigma _{i}^{N}y_{v}log\left(W\left(x_{i}\mid \theta \right)\right)\)
| [1] | [
[
247,
250
]
] | https://openalex.org/W2519904008 |
264fa14d-11ac-4ffa-aa95-dce4a8aaeade | where \(F_{v}\) and \(N_{v}\) represent the feature set and the number of samples in the \(v\) -th viewpoint.
We compare the similarity of all features \(F_{v}\) belonging to the same viewpoint cluster to obtain the distance matrix \(D( F_{m} ,F_{n}) ,\ m=n\) .
\(D\) represents the scoring matrix of Euclidean distance \(d_{ij} =\Vert f_{i} -f_{j} \Vert ^{2}\) .
There is no doubt that the same vehicle with the same viewpoint has the highest similarity and thus tends to be clustered together (assigned to the same pseudo label) with the highest priority.
For each different distance matrix in the same viewpoint, we obtain pseudo labels by the prevalent cluster algorithm DBSCAN [1]}, which can effectively deal with noise points and achieve spatial clusters of arbitrary shapes without information of the number of clusters compared to the conventional k-means [2]} clustering.
| [1] | [
[
688,
691
]
] | https://openalex.org/W1673310716 |
f94cafea-e484-4f27-b2a4-b8303313e895 | Distance metric by \(k\) -reciprocal encoding.
Clearly, more positive samples in the same-viewpoint cluster in the first period, higher clustering quality at different viewpoints in the second period, which in turn will benefit the performance in the next iteration.
Note that the clustering method significantly relies on the distance metric, we propose introducing the widely used \(k\) -reciprocal encoding [1]}, [2]}, [3]} as the distance metric for feature comparison.
For the sample \(x_{i}^{v}\) in \(X^{v}\) , we record its \(k\) nearest neighbors with index-labels \(K_{k}(x_{i}^{v})\) , for all indexes \(ind\in K_{k}(x_{i}^{v})\) , if \( \left| K_{k}(_{i}^{v}) \cap K_{\frac{k}{2}}(x_{ind}^{v}) \right|\geqslant \frac{2}{3} \left| K_{ \frac{k}{2}}(x_{ind}^{v}) \right| \) , \(x_{i}^{v}\) 's mutual \(k\) nearest neighbors set \( S_{i} \leftarrow \left| K_{k}(x_{i}^{v}) \cup K_{\frac{k}{2}}(x_{ind}^{v}) \right| \) .
In this case, all reliable samples similar to \(x_{i}^{v}\) are recorded in \(S_{i}\) .
Then distance \(d_{ij}\) of the sample pair in the same viewpoint distance matrix, \(D( F_{m} ,F_{n}) ,\ m=n\) reassigns weight by,
\(\tilde{d}_{ij}=\left\lbrace \begin{aligned}&e^{-d_{ij}}\quad if \ j\in S_{i},\\&0\quad \quad \ else\end{aligned}\right.\)
| [1] | [
[
410,
413
]
] | https://openalex.org/W2584637367 |
b6c43c85-34e1-489c-8c51-83ce4f865a0e | We evaluate our proposed method VAPC on two benchmark datasets VeRi-776 [1]} and VeRi-Wild [2]}, which contain 5 and 4 view-points respectively.
We compare our method with the prevalent domain adaption based unsupervised, and target-only methods without domain adaption for evaluation.
| [2] | [
[
91,
94
]
] | https://openalex.org/W2953759675 |
3f929b52-cbb1-4406-b530-6a1173f5d612 | We use ResNet50 [1]} as the backbone by eliminating the last classification layer.
All experiments are implemented on two NVIDIA TITAN Xp GPUs.
We initialize our model with pre-trained weights on ImageNet [2]}.
For the viewpoint prediction network, we set the batch size as 32 and the learning rate as 0.001, with a maximum 20 epochs.
If not specified, we use stochastic gradient descent with a momentum of 0.9 and the dropout rate as 0.5 to optimize the model.
For the Re-ID feature extraction network, we resize the input images of VeRi-776 [3]} and VeRi-Wild [4]} as (384,384).
The batch size is set to 16. The learning rates at the recognition stage is set to 0.1 and divided by 10 after every 15 epochs, and set to 0.001 in the clustering stage. We only use a random horizontal flip as a data augmentation strategy. Following the protocol in [5]} we set \(k\) to 20.
| [2] | [
[
205,
208
]
] | https://openalex.org/W2108598243 |
bbe4d000-d0db-450d-a259-d2008b694d4c | Compared with the target-only method.
We first compare our method with three state-of-the-art target-only unsupervised methods OIM [1]}, Bottom [2]} and AE [3]}.
Generally speaking, our method (VAPC_TO) outperforms the three state-of-the-art target-only methods by a large margin by exploring the intra-class relationship.
OIM [1]} devotes to extracting discriminative features efficiently, which ignores the intra-class relationship, thus results in stumbling performance.
Bottom [2]} designs a bottom-up clustering strategy by merging the fixed clusters during each step.
However, each clustering may produce the wrong classification, and more clustering steps, more clustering errors.
Especially on the VeRi-776 [6]}, almost all visible viewpoints are included, which brings greater clustering challenges. Each clustering step only focuses on the same viewpoint and can not bring more samples of different viewpoints together. Our method effectively alleviates this problem and brings greater improvement.
AE [3]} clusters the samples via a similarity threshold and constrains the cluster size by embedded a balance term into the loss.
However, due to the similarity dilemma of vehicles, where the same viewpoints of different identities may have higher similarities, it is difficult to set an optimal similarity threshold for clustering.
In addition, more and more samples meeting the similarity threshold are treated as the same identity during the training,
especially on larger scaled dataset VeRi-Wild [8]}, it will cause more severe data imbalance in each cluster and damage the feature representation.
Therefore the performance of AE [3]} on VeRi-Wild [8]} declines comparing with Bottom [2]}.
| [1] | [
[
131,
134
],
[
327,
330
]
] | https://openalex.org/W2963574614 |
199db5ad-0146-49e4-9db4-5392d630a985 | SPGAN [1]} considers the style change among different datasets and trains a style conversion model to bridge the style discrepancy between the source domain data and the target domain.
However, due to the huge gap between the vehicle datasets in the real scene, e.g., the diverse viewpoints, resolution and illumination, it is challenging to obtain the desired translated image, which is crucial in SPGAN [1]}, and thus results in poor performance for vehicle Re-ID.
ECN [3]} joins the source domain for model constraints while using the \(k\) -nearest neighbor algorithm to mine the same identity in the target domain.
The setting of the \(k\) value not only has a greater impact on the experimental results, but the most similar top \(k\) samples are always at the same viewpoint.
UDAP [4]} uses source domain data to initialize the model and theoretically analyzes the rules that the model needs to follow when adapting to the target domain from the source domain.
It achieves satisfactory results on vehicle Re-ID due to the strengthening of the constraints on the target domain training. The target domain feature extractor has stronger learnability while obtaining the source domain knowledge.
However, it relies on global comparison, which may cause more clustering errors, especially on VeRi-Wild [5]} dataset presents a much smaller inter-class differences than VeRi-776 [6]}.
<TABLE><FIGURE> | [1] | [
[
6,
9
],
[
405,
408
]
] | https://openalex.org/W2963000559 |
2754a7c4-4f90-4755-91b1-5b73fe558c2c | SPGAN [1]} considers the style change among different datasets and trains a style conversion model to bridge the style discrepancy between the source domain data and the target domain.
However, due to the huge gap between the vehicle datasets in the real scene, e.g., the diverse viewpoints, resolution and illumination, it is challenging to obtain the desired translated image, which is crucial in SPGAN [1]}, and thus results in poor performance for vehicle Re-ID.
ECN [3]} joins the source domain for model constraints while using the \(k\) -nearest neighbor algorithm to mine the same identity in the target domain.
The setting of the \(k\) value not only has a greater impact on the experimental results, but the most similar top \(k\) samples are always at the same viewpoint.
UDAP [4]} uses source domain data to initialize the model and theoretically analyzes the rules that the model needs to follow when adapting to the target domain from the source domain.
It achieves satisfactory results on vehicle Re-ID due to the strengthening of the constraints on the target domain training. The target domain feature extractor has stronger learnability while obtaining the source domain knowledge.
However, it relies on global comparison, which may cause more clustering errors, especially on VeRi-Wild [5]} dataset presents a much smaller inter-class differences than VeRi-776 [6]}.
<TABLE><FIGURE> | [3] | [
[
471,
474
]
] | https://openalex.org/W2962859295 |
006159e0-1c85-4e7c-b0f8-677269116fa9 | In addition, we evaluate our method in the "Direct Transfer" fashion by training on the source domain and directly testing on the target domain indicated as (VAPC_DT) in TABLE REF .
First of all, by leveraging the information in the training data, VAPC_DT generally outperforms VAPC_TO, which verifies the knowledge of the source domain during the training improves the vehicle retrieval ability of the model.
The only exception is the rank-1 score on VeRi-776 [1]}.
The main reason is the huge gap between VehicleID [2]} and VeRi-776 [1]} datasets, e.g., VeRi-776 has lower resolution and more viewpoints, which results in poor generalization performance.
Even though VAPC_DT significantly boosts the mAP score on both VeRi-776 [1]} comparing to the target-only fashion (VAPC_TO).
Second, VAPC_DT is even significantly superior to the domain adaption methods SPGAN [5]} and ECN [6]}, and comparable to UDAP [7]} on mAP, which proves the robustness of our method for unsupervised vehicle Re-ID.
| [2] | [
[
517,
520
]
] | https://openalex.org/W2470322391 |
8c3d8551-d43d-4833-b6b0-f75a955dd67c | As illustrated in Fig. REF , the classic clustering algorithm k-means [1]} and DBSCAN [2]} work stumblingly in the global comparison fashion.
Furthermore, k-means [1]} specifies the number of clusters, which makes the change of samples in the cluster relatively stable. However, due to global comparison, a large number of samples with the same viewpoint and different identities appear in the same cluster, which makes model training continue to decline.
DBSCAN [2]} is sensitive to noise; therefore, a large number of noise samples under various challenges in real scenes deteriorates the clustering quality.
Bottom [5]} causes the final collapse due to the accumulation of the number of clustering errors each step.
Since clustering based on viewpoint division greatly simplifies the clustering task, and the strategy of progressively merges different viewpoints and gradually gathers vehicles of the same identity from different viewpoints, our method continues to improve with training.
| [1] | [
[
70,
73
],
[
163,
166
]
] | https://openalex.org/W2161160262 |
ea8e1b1c-cf60-4dcb-865c-3d2be383e315 | The Ising fusion category has been extensively studied in the literature [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}. The existence of the duality line \(N\) implies the conformal field theory is invariant under the \(\mathbb {Z}_2\) gauging. One possible generalization is of course to consider the duality line \(N\) under \(A\) -gauging where \(A\) is a finite Abelian group. The corresponding fusion category is known as the Tambara-Yamagami Fusion category \(\mathcal {T}(A,\chi ,\epsilon )\) [10]}, [11]} where \(\chi \) is a symmetric bicharacter on \(A\) and \(\epsilon = \pm 1\) is the Frobenius-Schur indicator. The \(\mathcal {T}(A,\chi ,\epsilon )\) with the same \(A\) but different \(\chi \) and \(\epsilon \) satisfies the same fusion rules, yet they are different fusion categories distinguished by the \(F\) -symbols (as known as the crossing kernels \(\mathcal {K}\) in [12]}), which measures the difference between two different ways of resolving the crossing of two topological defect lines. The \(F\) -symbols reduce to the familiar group anomaly measured by \(H^3(G,U(1))\) when considering only invertible topological defect lines. For instance, there are two types of Ising fusion categories with different FS indicators \(\epsilon = \pm 1\) and they can also be distinguished from the so-called spin-selection rule [12]}. The studies of the Tambara-Yamagami fusion category symmetries in the physics literature include [1]}, [3]}, [12]}, [8]}, [9]}.
| [9] | [
[
121,
124
],
[
1493,
1496
]
] | https://openalex.org/W3175329602 |
79006000-aaca-4ddb-b704-f77f76319b5b | Another generalization of the Ising fusion category is the triality fusion category and recently is studied in [1]}, [2]}. The simple topological defect lines contain symmetry operators which generate \(\mathbb {Z}_2\times \mathbb {Z}_2\) global symmetries, as well as a triality line \(\mathcal {L}_Q\) and its orientation reversal \(\mathcal {L}_{\overline{Q}}\) , satisfying the fusion rule In general, the invertible symmetries in a triality fusion category does not have to be \(\mathbb {Z}_2\times \mathbb {Z}_2\) . For simplicity, however, the notion of the triality fusion category would specifically mean the case where the invertible symmetries are \(\mathbb {Z}_2\times \mathbb {Z}_2\) . generalizing (REF )
\(\mathcal {L}_Q\times \mathcal {L}_{\overline{Q}}= \sum _{g\in \mathbb {Z}_2\times \mathbb {Z}_2} g, \quad \mathcal {L}_Q\times \mathcal {L}_Q= 2\mathcal {L}_{\overline{Q}}, \quad g\times \mathcal {L}_Q= \mathcal {L}_Q\times g = \mathcal {L}_Q, \quad g \in \mathbb {Z}_2\times \mathbb {Z}_2.\)
| [1] | [
[
111,
114
]
] | https://openalex.org/W2993696085 |
30e71383-5b0f-432a-86f2-ef38cdd02c13 | (REF ) is also called the no-swirl condition. From [1]} we know that, under the assumption (REF ), the vorticity field \( \mathbf {w} \) satisfies
\(\mathbf {w}=\frac{w_3}{k}\overrightarrow{\zeta },\)
| [1] | [
[
51,
54
]
] | https://openalex.org/W2021619188 |
f2458fd1-bf2b-4c0f-b066-97fed92138b4 | In the sequel, we give proof of Theorem REF . The idea is to represent the Green's function of the Dirichlet
problem (REF ) as a combination of the fundamental solution of the classical Laplacian in \( \mathbb {R}^2 \) and some positive-definite matrices and functions on the domain \( U \) .
We first determine \( S_K \) . Define
\(G_0(x,y):=\frac{\sqrt{\det K(x)}^{-1}+\sqrt{\det K(y)}^{-1}}{2}\Gamma \left(\frac{T_x+T_y}{2}(x-y)\right).\)
Then for fixed \( y\in U \) ,
\(\begin{split}-\nabla _x\cdot (K(x)\nabla _x G_0(x,y))=&-\partial _{x_1}(K_{11}(x)\partial _{x_1}G_0(x,y)+K_{12}(x)\partial _{x_2}G_0(x,y))\\&-\partial _{x_2}(K_{21}(x)\partial _{x_1}G_0(x,y)+K_{22}(x)\partial _{x_2}G_0(x,y))\\=&-K_{11}(x)\partial _{x_1x_1}G_0(x,y)-K_{12}(x)\partial _{x_1x_2}G_0(x,y)\\&-K_{21}(x)\partial _{x_1x_2}G_0(x,y)-K_{22}(x)\partial _{x_2x_2}G_0(x,y)\\&+F_1(x,y),\end{split}\)
where \(F_1(\cdot ,y)\in L^q(U)\) for \(1<q<2\) since \(|\nabla \Gamma (x-y)|\le \frac{1}{2\pi |x-y|} \) .
Denote \( z=\frac{T_x+T_y}{2}(x-y) \) and \( T_x=\begin{pmatrix}T_{11}(x) & T_{12}(x) \\T_{21}(x) & T_{22}(x)\end{pmatrix}. \)
One computes directly that
\(\begin{split}-\nabla &_x \cdot (K(x)\nabla _x G_0(x,y))\\=&-\frac{\sqrt{\det K(x)}^{-1}+\sqrt{\det K(y)}^{-1}}{2}\bigg (K_{11}(x)\left(\frac{T_{11}(x)+T_{11}(y)}{2}\partial _{z_1}+\frac{T_{21}(x)+T_{21}(y)}{2}\partial _{z_2}\right)^2\\&+K_{12}(x)\left(\frac{T_{11}(x)+T_{11}(y)}{2}\partial _{z_1}+\frac{T_{21}(x)+T_{21}(y)}{2}\partial _{z_2}\right)\left(\frac{T_{12}(x)+T_{12}(y)}{2}\partial _{z_1}+\frac{T_{22}(x)+T_{22}(y)}{2}\partial _{z_2}\right)\\&+K_{21}(x)\left(\frac{T_{11}(x)+T_{11}(y)}{2}\partial _{z_1}+\frac{T_{21}(x)+T_{21}(y)}{2}\partial _{z_2}\right)\left(\frac{T_{12}(x)+T_{12}(y)}{2}\partial _{z_1}+\frac{T_{22}(x)+T_{22}(y)}{2}\partial _{z_2}\right)\\&+K_{22}(x)\left(\frac{T_{12}(x)+T_{12}(y)}{2}\partial _{z_1}+\frac{T_{22}(x)+T_{22}(y)}{2}\partial _{z_2}\right)^2\bigg )\Gamma (z)+F_2(x,y)\end{split}\)
for some \(F_2(\cdot ,y)\in L^q(U)(1<q<2) \) . Define
\(\begin{split}c_{11}(x,y)=&K_{11}(x)\left(\frac{T_{11}(x)+T_{11}(y)}{2}\right)^2+ 2K_{12}(x)\left(\frac{T_{11}(x)+T_{11}(y)}{2}\right)\left(\frac{T_{12}(x)+T_{12}(y)}{2}\right)\\&+K_{22}(x)\left(\frac{T_{12}(x)+T_{12}(y)}{2}\right)^2,\end{split}\)
\(\begin{split}c_{12}&(x,y)\\=&2K_{11}(x)\left(\frac{T_{11}(x)+T_{11}(y)}{2}\right)\left(\frac{T_{21}(x)+T_{21}(y)}{2}\right)+ 2K_{12}(x)\left(\frac{T_{11}(x)+T_{11}(y)}{2}\right)\left(\frac{T_{22}(x)+T_{22}(y)}{2}\right)\\&+ 2K_{12}(x)\left(\frac{T_{21}(x)+T_{21}(y)}{2}\right)\left(\frac{T_{12}(x)+T_{12}(y)}{2}\right)+2K_{22}(x)\left(\frac{T_{12}(x)+T_{12}(y)}{2}\right)\left(\frac{T_{22}(x)+T_{22}(y)}{2}\right),\end{split}\)
\(\begin{split}c_{22}(x,y)=&K_{11}(x)\left(\frac{T_{21}(x)+T_{21}(y)}{2}\right)^2+ 2K_{12}(x)\left(\frac{T_{21}(x)+T_{21}(y)}{2}\right)\left(\frac{T_{22}(x)+T_{22}(y)}{2}\right)\\&+K_{22}(x)\left(\frac{T_{22}(x)+T_{22}(y)}{2}\right)^2,\end{split}\)
then
\(\begin{split}-&\nabla _x \cdot (K(x)\nabla _x G_0(x,y))\\&=-\frac{\sqrt{\det K(x)}^{-1}+\sqrt{\det K(y)}^{-1}}{2}(C_{11}(x,y)\partial _{z_1z_1}+C_{12}(x,y)\partial _{z_1z_2}+C_{22}(x,y)\partial _{z_2z_2})\Gamma (z)+F_2(x,y).\end{split}\)
When \( x=y \) , direct computation shows that
\(\begin{split}C_{12}(y,y)=&2(K_{11}T_{11}T_{21}+K_{12}T_{11}T_{22}+K_{12}T_{12}T_{21}+K_{22}T_{12}T_{22})(y)\\=&2((T^{-1}_{11}T^{-1}_{11}+T^{-1}_{12}T^{-1}_{12})T_{11}T_{21}+(T^{-1}_{11}T^{-1}_{21}+T^{-1}_{12}T^{-1}_{22})T_{11}T_{22}\\&+(T^{-1}_{11}T^{-1}_{21}+T^{-1}_{12}T^{-1}_{22})T_{12}T_{21}+(T^{-1}_{21}T^{-1}_{21}+T^{-1}_{22}T^{-1}_{22})T_{12}T_{22})(y)\\=&2((T_{11}T^{-1}_{11}+T_{12}T^{-1}_{21})T^{-1}_{11}T_{21}+(T_{21}T^{-1}_{12}+T_{22}T^{-1}_{22})T_{12}^{-1}T_{11}\\&+(T_{11}T_{11}^{-1}+T_{12}T_{21}^{-1})T_{21}^{-1}T_{22}+(T_{21}T_{12}^{-1}+T_{22}T_{22}^{-1})T_{22}^{-1}T_{12})(y)\\=&2((T^{-1}_{11}T_{21}+T_{21}^{-1}T_{22})+(T_{12}^{-1}T_{11}+T_{22}^{-1}T_{12}))(y)\\=&0.\end{split}\)
Similarly,
\(C_{11}(y,y)=C_{22}(y,y)=1.\)
Thus by (REF ), (REF ) and the regularity of \( T \) , one has
\(C_{12}(x,y)=O(|x-y|),\ \ C_{ii}(x,y)-1=O(|x-y|),\ \ i=1,2.\)
Substituting (REF ) into (REF ), we conclude that
\(\begin{split}-\nabla _x\cdot (K(x)\nabla _x G_0(x,y))=&-\frac{\sqrt{\det K(x)}^{-1}+\sqrt{\det K(y)}^{-1}}{2}\Delta _z\Gamma (z)+F(x,y),\end{split}\)
for some \(F(\cdot ,y)\in L^q(U)(1<q<2)\) , which implies that \( -\nabla _x\cdot (K(\cdot )\nabla _x G_0(\cdot ,y))=F(\cdot ,y) \) in any subdomain of \( U\setminus \lbrace y\rbrace . \)
For fixed \(y\in U\) , let \(S_K(\cdot ,y)\in W^{1,2}(U)\) be the unique weak solution to the following Dirichlet problem
\(\left\lbrace \begin{aligned}&-\nabla _x\cdot (K(x)\nabla _x S_K(x,y))=-F(x,y)\ \ \ \text{in}\ U,\\&S_K(x,y)=-G_0(x,y)\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{on}\ \partial U.\\\end{aligned}\right.\)
Since \( K \) is smooth and positive definite, by classical elliptic regularity estimates (see [1]}), we have \( S_K(\cdot , y)\in W^{2,q}(U) \) for every \( 1<q<2 \) .
By Sobolev embedding theorem, \( S_K(\cdot ,y)\in C^{0,\gamma }(\overline{U}) \) for every \( \gamma \in (0,1) \) .
Moreover, from the definition of \( F \) and \( G_0 \) , we have
\(||F(\cdot , y)||_{L^q(U)}\le \bar{C}_1\left(\int _{B_{diam(U)}(0)}\left(\ln \frac{1}{|z|}\right)^{q}+\left(\frac{1}{|z|}\right)^qdz\right)^{\frac{1}{q}}\le \bar{C}_2\ \ \ \ \ \forall \ y\in U,\)
and
\(-G_0(x,y)\le \bar{C}_3\ln |x-y|\le \bar{C}_4 \ \ \ \ \ \forall \ x,y \in U,\)
for some \( \bar{C}_i>0 (i=1,2,3,4)\) . Thus by (REF ), we have (REF ).
Now we prove (REF ). We first prove that for any \(u\in W^{2,\bar{p}}\cap W^{1,\bar{p}}_0(U) \) with \(\bar{p}>2\) ,
\(\begin{split}u(y)=\int _{U}G_K(x,y)\mathcal {L}_Ku(x)dx\ \ \ \ \forall \ y\in U.\end{split}\)
Fix any point \(p\in U\) . Let \( \bar{\epsilon }>0 \) sufficiently small such that \( B_{\bar{\epsilon }}(p)\Subset U\) . Then by (REF ) and (REF ), we have \( G_K(\cdot ,p)\in W^{2,q}(U\setminus B_{\bar{\epsilon }}(p))\) and
\(\left\lbrace \begin{aligned}&-\nabla _x\cdot (K(x)\nabla _x G_K(x,p))=0\ \ \ \text{in}\ U\setminus B_{\bar{\epsilon }}(p),\\&G_K(x,p)=0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{on}\ \partial U.\\\end{aligned}\right.\)
Let
\(\left\lbrace \begin{aligned}&x^{\prime }=T_px,\ p^{\prime }=T_pp,\\&\tilde{G}_K(x^{\prime },p^{\prime })=G_K(T^{-1}_px^{\prime },T^{-1}_pp^{\prime }),\\&U_p=\left\lbrace x^{\prime }=T_px\mid x\in U\right\rbrace .\end{aligned}\right.\)
Direct computation shows that
\(\ \left\lbrace \begin{aligned}&\tilde{L}\tilde{G}_K=-\nabla _{x^{\prime }}\cdot (\tilde{K}(x^{\prime })\nabla _{x^{\prime }} \tilde{G}_K(x^{\prime },p^{\prime }))=0\ \ \ \ \text{in}\ U_p\setminus B_{\bar{\epsilon }}(p^{\prime }),\\&\tilde{G}_K(x^{\prime },p^{\prime })=0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{on}\ \partial U_p,\\\end{aligned}\right.\)
where
\(&\tilde{K}_{11}(x^{\prime })=K_{11}(x)T_{11}^2(p)+2K_{12}(x)T_{11}(p)T_{12}(p)+K_{22}(x)T_{12}^2(p),\\&\tilde{K}_{12}(x^{\prime })=K_{11}(x)T_{11}(p)T_{21}(p)+K_{12}(x)T_{11}(p)T_{22}(p)+K_{12}(x)T_{12}(p)T_{21}(p)+K_{22}(x)T_{12}(p)T_{22}(p),\\&\tilde{K}_{21}(x^{\prime })=K_{11}(x)T_{11}(p)T_{21}(p)+K_{12}(x)T_{11}(p)T_{22}(p)+K_{12}(x)T_{12}(p)T_{21}(p)+K_{22}(x)T_{12}(p)T_{22}(p),\\&\tilde{K}_{22}(x^{\prime })=K_{11}(x)T_{21}^2(p)+2K_{12}(x)T_{21}(p)T_{22}(p)+K_{22}(x)T_{22}^2(p).\)
Let \(v(x^{\prime })=u(T^{-1}_px^{\prime })\) . We denote by \(q\in (1,2)\) the conjugate exponent of \(\bar{p}\) . By Green's formula and (REF ), we have
\(\begin{split}\int _{U_p\setminus B_{\bar{\epsilon }}(p^{\prime })}\tilde{G}_K(x^{\prime },p^{\prime })\tilde{L}v(x^{\prime })dx^{\prime }=&\int _{\partial B_{\bar{\epsilon }}(p^{\prime })}v(x^{\prime })\left(\tilde{K}(x^{\prime })\nabla _{x^{\prime }}\tilde{G}_K(x^{\prime },p^{\prime })|\frac{p^{\prime }-x^{\prime }}{{\bar{\epsilon }}}\right)dS_{x^{\prime }}\\&-\int _{\partial B_{\bar{\epsilon }}(p^{\prime })}\tilde{G}_K(x^{\prime },p^{\prime })\left(\tilde{K}(x^{\prime })\nabla _{x^{\prime }}v(x^{\prime })|\frac{p^{\prime }-x^{\prime }}{{\bar{\epsilon }}}\right)dS_{x^{\prime }}\\=:&I_1-I_2.\end{split}\)
Denote
\(\tilde{G}_0(x^{\prime },p^{\prime })=G_0(T_p^{-1}x^{\prime }, T_p^{-1}p^{\prime }), \ \ \ \ \tilde{S}_K(x^{\prime },p^{\prime })=S_K(T_p^{-1}x^{\prime }, T_p^{-1}p^{\prime }).\)
Then \( \tilde{G}_K(\cdot ,p^{\prime })=\tilde{G}_0(\cdot ,p^{\prime })+\tilde{S}_K(\cdot ,p^{\prime }) \) in \( U_p \) .
For \(I_2\) , from the definition of \(\tilde{G}_K\) and the fact that \(\nabla v\in W^{1,\bar{p}}(U_p)\subset C(\overline{U_p})\) , \(\tilde{S}_K(\cdot ,p^{\prime })\in W^{2,q}(U_p)\subset C(\overline{U_p})\) , we get
\(\begin{split}I_2=&\int _{\partial B_{\bar{\epsilon }}(p^{\prime })}\tilde{G}_0(x^{\prime },p^{\prime })\left(\tilde{K}(x^{\prime })\nabla _{x^{\prime }}v(x^{\prime })|\frac{p^{\prime }-x^{\prime }}{{\bar{\epsilon }}}\right)dS_{x^{\prime }}\\&+\int _{\partial B_{\bar{\epsilon }}(p^{\prime })}\tilde{S}_K(x^{\prime },p^{\prime })\left(\tilde{K}(x^{\prime })\nabla _{x^{\prime }}v(x^{\prime })|\frac{p^{\prime }-x^{\prime }}{{\bar{\epsilon }}}\right)dS_{x^{\prime }}\\=&O({\bar{\epsilon }}|\ln {\bar{\epsilon }}|+{\bar{\epsilon }}).\end{split}\)
For \(I_1\) , notice that
\(\begin{split}I_1=&\int _{\partial B_{\bar{\epsilon }}(p^{\prime })}v(x^{\prime })\left(\tilde{K}(x^{\prime })\nabla _{x^{\prime }}\tilde{G}_0(x^{\prime },p^{\prime })|\frac{p^{\prime }-x^{\prime }}{{\bar{\epsilon }}}\right)dS_{x^{\prime }}\\&\int _{\partial B_{\bar{\epsilon }}(p^{\prime })}v(x^{\prime })\left(\tilde{K}(x^{\prime })\nabla _{x^{\prime }}\tilde{S}_K(x^{\prime },p^{\prime })|\frac{p^{\prime }-x^{\prime }}{{\bar{\epsilon }}}\right)dS_{x^{\prime }}\\=:&J_1+J_2.\end{split}\)
On the one hand, by the trace theorem
\(\begin{split}|J_2|\le &\int _{\partial B_{\bar{\epsilon }}(p^{\prime })}|v(x^{\prime })||\tilde{K}(x^{\prime })\nabla _{x^{\prime }}\tilde{S}_K(x^{\prime },p^{\prime })|dS_{x^{\prime }}\\\le & C\int _{\partial B_{\bar{\epsilon }}(p^{\prime })}|\partial _{x_1^{\prime }}\tilde{S}_K(x^{\prime },p^{\prime })|+|\partial _{x_2^{\prime }}\tilde{S}_K(x^{\prime },p^{\prime })|dS_{x^{\prime }}\\=&C\int _{\partial B_1(0)}|\partial _{z_1}\tilde{S}_K({\bar{\epsilon }} z+p,p^{\prime })|+|\partial _{z_2}\tilde{S}_K({\bar{\epsilon }} z+p,p^{\prime })|dS_{z}\\\le & C\bigg (\int _{B_1(0)}|\partial _{z_1}\tilde{S}_K({\bar{\epsilon }} z+p,p^{\prime })|+|\partial _{z_2}\tilde{S}_K({\bar{\epsilon }} z+p,p^{\prime })|dz\\&+\int _{B_1(0)}|\partial _{z_1z_1}\tilde{S}_K({\bar{\epsilon }} z+p,p^{\prime })|+|\partial _{z_1z_2}\tilde{S}_K({\bar{\epsilon }} z+p,p^{\prime })|+|\partial _{z_2z_2}\tilde{S}_K({\bar{\epsilon }} z+p,p^{\prime })|dz\bigg )\\=&C\bigg (\frac{1}{{\bar{\epsilon }}}\int _{B_{\bar{\epsilon }}(p^{\prime })}|\partial _{x_1^{\prime }}\tilde{S}_K(x^{\prime },p^{\prime })|+|\partial _{x_2^{\prime }}\tilde{S}_K(x^{\prime },p^{\prime })|dx^{\prime }\\&+\int _{B_{\bar{\epsilon }}(p^{\prime })}|\partial _{x_1^{\prime }x_1^{\prime }}\tilde{S}_K(x^{\prime },p^{\prime })|+|\partial _{x_1^{\prime }x_2^{\prime }}\tilde{S}_K(x^{\prime },p^{\prime })|+|\partial _{x_2^{\prime }x_2^{\prime }}\tilde{S}_K(x^{\prime },p^{\prime })|dx^{\prime }\bigg )\\\rightarrow & 0 \ \ \ \ \text{as}\ \ {\bar{\epsilon }}\rightarrow 0,\end{split}\)
where we have used \( \tilde{S}_K(\cdot ,p^{\prime }) \in W^{2,q}(U_p)\) for \( q\in (1,2). \)
On the other hand,
\(\begin{split}J_1&=\int _{\partial B_{\bar{\epsilon }}(p^{\prime })}v(x^{\prime })\frac{\sqrt{\det K(T_p^{-1}x^{\prime })}^{-1}+\sqrt{\det K(T_p^{-1}p^{\prime })}^{-1}}{2}\left( \tilde{K}(x^{\prime })\nabla _{x^{\prime }}\Gamma (x^{\prime }-p^{\prime })|\frac{p^{\prime }-x^{\prime }}{{\bar{\epsilon }}}\right) dS_{x^{\prime }}+o(1)\\&=\frac{1}{2\pi {\bar{\epsilon }}}\int _{\partial B_{{\bar{\epsilon }}}(p^{\prime })}\tilde{K}_{11}(x^{\prime })\frac{\sqrt{\det K(T_p^{-1}x^{\prime })}^{-1}+\sqrt{\det K(T_p^{-1}p^{\prime })}^{-1}}{2}v(x^{\prime })dS_{x^{\prime }}+o(1)\\&=\sqrt{\det K(p)}^{-1}v(p^{\prime })+o(1)\\&=\sqrt{\det K(p)}^{-1}u(p)+o(1).\end{split}\)
Applying (REF ), (REF ), (REF ) and (REF ) to (REF ) and letting \({\bar{\epsilon }}\rightarrow 0\) , we get
\(\begin{split}\sqrt{\det K(p)}^{-1}u(p)=&\int _{U_p}\tilde{G}_K(x^{\prime },p^{\prime })\tilde{L}v(x^{\prime })dx^{\prime }\\=&\det (T_p)\int _{U}{G}_K(x,p)\mathcal {L}_Ku(x)dx\\=&\sqrt{\det K(p)}^{-1}\int _{U}{G}_K(x,p)\mathcal {L}_Ku(x)dx.\end{split}\)
Then (REF ) holds.
Finally, we observe that if \( f_1,f_2\in L^q(U) \) , then there holds
\(\begin{split}&\int _U f_1\mathcal {G}_K f_2-\iint _{U\times U}G_0(y,x)f_1(x)f_2(y)dxdy\\&\ \ =\int _U f_2\mathcal {G}_K f_1-\iint _{U\times U}G_0(y,x)f_2(x)f_1(y)dxdy,\end{split}\)
and thus
\(\iint _{U\times U}S_K(y,x)f_1(x)f_2(y)dxdy=\iint _{U\times U}S_K(y,x)f_2(x)f_1(y)dxdy.\)
It follows that for every \( x,y\in U \) , \( S_K(x, y) = S_K(y, x) \) , and thus the function \( S_K\in C^{0,\gamma }_{loc}(U\times U). \) By (REF ), (REF ) and Proposition REF , we get (REF ). The proof of Theorem REF is therefore finished.
| [1] | [
[
4947,
4950
]
] | https://openalex.org/W1866311589 |
ba9e8ca2-7264-4a27-a72a-316dd3948980 | Let \(d_{M, \tilde{g}}\) denote the Riemannian distance function in \((M, \tilde{g})\) . In a similar manner as to how Riemannian distances are estimated in Isomap [1]} and C-Isomap [2]}, we estimate \(d_{M, \tilde{g}}\) as follows.
| [1] | [
[
165,
168
]
] | https://openalex.org/W2001141328 |
45b009b3-2fe1-48dd-a5d3-c874715dea41 | A persistence module \(\mathbb {V}\) is q-tame if rank\((v_s^t) < \infty \) for all \(s \le t\) . The following theorem says that the persistence diagrams of \(q\) -tame persistence modules that are \(\epsilon \) -interleaved have bottleneck distance at most \(\epsilon \) . We note that if \(\mathcal {K}_r\) is a finite complex for all \(r\) , then its persistent homology \(H(\mathcal {K})\) is q-tame because \(H(\mathcal {K}_r)\) is finite-dimensional for all \(r\) . Thus Theorem applies to the persistent homology of the density-scaled complexes, which are finite at all filtration levels \(r\) .
[[1]}] If \(\mathbb {U}\) and \(\mathbb {V}\) are \(q\) -tame persistence modules that are \(\epsilon \) -interleaved, then the bottleneck distance between the persistence diagrams satisfies
\(W_{\infty }(\text{dgm}(\mathbb {U}), \text{dgm}(\mathbb {V})) \le \epsilon \,.\)
| [1] | [
[
611,
614
]
] | https://openalex.org/W1596721197 |
6a2d4e0b-629d-4b6e-9005-a7c300cd8f89 | [Lemma 4.3 in [1]}]
If \(d_H(X, Y, (M, \tilde{g})) < \epsilon \) , then \(H(\textnormal {DVR}(M, g, f, X))\) and \(H(\textnormal {DVR}(M, g, f, Y))\) are \(\epsilon \) -interleavedIn [1]}, the Vietoris–Rips complex is defined so that there is an edge between \(x\) and \(y\) at filtration level \(r\) if \(d(x, y) \le r\) . In this paper, we use \(2r\) instead. The condition from [1]} on the Hausdorff distance is adjusted accordingly..
| [1] | [
[
14,
17
],
[
185,
188
],
[
387,
390
]
] | https://openalex.org/W2060702495 |
5debe211-548a-422f-ae9b-bbb9c7a37a56 | Our definitions of the density-scaled complexes required the point clouds to be sampled from a Riemannian manifold with global intrinsic dimension \(n\) . However, our implementation \(\widehat{\textnormal {DVR}}\) immediately generalizes to metric spaces for which the intrinsic dimension varies locally. (A trivial example of such a metric space is the disjoint union of Riemannian manifolds with different dimensions.) One can estimate the local intrinsic dimension \(n_x\) near a point \(x\) using one of the methods of [1]}, [2]}, [3]}, [4]}, [5]}, for example. In the density estimator \(\hat{f}_N(y)\) defined by Equation REF , one replaces \(n\) by \(n_y\) . For estimating Riemannian distance, one can construct the \(k\) -nearest neighbor graph \(G_{kNN}(X)\) as usual, but with \(n\) replaced by an average of \(n_{x_i}\) and \(n_{x_j}\) when defining the weight of the edge \((x_i, x_j)\) . Thus one can compute \(\widehat{\textnormal {DVR}}(X)\) for a point cloud \(X\) that is sampled from a space whose intrinsic dimension varies. It would certainly be of interest to analyze the performance and theoretical guarantees of \(\widehat{\textnormal {DVR}}\) on metric spaces of varying local dimension.
| [1] | [
[
527,
530
]
] | https://openalex.org/W2121122425 |
6a830904-9e58-49bf-9044-f65a07546dd6 | Our definitions of the density-scaled complexes required the point clouds to be sampled from a Riemannian manifold with global intrinsic dimension \(n\) . However, our implementation \(\widehat{\textnormal {DVR}}\) immediately generalizes to metric spaces for which the intrinsic dimension varies locally. (A trivial example of such a metric space is the disjoint union of Riemannian manifolds with different dimensions.) One can estimate the local intrinsic dimension \(n_x\) near a point \(x\) using one of the methods of [1]}, [2]}, [3]}, [4]}, [5]}, for example. In the density estimator \(\hat{f}_N(y)\) defined by Equation REF , one replaces \(n\) by \(n_y\) . For estimating Riemannian distance, one can construct the \(k\) -nearest neighbor graph \(G_{kNN}(X)\) as usual, but with \(n\) replaced by an average of \(n_{x_i}\) and \(n_{x_j}\) when defining the weight of the edge \((x_i, x_j)\) . Thus one can compute \(\widehat{\textnormal {DVR}}(X)\) for a point cloud \(X\) that is sampled from a space whose intrinsic dimension varies. It would certainly be of interest to analyze the performance and theoretical guarantees of \(\widehat{\textnormal {DVR}}\) on metric spaces of varying local dimension.
| [2] | [
[
533,
536
]
] | https://openalex.org/W2028569884 |
c1e60f34-cba9-4661-8f0e-1a67d12012f9 | Our definitions of the density-scaled complexes required the point clouds to be sampled from a Riemannian manifold with global intrinsic dimension \(n\) . However, our implementation \(\widehat{\textnormal {DVR}}\) immediately generalizes to metric spaces for which the intrinsic dimension varies locally. (A trivial example of such a metric space is the disjoint union of Riemannian manifolds with different dimensions.) One can estimate the local intrinsic dimension \(n_x\) near a point \(x\) using one of the methods of [1]}, [2]}, [3]}, [4]}, [5]}, for example. In the density estimator \(\hat{f}_N(y)\) defined by Equation REF , one replaces \(n\) by \(n_y\) . For estimating Riemannian distance, one can construct the \(k\) -nearest neighbor graph \(G_{kNN}(X)\) as usual, but with \(n\) replaced by an average of \(n_{x_i}\) and \(n_{x_j}\) when defining the weight of the edge \((x_i, x_j)\) . Thus one can compute \(\widehat{\textnormal {DVR}}(X)\) for a point cloud \(X\) that is sampled from a space whose intrinsic dimension varies. It would certainly be of interest to analyze the performance and theoretical guarantees of \(\widehat{\textnormal {DVR}}\) on metric spaces of varying local dimension.
| [4] | [
[
545,
548
]
] | https://openalex.org/W2169036209 |
2e8b053e-b3b6-4eff-b2ce-c73f6c38e18d | Our definitions of the density-scaled complexes required the point clouds to be sampled from a Riemannian manifold with global intrinsic dimension \(n\) . However, our implementation \(\widehat{\textnormal {DVR}}\) immediately generalizes to metric spaces for which the intrinsic dimension varies locally. (A trivial example of such a metric space is the disjoint union of Riemannian manifolds with different dimensions.) One can estimate the local intrinsic dimension \(n_x\) near a point \(x\) using one of the methods of [1]}, [2]}, [3]}, [4]}, [5]}, for example. In the density estimator \(\hat{f}_N(y)\) defined by Equation REF , one replaces \(n\) by \(n_y\) . For estimating Riemannian distance, one can construct the \(k\) -nearest neighbor graph \(G_{kNN}(X)\) as usual, but with \(n\) replaced by an average of \(n_{x_i}\) and \(n_{x_j}\) when defining the weight of the edge \((x_i, x_j)\) . Thus one can compute \(\widehat{\textnormal {DVR}}(X)\) for a point cloud \(X\) that is sampled from a space whose intrinsic dimension varies. It would certainly be of interest to analyze the performance and theoretical guarantees of \(\widehat{\textnormal {DVR}}\) on metric spaces of varying local dimension.
| [5] | [
[
551,
554
]
] | https://openalex.org/W2127248062 |
4ed54896-8194-4e49-bd29-38aa50e40000 | The density-scaled complexes we defined in this paper are conformally invariant, but it is desirable to construct filtered complexes that are invariant under a wider class of diffeomorphisms. Conformal invariance is the best that one can hope for in our setting because the density-scaled Riemannian manifold \((M, \tilde{g})\) is conformally equivalent to \((M, g)\) . One idea for improving on the current definition is to consider the local covariance of the probability distribution at each point [1]} and to modify the Riemannian metric in such a way that with respect to the new Riemannian metric, the local covariance matrix at each point is the identity matrix (with respect to a positively oriented orthonormal basis). This idea is akin to the usual normalization that data scientists often do in Euclidean space, and it is also reminiscent of the ellipsoid-thickenings of [2]}.
| [1] | [
[
502,
505
]
] | https://openalex.org/W1969002377 |
9cc597c4-a636-4c80-8b63-714daa381ff5 | The secret sharing schemes is introduced by Blakley [1]} and Shamir [2]} in 1979. Based on linear codes, many secret sharing schemes are constructed [3]}, [4]}, [5]}, [6]}, [7]}. Especially, for those linear codes with all nonzero codewords minimal, their dual codes can be used to construct secret sharing schemes with nice access structures [7]}.
| [4] | [
[
155,
158
]
] | https://openalex.org/W2120756519 |
09e2150e-5a40-4dc6-932d-9e0ed9bdd436 | The secret sharing schemes is introduced by Blakley [1]} and Shamir [2]} in 1979. Based on linear codes, many secret sharing schemes are constructed [3]}, [4]}, [5]}, [6]}, [7]}. Especially, for those linear codes with all nonzero codewords minimal, their dual codes can be used to construct secret sharing schemes with nice access structures [7]}.
| [7] | [
[
173,
176
],
[
343,
346
]
] | https://openalex.org/W2962947453 |
73f4293f-946f-4f07-a7f1-aa1e5e2c5ba4 | Some notations and results for strongly regular graphs are given as follows [1]}.
| [1] | [
[
76,
79
]
] | https://openalex.org/W1973483608 |
eca7e0c1-36f5-4b85-b0f4-053ca1ca0ded | Compared with the known strongly regular graphs [1]}, [2]}, [3]}, [4]}, the above strongly regular graphs are new.
| [4] | [
[
66,
69
]
] | https://openalex.org/W2113800264 |
eaa77f4e-3c76-4bc9-b87e-902ace1d4a22 | In experiments, we randomly sample 2,000 instances as query and the rest as database (retrieval set). To reduce computational cost, similar to [1]}, we randomly sample 5,000 instances from the database for training. After completing the training, we binarize the database samples into the hash codes and perform cross-modal retrieval.
| [1] | [
[
143,
146
]
] | https://openalex.org/W2604880013 |
0e16cafe-6818-487e-9563-ef8b1e0d77ad | Baselines: In experiments, we compare our FDCH with six state-of-the-art cross-modal hashing methods.
They can be divided into two categories: LSSH [1]}, CMFH [2]} and DCH [3]} are supervised methods; SCM [4]}, SePH\(_{km}\) [5]} and DCMH [6]} are unsupervised methods. For fair comparison, at the \(fc7\) from the initial CNN-F network used by our method, we extracted the CNN feature for the shallowed baselines.
| [2] | [
[
159,
162
]
] | https://openalex.org/W2512032049 |
c7ba7d4a-d0c5-4bd2-94da-2c885f927409 | Baselines: In experiments, we compare our FDCH with six state-of-the-art cross-modal hashing methods.
They can be divided into two categories: LSSH [1]}, CMFH [2]} and DCH [3]} are supervised methods; SCM [4]}, SePH\(_{km}\) [5]} and DCMH [6]} are unsupervised methods. For fair comparison, at the \(fc7\) from the initial CNN-F network used by our method, we extracted the CNN feature for the shallowed baselines.
| [3] | [
[
172,
175
]
] | https://openalex.org/W2591669147 |
f0d5301c-0402-4ea0-b52b-dad6fcf39969 | Baselines: In experiments, we compare our FDCH with six state-of-the-art cross-modal hashing methods.
They can be divided into two categories: LSSH [1]}, CMFH [2]} and DCH [3]} are supervised methods; SCM [4]}, SePH\(_{km}\) [5]} and DCMH [6]} are unsupervised methods. For fair comparison, at the \(fc7\) from the initial CNN-F network used by our method, we extracted the CNN feature for the shallowed baselines.
| [4] | [
[
205,
208
]
] | https://openalex.org/W2203543769 |
8dd8303f-d976-4f43-a081-b23d7dfe2943 | Baselines: In experiments, we compare our FDCH with six state-of-the-art cross-modal hashing methods.
They can be divided into two categories: LSSH [1]}, CMFH [2]} and DCH [3]} are supervised methods; SCM [4]}, SePH\(_{km}\) [5]} and DCMH [6]} are unsupervised methods. For fair comparison, at the \(fc7\) from the initial CNN-F network used by our method, we extracted the CNN feature for the shallowed baselines.
| [5] | [
[
226,
229
]
] | https://openalex.org/W2526152041 |
4246acd9-c1a1-4fb8-9f1d-ce283534f425 | Baselines: In experiments, we compare our FDCH with six state-of-the-art cross-modal hashing methods.
They can be divided into two categories: LSSH [1]}, CMFH [2]} and DCH [3]} are supervised methods; SCM [4]}, SePH\(_{km}\) [5]} and DCMH [6]} are unsupervised methods. For fair comparison, at the \(fc7\) from the initial CNN-F network used by our method, we extracted the CNN feature for the shallowed baselines.
| [6] | [
[
240,
243
]
] | https://openalex.org/W2266728343 |
16578597-ae69-46f5-b1ad-0d2ca6bfea1c | In recent years, NIROMs DL-based have been the subject of several studies aiming to overcome the limitations of the linear projections methods [1]}, by recovering non-linear, low-dimensional manifolds [2]}, [3]}. Various methodologies have been proposed, including supervised and unsupervised DL techniques, to identify low-dimensional manifolds and nonlinear dynamic behaviors [4]}. A common methodology applied in the frame of NIROMs DL-based is the convolutional autoencoders (CAE) for the non-linear dimensionality reduction, and the long short-term memory (LSMT) networks to predict the temporal evolution [5]}, [6]}. In [7]}, mode decomposition is achieved through POD and AE for the nonlinear feature extraction of flow fields. In a recent work [2]}, the authors utilized CAE for dimensionality reduction, temporal CAE to encode the solution manifold, and dilated temporal convolutions to model the dynamics. In [9]}, the performance of the CAE, variational autoencoders and POD to obtain low dimensional embedding has been examined and Gaussian processes regression to implement the mapping between the input parameters and the reduced solutions.
| [2] | [
[
201,
204
],
[
752,
755
]
] | https://openalex.org/W2998104826 |
5554addc-21ee-4027-8405-1841d20d591f | In recent years, NIROMs DL-based have been the subject of several studies aiming to overcome the limitations of the linear projections methods [1]}, by recovering non-linear, low-dimensional manifolds [2]}, [3]}. Various methodologies have been proposed, including supervised and unsupervised DL techniques, to identify low-dimensional manifolds and nonlinear dynamic behaviors [4]}. A common methodology applied in the frame of NIROMs DL-based is the convolutional autoencoders (CAE) for the non-linear dimensionality reduction, and the long short-term memory (LSMT) networks to predict the temporal evolution [5]}, [6]}. In [7]}, mode decomposition is achieved through POD and AE for the nonlinear feature extraction of flow fields. In a recent work [2]}, the authors utilized CAE for dimensionality reduction, temporal CAE to encode the solution manifold, and dilated temporal convolutions to model the dynamics. In [9]}, the performance of the CAE, variational autoencoders and POD to obtain low dimensional embedding has been examined and Gaussian processes regression to implement the mapping between the input parameters and the reduced solutions.
| [5] | [
[
611,
614
]
] | https://openalex.org/W2886997576 |
da046929-2581-4a94-9591-08f0acd05fb9 | Pre-trained Language Models.
Self-supervised [1]} pre-trained language models [2]} has become the backbone of natural language processing. From early stage when GPT [3]}, BERT [4]}, XLNet [5]}, RoBERTa [6]} has limited amount of parameters (less than 350M), the advent of T5 [7]} and GPT-3 [8]} boosts the development of giant language models with billion and even trillions of parameters.
| [8] | [
[
290,
293
]
] | https://openalex.org/W3030163527 |
c914cff7-3999-4764-a35e-4eb5ca318791 | The choice of prior distributions for single network Bayesian ERGMs has yet to be studied in any great detail. The appropriate setting of priors is a challenging task due to the typically high levels of dependence between parameters [1]}. Studies thus far have generally assumed (flat) multivariate normal prior distributions on the model parameters [2]}, [3]}, [4]}. Through the choice of the prior distribution \(\pi (\phi )\) , we can encode additional information on the population of networks, such as group structure. For conceptual and computational simplicity, we will also assume multivariate normal priors, though other prior specifications warrant further investigation.
| [2] | [
[
350,
353
]
] | https://openalex.org/W1680396847 |
09a88493-a7cd-4fbb-8d82-a921057c0731 | Crucially, the ratio of intractable normalising constants cancel out, and so this acceptance ratio can indeed be evaluated. The stationary distribution of the Markov chain constructed through this scheme is \(\pi (\theta , \theta ^{\prime }, y^{\prime }|y)\) [1]}. Thus, by marginalising out \(\theta ^{\prime }\) and \(y^{\prime }\) , the algorithm yields samples from the desired posterior, namely \(\pi (\theta |y)\) .
| [1] | [
[
260,
263
]
] | https://openalex.org/W1533758202 |
d2e70f66-fd1a-4166-bc7c-17d82285b684 | The centred parametrisation and the non-centred parametrisation tend to be complementary: when one performs poorly, the other tends to perform better [1]}. However, when the parameters of interest are the group-level parameters \((\mu , \Sigma _\theta )\) , it is possible to combine both approaches using an ancillarity-sufficiency interweaving strategy (ASIS; [2]}). ASIS works by combining the updating schemes of the CP and NCP approaches. The ASIS algorithm for a multilevel Bayesian ERGM is described in Algorithm REF .
| [2] | [
[
362,
365
]
] | https://openalex.org/W1964451245 |
b18ce9d2-c9b0-459d-973c-48a8eb0c2be4 | where \(\Sigma _k\) is the sample covariance matrix of the posterior samples \((\theta _{1},\dots ,\theta _{k-1})\) and \(\delta _k\) is an additional scaling factor that is varied to control the magnitude of the proposals. Following [1]}, we set \(\beta = 0.05\) . The role of \(\Sigma _k\) is to adapt the direction of the proposals to the MCMC run so far, while \(\delta _k\) serves to target an acceptance rate of 0.234. Specifically, we start with \(\delta _1 = 1\) and increase (resp. decrease) \(\log (\delta _k)\) by \(\min (0.5, 1 / \sqrt{(}k)\) if the acceptance rate was below (resp. above) 0.234 in the previous 20 iterations.
| [1] | [
[
237,
240
]
] | https://openalex.org/W2047978125 |
dc29943c-d76e-4b14-a36c-dfbe8130e80d | In this paper we explore a theoretical
scenario where the dynamics of a multiquark system
remains marked by correlations between
heavy flavors dictated by QCD [1]}.
For this reason we have chosen hidden-flavor
pentaquarks for our study, i.e., \(Q\bar{Q} qqq\) .
The theoretical pattern obtained will be an additional tool
for analyzing the growing number
of states in the quarkonium-nucleon
energy region. As discussed below, the correlations
between the heavy flavors turn the five-body
problem into a more tractable three-body problem.
Our study is based on a constituent quark model that
has often been used for exploratory studies,
whose results have been refined and confirmed
by more rigorous treatments of QCD.
For instance, the recently discovered
flavor-exotic mesons, \(T^+_{cc}\equiv cc\bar{u}\bar{d}\) [2]}, [3]}, were first
predicted by potential-model calculations [4]} and later reinforced
by more refined potential-model calculations, lattice simulations and QCD sum
rules [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}.
| [2] | [
[
815,
818
]
] | https://openalex.org/W4282958578 |
8bfc4552-95f2-4ff4-9d60-abef4a156218 | In this paper we explore a theoretical
scenario where the dynamics of a multiquark system
remains marked by correlations between
heavy flavors dictated by QCD [1]}.
For this reason we have chosen hidden-flavor
pentaquarks for our study, i.e., \(Q\bar{Q} qqq\) .
The theoretical pattern obtained will be an additional tool
for analyzing the growing number
of states in the quarkonium-nucleon
energy region. As discussed below, the correlations
between the heavy flavors turn the five-body
problem into a more tractable three-body problem.
Our study is based on a constituent quark model that
has often been used for exploratory studies,
whose results have been refined and confirmed
by more rigorous treatments of QCD.
For instance, the recently discovered
flavor-exotic mesons, \(T^+_{cc}\equiv cc\bar{u}\bar{d}\) [2]}, [3]}, were first
predicted by potential-model calculations [4]} and later reinforced
by more refined potential-model calculations, lattice simulations and QCD sum
rules [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}.
| [3] | [
[
821,
824
]
] | https://openalex.org/W3196854701 |
1d027719-f90f-4200-bb4e-067fadb7ebb5 | The idea behind these approaches is to select the most favorable configurations
to generate stable multiquarks. For example, the diquark models of
Refs. [1]}, [2]}, [3]}, [4]} are based on the fact
that a color-\({\bf \bar{3}}\) \(qq\) state is an attractive channel
whereas the color-\({\bf 6}\) is repulsive. In the same vein, a
color-\({\bf 1}\) \(q\bar{q}\) state is an attractive channel
whereas the color-\({\bf 8}\) is repulsive. Working at
leading order with a \(Q\bar{Q}qqq\) pentaquark, neglecting the spin-spin interaction,
if a \(Qq\) color-\({\bf \bar{3}}\) diquark has a binding proportional to
\(m_q\) , in the same units the \(Q\bar{Q}\) color-\({\bf 1}\) system
has a binding proportional to \(2M_Q\) . Therefore, the color Coulomb-like interaction
between the components of a hidden-flavor pentaquark
favors a \(Q\bar{Q}\) color singlet instead of a color octet, as emphasized in Ref. [5]}.
As a consequence, the color wave function of a pentaquark
would be uniquely determined, see Fig. REF , and would be given by,
\(\Psi ^{\rm Color}_{\rm Pentaquark} \,\, = \,\, {\bf 3}_q \, \otimes \, {\bf 1}_{(Q \bar{Q})} \, \otimes \, {\bf \bar{3}}_{(qq)} \, ,\)
| [2] | [
[
159,
162
]
] | https://openalex.org/W2107202675 |
d5f54664-cb62-4d4a-8c8d-96a3a4f3e739 | Preliminary analysis of the experimental data suggested the coexistence
of negative and positive parity pentaquarks in the same energy region [1]}.
We have studied such possibility within our model.
For this purpose, we have calculated the mass of the lowest positive parity
state, the first orbital angular momentum excitation of the \(v_1\) state.
The technical details have been described in Sec. REF .
We chose this state because it is made up of the most strongly correlated structures,
\(Q\bar{Q}\) \(\lbrace {\bf 1}_c, {\bf 1}_f, 0_s\rbrace \) and
\(qq\) \(\lbrace {\bf \bar{3}}_c, {\bf \bar{3}}_f, 0_s\rbrace \) .
Then, it might have a similar mass to negative parity states
made up of spin 1 structures. We have obtained an
energy of 197 MeV above threshold. By using the values given in Eq. (REF )
one obtains a mass of 4518 MeV for two degenerate states with quantum numbers
\(J^P=1/2^+\) and \(3/2^+\) . Therefore, positive parity pentaquark states
would appear above 4.5 GeV, a mass slightly larger than that of the states measured so far.
Similarly, most of the theoretical works prefer to
assign the lowest lying pentaquarks to negative parity states.
Almost degenerate negative and positive parity states
may occur for hidden-flavor pentaquarks that have been detected
in the same channel but that were formed by different pairs
of quarkonium-nucleon states [2]}, one of them radially excited. Thus the negative
parity pentaquark of the \((Q\bar{Q})_{n+1, S}(qqq)\)\(n\) stands for the
radial quantum number of the \(Q\bar{Q}\) system. system would
have a similar mass than the positive parity orbital angular
momentum excited state of the \((Q\bar{Q})_{n, S}(qqq)\) system. The assignment
of negative and positive parity states to different parity
Born-Oppenheimer multiplets has already been suggested as a
plausible solution in the triquark-diquark
picture of Ref. [3]}. Nevertheless, this issue remains
one of the most challenging problems in the pentaquark phenomenology
that should be first confirmed experimentally.
| [1] | [
[
142,
145
]
] | https://openalex.org/W2019595721 |
f5d21165-bf77-45b4-ad6c-490d6fc52ba4 | Finally, the results we have presented could be further used to study the
possible existence of charmonium states bound to atomic nuclei suggested
by Brodsky [1]} more than three decades ago. As it has been
mentioned above, since charmonium and nucleons do not share light \(u\) and \(d\) quarks,
the OZI rule suppresses the interactions mediated by the exchange of mesons made of only light
quarks. Thus, if such states are indeed bound to nuclei, it has been emphasized the relevance
to search for other sources of attraction [2]}.
A charmonium-nucleon interaction which provides a binding mechanism has been found,
in the heavy-quark limit, in terms of charmonium chromoelectric
polarizabilities and densities of the nucleon energy-momentum tensor [3]}, [4]}, [5]}.
The existence of such bound states has also
been justified by changes of the internal structure of the hadrons in the nuclear medium.
Thus, for example, \(J/\Psi \) -nuclei bound states were found in Ref. [6]}.
In a similar model it has been recently concluded that the \(\eta _c\) meson should form bound states
with all the nuclei considered, from \(^4\) He to \(^{208}\) Pb [7]}.
Our model presents an alternative mechanism based on the
short-range one-gluon exchange interaction between the constituents of charmonium and
nucleons. This mechanism has already been suggested to lead to dibaryon
resonances [8]}, [9]}, [10]}, [11]}, [12]}, [13]}.
To our knowledge, this result has never been obtained before based on
pure quark-gluon dynamics using a restricted Hilbert space.
| [3] | [
[
753,
756
]
] | https://openalex.org/W2184130272 |
325725da-6a7e-46b9-9c41-0a5eb33c30ea | Finally, the results we have presented could be further used to study the
possible existence of charmonium states bound to atomic nuclei suggested
by Brodsky [1]} more than three decades ago. As it has been
mentioned above, since charmonium and nucleons do not share light \(u\) and \(d\) quarks,
the OZI rule suppresses the interactions mediated by the exchange of mesons made of only light
quarks. Thus, if such states are indeed bound to nuclei, it has been emphasized the relevance
to search for other sources of attraction [2]}.
A charmonium-nucleon interaction which provides a binding mechanism has been found,
in the heavy-quark limit, in terms of charmonium chromoelectric
polarizabilities and densities of the nucleon energy-momentum tensor [3]}, [4]}, [5]}.
The existence of such bound states has also
been justified by changes of the internal structure of the hadrons in the nuclear medium.
Thus, for example, \(J/\Psi \) -nuclei bound states were found in Ref. [6]}.
In a similar model it has been recently concluded that the \(\eta _c\) meson should form bound states
with all the nuclei considered, from \(^4\) He to \(^{208}\) Pb [7]}.
Our model presents an alternative mechanism based on the
short-range one-gluon exchange interaction between the constituents of charmonium and
nucleons. This mechanism has already been suggested to lead to dibaryon
resonances [8]}, [9]}, [10]}, [11]}, [12]}, [13]}.
To our knowledge, this result has never been obtained before based on
pure quark-gluon dynamics using a restricted Hilbert space.
| [10] | [
[
1393,
1397
]
] | https://openalex.org/W2029965107 |
31d21715-64e6-404a-baa7-10cd93ec97bc | Type-based AHT: Type-based AHT methods [1]}, [2]}, [3]} assume access to predefined teammate policies for learning. In these methods, training data is gathered by letting the learner interact with the predefined teammates. The learner is subsequently trained to infer the types of teammates based on the limited experience that are gathered by interacting with them. Inferred teammate types are finally used to decide the learner's optimal action for collaborating with its teammates. For instance, a learner could employ an expert policy to choose its optimal action when collaborating with a certain type of teammate.
| [1] | [
[
39,
42
]
] | https://openalex.org/W2535584654 |